The Bitmovin API enables you to build cutting-edge video workflows while leveraging Emmy Award-winning encoding solutions that automatically optimize your catalog for high-quality and cost-efficient video streaming.

Quick Start Guide

This Quick Start video demonstrates 3 different approaches to setting up an encoding workflow.

Get Started with your own Project

The following steps will walk you through adding the SDK to a new or existing project.

Overview

Starting an encoding triggers a request for encoding instances. Once available, the input file is downloaded, divided into chunks, and distributed among them. These instances transcode each chunk into segments for all configured video/audio representations (muxings) in parallel and write them to the designated output destination. This approach makes Bitmovin's encoding the fastest in the world.

In this tutorial you will implement a simple VOD online streaming scenario, where an input file containing a video stream and a stereo audio stream is transcoded into ...

  • a fixed ladder of 3 video representations at different resolutions and bitrates using the H264 codec, and
  • an audio rendition with the AAC codec.

Each individual stream is wrapped into fragmented MP4 (fMP4) containers. HLS and DASH manifests are generated so the encoded content can be streamed and played on the majority of modern devices and browsers available today.

Step 1: Add Bitmovin SDK to Your Project

To get started, add the Bitmovin SDK to your project.
For the latest version, please refer to the GitHub repository or check the changelog of the Bitmovin API. Open API SDK versions and the Bitmovin API share the same versioning.

# with Package Manager
Install-Package Bitmovin.Api.Sdk

# with .NET CLI
dotnet add package Bitmovin.Api.Sdk
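
If you want to pin the SDK to a specific version, for example to match the Bitmovin API version you are targeting, both tools accept an explicit version (replace <VERSION> with the release you want):

# with Package Manager
Install-Package Bitmovin.Api.Sdk -Version <VERSION>

# with .NET CLI
dotnet add package Bitmovin.Api.Sdk --version <VERSION>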

Step 2: Set Up a Bitmovin API Client Instance

The Bitmovin API client instance is used for all communication with the Bitmovin REST API. Pass your API key to authenticate with the API. You can find your API_KEY in the dashboard under your Account Settings.

using Bitmovin.Api.Sdk;
using Bitmovin.Api.Sdk.Common.Logging;

var bitmovinApi = BitmovinApi.Builder
    .WithApiKey(API_KEY)
    .WithLogger(new ConsoleLogger())
    .Build();
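
To avoid hard-coding credentials, you can read the API key from an environment variable before building the client. The following is a minimal sketch; the variable name BITMOVIN_API_KEY is just an example and not mandated by the SDK.

// Read the API key from an environment variable instead of hard-coding it.
// BITMOVIN_API_KEY is an example name, not a requirement of the SDK.
var apiKey = Environment.GetEnvironmentVariable("BITMOVIN_API_KEY")
             ?? throw new InvalidOperationException("BITMOVIN_API_KEY is not set");

var bitmovinApi = BitmovinApi.Builder
    .WithApiKey(apiKey)
    .WithLogger(new ConsoleLogger())
    .Build();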

Step 3: Create an Input

Create Input

An input holds the configuration to access a specific file storage, from which the encoder will download the source file.

We support various types of input sources including HTTP(S), (S)FTP, Google Cloud Storage (GCS), Amazon Simple Storage Service (S3), Azure Blob Storage, and all cloud storage vendors that offer an S3 compatible interface.

Below we create an HTTPS input.

var input = await bitmovinApi.Encoding.Inputs.Https.CreateAsync(new HttpsInput()
{
    Name = "<INPUT_NAME>",
    Host = "<HTTPS_INPUT_HOST>"
});
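
If your source files live on Amazon S3 instead, the input is created the same way using the S3 input resource. The snippet below is a sketch with placeholder credentials; see the API reference for all available properties.

// Sketch: an S3 input as an alternative to the HTTPS input above
var s3Input = await bitmovinApi.Encoding.Inputs.S3.CreateAsync(new S3Input
{
    Name = "<INPUT_NAME>",
    BucketName = "<S3_BUCKET_NAME>",
    AccessKey = "<S3_ACCESS_KEY>",
    SecretKey = "<S3_SECRET_KEY>"
});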

Reuse Inputs

Inputs refer to a location, not a specific file.
This makes them easily reusable. Create an input once and reuse it for all your encodings.

Hint: All resources in our API are identified by a unique id which enables you to reuse many resources. So you only have to create one input for each location and can reuse it for all encodings.

var input = await bitmovinApi.Encoding.Inputs.Https.GetAsync("<INPUT_ID>");

Step 4: Create an Output

Create Output

Outputs define where the manifests and encoded segments will be written.
Like inputs, outputs represent the set of information needed to access a specific storage, from which players can then access the content for playback.

var output = await bitmovinApi.Encoding.Outputs.Gcs.CreateAsync(new GcsOutput
{
    AccessKey = "<GCS_ACCESS_KEY>",
    SecretKey = "<GCS_SECRET_KEY>",
    BucketName = "<GCS_BUCKET_NAME>",
    Name = "<GCS_OUTPUT_NAME>"
});
string outputId = output.Id;

Reuse Output

Like inputs, outputs refer to a location, not a specific file, so they are just as reusable. Create an output once and reuse it for all your encodings.

var output = await bitmovinApi.Encoding.Outputs.Gcs.GetAsync("<OUTPUT_ID>");

Step 5: Create Codec Configurations

Codec Configurations

Codec configurations specify the codec and its parameters (e.g. bitrate, frame rate, sampling rate).
They are used to encode the input stream coming from your input file.

Video Codec Configurations

Below we create three video codec configurations. We support a multitude of codecs, including H264, H265, VP9, AV1 and many more.
For simplicity, we use H264 here, but you can easily change that after finishing this guide.

var codecConfigVideoName1 = "Getting Started H264 Codec Config 1";
var codecConfigVideoBitrate1 = 1500000;
var videoCodecConfiguration1 = await bitmovinApi.Encoding.Configurations.Video.H264.CreateAsync(new H264VideoConfiguration
{
    Name = codecConfigVideoName1,
    PresetConfiguration = PresetConfiguration.VOD_STANDARD,
    Width = 1024,
    Bitrate = codecConfigVideoBitrate1,
    Description = codecConfigVideoName1 + "_" + codecConfigVideoBitrate1
});

var codecConfigVideoName2 = "Getting Started H264 Codec Config 2";
var codecConfigVideoBitrate2 = 1000000;
var videoCodecConfiguration2 = await bitmovinApi.Encoding.Configurations.Video.H264.CreateAsync(new H264VideoConfiguration
{
    Name = codecConfigVideoName2,
    PresetConfiguration = PresetConfiguration.VOD_STANDARD,
    Width = 768,
    Bitrate = codecConfigVideoBitrate2,
    Description = codecConfigVideoName2 + "_" + codecConfigVideoBitrate2
});

var codecConfigVideoName3 = "Getting Started H264 Codec Config 3";
var codecConfigVideoBitrate3 = 750000;
var videoCodecConfiguration3 = await bitmovinApi.Encoding.Configurations.Video.H264.CreateAsync(new H264VideoConfiguration
{
    Name = codecConfigVideoName3,
    PresetConfiguration = PresetConfiguration.VOD_STANDARD,
    Width = 640,
    Bitrate = codecConfigVideoBitrate3,
    Description = codecConfigVideoName3 + "_" + codecConfigVideoBitrate3
});
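
Since the three configurations only differ in width and bitrate, you can also build the ladder in a loop. The following sketch creates the same three configurations as the code above and collects them in a list:

// Sketch: build the same H264 ladder from a list of (width, bitrate) pairs
var renditions = new List<(int Width, int Bitrate)>
{
    (1024, 1500000),
    (768, 1000000),
    (640, 750000)
};

var videoCodecConfigurations = new List<H264VideoConfiguration>();
foreach (var (width, bitrate) in renditions)
{
    var name = $"Getting Started H264 Codec Config {width}x{bitrate}";
    videoCodecConfigurations.Add(await bitmovinApi.Encoding.Configurations.Video.H264.CreateAsync(new H264VideoConfiguration
    {
        Name = name,
        PresetConfiguration = PresetConfiguration.VOD_STANDARD,
        Width = width,
        Bitrate = bitrate,
        Description = name
    }));
}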

Audio Codec Configurations

For the audio codec configuration, we create one AAC configuration.
We also support many more audio codecs including Opus, Vorbis, AC3, E-AC3 and MP2.

var codecConfigAudio = await bitmovinApi.Encoding.Configurations.Audio.Aac.CreateAsync(new AacAudioConfiguration
{
    Name = "Getting Started Audio Codec Config",
    Bitrate = 128000
});

Reusing Codec Configurations

Like inputs and outputs, codec configurations can also be reused.
To use existing video or audio codec configurations, load them by their id.

var codecConfigVideo = await bitmovinApi.Encoding.Configurations.Video.H264.GetAsync("<H264_CC_ID>");
var codecConfigAudio = await bitmovinApi.Encoding.Configurations.Audio.Aac.GetAsync("<AAC_CC_ID>");

Step 6: Create & Configure an Encoding

Create Encoding

An encoding is a collection of resources (inputs, outputs, codec configurations, etc.) mapped to each other.

The cloud region defines in which cloud and region your encoding will be started.
If you don't set it, our encoder takes the region closest to your input and output.

Inputs, outputs and codec configurations can be reused among many encodings. Other resources are specific to one encoding, like streams, which we'll create below.

var encoding = await bitmovinApi.Encoding.Encodings.CreateAsync(new Encoding
{
    Name = "Getting Started Encoding",
    CloudRegion = CloudRegion.GOOGLE_EUROPE_WEST_1
});

Streams

A stream maps an audio or video input stream of your input file to one audio or video codec configuration.
The following example uses the selection mode AUTO, which selects the first available video input stream of your input file.

Video Stream

string inputPath = "<INPUT_PATH>";

var videoStreamInput = new StreamInput
{
  InputId = input.Id,
  InputPath = inputPath,
  SelectionMode = StreamSelectionMode.AUTO
};

var videoStream1 = await bitmovinApi.Encoding.Encodings.Streams.CreateAsync(encoding.Id, new Stream
{
  InputStreams = new List<StreamInput>
  {
    videoStreamInput
  },
  CodecConfigId = videoCodecConfiguration1.Id
});

var videoStream2 = await bitmovinApi.Encoding.Encodings.Streams.CreateAsync(encoding.Id, new Stream
{
  InputStreams = new List<StreamInput>
  {
    videoStreamInput
  },
  CodecConfigId = videoCodecConfiguration2.Id
});

var videoStream3 = await bitmovinApi.Encoding.Encodings.Streams.CreateAsync(encoding.Id, new Stream
{
  InputStreams = new List<StreamInput>
  {
    videoStreamInput
  },
  CodecConfigId = videoCodecConfiguration3.Id
});

Audio Stream

var inputStreamAudio = await bitmovinApi.Encoding.Encodings.Streams.CreateAsync(encoding.Id, new Stream
{
    InputStreams = new List<StreamInput>
    {
        new StreamInput
        {
            InputId = input.Id,
            InputPath = inputPath,
            SelectionMode = StreamSelectionMode.AUTO
        }
    },
    CodecConfigId = codecConfigAudio.Id
});
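
If your input file contains more than one audio track, AUTO may not select the track you want. The StreamInput also supports explicit selection modes; the sketch below picks the first audio track via AUDIO_RELATIVE and a position, and could be used in place of the StreamInput above.

// Sketch: explicitly select the first audio track instead of relying on AUTO
var explicitAudioStreamInput = new StreamInput
{
    InputId = input.Id,
    InputPath = inputPath,
    SelectionMode = StreamSelectionMode.AUDIO_RELATIVE,
    Position = 0
};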

Step 7: Create a Muxing

Muxings

A muxing defines which container format will be used for the encoded video or audio files (segmented TS, progressive TS, MP4, FMP4, ...). It requires a stream, an output, and an output path to which the generated segments will be written.

FMP4 muxings are used for DASH manifest representations, TS muxings for HLS playlist representations.

FMP4

var aclEntry = new AclEntry { Permission = AclPermission.PUBLIC_READ };

double segmentLength = 4;
string outputPath = "<OUTPUT_PATH>";
string initSegmentName = "init.mp4";
string segmentNaming = "seg_%number%.m4s";


var videoMuxingOutput1 = new EncodingOutput
{
    Acl = new List<AclEntry> { aclEntry },
    OutputId = outputId,
    OutputPath = outputPath + "/video/1024_1500000/fmp4"
};

var videoMuxing1 = new Fmp4Muxing
{
    Outputs = new List<EncodingOutput> { videoMuxingOutput1 },
    Streams = new List<MuxingStream> { new MuxingStream { StreamId = videoStream1.Id } },
    SegmentLength = segmentLength,
    InitSegmentName = initSegmentName,
    SegmentNaming = segmentNaming
};
videoMuxing1 = await bitmovinApi.Encoding.Encodings.Muxings.Fmp4.CreateAsync(encoding.Id, videoMuxing1);

var videoMuxingOutput2 = new EncodingOutput
{
    Acl = new List<AclEntry> { aclEntry },
    OutputId = outputId,
    OutputPath = outputPath + "/video/768_1000000/fmp4"
};

var videoMuxing2 = new Fmp4Muxing
{
    Outputs = new List<EncodingOutput> { videoMuxingOutput2 },
    Streams = new List<MuxingStream> { new MuxingStream { StreamId = videoStream2.Id } },
    SegmentLength = segmentLength,
    InitSegmentName = initSegmentName,
    SegmentNaming = segmentNaming
};
videoMuxing2 = await bitmovinApi.Encoding.Encodings.Muxings.Fmp4.CreateAsync(encoding.Id, videoMuxing2);

var videoMuxingOutput3 = new EncodingOutput
{
    Acl = new List<AclEntry> { aclEntry },
    OutputId = outputId,
    OutputPath = outputPath + "/video/640_750000/fmp4"
};

var videoMuxing3 = new Fmp4Muxing
{
    Outputs = new List<EncodingOutput> { videoMuxingOutput3 },
    Streams = new List<MuxingStream> { new MuxingStream { StreamId = videoStream3.Id } },
    SegmentLength = segmentLength,
    InitSegmentName = initSegmentName,
    SegmentNaming = segmentNaming
};
videoMuxing3 = await bitmovinApi.Encoding.Encodings.Muxings.Fmp4.CreateAsync(encoding.Id, videoMuxing3);

Audio Muxings

var audioMuxingOutput = new EncodingOutput
{
    Acl = new List<AclEntry> { aclEntry },
    OutputId = outputId,
    OutputPath = outputPath + "/audio/128000/fmp4"
};

var audioMuxing = new Fmp4Muxing
{
    Outputs = new List<EncodingOutput> { audioMuxingOutput },
    Streams = new List<MuxingStream> { new MuxingStream { StreamId = inputStreamAudio.Id } },
    SegmentLength = segmentLength,
    InitSegmentName = initSegmentName,
    SegmentNaming = segmentNaming
};
audioMuxing = await bitmovinApi.Encoding.Encodings.Muxings.Fmp4.CreateAsync(encoding.Id, audioMuxing);

Hint: To optimize your content for a specific use case, you can provide a custom segment length. You can also customize the naming pattern for the resulting segments and the initialization segment.

Step 8: Create a Manifest

DASH & HLS Manifests

It's recommended to use Default Manifests, one each for DASH (API-Reference) and HLS (API-Reference). They leave it to the encoder to determine how best to generate the manifest based on the resources created by the encoding. This greatly simplifies the code for simple scenarios like the one in this guide.

Create a DASH Manifest

var encodingOutput = new EncodingOutput()
{
    OutputPath = outputPath,
    OutputId = outputId,
    Acl = new List<AclEntry>() { new AclEntry()
      {
          Permission = AclPermission.PUBLIC_READ
      }
    }
};

var dashManifest = new DashManifestDefault()
{
    EncodingId = encoding.Id,
    ManifestName = "stream.mpd",
    Version = DashManifestDefaultVersion.V1,
    Outputs = new List<EncodingOutput>() { encodingOutput }
};

var dashDefaultManifest = await bitmovinApi.Encoding.Manifests.Dash.Default.CreateAsync(dashManifest);

Create an HLS Manifest

var hlsManifest = new HlsManifestDefault()
{
    EncodingId = encoding.Id,
    Name = "stream.m3u8",
    Version = HlsManifestDefaultVersion.V1,
    Outputs = new List<EncodingOutput>() { encodingOutput }
};

var hlsDefaultManifest = await bitmovinApi.Encoding.Manifests.Hls.Default.CreateAsync(hlsManifest);

Step 9: Start an Encoding

Let's recap - We have defined:

  • an input, specifying the input file host
  • an output, specifying where the manifest and the encoded files will be written to
  • codec configurations, specifying how the input file will be encoded
  • streams, specifying which input stream will be encoded using what codec configuration
  • muxings, specifying the containerization of the streams
  • manifests, to play the encoded content using the HLS or DASH streaming format.

Now we can start the encoding with one API call, including additional configuration so the DASH and HLS Default-Manifests are created as part of the encoding process too:

var startEncodingRequest = new StartEncodingRequest()
{
    ManifestGenerator = ManifestGenerator.V2,
    VodDashManifests = new List<ManifestResource>() { new ManifestResource() {
         ManifestId = dashDefaultManifest.Id
      }
    },
    VodHlsManifests = new List<ManifestResource>() { new ManifestResource() {
         ManifestId = hlsDefaultManifest.Id
      }
    }
};
await bitmovinApi.Encoding.Encodings.StartAsync(encoding.Id, startEncodingRequest);

What happens next: After the encoding has been started successfully, it will change its state from CREATED to STARTED, and shortly after that to QUEUED. It will be RUNNING as soon as the input file starts to be downloaded and processed. Eventually, FINISHED indicates a successful encoding and upload of the content to the provided output destination. More details about each encoding status can be found here.
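
If your code should wait until the encoding reaches a final state, you can poll the encoding status. The following is a minimal sketch; it assumes the status resource is exposed as Encoding.Encodings.Status.GetAsync in your SDK version, so check the API reference for the exact method and model names.

// Sketch: poll the encoding status until it is FINISHED or ERROR.
// Assumption: the status resource is exposed as Encoding.Encodings.Status.GetAsync.
while (true)
{
    var statusResponse = await bitmovinApi.Encoding.Encodings.Status.GetAsync(encoding.Id);
    Console.WriteLine($"Encoding status: {statusResponse.Status}");

    if (statusResponse.Status == Status.FINISHED || statusResponse.Status == Status.ERROR)
    {
        break;
    }

    await Task.Delay(TimeSpan.FromSeconds(5));
}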

Step 10: Final Review

Congratulations! You successfully started your first encoding :) and this is only the beginning. The Bitmovin Encoding API is extremely flexible, allows many customizations, and provides access to the latest codecs, video streaming optimizations, and technologies out there. Have a look at the API Reference to learn more.

More Examples

You can find many more fully functional code examples in our GitHub repository.