AI at Bitmovin
Bitmovin integrates AI across the streaming workflow — from understanding what's in your video, to answering questions about your infrastructure, to playing back streams right inside your AI tools.
This page gives you a single place to discover what's available today.
AI Scene Analysis
AI Scene Analysis (AISA) is Bitmovin's flagship AI product. It transforms video content into rich, structured metadata by analyzing every scene for context, themes, and visual characteristics — making every content minute discoverable, reusable, and advertisable.
AISA runs as part of your encoding workflow and produces per-scene metadata including:
- Scene boundaries with start/end timestamps
- Summaries and descriptions of each scene
- Visual elements — objects, brands, settings, locations, characters
- Mood and atmosphere — lighting, time of day, weather, sentiment
- IAB taxonomy classifications and keywords for contextual ad targeting
- Sensitive topic flags and content ratings
- Multi-language support — analyze in any language, output in multiple languages
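As a rough sketch of how this metadata might be consumed downstream, the snippet below filters scenes by their sensitive-topic flags before ad placement. The field names here are illustrative, not the actual AISA output schema:

```python
# Illustrative sketch of consuming per-scene metadata like AISA produces.
# Field names are hypothetical, not the actual AISA schema.
scenes = [
    {
        "start": 0.0, "end": 42.5,
        "summary": "Two characters argue in a dim kitchen.",
        "mood": "tense", "iab_categories": ["IAB1-5"],
        "sensitive_topics": [],
    },
    {
        "start": 42.5, "end": 97.0,
        "summary": "Sunny car chase through a city center.",
        "mood": "exciting", "iab_categories": ["IAB2"],
        "sensitive_topics": ["violence"],
    },
]

def brand_safe(scene: dict) -> bool:
    """Treat a scene as ad-eligible only if it carries no sensitive-topic flags."""
    return not scene["sensitive_topics"]

eligible = [s for s in scenes if brand_safe(s)]
print([s["summary"] for s in eligible])
```

The same structure feeds search and recommendation use cases: index the summaries and keywords, filter on the flags.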
Use cases
- Intelligent ad placement — automatic SCTE marker insertion and keyframe placement at natural scene boundaries, with contextual ad matching via IAB taxonomies. Integrates with AWS MediaTailor, Broadpeak, and SpringServe.
- Content discovery — feed scene-level metadata into recommendation engines and search.
- Operational automation — highlights extraction, metadata generation, subtitle and caption workflows.
- Player enhancements — contextual overlays and interactive features powered by scene metadata.
- Performance analytics — correlate viewer engagement with scene context to identify what works.
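The ad-placement idea above can be sketched in a few lines: given scene boundary timestamps, snap each desired ad position to the nearest scene change instead of cutting mid-scene. The boundary values and target interval below are made up for illustration:

```python
# Hedged sketch: snap ad cue points to natural scene boundaries.
# Boundary times (seconds) are illustrative, as AISA might report them.
boundaries = [0.0, 42.5, 97.0, 151.2, 203.8, 260.0]

def nearest_boundary(target: float, boundaries: list[float]) -> float:
    """Snap a desired ad position to the closest scene boundary."""
    return min(boundaries, key=lambda b: abs(b - target))

# Place an ad roughly every 120 seconds, snapped to scene changes.
cues = [nearest_boundary(t, boundaries) for t in (120.0, 240.0)]
print(cues)  # → [97.0, 260.0]
```

In a real workflow these cue points would become SCTE markers and keyframe positions in the encoding output.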
To get started, see the full AI Scene Analysis documentation.
Bitmovin Assistant
The Bitmovin Assistant is a chat-based AI assistant built into the Bitmovin Dashboard. It gives you a single conversation surface for navigating the product, inspecting your encodings, searching documentation, finding SDK examples, and querying your Observability data.
No setup required — sign in and start asking questions at dashboard.bitmovin.com/assistant.
MCP servers — bring Bitmovin into your own AI tools
We expose our agents over the Model Context Protocol (MCP), so you can connect them to Claude, ChatGPT, Cursor, or any other MCP-compatible client and use Bitmovin's capabilities alongside your other tooling.
Public MCP servers
Our production, public-facing MCP endpoints live at agentic.bitmovin.com. The Documentation and Player servers require no account; the Encoding server needs your Bitmovin API key.
| Server | Endpoint | Auth | Purpose |
|---|---|---|---|
| Documentation MCP | https://agentic.bitmovin.com/documentation/mcp | None | Ask questions about Bitmovin products, APIs, SDKs; search SDK example repos. |
| Encoding MCP | https://agentic.bitmovin.com/encoding/mcp | x-api-key header (OAuth coming soon) | Inspect and debug your encodings. Ask: "list my recent encodings", "why did encoding X fail". |
| Player MCP | https://agentic.bitmovin.com/player/mcp | None | Render a Bitmovin Player inline in your chat client and play any HLS/DASH/MP4 stream. |
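Under the hood, MCP clients talk to these endpoints with JSON-RPC over HTTP. As a sketch only (the exact handshake is defined by the MCP specification, and a real client performs an initialize exchange first), a tools/list request to the Documentation server could be assembled like this:

```python
import json

# Sketch of the JSON-RPC payload an MCP client sends to list available tools.
# We only build the request here; a real client also performs the MCP
# initialize handshake before calling tools/list.
endpoint = "https://agentic.bitmovin.com/documentation/mcp"
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}
headers = {"Content-Type": "application/json"}
# For the Encoding MCP you would additionally send your API key:
# headers["x-api-key"] = "<YOUR_BITMOVIN_API_KEY>"
body = json.dumps(payload)
print(body)
```

In practice your MCP client handles all of this for you; the snippet is only meant to demystify what travels over the wire.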
Product-specific MCP servers (hosted separately) are listed under Related MCP servers below.
Quick start
The fastest way to get started is to add the Documentation MCP to your preferred client.
Claude Code

```shell
claude mcp add bitmovin-docs --transport http https://agentic.bitmovin.com/documentation/mcp
```

Claude Desktop / Cursor / Windsurf
Add to your MCP config (claude_desktop_config.json, Cursor MCP settings, etc.):
```json
{
  "mcpServers": {
    "bitmovin-docs": {
      "url": "https://agentic.bitmovin.com/documentation/mcp"
    }
  }
}
```

ChatGPT
- Open chatgpt.com and go to Settings → Connectors.
- Click Add connector and select MCP.
- Enter the URL: https://agentic.bitmovin.com/documentation/mcp
- Click Save.
The Bitmovin documentation tools will now appear in any new conversation.
Then ask your assistant things like:
- "How do I configure Widevine DRM for the Bitmovin Web Player?"
- "Show me an example of creating a live encoding with the Java SDK."
- "What's the difference between CMAF and fragmented MP4 in our encoding output?"
Related MCP servers
Some Bitmovin products ship their own MCP servers, hosted separately from agentic.bitmovin.com:
| Server | Endpoint | Auth | Docs |
|---|---|---|---|
| Observability MCP | https://analytics.mcp.bitmovin.com/ | x-api-key + x-tenant-org-id | Observability MCP Server |
| Stream Lab MCP | https://streamlab.mcp.bitmovin.com/ | x-api-key (+ optional x-tenant-org-id) | Stream Lab MCP Server |
More servers are on the roadmap — the single URL to bookmark is agentic.bitmovin.com, where future agents will appear.
Under the hood
- AI Scene Analysis runs during encoding and produces structured metadata — no separate service to manage.
- The MCP servers do not ship any AI model. They are tool servers — your chosen MCP client's model does the reasoning. The Observability and Stream Lab MCPs are thin wrappers over the Bitmovin REST API; the agentic MCPs at agentic.bitmovin.com wrap agents that orchestrate retrieval, docs search, and product calls.
- The Bitmovin Assistant uses its own LLM behind the scenes — see Bitmovin Assistant for details.
Data handling
- AI Scene Analysis processes your video content during encoding. The generated metadata is stored with your encoding output and is accessible via the Bitmovin API.
- The Documentation MCP uses only public Bitmovin documentation and public SDK example repositories.
- The Encoding, Observability, and Stream Lab MCPs scope every call to the API key you provide — they can only see data your key already has access to.
- The Player MCP is stateless — it accepts a stream URL and player config from the chat client and returns a rendered player; it does not persist your streams.