Callaba Engine documentation

Callaba Engine is the API you use to run a live video workflow end to end: accept contribution through SRT servers and RTMP servers, work with NDI discovered devices and NDI adapters, start Restreams and Recordings, publish to Web players, create Video calls, and attach Storages for durable output.

The fastest way to use these docs is to follow the path your operators take during setup and incident response: where does the signal enter, how does it move, what state is it in now, where does it go next, and what must be saved? The live telemetry block below matters because contribution quality is not static: SRT bitrate, loss, buffering, and timing are the signals that tell you whether a problem starts at the sender, on the network path, or inside the receiving workflow.
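As a sketch of that triage, the function below maps the four signals to a first place to look. The metric names and thresholds are illustrative assumptions for this example, not fields or values from the Callaba Engine API.

```python
def classify_srt_health(stats: dict) -> str:
    """Map SRT telemetry to a first place to look.

    All field names and thresholds here are illustrative assumptions,
    not part of the Callaba Engine API.
    """
    # Sender problem: the encoder is not producing the bitrate it was asked for.
    if stats["send_bitrate_mbps"] < 0.5 * stats["target_bitrate_mbps"]:
        return "sender"
    # Network problem: sustained loss, or round-trip time far above baseline.
    if stats["loss_percent"] > 2.0 or stats["rtt_ms"] > 3 * stats["baseline_rtt_ms"]:
        return "network"
    # Receiver problem: the receive buffer keeps filling, so the
    # downstream workflow is not draining packets fast enough.
    if stats["recv_buffer_ms"] > 0.8 * stats["latency_ms"]:
        return "receiver"
    return "healthy"
```

A monitoring loop could feed each telemetry sample through a check like this and alert only when the answer changes, rather than on every noisy reading.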

Live statistics

See live SRT stats as a moving chart

This demo shows the kind of live statistics you can watch during a real contribution: bitrate, buffer delay, packet flow, receive capacity, and active streams. It is connected to the public demo endpoint at demo.callaba.io and updates from the same live event stream used by the product.

  • Connection: Connecting
  • Last update: Waiting for the first packet
  • Active streams
  • Live bitrate (Mbps): how much video data is currently arriving into the SRT server in real time
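Numbers in a view like this typically arrive as a stream of small events. As an assumption about shape only (the actual demo.callaba.io event format is not documented on this page), the sketch below parses server-sent-event style text into stat updates:

```python
import json

def parse_sse_stats(raw: str) -> list:
    """Parse server-sent-event style text into stat dictionaries.

    The payload shape (JSON carrying "bitrate_mbps" and so on) is an
    assumption for illustration, not the documented Callaba event format.
    """
    events = []
    # SSE separates events with a blank line; each event carries "data:" lines.
    for block in raw.strip().split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                events.append(json.loads(line[len("data:"):].strip()))
    return events
```

A chart like the one above would append each parsed event to a rolling window and redraw, which is why the view can show trends rather than a single status light.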

Follow the workflow

Most integrations touch several modules. Start at the first boundary that already exists in your system, then move downstream.

  1. Ingest: provision entry points for field encoders and partner feeds with SRT servers or RTMP servers. If the source already lives on an NDI network, begin with NDI discovered devices.
  2. Route or transform: use SRT routes when you need controlled SRT forwarding; use Restreams when you need republishing, protocol bridging, transcoding, overlays, or fan-out to multiple destinations. Use NDI adapters when the workflow must re-expose a managed source back into an NDI production environment.
  3. Persist or deliver: use Recordings to create files from live sources and Storages to define where those files live. Use Web players and Web player groups when the output is browser playback. Use Video calls when viewers are participants who need to join a live room rather than watch a stream.
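Strung together, those three steps form a provisioning sequence against the API. The client, routes, and request bodies below are placeholders that show only the order of operations, not the real Callaba Engine endpoints:

```python
class StubClient:
    """Records API calls in order; a stand-in for a hypothetical HTTP client."""
    def __init__(self):
        self.calls = []

    def post(self, path: str, body: dict) -> dict:
        self.calls.append(path)
        # Pretend the server assigned an id to the created resource.
        return {"id": f"{path.strip('/')}-{len(self.calls)}", **body}

def provision_channel(api) -> list:
    # 1. Ingest: a boundary resource for the field encoder to target.
    srt = api.post("/srt-servers", {"name": "field-ingest", "port": 9000})
    # 2. Route or transform: republish the ingest to an external target.
    api.post("/restreams", {"source": srt["id"],
                            "destination": "rtmp://example.invalid/live"})
    # 3. Persist: record the same source to an attached storage.
    api.post("/recordings", {"source": srt["id"], "storage": "default"})
    return list(api.calls)
```

The point of the ordering is dependency direction: the ingest resource must exist before anything downstream can reference it as a source.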

Choose the right first module quickly

If you are onboarding a new system, the usual sequence is installation or instant launch, authorization, then the module at your workflow boundary.

How the module model maps to production work

The API is easier to understand if you treat modules as different kinds of operational objects rather than as one flat resource list.

  • Boundary resources define where media can enter or exit. Examples: SRT servers and RTMP servers.
  • Live jobs do work over time and usually have runtime state worth monitoring. Examples: Restreams and Recordings.
  • Observed state represents what the system sees rather than what you first create. Example: NDI discovered devices.
  • Bridge layers connect two production domains. Example: NDI adapters for handing managed sources back to NDI.
  • Environment settings control platform behavior across workflows. Example: NDI configuration.
  • Delivery surfaces expose output to people or applications. Examples: Web players, Web player groups, and Video calls.
  • Persistence targets determine where recorded assets are stored and retrieved. Example: Storages.

This distinction matters in automation. You typically provision boundary resources once, operate live jobs per event or channel, read observed state continuously, and wire delivery and storage according to your distribution and retention requirements.
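One way to encode that split in automation is a lookup from module to operating cadence. The module names come from this page, but the mapping itself is an illustrative convention, not an API field:

```python
# Illustrative convention only: which automation cadence fits each module kind.
MODULE_CADENCE = {
    "SRT servers": "provision-once",          # boundary resource
    "RTMP servers": "provision-once",         # boundary resource
    "Restreams": "per-event",                 # live job
    "Recordings": "per-event",                # live job
    "NDI discovered devices": "read-continuously",  # observed state
    "Web players": "wire-to-distribution",    # delivery surface
    "Storages": "wire-to-retention",          # persistence target
}

def cadence(module: str) -> str:
    """Return the assumed automation cadence for a module name."""
    return MODULE_CADENCE.get(module, "unknown")
```

A deployment script can branch on this: create `provision-once` resources during environment setup, and drive `per-event` jobs from the event scheduler.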

Why the live telemetry block matters

For SRT contribution, a green status alone is not enough. Operators need to see whether bitrate is collapsing, receive buffer pressure is rising, packet delivery is uneven, or timing is drifting. Those signals tell you whether to tune the sender, inspect the network path, increase resiliency, or protect downstream Restreams, Recordings, and Web players from an unstable source.

Use the telemetry view as an early warning layer: it helps teams separate transport problems from application problems before viewers notice or archived files are affected.