Find the Perfect Latency for Your SRT Setup
If your SRT stream looks unstable, the first thing to check is often not bitrate but latency. When the latency value is too low for the real network path, packets arrive too late to be recovered, and the result shows up as blocky video, pauses, or a feed that feels unreliable even though the encoder itself is fine.
This guide shows how to spot an SRT latency problem, how to use RTT to calculate a strong starting value, and where to apply that value in OBS or vMix. The goal is not to guess the lowest possible latency. The goal is to find the lowest latency that still keeps the stream stable.
See the two signals that drive SRT latency decisions
Start with the live relationship between incoming bitrate and network RTT before you move into settings and formulas. These two signals usually tell you whether the current path still has enough timing headroom or whether your SRT latency budget is about to become too aggressive. When RTT rises, the same latency value can suddenly stop giving the stream enough room to recover late packets. That is why latency should be chosen against the real path, not against an ideal lab connection.
What SRT latency actually does
SRT latency is the amount of time the receiver keeps in reserve so late or lost packets still have a chance to arrive and be recovered. If the value is too low, the stream does not have enough buffer to survive real packet timing variation. If the value is too high, the stream stays stable but adds more delay than necessary.
In practice, good SRT tuning is about finding the point where the stream remains clean under the actual network conditions you have, not the conditions you wish you had.
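To see why latency is usually expressed as a multiple of RTT: recovering one lost packet costs roughly one extra round trip, because the receiver has to notice the gap and send a loss report before the sender can retransmit. A simplified model of that cycle, with illustrative numbers and the assumption of symmetric one-way delay:

```python
# Simplified recovery model (illustrative numbers, assuming the one-way
# delay is RTT/2 in each direction and ignoring processing time).
rtt_ms = 80
loss_report_ms = rtt_ms / 2  # receiver notices the gap, report reaches sender
resend_ms = rtt_ms / 2       # retransmitted packet reaches the receiver
one_retry_ms = loss_report_ms + resend_ms  # roughly one full RTT per attempt
three_attempts_ms = 3 * one_retry_ms       # headroom for three recovery tries
```

With an 80 ms RTT, each recovery attempt costs about 80 ms, so budgeting for three attempts already puts you at 240 ms, which is the intuition behind multiplying RTT rather than picking a latency value in isolation.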
Spot when latency is too low
Teams often describe low SRT latency as a “quality issue” or “random buffering problem,” but the symptoms are usually more specific:
- blocky or damaged video frames
- short freezes or repeated pauses
- audio/video instability under otherwise normal load
If you are watching the stream in production, these are your first visual clues. If you also monitor statistics, you can confirm the problem more precisely.
In Callaba, the first place to look is the SRT statistics view.

Packet drops visible in the SRT graph.

Detailed stream-level SRT statistics.
The most useful warning signs are:
- Packet drops: a rising count means packets are being lost or arriving too late to be used.
- Arrived too late: packets are reaching the destination, but not in time for recovery.
- Re-sent packets: a high count means the path needs recovery more often than your current latency budget comfortably allows.
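If you sample those counters periodically, the same warning signs can be checked programmatically. A minimal sketch, assuming per-interval counter deltas; the function name and the 0.5% and 5% thresholds are illustrative starting points, not values defined by SRT:

```python
def latency_warnings(dropped, arrived_late, resent, sent):
    """Flag SRT warning signs from counter deltas over one sample interval.

    Thresholds are illustrative starting points, not SRT-defined limits.
    """
    warnings = []
    if sent and dropped / sent > 0.005:
        warnings.append("packet drops rising: latency budget likely too small")
    if arrived_late > 0:
        warnings.append("packets arriving too late to be recovered")
    if sent and resent / sent > 0.05:
        warnings.append("heavy retransmission: path needs more recovery headroom")
    return warnings
```

A clean interval returns an empty list; an interval with drops, late arrivals, and heavy retransmission returns all three warnings at once, which is a strong hint that the latency budget is too aggressive for the path.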
Use RTT to calculate a good starting latency
The simplest and most practical place to start is RTT, or Round Trip Time. RTT measures how long it takes for a packet to travel from sender to receiver and back again.

A strong default rule is:
Latency = RTT Ă— 3
This gives the stream enough room to recover late packets on a path that is not perfectly clean. It is not the only possible formula, but it is a very good operational starting point because it is simple, defensible, and usually much better than guessing.
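The rule can be sketched in a few lines of Python. The 120 ms floor here mirrors libsrt's default latency setting; the function name and both defaults are illustrative choices, not part of the SRT protocol:

```python
def starting_latency_ms(rtt_ms, multiplier=3, floor_ms=120):
    """Starting SRT latency from measured RTT, using the RTT x 3 rule.

    floor_ms=120 mirrors libsrt's default latency; going below it
    rarely helps in practice. Both defaults are illustrative.
    """
    return max(round(rtt_ms * multiplier), floor_ms)
```

For example, a 60 ms RTT path suggests starting around 180 ms of latency, while a very clean 20 ms path still stays at the 120 ms floor.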
When to go higher than RTT Ă— 3
Use RTT Ă— 3 as your first value, then watch the stream and the stats. If you still see packet drops, late packets, or repeated retransmissions under real load, move higher. This is common when:
- the stream crosses long-distance public internet routes
- the network has unstable jitter, not just stable delay
- you are working from a venue, hotel, or mobile uplink
- the contribution feed is too important to optimize for “minimum delay at any cost”
If the stream is stable and the latency feels unnecessarily high, you can tune downward carefully in small steps. The safe direction is always to prove stability first, then reduce delay.
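One careful way to tune downward is to reduce in small fixed steps, holding each value under real load before reducing again. A sketch of that schedule, where the 20% step size and 120 ms floor are illustrative choices:

```python
def tune_down_steps(current_ms, floor_ms=120, step_pct=0.2):
    """Yield candidate latency values for careful downward tuning.

    Hold each value under real load; step back up as soon as drops,
    late packets, or retransmissions reappear in the statistics.
    """
    value = current_ms
    while value > floor_ms:
        value = max(int(value * (1 - step_pct)), floor_ms)
        yield value
```

Starting from a stable 300 ms, this yields 240, 192, 153, 122, and finally the 120 ms floor; stop stepping down at the first value where the statistics degrade.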
Apply the latency value in OBS
Once you have the latency value you want to test, apply it in the sender settings. In OBS Studio, open the SRT output configuration and set the latency value there.

After changing the value, start the stream again and check whether the visible problems are reduced and whether packet-related warnings in the SRT statistics calm down.
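In many senders, including OBS's SRT output, the destination is an srt:// URL and latency travels as a query parameter. A sketch of building one, assuming the libsrt URI convention where latency is given in milliseconds; note that some tools, notably FFmpeg's srt protocol option, expect microseconds instead, so check your sender's documentation. The host and port below are placeholders:

```python
def srt_sender_url(host, port, latency_ms, mode="caller"):
    """Build an srt:// URL with an explicit latency in milliseconds.

    Follows the libsrt URI query convention; host and port are
    placeholders for your own ingest endpoint.
    """
    return f"srt://{host}:{port}?mode={mode}&latency={latency_ms}"
```

For example, srt_sender_url("ingest.example.com", 9000, 240) produces "srt://ingest.example.com:9000?mode=caller&latency=240".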
Apply the latency value in vMix
If you send from vMix, apply the same tested value in the SRT sender configuration.

Treat latency as part of the sender configuration, not as an abstract number in a dashboard. The correct value only matters when the live sender actually uses it.
What to check after you change latency
- Is the visible stream cleaner than before?
- Is the packet drop count flat, or at least climbing more slowly than before?
- Did the “arrived too late” or retransmission indicators settle down?
- Is the added delay still acceptable for the workflow?
If the stream still fails with a more realistic latency value, the bottleneck may no longer be latency alone. At that point, check bitrate, network congestion, firewall behavior, or the sender configuration itself.
Common mistakes
Choosing the lowest possible latency because it sounds better
In production, the best latency is not the smallest number. It is the smallest number that still keeps the stream stable on the real network path.
Changing latency without checking RTT first
RTT gives you a grounded starting point. Without it, teams often tune by intuition and spend more time oscillating between “too low” and “too high.”
Assuming every issue is a bitrate problem
Bitrate problems and latency problems can look similar on screen. If the network path is inconsistent, latency is often the more important control to validate first.