Low latency has been a hot topic in video streaming for a while now. It’s the trendy keyword you hear at every trade show throughout the year, and it consistently ranks high in our annual Video Developer Report, both as one of the biggest challenges companies struggle to solve and as one of the features they are most interested in deploying.
However, despite the huge amount of conversation low latency generates, it’s also one of the most difficult terms to define. Depending on the use case, the required playback delay can range from a few hundred milliseconds (ultra-low latency) to 1-5 seconds (low latency), so your perception of what counts as low latency can differ significantly from someone else’s. Additionally, low latency is limiting because there is a high probability you’re sacrificing quality for a fast video startup time. You are also likely to pay higher prices and work with multiple vendors to get the specific hardware or software you need for each step of the video workflow.
Furthermore, not every video streaming service needs low latency, even though it’s constantly requested by everyone from startups to enterprise-level businesses. A better question may not be “How do I minimize my live stream’s delay?” but “What is the target latency I want my audience to have?” Target latency is a feature that isn’t mentioned often, and one we will explore in this post, as it can make a world of difference to the playback experience you’re offering your viewers.
Back to basics – What are Low and Target Latency, and how are they achieved?
If you are unfamiliar with low latency, it essentially refers to minimizing the delay between a live on-site production of an event and a specific viewer watching it over the Internet. Standard HLS and DASH streams have a delay of 8 to 30 seconds, depending on stream settings and a particular viewer’s streaming environment (e.g., the protocol used, buffer size, bandwidth, device, and location). For a stream to be considered low latency, it can’t have more than 5 seconds of broadcast delay, with some workflows needing as little as a few hundred milliseconds (ultra-low latency), as stated above. There are several ways to achieve this very low broadcast delay, each with its benefits and costs. However, none of the methods available on the market today are standardized, and they all require every piece of your video supply chain, from the live encoder and packager to the CDN and player, to support the chosen low-latency streaming technology. This is important, as it drives up costs and limits your flexibility in selecting a best-of-breed technology stack.
Target latency, on the other hand, is a predefined playback delay that lets the entire audience watch the same stream simultaneously. Playback is no longer at the mercy of the differences between individual viewers’ circumstances, meaning that everyone in that group experiences the same live moment at the same time, or very close to it. This stream synchronization is achieved by choosing a specific buffer size across the target audience and managing playback to a target delay, while catering to the viewers who represent the lowest common denominator (e.g., the slowest to fill their buffer). You can set the target latency directly in the Bitmovin Player using the targetLatency property, enabling you to design the user experience exactly as you want.
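To give a concrete picture, here is a minimal sketch of what setting a target latency client-side could look like with the Bitmovin Web Player. The exact configuration shape (live.lowLatency) and the runtime call (player.lowlatency.setTargetLatency) are assumptions based on the targetLatency property mentioned above and may vary between SDK versions, so treat this as a sketch and check the Player documentation rather than copy-pasting it.

```typescript
import { Player, PlayerConfig } from 'bitmovin-player';

// Sketch only: the `live.lowLatency` config shape and the `lowlatency` API
// namespace are assumptions; consult the Bitmovin Player docs for your SDK version.
const config = {
  key: 'YOUR-PLAYER-KEY',
  live: {
    lowLatency: {
      // Hold every viewer roughly 3 seconds behind the live edge.
      targetLatency: 3,
    },
  },
} as PlayerConfig;

const container = document.getElementById('player') as HTMLElement;
const player = new Player(container, config);

player.load({ hls: 'https://example.com/live/stream.m3u8' }).then(() => {
  // The target can also be adjusted at runtime, for example to relax it
  // for viewers on unstable connections (assumed API, see note above).
  player.lowlatency.setTargetLatency(3);
});
```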
How do both affect the viewer experience?
The benefits of low latency revolve around getting content to viewers fast, at speeds similar to traditional broadcast, which helps them feel more connected to the live event. Live sports is an excellent example of where low latency plays a prominent role in the viewing experience. It helps combat the “Noisy Neighbor Effect,” where viewers see notifications or hear cheers from the neighbors before the moment appears on their own screen. The same applies to real-time betting, which requires an ultra-low-latency stream so results are available in real time. Low latency is also critical for live seminars, esports, fitness classes, and many other interactive use cases where you need to keep your audience engaged and up-to-date with what’s happening at that moment.
The biggest downside of the available low latency solutions is that they do not permit players to buffer enough content, which leads to playback interruptions when streaming conditions are less than ideal (e.g., poor Wi-Fi, an ISP problem, limited device performance). A player holding only two or three seconds of latency can never buffer more than a couple of seconds ahead, so even a brief network hiccup can empty the buffer. This quickly leads to slow video starts, rebuffering, decreased stream quality, and other performance issues, creating a terrible experience for the user.
Any video streaming service can use target latency in a way that minimizes the downsides to the viewer experience. You can set a delay that gives the entire audience, or a predefined segment of it, a consistent experience, which means better quality of experience thanks to increased stream stability and control during playback. For example, if you offer a second-screen experience such as a chat feature within the live event, target latency keeps everyone at the same live point, so the shared experience still feels live. The only potential downside is that viewers using different video streaming services may still be at different live points than their neighbors.
What does this mean for a business’s bottom line?
Pricing is one of the top concerns when evaluating what is best for your business. From each part of your setup to the encoding and bandwidth requirements, low-latency workflows have the potential to be more expensive. This is because every component of your video supply chain must support the low-latency streaming technology you choose, and you may need to support multiple technologies if you’re offering low-latency streaming across different platforms (e.g., iOS and Android). Due to this complexity, it can take numerous vendors and a lot of integration work to meet your low-latency needs. That is a fundamental challenge, as high costs inevitably limit how far you can take these capabilities, especially in tough economic times.
Target latency, on the other hand, requires only client-side software changes, so implementation and operational costs are relatively low, as you won’t need to buy and integrate specialized components.
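To make the “client-side only” point concrete, the sketch below shows the general technique a player can use to hold a stream at a target latency: measure how far playback is behind the live edge and gently speed up or slow down playback to converge on the target. The LivePlayer interface and the thresholds are hypothetical and purely illustrative; a commercial player such as Bitmovin’s implements this logic for you.

```typescript
// Hypothetical minimal player interface; real SDKs expose equivalents
// of these values (live edge position, current playback time, playback rate).
interface LivePlayer {
  getLiveEdge(): number;      // seconds, position of the live edge
  getCurrentTime(): number;   // seconds, current playback position
  setPlaybackRate(rate: number): void;
}

// Nudge playback toward a shared target latency. All thresholds are
// illustrative assumptions, not values from any specific SDK.
function holdTargetLatency(player: LivePlayer, targetLatencySec: number): void {
  const checkIntervalMs = 1000;
  const toleranceSec = 0.5; // acceptable drift before we intervene

  setInterval(() => {
    const currentLatency = player.getLiveEdge() - player.getCurrentTime();
    const drift = currentLatency - targetLatencySec;

    if (drift > toleranceSec) {
      // We have fallen behind the target: play slightly faster to catch up.
      player.setPlaybackRate(1.05);
    } else if (drift < -toleranceSec) {
      // We are too close to the live edge: slow down to rebuild the gap.
      player.setPlaybackRate(0.95);
    } else {
      player.setPlaybackRate(1.0);
    }
  }, checkIntervalMs);
}

// Example: keep every viewer roughly 5 seconds behind the live edge.
// holdTargetLatency(myPlayer, 5);
```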
Wrapping up
Reduced latency of 8-10 seconds is already achievable for most video streaming services today using the standardized HLS and DASH protocols, which support a much broader range of devices than (ultra) low latency solutions do. Video streaming services should carefully consider the real-world pros and cons of (ultra) low latency vs. target latency solutions as they continue to push the limits in delivering the best viewer experience to their audiences.