Video latency, together with audio latency, is one of the key metrics in video conferencing applications. On a video call, we want communication to be as seamless as possible. However, because data has to pass through multiple stages (encoding, the network, decoding, and sometimes extra processing such as adding a background or enhancing the video), truly instant communication is impossible. As a result, engineers spend a lot of time trying to minimize the impact these stages have on users and the quality of their experience.
In this article I will show you how we acquire real-life end-to-end latency data from our tests and demonstrate some typical behaviors we have encountered during our years of testing video conferencing and communication applications under different network conditions.
How do we measure video latency?
Our approach to measuring video latency is fully end-to-end, which makes the accuracy of the results hard to dispute. The main idea is that we place both (or all) participants next to each other and film them with an external camera, while special recognizable color markers change every second; we then analyze these markers in post-processing of the recorded video. It's important to note that we use network manipulation to bring the tested use case close to real-life scenarios, for example by adding extra delay or blocking the P2P connection within our network. You can read more about how we simulate real-world network conditions for testing in this article.
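The marker idea can be sketched in a few lines of Python. This is a hypothetical illustration, not our actual tooling: it assumes we have already extracted, from the external recording, the timestamps at which each marker change appears on the sender's screen and on the receiver's screen.

```python
# Hypothetical sketch: derive per-marker end-to-end delay from the timestamps
# (in seconds) at which each color-marker change is detected on the sender's
# side and on the receiver's side in the external recording.

def end_to_end_delays(sender_ts, receiver_ts):
    """Pair each marker change on the sender with the same change on the
    receiver and return the delay for each marker, in milliseconds."""
    return [(rx - tx) * 1000.0 for tx, rx in zip(sender_ts, receiver_ts)]

sender = [0.00, 1.00, 2.00, 3.00]    # markers change once per second
receiver = [0.23, 1.27, 2.23, 3.30]  # same changes seen on the far end

delays = end_to_end_delays(sender, receiver)
# delays ≈ [230, 270, 230, 300] ms
```

Plotting one such delay value per marker (i.e. per second) is what produces the latency graphs shown later in this article.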
The video latency resolution (smallest measurable step) depends on the frame rate at which the video is captured and the frame rate at which it is displayed. Generally, the capture frame rate is the limiting factor at 30 fps, so the delay is measured in steps of 1/(capture frame rate), i.e. 1/30 s ≈ 33 milliseconds.
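As a quick sanity check, this calculation can be written out directly (the 60 fps display rate here is just an illustrative assumption):

```python
# The measurement resolution is one frame period of the slower of the
# capture and display rates. At 30 fps capture, that is about 33 ms per step.
capture_fps = 30
display_fps = 60  # illustrative assumption

step_ms = 1000.0 / min(capture_fps, display_fps)
print(f"Resolution step: {step_ms:.1f} ms")  # → Resolution step: 33.3 ms
```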
Here is a short animation showing the idea behind end-to-end video delay testing:
And this is an example of what the actual test looks like:
How do we process video latency test data?
When we process the test data, we can usually observe three distinct behaviors:
- The delay is stable, with a few small increases and drops. This is normal behavior.
- There are larger changes in delay; however, no content is lost.
- Content is lost when the delay ‘resets’. This is atypical behavior.
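The three behaviors above could be distinguished automatically from a per-second delay series. The sketch below is a simplified illustration; the threshold values are assumptions, not the ones we use in production.

```python
# Hypothetical sketch: classify a per-second delay series (in ms) into the
# three behaviors described above. Thresholds are illustrative assumptions.

def classify_delay(delays_ms, spike_ms=500, reset_drop_ms=400):
    spikes = any(d > spike_ms for d in delays_ms)
    # A "reset" is a sudden large drop in delay, which usually means frames
    # were skipped (content lost) rather than played out late.
    resets = any(prev - cur > reset_drop_ms
                 for prev, cur in zip(delays_ms, delays_ms[1:]))
    if resets:
        return "content lost on delay reset (atypical)"
    if spikes:
        return "large delay changes, no content loss"
    return "stable delay (normal)"

print(classify_delay([250, 260, 240, 255]))
# → stable delay (normal)
print(classify_delay([250, 900, 1400, 1100, 700, 300]))
# → large delay changes, no content loss
print(classify_delay([250, 900, 1400, 300]))
# → content lost on delay reset (atypical)
```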
When there are no network limitations, the results typically look like this:
In the graph above we can see that the video delay is stable and normal, varying only slightly between 200 and 300 milliseconds. Here the video was captured without any network limitation. However, our clients are always eager to challenge their applications, so here is a scenario where, during the call, we limit the network bandwidth to 500 Kbps (a normal call uses around 2 Mbps). At some point the network becomes congested, and during that time there is a huge increase in delay; see graph 4.
But if we apply a network limitation, for example leaving the network unlimited for the first 60 seconds, limiting it to 500 Kbps for the next 60 seconds, and then removing the limit again for the last 60 seconds, the results will look like this:
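The bandwidth schedule for this test can be sketched as a simple lookup table (a hypothetical illustration of the test setup, not our actual shaping tool):

```python
# Hypothetical sketch of the bandwidth schedule used in this test:
# unlimited for the first 60 s, capped at 500 Kbps for the next 60 s,
# then unlimited again for the last 60 s.

SCHEDULE = [
    (0, 60, None),     # no limit
    (60, 120, 500),    # 500 Kbps cap
    (120, 180, None),  # no limit
]

def bandwidth_limit_at(t_seconds):
    """Return the bandwidth cap in Kbps at time t, or None for unlimited."""
    for start, end, limit in SCHEDULE:
        if start <= t_seconds < end:
            return limit
    return None

print(bandwidth_limit_at(75))  # → 500
print(bandwidth_limit_at(30))  # → None
```

In practice a schedule like this would drive a traffic-shaping tool (such as Linux tc) that applies each cap at the right moment during the call.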
Now, let's look at this graph a bit closer and analyze the data.
0–60 seconds and 120–180 seconds
In the first and last 60 seconds, the video delay is stable. This is the default case with no network limitation, and the delay stays around 200-300 milliseconds.
60–120 seconds
From 60 to 120 seconds, the network changes from unlimited to 500 Kbps and we see a big spike. At 75 seconds the video skips frames, and after the 120-second mark the delay returns to the normal 200-300 milliseconds. This increase in video delay happens because, after a period of good network conditions, the application is not ready for an abrupt network change, so content skipping is a likely app behavior.
What we see here is the video content being sped up. You have most likely been in a situation where you are on a video call over a bad network connection and the person you are talking to seems to speak very fast. That speeding-up of the content is exactly what the graph above shows.
Key takeaways
Looking at the data, we can draw a conclusion about how the application behaves under different network limitations. Analyzing the graph bit by bit, we see that this application reacts badly to rapid changes in network limitation, dropping packets and taking a long time to recover. This example and our data analysis can help you become more aware of the effects of video latency and of how you can improve your video conferencing, streaming, or any other type of communication app.
Do you want to test your video conferencing or streaming apps to see how your app performs under different network conditions? Contact us with your project details and find out how our audio and video quality testing services can help you improve your app.