In the last few years, the way we communicate for work, education, and personal interactions has drastically changed. Ever since COVID-19 struck, in-person and face-to-face interactions have taken a back seat, while video conferencing has risen in popularity and usage. In fact, video conferencing usage increased by 500% in 2020, and the share of virtual meetings jumped from 48% in 2020 to 77% in 2022, mainly as a result of remote and hybrid work arrangements. In addition, according to a report by Dialpad, approximately 37% of people spend 4–12 hours in video meetings each week, while almost 5% spend 20 hours or more in video calls a week.
As video conferencing apps continue to gain prominence, various new features and capabilities are being developed for them. Virtual backgrounds are one such feature. Based on a recent Zoom customer survey, 26% of their users prefer virtual backgrounds over real-life or blurred backgrounds. But as with any other software feature, testing virtual backgrounds is important to ensure that users have a pleasant experience.
In this blog post, we will explore the concept of virtual backgrounds, their importance, the challenges they present, and our approach to testing this feature.
What are virtual backgrounds?
Virtual backgrounds are digital images or videos that replace your actual surroundings during video calls, allowing you to control your on-screen appearance. This feature has gained popularity across various video conferencing platforms, such as Zoom, Microsoft Teams, and Google Meet, and now seems to be one of the ‘must-have’ features for video conferencing applications.
The importance of virtual backgrounds
Privacy
Virtual backgrounds conceal the real environment and offer an additional layer of privacy, safeguarding sensitive information or personal items from being visible to others.
Professionalism
Virtual backgrounds enable users to maintain a professional appearance during work-related calls by replacing cluttered or distracting backgrounds with more suitable or branded images.
Personalization
These digital backdrops allow users to express their personalities and interests or create a more engaging atmosphere during casual conversations or online events.
Challenges with virtual backgrounds
Technical requirements
High-quality virtual backgrounds often demand more powerful devices and stable internet connections, which might not be accessible to all users.
Imperfect separation
Separating the user from the actual background can be challenging, especially when they share similar colors, textures, or patterns. This can result in errors, such as parts of the user being mistakenly replaced by the virtual background.
Lighting and shadows
Inconsistent lighting and shadows may cause irregularities in the virtual background, making it appear less realistic or polished.
Why virtual background testing and benchmarking matters
As virtual backgrounds become more prevalent, it's crucial to develop standardized testing and benchmarking procedures. These evaluations can help:
- Improve algorithms. By identifying areas of weakness, developers can enhance algorithms responsible for background separation and image quality, leading to more accurate and realistic virtual backgrounds.
- Set performance standards. Testing and benchmarking can help establish performance expectations and ensure that virtual background features are consistent and reliable across different platforms and devices.
- Meet different hardware requirements. Evaluating virtual background performance on various devices can inform users about minimum hardware specifications, ensuring a seamless experience for all participants.
How we perform virtual background testing at TestDevLab
Step 1: Film a realistic call scenario with a green screen
We film a realistic call scenario using a green screen backdrop, which provides a uniform color that can be easily separated from the subject in the foreground.
Step 2: Implement a unique, ever-changing ArUco marker
ArUco markers are square-shaped patterns that can be easily detected and tracked in video streams. By implementing a unique, ever-changing ArUco marker on the video, we enable spatial and temporal alignment of video frames between the reference and degraded videos.
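To illustrate the temporal-alignment idea: once the marker ID in each frame has been decoded (for example with OpenCV's ArUco module), matching frames between the two streams reduces to an ID lookup. The sketch below is a minimal, hypothetical version of that lookup; the function and variable names are our own, not part of any TestDevLab tooling.

```python
# Hypothetical sketch: assuming each frame carries a unique marker ID,
# temporal alignment of the two recordings reduces to matching IDs.

def align_frames(ref_ids, deg_ids):
    """Map each degraded-frame index to the reference-frame index
    carrying the same marker ID (None if the ID was unreadable)."""
    ref_index = {marker_id: i for i, marker_id in enumerate(ref_ids)}
    return [ref_index.get(marker_id) for marker_id in deg_ids]

ref = [10, 11, 12, 13]
deg = [11, 12, 12, 14]  # frame 12 arrived twice, ID 14 was never sent
print(align_frames(ref, deg))  # → [1, 2, 2, None]
```

Duplicated or dropped frames on the receiver side are handled naturally: repeated IDs map to the same reference frame, and unknown IDs are flagged rather than silently matched.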
Step 3: Remove the background and make a list of foreground and background pixels
We remove the green screen from this reference video using chroma-keying techniques. After removing the green screen, we save all the pixels that should be part of the foreground in one list and all pixels that should be part of the background in another list. Information about pixel location is relative to the ArUco markers.
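A minimal sketch of this classification step, assuming RGB frames and a simple green-dominance threshold; real chroma keying is more sophisticated, and the threshold value here is purely illustrative:

```python
import numpy as np

# Hedged sketch of Step 3: classify each pixel of a green-screen reference
# frame as background (green) or foreground (everything else). The margin
# value and the demo frame are illustrative assumptions.

def split_pixels(frame, green_margin=40):
    """Return (foreground, background) lists of (row, col) coordinates.

    A pixel counts as green-screen background when its green channel
    dominates both red and blue by more than `green_margin`.
    """
    r = frame[:, :, 0].astype(int)
    g = frame[:, :, 1].astype(int)
    b = frame[:, :, 2].astype(int)
    background_mask = (g - r > green_margin) & (g - b > green_margin)
    bg = list(zip(*np.nonzero(background_mask)))
    fg = list(zip(*np.nonzero(~background_mask)))
    return fg, bg

# Tiny 2x2 demo frame: top row pure green (background), bottom row skin tone.
frame = np.array([[[0, 255, 0], [0, 255, 0]],
                  [[200, 160, 140], [200, 160, 140]]], dtype=np.uint8)
fg, bg = split_pixels(frame)
print(len(fg), len(bg))  # → 2 2
```

In practice the coordinates would be stored relative to the detected ArUco markers rather than as raw frame positions, so the lists survive scaling and cropping on the receiver side.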
Step 4: Replace the green background with a realistic background
Using chroma keying techniques, we replace the green screen background with a realistic background image or video that will be used during the call.
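Given the background mask from the previous step, the compositing itself can be sketched with plain array indexing. The arrays and the flat grey backdrop below are illustrative stand-ins for real video frames:

```python
import numpy as np

# Hedged sketch of Step 4: paste a realistic backdrop into the green-screen
# area using the background mask from the previous step.

def composite(frame, backdrop, bg_mask):
    """Return a copy of `frame` whose masked pixels come from `backdrop`."""
    out = frame.copy()
    out[bg_mask] = backdrop[bg_mask]
    return out

# 1x2 demo: left pixel is green screen, right pixel is foreground.
frame = np.array([[[0, 255, 0], [200, 160, 140]]], dtype=np.uint8)
backdrop = np.full_like(frame, 90)   # flat grey stand-in "office" backdrop
bg_mask = np.array([[True, False]])
result = composite(frame, backdrop, bg_mask)
print(result[0, 0].tolist(), result[0, 1].tolist())  # → [90, 90, 90] [200, 160, 140]
```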
Step 5: Make a call and set an easily distinguishable virtual background
We initiate the video call using a virtual camera that streams the video created in Step 4, and we set the virtual background to green using the app's virtual background feature. This green virtual background is analyzed later in the process to determine the accuracy of the foreground/background separation: because the virtual background is a uniform green, we can efficiently evaluate how well the feature removes the actual background and maintains a clean separation between the foreground subject and the intended virtual background.
Step 6: Record results on the call receiver side
We record the call on the receiver side.
Step 7: Create another list of background and foreground pixels
After recording the call, we analyze the video and create another list of background and foreground pixels based on the output. This step requires precise alignment with the original lists generated in Step 3 for accurate error detection.
Step 8: Compare lists from Step 3 and Step 7 and categorize results
We compare the lists from Step 3 (reference) and Step 7 (recorded) to identify errors in the virtual background process. We then categorize the detected errors as either under-cropped (false positives: pixels that should have been replaced by the virtual background but were not) or over-cropped (false negatives: foreground pixels that were wrongly replaced by the virtual background). The challenge here is to analyze these errors and determine their root causes, which could range from algorithm limitations to hardware constraints.
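The comparison in this step can be sketched as simple set arithmetic over the two pixel lists. The coordinates below are illustrative; real lists hold one entry per pixel, expressed relative to the ArUco markers:

```python
# Hedged sketch of Step 8: compare the reference background list (Step 3)
# against the receiver-side background list (Step 7).

def categorize(ref_background, observed_background):
    """Return (under_cropped, over_cropped) pixel coordinate sets."""
    ref = set(ref_background)
    observed = set(observed_background)
    under_cropped = ref - observed   # real background leaked through
    over_cropped = observed - ref    # foreground wrongly replaced
    return under_cropped, over_cropped

ref_bg = [(0, 0), (0, 1), (0, 2)]
observed_bg = [(0, 1), (0, 2), (1, 0)]  # missed (0, 0), wrongly cropped (1, 0)
under, over = categorize(ref_bg, observed_bg)
print(sorted(under), sorted(over))  # → [(0, 0)] [(1, 0)]
```

Aggregating these two sets per frame gives the under-/over-cropping counts that feed the graphs in the results section.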
Obviously, there are many more nuances to the methodology, but the end result is that we are able to gather under- and over-cropped pixels from the video call and analyze them in different ways.
Results
To better demonstrate our approach to virtual background testing, we ran an experiment testing how well two applications handle virtual backgrounds.
The video below shows a visual representation of the original video and over-/under-cropped parts for each of the test applications.
The left side shows the original video; the top right shows the cropping performance of App1; and the bottom right shows the cropping performance of App2.
Here is a data representation of how each app scored.
Some points we can conclude from the graph:
- Both applications tend toward aggressive overcropping.
- App2 does a better job of not showing anything that should not be shown*
- App2 sometimes crops out too much content*
*If you look at the video, the parts where App2 is heavily overcropping are where there are multiple faces on the screen, while the parts where App1 is under-cropping are where there are people further in the background.
Conclusion
Looking at our approach to virtual background testing, we can confidently say that we have created a methodology that allows us to measure the effectiveness of different virtual background solutions. It can be used both for benchmarking against competitors and for creating regression tests covering different foreground and background content.
One disadvantage of this methodology is the synthetic nature of the videos: it takes extra work to make them look realistic. On the other hand, it also allows us to create different use cases:
- Foreground/background combos
- Different quality videos
- Different color and brightness videos
Also, the same approach can be applied to regular videos by cutting out the ‘perfect crop’ area manually, though this would need to be done for every frame and would be much more time-consuming.
Do you want to test the virtual background feature on your video conferencing solution—or any other feature, for that matter? Contact us with more details, and let’s discuss your project.