Full Body Tracking for Oculus Quest Using AI

Artificial intelligence provides full body tracking for the Oculus Quest, without any additional hardware.

Full body tracking makes your VR experience far more immersive, translating your body movements faithfully into virtual space. So far, the Quest 2 only tracks the head and hands. But with the help of AI, Meta has developed QuestSim, a system that estimates full body poses from that data alone.

 

What is body tracking?

Body tracking captures a user’s body movements through sensors. Full body tracking makes interactions in virtual reality more realistic by building a more accurate model of the human body. In VR, it usually relies on sensors and controllers to report precisely how a user moves, so that the movement can be represented realistically in the virtual environment. AR body tracking, by contrast, infers a human body’s position from your smartphone camera. It uses the same capabilities as an AR face filter, extended to the whole body, making it possible to, for example, try on clothes in augmented reality. In VFX and movie production, you’ve probably seen an even more sophisticated use of motion tracking, where an actor wears a full body tracking suit to transfer their movements to an animated character.
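To get a concrete feel for camera-based body tracking, here is a minimal sketch using Google’s MediaPipe Pose library to estimate full-body landmarks from a single photo. The file name person.jpg is a placeholder, and this is of course only one possible pipeline, not the one any particular AR product uses.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

# Load a single image (the path is a placeholder).
image = cv2.imread("person.jpg")

with mp_pose.Pose(static_image_mode=True, model_complexity=1) as pose:
    # MediaPipe expects RGB input; OpenCV loads images as BGR.
    results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.pose_landmarks:
    for idx, lm in enumerate(results.pose_landmarks.landmark):
        # x and y are normalized to [0, 1] relative to the image size;
        # z is a rough depth estimate relative to the hips.
        print(idx, lm.x, lm.y, lm.z, lm.visibility)
```

The same landmark stream, run on live camera frames instead of a still image, is essentially what powers whole-body AR filters and virtual try-on experiences.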

 

Using AI to do full body tracking in VR

Body tracking is hard, especially if you want to capture the movements of the entire body, including the legs and feet, and the more the form factor shrinks, the harder it gets. QuestSim, developed by Meta, instead uses the data from the headset and controllers to predict the pose of the rest of the body, for which there is no tracking data. Its AI is trained on a synthetic motion capture dataset and relies on a physics simulator to keep the predicted movement plausible. According to the Meta team, the system was trained on artificially generated action sequences derived from eight hours of motion-capture clips, covering movements like running, balancing, and walking, and then uses the headset and controller data to infer the user’s movement. As of now, the system unfortunately cannot tell if the user is, for example, kneeling or crawling, and it performs more accurately on conventional movements for which it has more training data.
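QuestSim itself is a reinforcement-learning policy driving a simulated character, and Meta has not published its code. Purely to illustrate the core idea of mapping sparse tracker data (headset plus two controllers) to a full-body pose, here is a hypothetical sketch of a small neural network in PyTorch; every dimension and name here is an assumption invented for illustration, not QuestSim’s actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: map sparse tracker input (headset + two
# controllers) to a full-body pose. QuestSim's real model is a
# reinforcement-learning policy coupled to a physics simulator;
# this only illustrates the input/output shape of the problem.

NUM_TRACKERS = 3   # headset, left controller, right controller
TRACKER_DIM = 7    # 3D position + orientation quaternion per tracker
NUM_JOINTS = 24    # typical humanoid skeleton size (assumption)
WINDOW = 30        # frames of recent tracker history fed to the net

class SparseToFullBody(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(WINDOW * NUM_TRACKERS * TRACKER_DIM, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            # Predict an orientation quaternion for every joint.
            nn.Linear(512, NUM_JOINTS * 4),
        )

    def forward(self, trackers: torch.Tensor) -> torch.Tensor:
        # trackers: (batch, WINDOW, NUM_TRACKERS, TRACKER_DIM)
        flat = trackers.flatten(start_dim=1)
        joints = self.net(flat).view(-1, NUM_JOINTS, 4)
        # Normalize each output to a valid unit quaternion.
        return joints / joints.norm(dim=-1, keepdim=True)

model = SparseToFullBody()
sample = torch.randn(1, WINDOW, NUM_TRACKERS, TRACKER_DIM)
pose = model(sample)  # shape: (1, NUM_JOINTS, 4)
print(pose.shape)
```

Feeding a window of recent frames rather than a single snapshot matters: the history is what lets a model like this guess leg motion that the trackers never see directly, and it is also why unusual poses such as kneeling, which are rare in the training data, remain hard.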