Continuous Scene Meshing On Quest 3
The Quest 3 lets you scan a room to build an internal 3D mesh of your surroundings. This scan can take anywhere from 20 seconds to several minutes, requires the user to walk around the area, and produces a static mesh: it won't update dynamically when, for example, a door opens or closes.
The Depth API, meanwhile, provides live depth frames with a range of up to 5 meters. But how do you use those frames to build up a model of the environment in real time?
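One common answer is to fuse each incoming frame into a persistent voxel grid. Here is a minimal sketch of that fusion step in Unity C#. It is not Lasertag's actual code: it assumes you have already read a depth frame back from the GPU into a float array of metric depths and that you know the camera's intrinsics and pose. The parameter names (depth, fx, cameraToWorld, and so on) are illustrative, not the Depth API's real surface, and in practice the texture read-back (or a GPU-side compute pass) is its own problem.

```csharp
using UnityEngine;

// Minimal sketch: fuse one live depth frame into a persistent voxel
// occupancy grid. NOT Lasertag's actual code; parameter names are
// hypothetical, and reading the Depth API's GPU texture back into a
// float[] (or porting this loop to a compute shader) is elided.
public class DepthFusion
{
    public const int GridSize = 128;       // voxels per axis
    public const float VoxelSize = 0.05f;  // 5 cm voxels -> 6.4 m cube
    public readonly float[] occupancy = new float[GridSize * GridSize * GridSize];

    public void Integrate(float[] depth, int width, int height,
                          float fx, float fy, float cx, float cy,
                          Matrix4x4 cameraToWorld, Vector3 gridOrigin)
    {
        for (int v = 0; v < height; v++)
        for (int u = 0; u < width; u++)
        {
            float d = depth[v * width + u];
            if (d <= 0f || d > 5f) continue;  // Depth API range tops out around 5 m

            // Unproject pixel (u, v) at depth d into camera space via the
            // pinhole intrinsics, then into world space via the head pose.
            var camPoint = new Vector3((u - cx) * d / fx, (v - cy) * d / fy, d);
            Vector3 world = cameraToWorld.MultiplyPoint3x4(camPoint);

            // Accumulate evidence in the voxel containing this sample; a
            // fuller TSDF would also carve out free space along the ray.
            Vector3 g = (world - gridOrigin) / VoxelSize;
            int x = Mathf.FloorToInt(g.x), y = Mathf.FloorToInt(g.y), z = Mathf.FloorToInt(g.z);
            if (x < 0 || y < 0 || z < 0 ||
                x >= GridSize || y >= GridSize || z >= GridSize) continue;
            int i = (z * GridSize + y) * GridSize + x;
            occupancy[i] = Mathf.Min(occupancy[i] + 0.1f, 1f);
        }
    }
}
```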
Julian Triveri's multiplayer mixed reality Quest 3 game Lasertag does just this. It takes the live depth frames and runs them through an open-source Unity implementation of the marching cubes algorithm. Apple Vision Pro and Pico 4 Ultra already offer this kind of continuous scene meshing, but they have hardware-accelerated depth sensors to help. On Quest 3, developers need to do the computation themselves.
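The second half of the pipeline turns that scalar grid into renderable geometry. Lasertag uses marching cubes for this step, whose 256-entry lookup tables are too long to inline here, so the sketch below substitutes a simpler "blocky" extractor that emits one quad per exposed voxel face. It exists purely to show the grid-to-Mesh flow; a marching cubes implementation would slot into the same loop, sampling the eight corners of each cell instead of a single voxel.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch: turn the voxel grid from the previous snippet into a Unity Mesh.
// This is a stand-in "blocky" extractor, not marching cubes and not
// Lasertag's code: it emits one quad per solid voxel face that borders
// empty space.
public static class VoxelMesher
{
    const float Threshold = 0.5f;  // occupancy above this counts as solid

    public static Mesh Build(float[] occ, int n, float voxelSize, Vector3 origin)
    {
        var verts = new List<Vector3>();
        var tris = new List<int>();

        bool Solid(int x, int y, int z) =>
            x >= 0 && y >= 0 && z >= 0 && x < n && y < n && z < n &&
            occ[(z * n + y) * n + x] >= Threshold;

        Vector3Int[] dirs = {
            new Vector3Int(-1, 0, 0), new Vector3Int(1, 0, 0),
            new Vector3Int(0, -1, 0), new Vector3Int(0, 1, 0),
            new Vector3Int(0, 0, -1), new Vector3Int(0, 0, 1)
        };

        for (int z = 0; z < n; z++)
        for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++)
        {
            if (!Solid(x, y, z)) continue;
            foreach (var d in dirs)
            {
                if (Solid(x + d.x, y + d.y, z + d.z)) continue;  // face is interior
                AddFace(verts, tris, new Vector3Int(x, y, z), d, voxelSize, origin);
            }
        }

        // UInt32 indices: a full room easily exceeds 65k vertices.
        var mesh = new Mesh { indexFormat = UnityEngine.Rendering.IndexFormat.UInt32 };
        mesh.SetVertices(verts);
        mesh.SetTriangles(tris, 0);
        mesh.RecalculateNormals();
        return mesh;
    }

    static void AddFace(List<Vector3> verts, List<int> tris,
                        Vector3Int cell, Vector3Int dir, float s, Vector3 origin)
    {
        // Center of the exposed face: voxel center pushed out half a voxel.
        Vector3 normal = dir;
        Vector3 center = origin + ((Vector3)cell + Vector3.one * 0.5f + normal * 0.5f) * s;

        // Two half-extent tangent axes perpendicular to the face normal.
        Vector3 up = Mathf.Abs(dir.y) == 1 ? Vector3.forward : Vector3.up;
        Vector3 t1 = Vector3.Cross(normal, up) * (s * 0.5f);
        Vector3 t2 = Vector3.Cross(t1, normal).normalized * (s * 0.5f);

        int i = verts.Count;
        verts.Add(center - t1 - t2);
        verts.Add(center - t1 + t2);
        verts.Add(center + t1 + t2);
        verts.Add(center + t1 - t2);
        tris.AddRange(new[] { i, i + 1, i + 2, i, i + 2, i + 3 });
    }
}
```

Re-running extraction over the whole 128-cube grid every frame would be expensive; a practical version would re-mesh only the chunks touched by new depth samples.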
See the code on GitHub.
https://www.uploadvr.com/developer-implemented-continuous-scene-meshing-quest-3-lasertag