Creating 3D meshes from a variety of images or point clouds is not new technology. Doing it well, however, is difficult.
nVidia did an amazing job generating 3D meshes from text prompts with AI using GET3D (also here). While it looks a little better and more advanced than the Stable DreamFusion code I played with earlier, it still suffers from some of the same problems: the generated meshes can be pretty rough and bumpy, with missing features, poor textures, and other issues common to 3D geometry built from AI-generated data.
Marching cubes and DMTet have existed for some time, but nVidia has introduced an even more interesting technique called FlexiCubes. The algorithm is designed as a drop-in replacement for marching cubes: it not only generates better-quality meshes from point or coarse voxel data, but produces meshes that can easily be dropped into physics simulations.
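To make the "drop-in replacement" point concrete: marching cubes, DMTet, and FlexiCubes all consume the same kind of input, a scalar field sampled on a voxel grid (negative inside the surface, positive outside). Below is a minimal numpy sketch, not nVidia's code, of building such a signed distance field for a sphere and performing the cell-classification step these algorithms share: a grid cell can only contain surface geometry if its eight corner samples disagree in sign.

```python
import numpy as np

# Build a signed distance field (SDF) for a unit sphere on a coarse voxel grid.
# Surface extractors like marching cubes (and FlexiCubes) consume exactly this
# kind of scalar grid: negative inside the surface, positive outside.
n = 16
xs = np.linspace(-1.5, 1.5, n)
x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 1.0  # distance to a sphere of radius 1

# Cell classification: gather the 8 corner samples of every grid cell.
# Shape: (8, n-1, n-1, n-1), one slice per cube corner.
corners = np.stack([
    sdf[i:i + n - 1, j:j + n - 1, k:k + n - 1]
    for i in (0, 1) for j in (0, 1) for k in (0, 1)
])

# A cell intersects the surface iff its corners have mixed signs.
crossing = (corners.min(axis=0) < 0) & (corners.max(axis=0) > 0)

print(f"{crossing.sum()} of {(n - 1)**3} cells intersect the surface")
```

Where the methods differ is what happens next inside each crossing cell: classic marching cubes places vertices by fixed linear interpolation along cell edges, while FlexiCubes exposes additional per-cell degrees of freedom so the vertex positions can be optimized, which is what yields the cleaner, simulation-ready meshes.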