Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields

Neural Radiance Fields (NeRF) produce some pretty beautiful renderings. A little like photogrammetry, it learns a scene from photos captured at multiple viewpoints, storing it as a volumetric representation. To render the scene from a particular angle, it shoots rays into the volume from the camera location and queries the representation along each ray to compute the color of the corresponding screen pixel.
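The per-ray querying step can be sketched with the standard volume-rendering quadrature NeRF-style methods use: sample densities and colors along the ray, then composite them front-to-back weighted by opacity and accumulated transmittance. This is a minimal illustrative sketch, not code from Zip-NeRF itself; the function name and array shapes are my own.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Composite one ray's samples into a pixel color (NeRF-style quadrature).

    densities: (N,) non-negative volume density at each of N samples
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) distance between adjacent samples along the ray
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = trans * alphas
    # Weighted sum of sample colors gives the final pixel color
    return (weights[:, None] * colors).sum(axis=0)
```

For example, a ray whose first sample is effectively opaque returns that sample's color, while a ray through empty space (zero density everywhere) returns black.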

It does suffer from some shortcomings: it largely only works well on static scenes, it struggles when portions of the scene are missing or occluded, and, most notably, it renders objects that lack fine detail or have the blobby geometry common to volumetric rendering techniques.

But that doesn’t stop people from trying. Zip-NeRF is an example: these Google researchers show how ideas from rendering and signal processing yield lower error rates and dramatically faster training than previous techniques.

It’s always interesting to see what new things people are trying out these days.
