Old games often suffer from the limited graphics capabilities of their era, while developing new games costs a fortune because of the effort needed to author high-quality models and textures. What if you could solve BOTH problems – with the same solution? A 2021 machine learning project from Intel Labs called “Enhancing Photorealism Enhancement” might push rendering toward photorealism far more quickly and cheaply.
Researchers studied how to use a convolutional network to re-render game frames. Below you can see an example in which they used the real-world Cityscapes dataset as a reference to produce a much more realistic-looking version of a driving game – all in real time.
You can read how the image enhancement actually works in their paper (PDF). It includes plenty of detail on how their method works and how it improves on previous attempts, which suffered from color shifts, object hallucination, and temporal instability. They achieve this by exploiting the extra information available in rendered scenes – in particular the geometry buffer (G-buffer) – together with a specialized discriminator and a segmentation network.
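To make the G-buffer idea concrete, here is a minimal, hypothetical sketch (not the authors' code): the enhancement network sees not just the rendered color, but also auxiliary G-buffer channels such as albedo, surface normals, and depth, all concatenated into one feature vector per pixel. The toy `enhance` function below stands in for the learned network with a single linear layer; all names and weights are illustrative assumptions.

```python
# Hypothetical sketch, not the paper's implementation: feeding
# G-buffer channels to an enhancement network alongside the image.

def gbuffer_features(rgb, albedo, normal, depth):
    """Concatenate per-pixel inputs into one feature vector:
    3 color + 3 albedo + 3 normal + 1 depth = 10 values."""
    return list(rgb) + list(albedo) + list(normal) + [depth]

def enhance(features, weights):
    """Stand-in for the learned network: one linear layer per pixel,
    mapping the 10 input features to 3 output color channels."""
    return [sum(w * f for w, f in zip(row, features)) for row in weights]

# One pixel: rendered color plus G-buffer albedo, normal, and depth.
feat = gbuffer_features((0.2, 0.4, 0.6), (0.5, 0.5, 0.5), (0.0, 0.0, 1.0), 3.2)

# Toy weights: 3 output channels, each a weighted sum of all 10 features.
toy_weights = [[0.1] * 10 for _ in range(3)]
enhanced_pixel = enhance(feat, toy_weights)
print(len(feat), len(enhanced_pixel))  # 10 features in, 3 channels out
```

The real method is of course far richer – a full convolutional network trained adversarially – but the key design choice is the same: the G-buffer gives the network ground-truth geometry and material information that a purely image-based approach would have to guess.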