Last week, Swiss software engineer Matthias Bühlmann discovered that the popular image synthesis model Stable Diffusion could compress existing 2D images with fewer visual artifacts than JPEG or WebP at high compression ratios, though there are some important limitations.
When Stable Diffusion analyzes and “compresses” images into weight form, they reside in what researchers call “latent space,” which is a way of saying that they exist as a sort of fuzzy potential that can be realized into images once they’re decoded. With Stable Diffusion 1.4, the weights file is roughly 4GB, but it represents knowledge about hundreds of millions of images.
While most people use Stable Diffusion with text prompts, Bühlmann cut out the text encoder and instead forced his images through Stable Diffusion’s image encoder, which takes a 512×512 image of low-precision 8-bit pixels and turns it into a 64×64 latent-space representation of higher-precision floating-point values. At this point, the image exists at a much smaller data size than the original, but it can still be expanded (decoded) back into a 512×512 image with fairly good results.
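The size arithmetic behind that claim can be sketched out directly. This is a rough back-of-the-envelope calculation, not Bühlmann's actual code: it assumes the standard Stable Diffusion v1 latent shape (64×64 with 4 channels) stored as float32 values, and the real on-disk size would depend on how those latents are quantized and serialized.

```python
# Rough size comparison between a raw 512x512 image and its
# Stable Diffusion latent representation. The 64x64x4 float32 latent
# shape is standard for SD v1, but actual compressed sizes depend on
# quantization and entropy coding applied afterward.

def image_bytes(width, height, channels=3, bytes_per_value=1):
    """Raw size of an uncompressed 8-bit-per-channel RGB image."""
    return width * height * channels * bytes_per_value

def latent_bytes(width=64, height=64, channels=4, bytes_per_value=4):
    """Raw size of a float32 SD latent tensor (assumed shape)."""
    return width * height * channels * bytes_per_value

raw = image_bytes(512, 512)   # 786,432 bytes (~768 KB)
latent = latent_bytes()       # 65,536 bytes (~64 KB)
print(f"raw image: {raw:,} bytes")
print(f"latent:    {latent:,} bytes")
print(f"~{raw / latent:.0f}x smaller before any further quantization")
```

Even before any entropy coding, the latent tensor is roughly a twelfth the size of the raw pixel data, which is what leaves room to beat JPEG and WebP at aggressive settings.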
Bühlmann’s method currently comes with significant limitations. It’s not good with faces or text, and in some cases, it can inject detail features in the decoded image that were not present in the source image. (You probably don’t want your image compressor inventing details in an image that don’t exist.) Also, decoding requires the 4GB Stable Diffusion weights file and extra decoding time that are inherent with Stable Diffusion.
This isn’t the first time AI has been explored as a method of compression as much as generation. Daniel Holden of Ubisoft presented an astounding paper at GDC in 2018 about using neural nets to compress animation data used in video game character animation.