Samsung’s deepfake AI
Ever since the political purges of the old Soviet Union, faked and doctored photographs have been used for political gain. Deepfakes, which emerged around 2017, are the latest and most convincing version of these efforts. No longer limited to still images, they typically manipulate existing images and videos using a machine learning technique called a “generative adversarial network” (GAN).
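For a sense of what a GAN actually does, here is a minimal sketch of the core training loop in PyTorch. It is a toy illustration of the general technique only, not Samsung’s model or any production deepfake system; the layer sizes and the flattened 28×28 “image” are arbitrary assumptions.

```python
# Toy GAN training loop: a generator learns to fool a discriminator.
# Illustrative only -- not Samsung's actual architecture.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # assumed toy sizes

# Generator: maps random noise to a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how real an image looks (1 = real, 0 = fake).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to tell real frames from generated ones.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# Example: one step on a batch of random data standing in for real face frames.
train_step(torch.rand(16, image_dim) * 2 - 1)
```

The two networks are adversaries: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more realistic output, which is exactly what makes the resulting videos so convincing.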
Deepfakes have been used to create fake celebrity videos, revenge videos, re-cuts of movies in which the main actor is always Nicolas Cage, fake news, and malicious hoaxes. Apps that help you do this (FakeApp, etc.) are available today. Here is one of the better recent examples:
Until recently, though, generating one of these clips required a large number of photos, videos, and sound recordings to train the neural network. Samsung, however, has devised a method that trains a model and gets fairly good results from an extremely limited dataset, one as small as a single photo:
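To make the single-photo idea concrete, here is a hypothetical sketch of few-shot fine-tuning: personalising an already-pretrained generator with a handful of gradient steps on one target photo. This is a toy stand-in under my own assumptions, not Samsung’s published pipeline, which meta-learns talking-head models across many speakers and uses adversarial and perceptual losses rather than the plain L1 loss shown here.

```python
# Toy few-shot adaptation: fine-tune a (nominally pretrained) generator on one photo.
# Hypothetical illustration only -- not Samsung's actual method.
import torch
import torch.nn as nn

image_dim, pose_dim = 28 * 28, 16  # assumed toy sizes

# Assume `generator` was pretrained on many identities; here it is just initialised.
generator = nn.Sequential(
    nn.Linear(pose_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

target_photo = torch.rand(1, image_dim) * 2 - 1   # the one available photo
target_pose = torch.randn(1, pose_dim)            # stand-in for facial landmarks

opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

# A few hundred gradient steps on the single example personalise the model.
for step in range(200):
    opt.zero_grad()
    loss = loss_fn(generator(target_pose), target_photo)
    loss.backward()
    opt.step()
```

The key point is that almost all of the learning happens beforehand on large, generic face datasets; the single photo is only used to adapt that general model to one specific person, which is why so little data is needed at the end.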
Obviously this has profound implications. While the technique has mostly been used for comedy so far, in just a few years fake videos have gone from obvious comic treatments to pretty convincing. In a few more years, spotting them with the naked eye may not even be possible.
It’s not unlikely that a last-minute video clip could be released on election day to tarnish a candidate in a swing state. Even if it were spotted relatively quickly, it might spread and gain enough traction in eight hours to swing a state, and even a national election.
Here’s a video on the subject from TED.