Videographer Guy Jones restores century-old films – the kind that usually run too fast and jerky. Jones slowed the footage down to its original speed and added ambient sound to match the activity seen on the city’s streets. This particular film print was created by the Swedish company Svenska Biografteatern during a trip to America, and remains in mint condition.
Ever since the political purges of the old Soviet Union, faking and modifying photographs has been used for political gain. Deepfakes, which emerged around 2017, are the latest and most convincing version of these efforts. No longer limited to still images, they typically use existing images and videos along with a machine learning technique called a “generative adversarial network” (GAN).
Deepfakes have been used to create fake celebrity videos, revenge videos, remixes in which the main actor is always Nicolas Cage, fake news, and malicious hoaxes. Apps to help you do this (FakeApp, etc.) are available today. Here is one of the better recent examples:
Until recently, though, generating one of these clips required a lot of photos, videos, and sound recordings to train the neural net. Samsung, however, has devised a method to train – and get fairly good results – using an extremely limited dataset, one as small as a single photo:
Obviously this has profound implications. While mostly used for comedy so far, in just a few short years fake video has gone from an obviously comic treatment to pretty convincing. In a few more years, spotting them with the naked eye may not even be possible.
It’s not hard to imagine a last-minute video clip released on election day to tarnish a candidate in a swing state. Even if it were spotted relatively quickly, it could spread and gain enough traction within eight hours to swing a state – and even a national election.
Here’s a video on the subject from TED.
Back in the ‘early’ days (2012) of video processing, before AI algorithms took over, we often pursued straight video-processing techniques to tease out interesting information.
Enter MIT’s work on Eulerian Video Magnification. The underlying concept is to take the tiny per-pixel differences from frame to frame and magnify them, revealing changes not ordinarily visible to the human eye.
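The core idea – amplify each pixel's deviation from its temporal baseline – can be sketched in a few lines. This is a simplified illustration, not MIT's actual pipeline (which filters in specific frequency bands and across spatial scales); the function name and parameters here are my own, and a simple exponential moving average stands in for the real temporal filter:

```python
import numpy as np

def magnify_motion(frames, alpha=20.0, smoothing=0.5):
    """Eulerian-style sketch: amplify subtle temporal changes.

    frames: array of shape (T, H, W), grayscale values in [0, 1].
    alpha: how strongly the temporal variation is magnified.
    smoothing: exponential-moving-average factor for the per-pixel baseline.
    """
    lowpass = frames[0].astype(np.float64)  # running baseline per pixel
    out = [frames[0].astype(np.float64)]
    for frame in frames[1:]:
        frame = frame.astype(np.float64)
        # Track each pixel's slow-moving baseline value
        lowpass = smoothing * lowpass + (1 - smoothing) * frame
        # The residual is the subtle frame-to-frame change; magnify it
        detail = frame - lowpass
        out.append(np.clip(frame + alpha * detail, 0.0, 1.0))
    return np.stack(out)

# Synthetic example: a faint 0.01-amplitude flicker (like a pulse under
# skin) becomes a much larger, clearly visible brightness swing.
t = np.arange(30)
flicker = 0.5 + 0.01 * np.sin(2 * np.pi * t / 10)
frames = np.tile(flicker[:, None, None], (1, 4, 4))
amplified = magnify_motion(frames)
```

On real footage you would run this per color channel and pick `alpha` and the temporal filter band to match the motion you're hunting for (a heartbeat sits around 1 Hz, for instance).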
One of the most powerful effects is the ability to accurately detect someone’s pulse, or an object’s vibration from sound – with just normal video footage. Check it out in their original demonstration video below:
In 2014 they did a TED talk that shows some further interesting applications – including reconstructing the sound in a room from the vibrations of the objects in it.
So, since 2014, you can even recover sound from a discarded bag of chips behind soundproof glass.
The geographic south pole is on my bucket list of places to go someday. But at over $50,000 for a trip, I don’t know if I’d do it just yet.
So, until I win the lottery, I’ll just check out videos like this – recorded during the long, cold nights at the South Pole.
Ok – that was funny.
In a world where you might be working with people from the EU in the morning, on your own code all day, then passing off to Korea/China in the evening, distributed development teams are everywhere.
Unfortunately, distributed teams are also very difficult to manage while keeping wellbeing and morale high. Google has some tips for you and your team:
I’ve written about this growing trend before and there are indications it is happening in other countries.
In Japan, half a million people live isolated in their bedrooms, unable to face the outside world. These modern-day hermits are known as the hikikomori. Since April 2018, the Japanese government has been conducting a nationwide study in a bid to fully understand this strange phenomenon.
There are finally some fruits of this study, and some programs that are really working. It appears many of those suffering from this condition remain so because of fear that compounds until they are afraid of the outside world – afraid to meet others, even afraid of speaking.
I, however, take issue with the reporter who calls this all ‘disturbing’ or ‘frightening’. These are sick people who certainly need help, but what they crave is a sense of belonging and human contact – they simply lack the skills or support to know how to get it.
I personally believe the proliferation of technology that replaces genuine human contact with simple online presence is creating gulfs in our human need for real belonging, connection, and meaning. As evidence of this, recovery usually doesn’t begin until those suffering are connected with a real human being to help them out.
Seoul, Korea’s Cafe 연남동 239-20 has a nifty design aesthetic. The entire place is done up in a black-and-white line art style that gives everything an illustrated 2D look. No detail was overlooked, from the chairs to the coffee mugs.
It’s no secret that I love Clue the movie. But there were also some books written from the screenplay. Unfortunately, the movie wasn’t a big commercial success so the books were quickly discontinued and forgotten. This means that getting your hands on one of them is rather difficult – and expensive.
Thanks to an inter-library loan, however, I recently acquired a copy of Clue: The Storybook and did a page-by-page scan, then combined the scans into a convenient PDF. I was surprised the in-book pictures weren’t the best quality, but the book itself is a fun, albeit abbreviated and simple, read.
Probably the most interesting part of the book is that it reveals a secret fourth ending that had long been rumored, but never filmed.
Clue: The Storybook
by Ann Matthews (Storybook Adaptor), Jonathan Lynn (Screenplay), John Landis (Story)
Published Dec 1, 1985
- ISBN-10: 0671618679
- ISBN-13: 978-0671618674