Videographer Guy Jones edits century-old films – the kind that usually run too fast and look jerky. Jones slowed the footage back down to its original speed and added ambient sound to match the activity seen on the city’s streets. This particular film print was created by the Swedish company Svenska Biografteatern during a trip to America, and remains in mint condition.
You can see more of Jones’s edits of films from the late 19th century to the mid-1900s on his YouTube channel. (via Twisted Sifter)
Deepfakes have been used to create fake celebrity videos, revenge videos, remixes of movies in which the main actor is always Nicolas Cage, fake news, and malicious hoaxes. Apps to help you do this (FakeApp, etc.) are available today. Here is one of the better recent examples:
Until recently, though, generating one of these clips required a lot of source photos, videos, and sound recordings to train the neural net. Samsung, however, has devised a method to train a model – and get fairly good results – with an extremely limited dataset, one as small as a single photo:
Obviously this has profound implications. While mostly used for comedic purposes so far, in just a few short years fake video has gone from an obviously comical treatment to pretty convincing. It’s all but certain that in a few more years, spotting these fakes with the naked eye might not even be possible.
It’s not unlikely that a last-minute video clip could be released on election day to tarnish a competitor in a swing state. Even if it were spotted relatively quickly, it could spread and gain enough traction in eight hours to swing a state – and even a national election.
Back in the ‘early’ days (2012) of video processing, before we had AI algorithms, we often pursued straightforward video processing techniques to tease out interesting information.
Enter MIT’s work on Eulerian Video Magnification. The underlying concept is to take the small per-pixel differences from frame to frame and magnify those changes, revealing things not ordinarily visible to the human eye.
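To get a feel for the idea, here’s a minimal NumPy sketch. MIT’s actual method applies a temporal bandpass filter per pixel (plus spatial pyramids); as a simplifying assumption this sketch substitutes the crudest possible temporal filter, the difference from the previous frame, and amplifies that:

```python
import numpy as np

def magnify_motion(frames, alpha=10.0):
    """Naively amplify per-pixel temporal changes in a video clip.

    frames: array of shape (T, H, W), grayscale intensities in [0, 1].
    alpha:  magnification factor applied to the temporal change.

    The real Eulerian method uses a temporal bandpass filter per pixel;
    this sketch stands in the simple frame-to-frame difference.
    """
    frames = np.asarray(frames, dtype=np.float64)
    out = frames.copy()
    # Per-pixel change relative to the previous frame.
    diffs = frames[1:] - frames[:-1]
    # Add the magnified change back onto the original frames,
    # clipping so intensities stay in the valid [0, 1] range.
    out[1:] = np.clip(frames[1:] + alpha * diffs, 0.0, 1.0)
    return out

# A tiny synthetic clip: one pixel brightens very subtly each frame,
# a change far too small to see by eye (1% per frame).
clip = np.zeros((4, 2, 2))
for t in range(4):
    clip[t, 0, 0] = 0.5 + 0.01 * t

magnified = magnify_motion(clip, alpha=10.0)
```

After magnification, the 1%-per-frame flicker becomes a 10%-per-frame flicker, while pixels that never changed stay untouched.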
One of the most powerful effects is the ability to accurately detect someone’s pulse, or the vibration of objects caused by sound, from nothing but ordinary video footage. Check it out in their original demonstration video below:
In 2014 they gave a TED Talk that shows some further interesting applications – including reconstructing the sound in a room from the vibrations of the objects in it.
So, since 2014, you can even recover sound from a discarded bag of chips behind soundproof glass.
In a world where you might be working with people from the EU in the morning, on your own code all day, then passing off to Korea/China in the evening, distributed development teams are everywhere.
Unfortunately, distributed teams are also very difficult to manage while keeping wellbeing and morale high. Google has some tips for you and your team:
I’ve written about this growing trend before and there are indications it is happening in other countries.
In Japan, half a million people live isolated in their bedrooms, unable to face the outside world. These modern-day hermits are known as the hikikomori. Since April 2018, the Japanese government has been conducting a nationwide study in a bid to fully understand this strange phenomenon.
The study is finally bearing fruit, and some programs are really working. It appears many of those suffering from this condition remain so because of fear that compounds to the point they are afraid of the outside world – afraid to meet others, even afraid of speaking.
I, however, take issue with the reporter who calls this all ‘disturbing’ or ‘frightening’. These are sick people who need help, for sure, but what they crave is a sense of belonging and human contact – without the skills or help to know how to find it.
I personally believe the proliferation of technology that replaces genuine human contact with mere online presence is creating gulfs in our human need for real belonging, connection, and meaning. As evidence of this, it’s usually not until those suffering are connected with a real human being who helps them out that they begin to improve.
Seoul, Korea’s Cafe 연남동 239-20 has a nifty design aesthetic. The entire place is done up in a black-and-white line art style that gives everything an illustrated 2D look. No detail was overlooked, from the chairs to the coffee mugs.
It’s no secret that I love the movie Clue. Besides the movie, there were also some books written by the screenwriters. Unfortunately, the movie wasn’t a big commercial success so the books were quickly discontinued and forgotten. Getting your hands on one of them is rather difficult – and expensive.
Thanks to an inter-library loan, however, I recently acquired a copy of Clue: The Storybook and did a page-by-page scan. I then combined the scans into a convenient PDF. The book provides a very abbreviated and thin read of the movie proper. The in-book pictures weren’t of particularly good quality (very grainy prints), but there were some pictures I had not seen in any other source, nor in the movie itself.
The most interesting part of the book is that it reveals a secret fourth ending that had been rumored about, but supposedly never filmed.
Where can you get your hands on a copy? How about downloading the scanned copy I made right here so you don’t have to pay hundreds of dollars. Enjoy!
Clue: The Storybook by Ann Matthews (storybook adaptation), Jonathan Lynn (screenplay), John Landis (story). Published Dec 1, 1985.
ISBN-10: 0671618679
ISBN-13: 978-0671618674
Oct 2023 Update:
It looks like someone took my scans and then put them together up on the Internet Archive!