Deep Fakes Keep Getting Better
I’ve written about the various AI algorithms that allow for image and voice replacement before. How about this scene from The Shining – but with Jim Carrey instead?
Running out of memory while running Windows isn’t exactly a new phenomenon. But since Windows 8, things have mostly gotten better in terms of memory usage and performance.
I recently ran into an issue where I’d close all of my apps but leave my system on overnight. When I’d jiggle the mouse in the morning, I would be greeted with horrendously sluggish drive swapping and 100% memory utilization – on a system with 32 gigs of memory. Even worse, I opened up Task Manager and shut down everything possible, but nothing indicated where 25+ gigs of memory had gone. Bad job, Microsoft: shouldn’t your performance tools be able to tell me what is using up 90% of my system memory?
I got a clue from the fact my non-pageable memory usage was huge. This, apparently, indicates a driver or service with a memory leak:
https://windows101tricks.com/fix-memory-leak-problem-on-windows-10/
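To spot this kind of leak yourself, you can sample memory usage periodically (for instance, the “Pool Nonpaged Bytes” performance counter) and check whether it only ever climbs. Here’s a toy sketch of that heuristic – the function name, threshold, and sample numbers are all invented for illustration:

```python
def looks_like_leak(samples, min_growth_ratio=0.9):
    """Heuristic: flag a leak when memory usage rises between
    nearly every pair of consecutive samples."""
    if len(samples) < 2:
        return False
    rises = sum(1 for a, b in zip(samples, samples[1:]) if b > a)
    return rises / (len(samples) - 1) >= min_growth_ratio

# A healthy pool fluctuates; a leaking one climbs steadily (values in MB).
steady = [510, 505, 512, 508, 511, 507]
leaking = [510, 640, 790, 930, 1100, 1260]

print(looks_like_leak(steady))   # False
print(looks_like_leak(leaking))  # True
```

Real monitoring would feed this from Performance Monitor or a logging script, but the shape of the signal – monotonic growth that survives closing every app – is the tell.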
The problem was the Killer Network Suite. Apparently, this is a pretty common problem with the Killer network driver. I uninstalled it, and the problems seemed to go away.
https://www.reddit.com/r/buildapc/comments/3yzjyz/killer_network_suite_gave_me_a_major_memory_leak/
I then downloaded and installed the DRIVER ONLY package for the network interface by going directly to the Killer network website.
Turns out that the Killer network suite unfortunately lives up to its name: it certainly killed all the memory on my system. 🙁
I’ve already written about this before. Namely, rideshares like Uber/Lyft increase congestion in cities rather than alleviating it, mostly because these services put people who would likely have taken public transport, walked, or biked into a car instead. They don’t reduce driving either, since they just swap one car on the road for someone else’s. Since then, numerous other studies have confirmed this effect.
A new study by ETH Zurich’s Institute for Transport Planning and Systems seeks to find out whether that equation changes if autonomous taxi services were introduced. Using an agent-based simulation of the city of Zurich, they tested a number of scenarios (service models, ownership models, etc.), examined costs, and looked at how disruptive the shift would be. In an agent-based simulation, individual agents make decisions based on time, cost, and so on, instead of following overarching rules – an approach that has previously produced very accurate studies.
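The study’s actual model is far more sophisticated, but the core agent-based idea – each traveler picking the mode that minimizes their own generalized cost – can be sketched in a few lines. Every number below (fares, travel times, values of time) is invented for illustration; none come from the study:

```python
import random

# Toy agent-based mode choice: each agent picks the mode with the lowest
# generalized cost (out-of-pocket fare plus time valued at a per-minute rate).
MODES = {
    # mode: (fare in CHF, door-to-door time in minutes) -- made-up numbers
    "transit":     (3.0, 30),
    "robotaxi":    (5.0, 22),
    "private_car": (7.0, 18),  # fuel + parking, but fastest door to door
}

def choose_mode(value_of_time_per_min):
    costs = {mode: fare + value_of_time_per_min * minutes
             for mode, (fare, minutes) in MODES.items()}
    return min(costs, key=costs.get)

random.seed(0)
agents = [random.uniform(0.1, 0.6) for _ in range(1000)]  # CHF per minute
shares = {mode: 0 for mode in MODES}
for vot in agents:
    shares[choose_mode(vot)] += 1
print(shares)
```

Even this toy version shows the study’s qualitative point: when transit is cheap and decent, robotaxis only capture a middle band of travelers, so the viable fleet stays limited.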
Turns out, the impact is not as big as people expect, and the robotaxi fleet must actually stay relatively small to be viable when paired with an existing, good-quality mass transit system. This doesn’t shock me for European cities with great public transit, but I wonder how it would play out in American cities without that infrastructure. My gut tells me the results would likely mirror the Uber/Lyft studies.
Either way, the study is definitely worth a read:
https://www.ethz.ch/en/news-and-events/eth-news/news/2019/06/driverless-congestion.html
Developing an iOS app used to require buying a MacBook or Mac mini. With VMWare, that is no longer necessary. I used VMWare Workstation 15.0 Pro and was able to develop an app and debug it on real iPad/iPhone hardware. Setup instructions are here:
https://techsviewer.com/install-macos-mojave-vmware-windows/
Here’s the latest VMWare Mojave 10.14.4, 18E226 (March 25, 2019) image:
Further tips after the above setup
Setting this in the virtual machine’s .vmx configuration file disables host display scaling inside the guest:

gui.applyHostDisplayScalingToGuest = "FALSE"

Released in 1973, the SEGA Moto Champ machine had no screens, buttons, or joystick. The electro-mechanical racing game had a group of magnetically-attached motorcycles which rolled over a “road” generated by shining light through a spinning cylinder onto the playfield.
Here’s another playthrough with more narration:
I love the clever engineering of older machines because of all the constraints. Games required you to be more imaginative – which I think is really part of the fun of playing them.
Which is why I’ve come to love some of the ideas behind hackathons. Limits on time and resources can help unlock amazing creativity.
Videographer Guy Jones edits century-old films – the ones that usually run too fast and jerky. Jones slowed the footage down to a natural speed and added ambient sound to match the activity seen on the city’s streets. This particular film print was created by the Swedish company Svenska Biografteatern during a trip to America, and remains in mint condition.
You can see more of Jones’s edits of films from the late 19th century to the mid-1900s on his YouTube channel. (via Twisted Sifter)
Back in the ‘early’ days (2012) of video processing, before we had AI algorithms, we often just pursued straight video processing techniques to tease out interesting information.
Enter MIT’s work on Eulerian Video Magnification. The underlying concept is to take small per-pixel differences from frame to frame, then magnify the changes, to detect things not ordinarily visible to the human eye.
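The core temporal-filtering idea can be sketched on a single pixel’s intensity trace. This is an illustrative toy, not the published algorithm: the band-pass here is built as the difference of two first-order IIR low-pass filters, and the amplification factor and smoothing constants are arbitrary:

```python
import math

def magnify_signal(signal, alpha=50.0, fast_k=0.4, slow_k=0.05):
    """Toy Eulerian magnification of one pixel's intensity over time.

    Two exponential smoothers track the signal at different rates; their
    difference is a crude band-pass of the temporal variation, which is
    amplified by `alpha` and added back to the original signal.
    """
    fast = slow = signal[0]
    out = []
    for x in signal:
        fast += fast_k * (x - fast)   # follows quick changes
        slow += slow_k * (x - slow)   # follows slow drift
        band = fast - slow            # band-passed variation
        out.append(x + alpha * band)
    return out

# A barely visible oscillation (amplitude 0.5) around a baseline of 100,
# like the subtle skin-color change caused by a pulse:
frames = [100 + 0.5 * math.sin(2 * math.pi * t / 20) for t in range(200)]
magnified = magnify_signal(frames)
swing = max(magnified) - min(magnified)
print(round(max(frames) - min(frames), 3), round(swing, 3))
```

The real system does this per pixel across a spatial pyramid and picks the pass-band to match the phenomenon of interest (e.g. heart-rate frequencies), but the magnify-tiny-temporal-differences principle is the same.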
One of the most powerful effects is the ability to accurately detect the pulse of someone, or the vibration of objects by sound – with just normal video footage. Check it out in their original demonstration video below:
In 2014 they gave a TED talk that shows some further interesting applications – including reconstructing the sound in a room from the vibrations of the objects in it.
So, since 2014, you can even recover sound from a discarded bag of chips behind soundproof glass.
Jean-Dominique Bauby, author of “The Diving Bell and the Butterfly,” tapped out the book letter by letter by blinking an eyelid after being paralyzed by a stroke that left him virtually unable to move a muscle.
Thousands of people are reduced to similarly painstaking means of communication as a result of injuries suffered in accidents, combat, strokes, or neurodegenerative disorders such as A.L.S. – all of which render the patient unable to speak.
Scientists are now reporting that they have developed a virtual prosthetic voice, a system that decodes the brain’s vocal intentions and translates them into mostly understandable speech, with no need to move a muscle, even those in the mouth.

The new system, described on Wednesday in the journal Nature, deciphers the brain’s motor commands guiding vocal movement during speech — the tap of the tongue, the narrowing of the lips — and generates intelligible sentences that approximate a speaker’s natural cadence.
This is an astounding development with untold implications. Give it a listen below (audio starts at 0:16).
Google Duplex uses several different AI techniques in order to create a fascinating new capability: a digital assistant that can make reservations for you.
Give it a listen.
It uses a combination of AI-backed techniques: voice synthesis, voice recognition, and natural language processing.
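Conceptually, those three pieces chain together as a pipeline: recognize speech, interpret it, respond with synthesized speech. The sketch below is my own illustration with trivial stand-ins – none of these function names or rules come from Google’s system:

```python
# Illustrative assistant pipeline: speech in -> understanding -> speech out.
# The three stages mirror the techniques named above; the "models" are
# keyword stubs so the flow of data is the focus.

def recognize_speech(audio: str) -> str:
    """Voice-recognition stand-in: pretend the audio is already text."""
    return audio.lower()

def understand(text: str) -> dict:
    """NLP stand-in: extract an intent and a slot with keyword rules."""
    if "reservation" in text or "table" in text:
        return {"intent": "book_table", "party_size": 2}
    return {"intent": "unknown"}

def synthesize_speech(reply: str) -> str:
    """Voice-synthesis stand-in: a real system would emit audio."""
    return f"[spoken] {reply}"

def assistant_turn(audio: str) -> str:
    text = recognize_speech(audio)
    meaning = understand(text)
    if meaning["intent"] == "book_table":
        reply = f"I'd like a table for {meaning['party_size']}, please."
    else:
        reply = "Sorry, could you repeat that?"
    return synthesize_speech(reply)

print(assistant_turn("Hi, I need a Reservation for tonight"))
```

The hard part Duplex solves is, of course, inside those stubs – carrying on a natural back-and-forth with a human who doesn’t know they’re talking to a machine.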
This opens a brave new world in which each of us has a digital assistant that we can assign tasks to, and it takes care of them for us – including making phone calls. Such a capability could also automate frightening manipulation.
Here’s some more data about how the system works:
https://ai.googleblog.com/2018/05/duplex-ai-system-for-natural-conversation.html
As someone currently working in this space, I think Ed Sperling has some good observations about the difficulty automakers are having adjusting to the rapid change occurring in their industry.