One of the hotter, more interesting uses for VR and AR has turned out to be training and repair.
BMW announced it has added RealWear HMT-1 smartglasses to the toolkit its technicians use, aiming to reduce the time a car spends in the shop and make its service centers more efficient. The technology will be fully implemented at all participating dealerships by the end of June 2019.
Ubimax’s augmented reality technology lets technicians access service manuals on the spot, and voice recognition lets them open files hands-free. For example, a technician changing a water pump can pull up a diagram on the smartglasses showing how to install the part.
The technology also puts mechanics in contact with engineers when they need to diagnose or solve a more enigmatic problem. Using the smartglasses, a mechanic can work remotely with a member of the engineering team via a hands-free video link. The engineer sees what the mechanic sees, and can send an image, a diagram, or voice instructions directly to the smartglasses. https://www.digitaltrends.com/cars/bmw-mechanics-using-smart-glasses-to-fix-cars-faster/
There are amazing new designs coming out of China. Some of them are happening in retail spaces – especially in bookstores. Shanghai Zhongshu Industrial Ltd. is one such bookseller, which has been working with the Shanghai-based design firm X+living to create amazing bookstores.
Most notable to me was this 1,300-square-meter (approx. 14,000 sq. ft.) store on four levels, located in the Zodi Plaza (completed in 2016) in Chongqing. The clever use of mirrors and staircases makes it a truly M.C. Escher-esque experience, and with its dark brown hues and floor-to-ceiling shelves it takes its inspiration from old libraries.
For other stores, the designers also took cues from the shapes of Chinese lanterns and lamp shades:
What makes the areas of the store even more fantastical is the use of mirrors and reflective surfaces, which make the space seem endless and turn things upside down.
I’m continually amazed by the innovative work coming out of China. They appear to be having a real design renaissance. I for one will be keeping an eye out for work coming from there.
Running out of memory while running Windows isn’t exactly a new phenomenon, but since Windows 8, things have mostly gotten better in terms of memory usage and performance.
I recently ran into an issue where I’d close down all of my apps but leave my system on overnight. When I’d jiggle the mouse in the morning, I would be greeted with horrendously sluggish drive swapping and 100% memory utilization – on a system with 32 gigs of memory. Even worse, I opened up my task manager and shut down everything possible, but nothing indicated where 25+ gigs of memory went. Bad job, Microsoft – shouldn’t your performance tools be able to tell me what is using up 90% of my system memory?
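If you hit something like this, one quick sanity check is to compare what the process list accounts for against total memory in use. Here’s a minimal sketch of that idea using Python and the third-party psutil package (my own illustration, not a Microsoft tool) – a big unattributed gap usually means the memory is being held by the kernel, drivers, or caches that a per-process view like Task Manager’s won’t pin on anything:

```python
# Sketch: compare the memory attributed to processes against total usage.
# Assumes Python 3 with the third-party psutil package (pip install psutil).
import psutil

vm = psutil.virtual_memory()
process_sum = 0
for proc in psutil.process_iter(attrs=["memory_info"]):
    mem = proc.info["memory_info"]
    if mem is not None:          # None when access to the process is denied
        process_sum += mem.rss   # note: RSS double-counts shared pages

in_use = vm.total - vm.available
print(f"Total in use:       {in_use / 2**30:5.1f} GiB")
print(f"Sum of process RSS: {process_sum / 2**30:5.1f} GiB")
print(f"Unattributed:       {(in_use - process_sum) / 2**30:5.1f} GiB")
```

For digging into the kernel side (nonpaged pool, driver leaks, and the like), Sysinternals’ RAMMap is the better tool.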
A new study by ETH Zurich’s Institute for Transport Planning and Systems seeks to find out whether the urban-transport cost/benefit equation changes if autonomous taxi services are introduced. Using an agent-based simulation of the city of Zurich – one in which individual agents make decisions based on time, cost, and so on, rather than following overarching rules – they tested a number of scenarios (service models, ownership models, etc.), examined costs, and gauged how disruptive the shift would be. This kind of simulation has previously produced very accurate studies.
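To give a flavor of what “agent-based” means here (my own toy illustration, not the ETH model), each agent independently picks the travel mode with the lowest generalized cost, and the overall mode shares emerge from thousands of those individual choices rather than from a top-down rule:

```python
# Toy agent-based mode choice (illustrative only – not the ETH Zurich model).
# Each agent picks the mode with the lowest generalized cost:
#   cost = fare + value_of_time * travel_time
import random

VALUE_OF_TIME = 20.0  # hypothetical: currency units per hour of travel time

def generalized_cost(fare, minutes):
    return fare + VALUE_OF_TIME * (minutes / 60.0)

def choose_mode(trip_km):
    # Fares and speeds below are made-up numbers, purely for illustration.
    options = {
        "transit":   generalized_cost(fare=3.0, minutes=trip_km * 4.0),
        "robo-taxi": generalized_cost(fare=1.0 + 0.8 * trip_km, minutes=trip_km * 2.5),
        "walk":      generalized_cost(fare=0.0, minutes=trip_km * 12.0),
    }
    return min(options, key=options.get)

# Mode shares emerge from many independent agent decisions.
random.seed(42)
trips = [random.uniform(0.5, 15.0) for _ in range(10_000)]
shares = {}
for km in trips:
    mode = choose_mode(km)
    shares[mode] = shares.get(mode, 0) + 1
print({mode: f"{count / len(trips):.0%}" for mode, count in shares.items()})
```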
Turns out the impact is not as large as people expect, and to be viable the fleet actually has to stay relatively small when paired with an existing, good-quality mass transit system. This doesn’t shock me for European cities with great public transit, but I wonder how it would play out in American cities without that infrastructure. My gut tells me the results would likely mirror the Uber/Lyft studies.
Developing an iOS app used to require buying a MacBook or Mac mini. With VMWare, that’s no longer necessary: I used VMWare Workstation 15.0 Pro and was able to develop an app and debug it on real iPad/iPhone hardware. Setup instructions are here: https://techsviewer.com/install-macos-mojave-vmware-windows/
Released in 1973, SEGA’s Moto Champ machine had no screen, buttons, or joystick. The electro-mechanical racing game used a group of magnetically attached motorcycles that rolled over a “road” generated by shining light through a spinning cylinder onto the playfield.
Here’s another playthrough with more narration:
I love the clever engineering of older machines because of all the constraints. Games required you to be more imaginative – which I think is really part of the fun of playing them.
Which is why I’ve come to love some of the ideas behind hackathons: limits on time and resources can help unlock amazing creativity.
Videographer Guy Jones edits century-old films – the ones that usually run too fast and jerky. Jones slowed the footage back down to its original speed and added ambient sound to match the activity seen on the city’s streets. This particular film print was created by the Swedish company Svenska Biografteatern during a trip to America, and remains in mint condition.
Deepfakes have been used to create fake celebrity videos, revenge videos, remixes of movies in which the main actor is always Nicolas Cage, fake news, and malicious hoaxes. Apps to help you do this (FakeApp, etc.) are available today. Here is one of the better recent examples:
Until recently, though, generating one of these clips required a large collection of photos, videos, and sound recordings to train the neural net. Samsung, however, has devised a method to train a model – and get fairly good results – from an extremely limited dataset, one as small as a single photo:
Obviously this has profound implications. While deepfakes have mostly been used for comedy so far, in just a few short years fake video has gone from an obviously comic treatment to pretty convincing. In a few more years, spotting them with the naked eye might not even be possible.
It’s not unlikely that a last-minute video clip could be released on election day to tarnish a competitor in a swing state. While it might be spotted relatively quickly, it could spread and gain enough traction in 8 hours to swing a state – and even a national election.
Back in the ‘early’ days of video processing (2012), before we had AI algorithms, we often just pursued straight signal-processing techniques to tease out interesting information.
Enter MIT’s work on Eulerian Video Magnification. The underlying concept is to take small per-pixel differences from frame to frame, then magnify the changes to reveal things not ordinarily visible to the human eye.
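As a rough sketch of the core idea (my simplification – not MIT’s full method, which also decomposes each frame into a spatial pyramid before filtering): bandpass every pixel’s intensity over time, amplify that band, and add it back onto the original frames:

```python
# Simplified Eulerian-style magnification sketch in NumPy.
import numpy as np

def magnify(frames, fps, low_hz, high_hz, alpha=50.0):
    """frames: float array of shape (T, H, W), values in [0, 1]."""
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    keep = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum = np.fft.rfft(frames, axis=0)
    spectrum[~keep] = 0.0                    # zero everything outside the band
    band = np.fft.irfft(spectrum, n=frames.shape[0], axis=0)
    return np.clip(frames + alpha * band, 0.0, 1.0)

# Demo: a tiny ~1 Hz brightness flicker (roughly a resting pulse) becomes obvious.
t = np.arange(90) / 30.0                              # 3 seconds at 30 fps
signal = 0.5 + 0.001 * np.sin(2 * np.pi * 1.0 * t)    # invisible to the eye
frames = np.tile(signal[:, None, None], (1, 8, 8))    # flat 8x8 "video"
out = magnify(frames, fps=30, low_hz=0.8, high_hz=1.2, alpha=50.0)
print(f"amplitude before: {np.ptp(frames):.4f}, after: {np.ptp(out):.4f}")
```

The real pipeline amplifies different spatial-frequency bands by different amounts, which keeps the magnified signal from being swamped by noise.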
One of the most powerful effects is the ability to accurately detect someone’s pulse, or the vibration of objects caused by sound, with just normal video footage. Check it out in their original demonstration video below:
In 2014 they gave a TED talk showing some further interesting applications – including recovering the sound in a room from the vibrations of the objects in it.
So, since 2014, you can even recover sound from a discarded bag of chips behind soundproof glass.