While other artists are suing AI companies, Peter Gabriel is embracing the technology. “And the court will rise, while the pillars all fall” comes from his new album i/o. The video was created by Junie Lau using various AI tools, including Stable Diffusion, ChatGPT, MidJourney, and DALL·E 2.
These are programs that autogenerate EVERYTHING you see and hear. They generate the dialogue the characters speak, the actors’ voices, the actors’ movements, the visuals, and the scenes. Everything is automatically generated by AI.
One of the more interesting and newer ones I’ve seen is Interactive AI Generated Star Trek. Created by just four people, it’s some of the highest-quality AI-generated episodic content I’ve seen so far. It runs 24/7. There’s a more or less consistent story based on user submissions, moving (but fixed-location) camera angles, autogenerated characters with movement, auto-generated voice audio, and good scene transitions between some fixed scene locations. Not only that, it’s interactive, and you can help direct the action via chat commands like:
!topic [text]: Scenes on the bridge.
!awaymission [text]: Scenes on the desert planet.
!transmission [text]: Transmission scenes with Winglons.
!messhall [text]: Subs only! Scenes in the USS Archimedes Mess Hall
!iceplanet [text]: Subs only! Scenes on the tundra planet
Is this the future of TV shows? The end of actors?
Need some royalty-free music or sound effects for your game or a video you made? Do you need a drum solo, ambient music, or another audio track to set the mood? TIME’s Best Inventions of 2023 list highlighted the Stable Audio AI music creator as just such a tool for generating music for free.
As an example of how good Stable Audio is, enter “Post-Rock, Guitars, Drum Kit, Bass, Strings, Euphoric, Up-Lifting, Moody, Flowing, Raw, Epic, Sentimental, 125 BPM” for a 95-second track – and the site will create audio like the results in this YouTube video. (It just generates the music, not any imagery)
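For anyone who wants to script prompts like the one above rather than paste them into the web UI, here is a minimal sketch of packaging a text prompt and duration as a request body. The field names and the helper itself are assumptions for illustration; the text above only describes Stable Audio’s web interface, not a public API.

```python
import json

def build_audio_request(prompt: str, seconds: int) -> str:
    """Hypothetical helper: bundle a text-to-audio prompt and track
    length into a JSON request body. Field names are assumptions."""
    if not 1 <= seconds <= 95:  # 95 s matches the track length mentioned above
        raise ValueError("duration out of range")
    return json.dumps({"prompt": prompt, "duration_seconds": seconds})

body = build_audio_request(
    "Post-Rock, Guitars, Drum Kit, Bass, Strings, Euphoric, "
    "Up-Lifting, Moody, Flowing, Raw, Epic, Sentimental, 125 BPM",
    95,
)
```

The same prompt-plus-duration pair is all the site asks for, so a wrapper like this is mostly a convenience for batching many style-tag combinations.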
Hidreley Diao let curiosity get the best of him and asked AI to render many animated household names in flesh and blood. He used Photoshop, FaceApp, Gradient, and Remini to create somewhat realistic images of well-known cartoon characters.
Animator Nikita Diakur thought it would be safer to have a digital stand-in do a backflip after he failed to land one in real life. Maximilian Schneider helped him use machine-learning tech to create a photorealistic avatar of himself, build a voice simulator trained for 15 minutes on his actual voice, wire the voice to the mesh on his face, and apply a few other techniques from the paper DeepMimic. He then tried to train the avatar.
It’s an interesting way to tell a story – especially when he puts the avatar into his tiny apartment and proceeds to virtually sustain what would be numerous serious head traumas, bone-breaking collisions, and likely tons of broken furniture.
Traditionally, AI narrators and voices have had limitations. Early generated voices were barely good enough for simple one-phrase statements. For longer text, they tended to be flat and monotone with bad pacing, to the point of being painful to listen to for any extended period of time. While this is still somewhat the case, this version is much improved.
I personally love combining my workouts/hikes/drives with audiobooks – and having a new free source of good material is great.
The book selection is obviously limited to works in the public domain, but that includes lots of classics – such as some of my favorites: Edwardian and Victorian ghost stories.
The future of filmmaking is here – and it’s done by a single person using AI tools. I’ve already posted about AI videos before, including a number of trailer-like ones on the Curious Refuge channel and these humorous fake commercials from Turbodong2000.
AutoDMP is short for Automated DREAMPlace-based Macro Placement. It is designed to plug into an Electronic Design Automation (EDA) system used by chip designers, to accelerate and optimize the time-consuming process of finding optimal placements for the building blocks of processors. In one of Nvidia’s examples of AutoDMP at work, the tool applied its AI to the problem of determining an optimal layout of 256 RISC-V cores with 2.7 million standard cells and 320 memory macros. AutoDMP took 3.5 hours to come up with an optimal layout on a single Nvidia DGX Station A100.
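To give a feel for the optimization problem AutoDMP is solving, here is a deliberately tiny sketch. It is not AutoDMP’s actual algorithm (which builds on GPU-accelerated analytical placement); it just illustrates the core idea of searching for cell positions that minimize half-perimeter wirelength (HPWL), a standard placement cost metric in EDA. All names and the random-search strategy are assumptions for illustration.

```python
import random

def hpwl(placement, nets):
    """Half-perimeter wirelength: for each net, the bounding box
    half-perimeter of its connected cells, summed over all nets."""
    total = 0
    for net in nets:
        xs = [placement[c][0] for c in net]
        ys = [placement[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def random_search(cells, nets, grid=10, iters=2000, seed=0):
    """Naive baseline: keep the cheapest random placement seen.
    Real placers use far smarter (analytical/ML-guided) search."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        placement = {c: (rng.randrange(grid), rng.randrange(grid)) for c in cells}
        cost = hpwl(placement, nets)
        if cost < best_cost:
            best, best_cost = placement, cost
    return best, best_cost

# Toy netlist: two core/memory pairs plus a core-to-core link.
cells = ["cpu0", "cpu1", "mem0", "mem1"]
nets = [["cpu0", "mem0"], ["cpu1", "mem1"], ["cpu0", "cpu1"]]
placement, cost = random_search(cells, nets)
```

Scaling this from four cells to millions of standard cells and hundreds of macros is what makes the 3.5-hour single-machine result notable.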
Initial metrics show it does an amazing job – in a fraction of the time. Definitely worth the read.
AutoDMP is open source, with the code published on GitHub. Below is a link to an article about Cadence’s Cerebrus AI place-and-route solution.