Using a Neural Net as compression for character animation

This was published in 2018, but it’s a fascinating dual-purpose use of neural nets. Character animation has a rapidly growing problem: as it becomes more realistic, it also becomes far more complex. The problem compounds when characters need to crouch and aim at the same time, or crouch and walk across uneven terrain while looking left or right. You can imagine all the combinations of motion that must be described and handled. Authoring all of this was taking artists massively more time; worse, it was consuming more and more storage space on disk and, especially, in memory.

Daniel Holden of Ubisoft wondered if he could use a neural net not only to collapse all those motion combinations into a single network but also to exploit the inherent ability of neural nets to compress data. It turns out he could, and he presents what he found in this excellent presentation.
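To make the storage argument concrete, here’s a minimal sketch of the compression idea. This is not Holden’s actual network (his system uses a nonlinear neural net trained on motion-capture data); it’s a linear stand-in built with SVD, using made-up pose dimensions, that shows how storing small latent codes plus a decoder beats storing raw frames:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for animation data: 200 frames, each a 60-value pose
# (e.g. 20 joints x 3 rotation channels). Real data would come from
# motion-capture clips; these numbers are invented for illustration.
frames, pose_dim, code_dim = 200, 60, 8
poses = rng.normal(size=(frames, pose_dim))

# A linear "autoencoder" via SVD: project each pose onto a small
# latent code, then reconstruct. A real system would learn a
# nonlinear encoder/decoder, but the storage win is the same idea:
# ship codes + decoder instead of raw frames.
mean = poses.mean(axis=0)
centered = poses - mean
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
decoder = Vt[:code_dim]              # (8, 60) decoder matrix
codes = centered @ decoder.T         # (200, 8) compressed frames
reconstructed = codes @ decoder + mean

raw_floats = poses.size
stored_floats = codes.size + decoder.size + mean.size
print(f"compression: {raw_floats} -> {stored_floats} floats")
```

The decoder is shared across every frame, so the per-frame cost drops from 60 floats to 8, which is exactly the kind of memory saving that motivated the work.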

AI solver reduces a 100,000-equation quantum physics problem to four equations

Physicists recently used a neural net to compress a daunting quantum problem that required 100,000 equations into a solution that requires as few as four equations—all without sacrificing accuracy.

The problem concerns how electrons behave as they move on a gridlike lattice. When two electrons occupy the same lattice site, they interact. This setup, known as the Hubbard model, is an idealization of several important classes of materials and enables scientists to learn how electron behavior gives rise to sought-after phases of matter, such as superconductivity, in which electrons flow through a material without resistance.

The Hubbard model is deceptively simple, however. For even a modest number of electrons the problem requires serious computing power. That’s because when electrons interact, their fates can become quantum mechanically entangled: Even once they’re far apart on different lattice sites, the two electrons can’t be treated individually, so physicists must deal with all the electrons at once rather than one at a time. With more electrons, more entanglements crop up, making the computational challenge exponentially harder.

One way of studying a quantum system is by using what’s called a renormalization group. That’s a mathematical apparatus physicists use to look at how the behavior of a system—such as the Hubbard model—changes when scientists modify properties such as temperature or look at the properties on different scales. Unfortunately, a renormalization group that keeps track of all possible couplings between electrons can contain tens of thousands, hundreds of thousands or even millions of individual equations that need to be solved. On top of that, the equations are tricky: Each represents a pair of electrons interacting.

Di Sante and his colleagues wondered if they could use a machine learning tool known as a neural network to make the renormalization group more manageable. The neural network is like a cross between a frantic switchboard operator and survival-of-the-fittest evolution. First, the machine learning program creates connections within the full-size renormalization group. The neural network then tweaks the strengths of those connections until it finds a small set of equations that generates the same solution as the original, jumbo-size renormalization group. The program’s output captured the Hubbard model’s physics even with just four equations.
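Here’s a heavily simplified toy illustration of that compression step. This is not the authors’ method (they train a neural network to learn a reduced renormalization-group flow); instead it fabricates a “large” system of 1,000 coupled linear equations whose solution secretly lives in a 4-dimensional subspace, then solves an effective 4-equation system and checks that nothing is lost:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented sizes for illustration: 1,000 coupled equations reduced
# to 4 effective ones.
n_big, n_small = 1000, 4

# Build a large symmetric system with hidden low-rank structure,
# mimicking the redundancy the neural network is able to discover.
basis = rng.normal(size=(n_big, n_small))
A_big = basis @ basis.T + np.eye(n_big)
b = basis @ rng.normal(size=n_small)
x_full = np.linalg.solve(A_big, b)   # expensive full solve

# "Discover" the 4 important directions (here via SVD; the paper
# uses a neural network for this) and solve the tiny effective
# system instead.
U, _, _ = np.linalg.svd(basis, full_matrices=False)
A_small = U.T @ A_big @ U            # 4x4 effective couplings
x_small = U @ np.linalg.solve(A_small, U.T @ b)

err = np.linalg.norm(x_full - x_small) / np.linalg.norm(x_full)
print(f"relative error of 4-equation solution: {err:.2e}")
```

When the redundancy really is there, the reduced solve reproduces the full solution to machine precision—the hard part, and the neural network’s job in the real work, is finding those few effective equations in the first place.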

“It’s essentially a machine that has the power to discover hidden patterns,” Di Sante says.

The work, published in the September 23 issue of Physical Review Letters, could revolutionize how quantum scientists investigate systems containing many interacting electrons. Moreover, if scalable to other problems, the approach could potentially aid in the design of materials with sought-after properties such as superconductivity or utility for clean energy generation.

Microsoft can synthesize your voice with just a 3 second clip

Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person’s voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker’s emotional tone and background environmental noise balance.

The scientists also note that since VALL-E can synthesize speech that maintains speaker identity, it may carry potential risks if misused, such as spoofing voice identification or impersonating a specific speaker.

You can find audio samples and the paper here.

It sure would have made breaking into Werner Brandes’ office (in the 1992 movie Sneakers) a lot easier than convincing your friend to record snippets of a really terrible date.

Re-cast your voice

Koe Recast comes from developer Asara Near in Texas, and it allows you to dramatically change your voice into a wide variety of styles—even opposite genders. Their website demo lets you convert a 20-second clip; it’s a preview of their commercial product, currently undergoing private alpha testing.

I guess the days of the old TV trope of concealing your voice with a handkerchief over the telephone are long gone.

AI enhanced snowball fight – from 1897

Remember old-school movies that were damaged, in black and white, with everyone running around at 2x speed? With AI processing, many of those problems can be fixed. The Olden Days YouTube channel has a number of great restored videos like this.

Amazing to see that, once fixed, this looks just like a snowball fight one might see today—proving that we aren’t as different from the people of our past as we’d like to think.

These restoration techniques have come a long way in just a few years.

AI based digital re-aging

Disney published this paper about using AI to digitally age and de-age actors in a fraction of the time it usually takes for normal frame-by-frame manual aging techniques used today.

FRAN (which stands for face re-aging network) is a neural network that was trained using a large database containing pairs of randomly generated synthetic faces at varying ages, which bypasses the need to otherwise find thousands of images of real people at different (documented) ages that depict the same facial expression, pose, lighting, and background. Using synthetically generated training data is a method that’s been utilized for things like training self-driving cars to handle situations that aren’t easily reproducible.

The age changes are then merged onto the face. This approach appears to fix many of the issues common to this kind of work: facial identity loss, poor resolution, and unstable results across subsequent video frames. It still struggles with greying hair and with aging very young actors, but it produces results better than techniques from just a few years ago (not that the bar was very hard to beat).

AI architecture

Architects and designers are increasingly experimenting with AI generated art and designs. Michael Arellanes II of MA2 Studio created a series called ‘Synthetic Futures’ in which he experiments primarily with Midjourney in an attempt to create a consistent and controlled aesthetic for architecture. 

I personally think wide-scale use of AI-based art generation to continue a theme, or even to explore and create new ideas and directions, is a foregone conclusion at this point. I’m continually astounded by the results these algorithms generate—results that will only get better, and quickly.

Arellanes seems to agree when he says: ‘The current open platforms for AI imagery work from word descriptions alone, as opposed to architectural 3D modeling and/or encoding surface parameters. This leaves the operator with flat images or AI impressions based on descriptions with extraordinary results of the unexpected. The unexpected results are the most exciting aspect of this new paradigm. As designers test the limits of AI’s imagination and complex image compositions, new possibilities emerge that have never been seen before.’

Where Zillow’s AI went wrong

What went wrong with Zillow’s $500 million AI-based home purchasing program? It was a host of factors, but it highlights a unique problem in AI.

It turns out you can’t just set up an AI model and let it crank for years. You need to pay attention to something called drift: the real-world data feeding the model gradually stops looking like the data it was trained on. You can tell whether your model is drifting by monitoring its accuracy, its outputs, and its inputs on an ongoing basis, and retraining or re-calibrating when they shift.
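As a minimal sketch of input-drift monitoring—not Zillow’s actual system, and with invented price distributions—one common approach is to compare the distribution of a live feature against its training-time distribution with a two-sample Kolmogorov–Smirnov statistic and alert when the gap crosses a threshold:

```python
import numpy as np

rng = np.random.default_rng(42)

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of samples a and b."""
    data = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), data, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), data, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

# Hypothetical feature: sale prices seen at training time vs. prices
# arriving in production after the market shifted upward.
train_prices = rng.normal(300_000, 50_000, size=5_000)
live_prices = rng.normal(340_000, 60_000, size=5_000)

THRESHOLD = 0.1  # alert threshold; tuned per feature in practice
drift = ks_statistic(train_prices, live_prices)
print(f"KS statistic: {drift:.3f}",
      "-> drift alert, retrain!" if drift > THRESHOLD else "-> ok")
```

Run per feature on a schedule, a check like this catches the slow distribution shift long before the model’s predictions quietly go stale.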