It’s nothing… forever

Nothing, Forever is a Twitch stream with an amazing premise. It runs 24 hours a day, 365 days a year, and delivers new content every minute. Everything you see, hear, or experience (with the exception of the artwork and laugh track) is brand new, continually generated by machine learning and AI algorithms. It never repeats (except when the AI happens to generate the same content).

It was launched by Mismatch Media, a media lab focused on creating experimental forms of television shows, video games, and more, using generative and other machine learning technologies.

Give it a watch and be amazed. Sadly, it’s probably better than 50% of current TV shows.

Free AI Art Prompt Builders

If you’re not interested in buying AI prompts from a prompt marketplace, you’re still in luck. There are a number of free resources and AI prompt builder tools out there to help you along the way – or to help you out of any artist’s block you might run into.

Midjourney Prompt Generator – Either select one of the samples or provide your own text, and the generator completes it using a GPT-2 model that has been fine-tuned on the midjourney-prompts dataset, which contains 250,000 text prompts supplied to the Midjourney text-to-image service by users.

Phraser.tech is a tool for the Midjourney and DALL-E art generators that walks you through numerous questions and steps to help you create precisely tailored prompts with the best parameters.

MidJourney Prompt Helper helps you experiment with different styles, lighting, cameras, colors, and other creative elements.

Drawing Prompt Generator is a simple helper for getting rid of artist’s block. Simply gazing at a stream of unrelated objects might get the creative juices flowing.

Promptomania Builder is a powerful but very easy-to-use helper, with support for upscaling and prompt variations, to help you become a prompt master.

MidJourney Random Commands Generator is a prompt tool for generating complex outputs, created for entertainment purposes by enthusiasts.

Using a Neural Net as compression for character animation

This was published in 2018, but it’s a fascinating dual-purpose use of neural nets. Character animation has become highly complex as it has become more realistic. The problem compounds when you want characters to do things like crouch and aim at the same time, or crouch and walk across uneven terrain while looking left or right. You can imagine all the different combinations of motion that must be described and handled. All of this was taking artists massively more time to develop; even worse, it was taking up more and more storage space on disk and especially in memory.

Daniel Holden of Ubisoft wondered if he could use a neural net not only to fold the combinations they had to handle into a single network, but also to exploit the inherent ability of neural nets to compress data. It turns out he could – and he presents what he found in this excellent presentation.
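The storage win is easy to see with some back-of-the-envelope numbers. The sketch below is illustrative only (the frame counts, pose sizes, and network dimensions are assumptions, not figures from the presentation): instead of storing every animation frame, you store the weights of a small decoder network that maps a low-dimensional control signal to a full pose.

```python
import numpy as np

# Hypothetical numbers: a clip library of 10,000 frames, each storing
# 65 joints x 3 rotation values = 195 floats per frame.
n_frames, pose_dim = 10_000, 195
raw_floats = n_frames * pose_dim  # what the uncompressed library costs

# Instead, store the weights of a small decoder network that maps a
# low-dimensional control vector (phase, velocity, stance flags, etc.)
# to a full pose. The weights here are random, i.e. untrained.
control_dim, hidden = 32, 512
rng = np.random.default_rng(0)
W1 = rng.standard_normal((control_dim, hidden)) * 0.01
W2 = rng.standard_normal((hidden, pose_dim)) * 0.01

def decode_pose(control):
    """Map a control vector to a full character pose (untrained sketch)."""
    h = np.maximum(control @ W1, 0.0)  # ReLU hidden layer
    return h @ W2

pose = decode_pose(rng.standard_normal(control_dim))
net_floats = W1.size + W2.size

print(f"raw library: {raw_floats:,} floats")
print(f"network:     {net_floats:,} floats")
print(f"compression: {raw_floats / net_floats:.1f}x")
```

Even this toy decoder holds an order of magnitude fewer numbers than the raw clip library, and the gap widens as the library grows while the network stays the same size.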

AI solver reduces a 100,000-equation quantum physics problem to four equations

Physicists recently used a neural net to compress a daunting quantum problem that required 100,000 equations into one that requires as few as four equations—all without sacrificing accuracy.

The problem concerns how electrons behave as they move on a gridlike lattice. When two electrons occupy the same lattice site, they interact. This setup, known as the Hubbard model, is an idealization of several important classes of materials and enables scientists to learn how electron behavior gives rise to sought-after phases of matter, such as superconductivity, in which electrons flow through a material without resistance.

The Hubbard model is deceptively simple, however. For even a modest number of electrons the problem requires serious computing power. That’s because when electrons interact, their fates can become quantum mechanically entangled: Even once they’re far apart on different lattice sites, the two electrons can’t be treated individually, so physicists must deal with all the electrons at once rather than one at a time. With more electrons, more entanglements crop up, making the computational challenge exponentially harder.

One way of studying a quantum system is by using what’s called a renormalization group. That’s a mathematical apparatus physicists use to look at how the behavior of a system—such as the Hubbard model—changes when scientists modify properties such as temperature or look at the properties on different scales. Unfortunately, a renormalization group that keeps track of all possible couplings between electrons can contain tens of thousands, hundreds of thousands or even millions of individual equations that need to be solved. On top of that, the equations are tricky: Each represents a pair of electrons interacting.

Di Sante and his colleagues wondered if they could use a machine learning tool known as a neural network to make the renormalization group more manageable. The neural network is like a cross between a frantic switchboard operator and survival-of-the-fittest evolution. First, the machine learning program creates connections within the full-size renormalization group. The neural network then tweaks the strengths of those connections until it finds a small set of equations that generates the same solution as the original, jumbo-size renormalization group. The program’s output captured the Hubbard model’s physics even with just four equations.
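A loose analogy (not the paper’s actual method) for how a huge coupled system can collapse to a handful of effective equations: if most of the system’s structure lives in a few dominant modes, a rank-4 approximation reproduces it almost exactly. The sizes and noise level below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a large coupled system: a 1000x1000 coupling matrix
# whose structure secretly lives in just 4 dominant modes, plus a
# sprinkling of small residual couplings.
n, true_rank = 1000, 4
A = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, n))
A += 1e-3 * rng.standard_normal((n, n))  # small "noise" couplings

# Compress: keep only the top 4 singular modes -- the analogue of
# replacing ~10^5 coupled equations with a handful of effective ones.
k = 4
U, s, Vt = np.linalg.svd(A)
A_small = (U[:, :k] * s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(A - A_small) / np.linalg.norm(A)
print(f"relative error with {k} modes: {rel_err:.4f}")
```

The neural-network approach goes further than a fixed decomposition like this: it learns which effective couplings to keep by tuning connection strengths until the small system reproduces the big one’s solution.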

“It’s essentially a machine that has the power to discover hidden patterns,” Di Sante says.

The work, published in the September 23 issue of Physical Review Letters, could revolutionize how quantum scientists investigate systems containing many interacting electrons. Moreover, if scalable to other problems, the approach could potentially aid in the design of materials with sought-after properties such as superconductivity or utility for clean energy generation.

Microsoft can synthesize your voice with just a 3 second clip

Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person’s voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything—and do it in a way that attempts to preserve the speaker’s emotional tone and background environmental noise balance.

The scientists also note that since VALL-E can synthesize speech that maintains speaker identity, it may carry potential risks if misused, such as spoofing voice identification or impersonating a specific speaker.

You can find audio samples and the paper here.

It sure would have made breaking into Werner Brandes’ office (in the 1992 movie Sneakers) a lot easier than convincing your friend to record snippets of a really terrible date.

Re-cast your voice

Koe Recast comes from developer Asara Near in Texas, and it allows you to dramatically change your voice into a wide variety of styles – even the opposite gender. The website demo lets you convert a 20-second clip; it’s a preview of their commercial product, currently undergoing private alpha testing.

I guess the old TV trope of concealing your voice with a handkerchief over the telephone is long gone.

Introduction to writing stable diffusion prompts

HowToGeek has a wonderful little introduction on how to start writing your first Stable Diffusion prompts.

Update 02-2023: Here are 10 really amazing resources to help you generate really great prompts and art.

They start with some simple AI image generation and move on to more and more complex examples, including a brief introduction to some key parameters, pulling in broader image sources, and then generating various famous artistic styles.
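The pattern the tutorial builds toward can be sketched as a simple recipe: a subject, then style keywords, then quality modifiers, joined into one prompt string. The specific keywords below are illustrative assumptions, not examples taken from the HowToGeek article.

```python
# Typical Stable Diffusion prompt anatomy: subject first, then style,
# then quality/lighting modifiers. Order matters -- earlier terms
# generally carry more weight.
subject = "a lighthouse on a rocky coast at sunset"
style = "oil painting, in the style of Claude Monet"
modifiers = "highly detailed, dramatic lighting, 4k"

prompt = ", ".join([subject, style, modifiers])
print(prompt)
```

Keeping the three pieces separate like this makes it easy to swap in a new artist or modifier set while holding the subject fixed, which is exactly how the style-study resources below are meant to be used.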

They finish out the intro with some links to help you learn more:

  • Lexica — a repository of images generated using Stable Diffusion and the corresponding prompt. Searchable by keyword.
  • Stable Diffusion Artist Style Studies — A non-exhaustive list of artists Stable Diffusion might recognize, as well as general descriptions of their artistic style. There is a ranking system to describe how well Stable Diffusion responds to the artist’s name as a part of a prompt.
  • Stable Diffusion Modifier Studies — a list of modifiers that can be used with Stable Diffusion, just like the artist page.
  • The AI Art Modifiers List — A photo gallery showcasing some of the strongest modifiers you can use in your prompts, and what they do. They’re sorted by modifier type.
  • Top 500 Artists Represented in Stable Diffusion — We know exactly what images were included in the Stable Diffusion training set, so it is possible to tell which artists contributed the most to training the AI. Generally speaking, the more strongly represented an artist was in the training data, the better Stable Diffusion will respond to their name as a keyword.
  • The Stable Diffusion Subreddit — The Stable Diffusion subreddit has a constant flow of new prompts and fun discoveries. If you’re looking for inspiration or insight, you can’t go wrong.

AI enhanced snowball fight – from 1897

https://www.youtube.com/watch?v=AjToVdbPxbw&ab_channel=OldenDays

Remember old-school movies that were damaged, in black and white, with everyone running around at 2x speed? With AI processing, many of those problems can be fixed. The Olden Days YouTube channel has a number of great restored videos like this one.

It’s amazing to see that, once fixed, this looks just like a snowball fight you might see today – proof that we aren’t as different from the people of our past as we’d like to think.

These restoration techniques have come a long way in just a few years.
