
Learning Quaternions

3blue1brown makes lots of good videos on mathematics. One of them explains how to visualize and understand quaternions.

Quaternions are a higher-dimensional number system that can be used to describe 2D and 3D rotations. How they work, however, is often much harder to understand than simple matrix rotations.

The video on the subject is very good, but it required me to stop a lot and spend time thinking. These are complex concepts, and they are even harder to visualize or hold in your mind.

What’s nice is that there is a written page that goes over these concepts as well at https://eater.net/quaternions. I found it much easier to digest than a fast-running video.

Also, if you want to play with the visualizations in real time, they have a super-cool tool that lets you explore quaternions in 2D, 3D, and 4D:

https://eater.net/quaternions/video/rotation

nVidia GPUs top the Stable Diffusion performance charts

Tom’s Hardware did a great benchmarking test of which GPUs perform best on Stable Diffusion.

They tried a number of different combinations and experiments, such as changing the sampling algorithms (though those didn’t make much difference in performance), output size, and so on. I wish, however, that they had discussed and compared the memory sizes of these cards more clearly. Stable Diffusion is a memory hog, and having more memory definitely helps. They also didn’t check any of the ‘optimized’ models that let you run Stable Diffusion on as little as 4GB of VRAM.

There were some fun anomalies – like the RTX 2080 Ti often outperforming the RTX 3080 Ti.

AMD and Intel cards seem to be leaving a lot of performance on the table; their hardware should be able to do better than it currently does. Arc GPUs’ matrix cores should provide performance similar to the RTX 3060 Ti and RX 7900 XTX, give or take, with the A380 down around the RX 6800. In practice, Arc GPUs are nowhere near those marks. This doesn’t shock me personally, since nVidia has been much more invested in, and at the forefront of, developing and optimizing AI libraries.

My heads are gone!

Are you losing the heads of the images you’re generating in Stable Diffusion?

Try adding these keywords to your prompt:

  • “A view of”
  • “A scene of”
  • “Viewed from a distance”
  • “Standing on a …”
  • “longshot”, “full shot”, “wideshot”, “extreme wide shot”, “full body”
  • start the prompt with “Head, face, eyes”
  • Try adjusting the aspect ratio of the image to be taller instead of wider. Be careful not to go too tall (or too wide) or you’ll get the double-head effect or start generating combinations of two people.
  • If the source material was scanned in a taller aspect ratio, try adjusting the x side of your ratio
  • Use img2img on a crop that includes part of the chest to make it match the rest of the drawing
  • Cinematography terms tend to work well. In order of close to far: Extreme close-up, close-up, medium close-up, medium shot, medium full shot, full shot, long shot, extreme long shot.

Stable diffusion high quality prompt thought process

Content warning: Some of the links have some moderately NSFW pictures. There is no outright nudity, but it does deal with generating rather busty images. This article should be fine, but be aware following the links.

While this guide is translated from a Japanese source and uses the Waifu/Danbooru model to generate more anime-looking images, it works really well for generating ultra-realistic Victorian pictures using Stable Diffusion’s standard 1.5 model. Here are some I made using his prompt after just 30 minutes of experimenting.

Fair warning, the original author is trying to generate more…busty women that look better as anime characters under the Waifu model. I won’t comment on his original purpose, but I thought this was an interesting description of how a ‘prompt engineer’ moved from an idea to generating a stable diffusion prompt.

First, he started with a good description of what he wanted:

I want a VICTORIAN GIRL in a style of OIL PAINTING
Eye and Face are important in art so she must have PERFECT FACE, SEXY FACE and her eye have DETAILED PUPILS
I want she to have LARGE BREAST, TONED ABS and THICK THIGH.
She must look FEMININE doing EVOCATIVE POSE, SMIRK and FULL BODY wearing NIGHT GOWN
The output must be INTRICATE, HIGH DETAIL, SHARP
And in the style of {I’m not give out the artist names to avoid trouble. Apologize.}

This led him to generate the following prompt. Note his use of parentheses () to add emphasis, and terms inside square brackets [] to reduce emphasis.

Prompt :
VICTORIAN GIRL,FEMININE,((PERFECT FACE)),((SEXY FACE)),((DETAILED PUPILS)).(ARTIST),ARTIST,ARTIST,(ARTIST). OIL PAINTING. (((LARGE BREAST)),((TONED ABS)),(THICK THIGH).EVOCATIVE POSE, SMIRK,LOOK AT VIEWER, ((BLOUSE)).(INTRICATE),(HIGH DETAIL),SHARP

Unfortunately, you don’t need to experiment for long to realize Stable Diffusion needs a lot of help with anatomy. It often generates nightmare-fuel images that have multiple heads, messed-up arms, hands with too many fingers, eyes with terrifying pupils (or no pupils), too many limbs – well, you get the idea. So you need to make sure those things don’t show up by banning them via the negative prompt (again, not commenting on the original purpose):

Negative Prompt :
((nipple)), ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), (((tranny))), (((trans))), (((transsexual))), (hermaphrodite), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))). (((more than 2 nipples))). [[[adult]]], out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))

Finally, he used these Stable Diffusion settings. Note that you want to keep the aspect ratio in a portrait-like format (768 tall x 512 wide). Going taller can result in multiple heads; going wider can result in more than one person in the scene.

Restore Face: ON
Steps: 42
Sampler: DDIM
CFG scale: 10
Height: 768
Width: 512

Expanding and enhancing Stable Diffusion with specialized models

Now that you have Stable Diffusion 1.5 installed on your local system and have learned how to make cool generative prompts, it might be time to take the next step and try different latent models.

There is more than one model out there for Stable Diffusion, and different models can generate vastly different images.

Check out this article to learn how to install and use some of the popular models available for Stable Diffusion:

  • F222 – People found it useful in generating beautiful female portraits with correct body-part relations. It’s quite good at generating aesthetically pleasing clothing.
  • Anything V3 – a special-purpose model trained to produce high-quality anime-style images. You can use danbooru tags (like 1girl, white hair) in the text prompt.
  • Open Journey – a model fine-tuned with images generated by Midjourney v4.
  • DreamShaper – a model fine-tuned for a portrait illustration style that sits between photorealistic and computer graphics
  • Waifu-diffusion – Japanese anime style
  • Arcane Diffusion – the style of the TV show Arcane
  • Robo Diffusion – an interesting robot-style model that will turn your subject into a robot
  • Mo-di-diffusion – generates Pixar-like style images
  • Inkpunk Diffusion – generates images in a unique illustration style

Better stable diffusion and AI generated art prompts

Now that you have stable diffusion on your system, how do you start taking advantage of it?

One way is to try some sample prompts to start with. Techspot has some good ones (halfway through the article) to whet your appetite.

You can get inspiration by looking at good examples on free public prompt marketplaces.

Then you might want to learn how to fix some common problems.

When you’re really ready to dive in, this article from Metaverse gives you a list of excellent getting started guides to help get you from beginner to proficient in generating your own awesome art.

The key to it all is learning the syntax, parameters, and art of crafting AI prompts. It’s as much art as it is science, and complex enough that there is everything from beginner examples, free guides, and helper tools all the way to paid marketplaces.

Learning has gotten a lot better in the last 6 months, since people first started figuring out how to write AI image prompts last year.

Installing Stable Diffusion 1.5

To install Stable Diffusion 1.5 (released Oct 20, 2022) locally, I found this video really excellent, except for a few points:

  1. You MUST use Python 3.10.6 (I initially used 3.9.7 as the video recommended, and it didn’t work). The latest version (as of Feb 2023) is Python 3.11.1, which Stable Diffusion does NOT seem to like and won’t run with.

You might also want to read through this older stable diffusion 1.4 install guide, but he uses model checkpoints which haven’t been updated since version 1.4.

Gotchas and Fixes:

  • If you have an incompatible version of Python installed when you run webui-user.bat for the first time, Stable Diffusion will set itself up to point at that bad Python version’s directory. Even if you uninstall it and install the correct Python version, Stable Diffusion will still look at the wrong one. You can fiddle with the various setup files, but it’s faster to just blow away the pulled git source at the top level and re-pull it to ensure you don’t have cruft lying around.

Stable diffusion 2.0 was…well…

Stable Diffusion 2.0 seems to have been a step backwards in capabilities and quality. Many people went back to v1.5 for their business.

The difficulty in 2.0 was caused in part by:

  1. Using a new language model that is trained from scratch
  2. The training dataset was heavily censored with a NSFW filter

The second part would have been fine, but the filter was quite aggressive and removed a substantial amount of good-quality data. Version 2.1 promised to bring the good data back.

Installing Stable Diffusion 2.1

If you’re interested in trying Stable Diffusion 2.1, use this tutorial to install and use 2.1 models in the AUTOMATIC1111 GUI, so you can judge it for yourself.

You might also try this tutorial by TingTing.

Stable diffusion in other languages

Stable Diffusion was developed by CompVis, Stability AI, and LAION. It mainly uses the English subset LAION2B-en of the LAION-5B dataset for its training data and, as a result, requires English text prompts to produce images.

This means that the tagging and correlating of images and text is based on English-tagged data sets, which naturally tend to come from English-speaking sources and regions. Users who speak other languages must first translate their native language to English, which often loses nuance or even the core meaning. On top of that, it also means the latent-model images Stable Diffusion can use are usually limited to English-speaking-region sources.

For example, one of the more common Japanese terms reinterpreted from the English word “businessman” is “salaryman”, which we most often imagine as a man wearing a suit. The default model returns distinctly western-looking results, which might not be very useful if you’re trying to generate images for a Japanese audience.

rinna Co., Ltd. has developed a Japanese-specific text-to-image model named “Japanese Stable Diffusion”. It accepts native Japanese text prompts and generates images that reflect the naming and tagged pictures of the Japanese-speaking world, which may be difficult to express through translation and whose imagery may simply not be present in the western world. Their new text-to-image model was trained on source material that comes directly from Japanese culture, identity, and unique expressions, including slang.

They did this using a two-step approach that is instructive about how Stable Diffusion works.

First, they left the latent diffusion model alone and replaced the English text encoder with a Japanese-specific text encoder. This allowed the model to understand Japanese natively, but it would still generate western-style images because the latent model remained intact. Even so, this was better than just translating the prompt into English.

At this point, Stable Diffusion could understand the concept of a ‘businessman’, but it still generated images of decidedly western-looking businessmen because the underlying latent diffusion model had not been changed.

The second step was to retrain the latent diffusion model on more Japanese-tagged data sources together with the new text encoder. This stage was essential to make the model more language-specific. After this, the model could finally generate businessmen with the Japanese faces one would expect.

Read more about it on the links below.

RAII: Resource Acquisition is Initialization

This is a great little video from the Back to Basics series offered by CppCon. They even have their slides and code on github.

CppCon has a bunch of other great ‘Back to Basics’ videos covering a whole host of topics: safe exception handling, move semantics, type erasure, lambdas, and other critical but oft-misunderstood elements of C++.

In this video, you get a refresher on RAII.

“Resource Acquisition Is Initialization is one of the cornerstones of C++. What is it, why is it important, and how do we use it in our own code?”