Category: Problem solutions

Oculus Quest 2 issues from sitting too long

Oculus is doing a Black Friday sale for 2023. I have an Oculus Quest 2 but hadn’t used it in over a year. I plugged it in to charge it back up and browse the store. Unfortunately, the store app told me it couldn’t load the store.

I went to Settings -> Wi-Fi and manually connected to my home network. Duh! (so I thought) Even after this, the store app was blank and said it couldn’t display anything. Time to start debugging.

  • The Wi-Fi would connect and report an excellent signal, but limited connectivity.
    • I tested my other wireless devices; they had no trouble connecting to the Wi-Fi and could browse the net normally.
    • I unplugged the mesh network repeaters around my house in case the headset was picking up a weak signal from one of those. No change.
    • I tried setting up my iPhone as a wireless hotspot and connecting to that. I got the same strong signal, but limited connectivity.
    • I checked the IP address of the Oculus in Settings, and I could ping the device just fine. If I turned the headset off, I couldn’t ping it. It seemed to be connected OK.
    • I tried unplugging other devices from my Wi-Fi in case there was an IP conflict somewhere. Same problem.
  • I tried connecting the Quest with a USB cable, but I still could not get updates or see anything in the store app or main menu.
  • Despite sitting unused for over a year, System -> Updates showed no updates available. Something was fishy; there had to be updates.
  • I opened the in-headset browser, and it told me I could not browse because the date was wrong. It was set to 5:00am Sept 17, 2037. Whoa.
    • There is NO way to change the date/time in Settings or anywhere else I could find.

It turns out others have seen this issue too: the Oculus mysteriously fast-forwards into the future, and connectivity to the store/web/updates stops working after that. You need to get the date fixed, but there’s no obvious way to do it.

Solution: Factory Reset

In the end, I decided to do a factory reset (hold the power and volume-down buttons while booting) because it had been well over a year since I last used it, and I figured a clean start would be good. However, there is also the option of using side-loaded apps (see below).

Unfortunately, even the factory reset gave me a few headaches. First, it isn’t always obvious whether the headset is asleep or actually powered off. On my first attempt it had only been asleep, so it simply woke up and I never got the reset menu from holding the power+volume buttons. To be sure, I shut the device down from the in-headset settings menu.

After that, I was able to cold boot and get into the factory reset menu. I selected factory reset and waited while it cleaned the device, and the progress bar indicated the reset was complete. The screen went black (but still powered) and didn’t reboot. I let it sit a few minutes, then decided to manually reboot with the power button. Fingers crossed.

On the first reboot I got the Meta logo, but shortly after that the screen went blank (still powered) with no reboot. I let it sit for a few minutes, then manually powered it off using the power button – again.

On the second reboot, I got the Meta logo, and then it started animating. That’s a good sign. Then the welcome page came up, and I could connect to Wi-Fi and start updating.

During the update phase (1 of 2), while the progress bar was moving, I took the headset off to read more instructions. When I put it back on to see how far it had gotten, the display was garbled, patterned static. I took it off and let it sit for a minute, then tried again. The display came back up, and update phase 1 of 2 completed normally.

SideQuest

The bad part about a factory reset is that you lose all your installed games. I had to go back and install all of them again – a pain, because it wasn’t a fast process.

Another option is to fix the time via an app from an alternative Oculus app store called SideQuest. SideQuest allows you to side-load your own apps – including an ‘Open Settings’ app that lets you update the date/time.

ADB

The Oculus is really just an Android device underneath. This means that if you have developer mode enabled and the Android SDK platform tools installed, you can use ADB commands. I haven’t tried this, but supposedly this will work:

adb shell am start -a android.settings.SETTINGS

If you have Sidequest loaded, you can use this:

adb shell am start -a android.intent.action.VIEW -d com.oculus.tv -e uri com.android.settings/.DevelopmentSettings com.oculus.vrshell/.MainActivity
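
Whichever route you take, you can first confirm the bogus clock from a connected PC. This is a standard Android command that I’ve verified on other Android devices, though not on the Quest itself:

adb shell date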

My heads are gone!

Are you losing the heads of those images you’re generating in Stable Diffusion?

Try adding these keywords to your prompt:

  • “A view of”
  • “A scene of”
  • “Viewed from a distance”
  • “Standing on a …”
  • “longshot”, “full shot”, “wideshot”, “extreme wide shot”, “full body”
  • Start the prompt with “Head, face, eyes”
  • Try adjusting the aspect ratio of the image to be taller instead of wider. Be careful not to go too tall (or too wide) or you’ll get the double-head problem or start generating combinations of two people.
  • The source material was scanned in a taller aspect ratio, so try adjusting the x-side of your ratio.
  • Use img2img on a crop that includes part of the chest to make it match the rest of the drawing
  • Cinematography terms tend to work well. In order of close to far: Extreme close-up, close-up, medium close-up, medium shot, medium full shot, full shot, long shot, extreme long shot.
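
Putting a few of these together, here’s a made-up example of my own (the subject is arbitrary, not from any guide):

Prompt : A view of a woman in a red coat standing on a cobblestone street, full body, long shot, intricate, high detail, sharp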

Stable diffusion high quality prompt thought process

Content warning: Some of the links have moderately NSFW pictures. There is no outright nudity, but they do deal with generating rather busty images. This article should be fine, but be aware when following the links.

While this guide is translated from a Japanese source and uses the Waifu/Danbooru model to generate more anime-looking images, it works really well for generating ultra-realistic Victorian pictures using Stable Diffusion’s standard 1.5 model. Here are some I made using his prompt with just 30 minutes of experimenting.

Fair warning: the original author is trying to generate more…busty women that look better as anime characters under the Waifu model. I won’t comment on his original purpose, but I thought this was an interesting description of how a ‘prompt engineer’ moved from an idea to a Stable Diffusion prompt.

First he started with a good description of what he wanted:

I want a VICTORIAN GIRL in the style of an OIL PAINTING.
Eyes and face are important in art, so she must have a PERFECT FACE, SEXY FACE, and her eyes must have DETAILED PUPILS.
I want her to have LARGE BREAST, TONED ABS, and THICK THIGH.
She must look FEMININE, doing an EVOCATIVE POSE with a SMIRK, shown FULL BODY, wearing a NIGHT GOWN.
The output must be INTRICATE, HIGH DETAIL, SHARP.
And in the style of {I’m not giving out the artist names to avoid trouble. Apologies.}

This led him to the following prompt. Note his use of parentheses () to add emphasis, and square brackets [] to de-emphasize a term.

Prompt :
VICTORIAN GIRL,FEMININE,((PERFECT FACE)),((SEXY FACE)),((DETAILED PUPILS)).(ARTIST),ARTIST,ARTIST,(ARTIST). OIL PAINTING. (((LARGE BREAST)),((TONED ABS)),(THICK THIGH).EVOCATIVE POSE, SMIRK,LOOK AT VIEWER, ((BLOUSE)).(INTRICATE),(HIGH DETAIL),SHARP
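
(If you’re running the AUTOMATIC1111 webui, each layer of parentheses multiplies a term’s attention weight by 1.1 – so ((SEXY FACE)) works out to about 1.1 × 1.1 ≈ 1.21x – and each layer of square brackets divides it by 1.1.)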

Unfortunately, you don’t need to experiment for long to realize Stable Diffusion needs a lot of help with anatomy. It often generates nightmare-fuel images that have multiple heads, messed-up arms, hands with too many fingers, eyes with terrifying pupils (or no pupils), too many limbs – well, you get the idea. So you need to make sure those things don’t show up by banning them via the negative prompt (again, not commenting on the original purpose):

Negative Prompt :
((nipple)), ((((ugly)))), (((duplicate))), ((morbid)), ((mutilated)), (((tranny))), (((trans))), (((transsexual))), (hermaphrodite), [out of frame], extra fingers, mutated hands, ((poorly drawn hands)), ((poorly drawn face)), (((mutation))), (((deformed))), ((ugly)), blurry, ((bad anatomy)), (((bad proportions))), ((extra limbs)), cloned face, (((disfigured))). (((more than 2 nipples))). [[[adult]]], out of frame, ugly, extra limbs, (bad anatomy), gross proportions, (malformed limbs), ((missing arms)), ((missing legs)), (((extra arms))), (((extra legs))), mutated hands, (fused fingers), (too many fingers), (((long neck)))

Finally, he applied these Stable Diffusion settings. Note that you want to keep the aspect ratio portrait-like (768 tall x 512 wide): going taller can result in multiple heads, and going wider can result in more than one person in the scene.

Restore Face: ON
Steps: 42
Sampler: DDIM
CFG scale: 10
Height: 768
Width: 512
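
(If you’re using the AUTOMATIC1111 webui, these correspond directly to fields on the txt2img tab: the Restore faces checkbox, Sampling steps, Sampling method, CFG Scale, Height, and Width.)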

Expanding and enhancing Stable Diffusion with specialized models

Now that you have Stable Diffusion 1.5 installed on your local system and have learned how to make cool generative prompts, it might be time to take the next step: trying different latent models.

There is more than one model out there for Stable Diffusion, and they can generate vastly different images.

Check out this article to learn how to install and use these popular models with Stable Diffusion (see the note after the list for where the model files go):

  • F222 – people find it useful for generating beautiful female portraits with correct body-part relations. It’s quite good at generating aesthetically pleasing clothing.
  • Anything V3 – a special-purpose model trained to produce high-quality anime-style images. You can use danbooru tags (like 1girl, white hair) in the text prompt.
  • Open Journey – a model fine-tuned with images generated by Midjourney v4.
  • DreamShaper – a model fine-tuned for a portrait illustration style that sits between photorealistic and computer graphics.
  • Waifu Diffusion – Japanese anime style.
  • Arcane Diffusion – the style of the TV show Arcane.
  • Robo Diffusion – an interesting robot-style model that will turn your subject (and everything else) into robots.
  • Mo-di-diffusion – generates Pixar-like-style images.
  • Inkpunk Diffusion – generates images in a unique illustration style.
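
If you’re running the AUTOMATIC1111 webui from the install guide below, adding one of these models is usually just a file copy (the directory name is that project’s default): download the model’s .ckpt or .safetensors file, drop it into stable-diffusion-webui\models\Stable-diffusion\, restart the webui (or click the refresh button next to the checkpoint dropdown), and select it from the Stable Diffusion checkpoint menu.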

Better stable diffusion and AI generated art prompts

Now that you have stable diffusion on your system, how do you start taking advantage of it?

One way is to try some sample prompts to start with. Techspot has some good ones (halfway through the article) to whet your appetite.

You can get inspiration by looking at good examples on free public prompt marketplaces.

Then you might want to learn how to fix some common problems.

When you’re really ready to dive in, this article from Metaverse gives you a list of excellent getting started guides to help get you from beginner to proficient in generating your own awesome art.

The key to it all is learning the syntax, parameters, and art of crafting AI prompts. It’s as much art as it is science. It’s complex enough that there’s everything from beginner examples and free guides to helper tools and paid marketplaces.

Learning resources have gotten a lot better in the last 6 months, since people first started figuring out how to use AI art prompts last year.

Installing Stable Diffusion 1.5

To install Stable Diffusion 1.5 (released Oct 20, 2022) locally, I found this video really excellent – except for a few points:

  1. You MUST use Python 3.10.6 (I originally used 3.9.7 as the video recommended). The latest version (as of Feb 2023) is Python 3.11.1, which Stable Diffusion does NOT seem to like and won’t run under.

You might also want to read through this older Stable Diffusion 1.4 install guide, but it uses model checkpoints that haven’t been updated since version 1.4.

Gotchas and Fixes:

  • If you have an incompatible version of Python installed when you run webui-user.bat for the first time, Stable Diffusion will set itself up to point at that bad Python directory. Even if you uninstall it and install the correct Python version, Stable Diffusion will still look at the wrong one. You can fiddle with the various setup files, but it’s faster to just blow away the pulled git source at the top level and re-pull it so you don’t have cruft lying around (sketch below).
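
Here’s a minimal sketch of that nuke-and-re-pull, assuming the AUTOMATIC1111 webui repo the video installs (back up anything you care about, like the models folder, first):

rmdir /s /q stable-diffusion-webui
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
rem confirm "python" resolves to 3.10.6 before the first run creates the venv
python --version
webui-user.bat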

Stable diffusion 2.0 was…well…

Stable Diffusion 2.0 seems to have been a step backwards in capabilities and quality. Many people went back to v1.5 for their business.

The difficulty in 2.0 was caused in part by:

  1. Using a new language model trained from scratch
  2. A training dataset heavily censored with an NSFW filter

The second part would have been fine, but the filter was overly inclusive and removed a substantial amount of good-quality data. 2.1 promised to bring the good data back.

Installing Stable Diffusion 2.1

If you’re interested in trying Stable Diffusion 2.1, use this tutorial to install and use the 2.1 models in the AUTOMATIC1111 GUI so you can judge it for yourself.

You might also try this tutorial by TingTing.

Introduction to writing stable diffusion prompts

HowToGeek has a wonderful little introduction on how to start writing your first Stable Diffusion prompts.

Update 02-2023: Here are 10 really amazing resources to help you generate really great prompts and art.

They start with some simple AI image generation and move on to more and more complex examples, including a brief introduction to some key parameters, pulling in broader image sources, and generating various famous artistic styles.

They finish out the intro with some links to help you learn more:

  • Lexica — a repository of images generated using Stable Diffusion and the corresponding prompt. Searchable by keyword.
  • Stable Diffusion Artist Style Studies — A non-exhaustive list of artists Stable Diffusion might recognize, as well as general descriptions of their artistic style. There is a ranking system to describe how well Stable Diffusion responds to the artist’s name as a part of a prompt.
  • Stable Diffusion Modifier Studies — a list of modifiers that can be used with Stable Diffusion, just like the artist page.
  • The AI Art Modifiers List — A photo gallery showcasing some of the strongest modifiers you can use in your prompts, and what they do. They’re sorted by modifier type.
  • Top 500 Artists Represented in Stable Diffusion — We know exactly what images were included in the Stable Diffusion training set, so it is possible to tell which artists contributed the most to training the AI. Generally speaking, the more strongly represented an artist was in the training data, the better Stable Diffusion will respond to their name as a keyword.
  • The Stable Diffusion Subreddit — The Stable Diffusion subreddit has a constant flow of new prompts and fun discoveries. If you’re looking for inspiration or insight, you can’t go wrong.
