Category: Problem solutions

Replacing your mid-2010s Subaru Crosstrek headlights


Once your car reaches about 10 years old, one of the most annoying problems is that the headlights dim and yellow. There are several causes, but the primary one is degradation of the UV coating. You can buff the haze off, but it often returns quickly, leaving you with an annoying chore almost every year.

Another option is to buy replacement headlights. In the old days, you simply unscrewed the old bulbs and put in new ones. Now you need to remove the entire assemblies – which often involves removing the bumper and surrounding shrouds – as is the case with mid-2010s Subarus.

The Crosstreks/Imprezas of the 2015 era were actually not that bad to work on. TRQ does a great job showing you how to do the job yourself – including how to re-aim the headlights. It’s a great video.

Set up Windows 11 without an annoying Microsoft Account


Being required to connect to the internet while installing Windows 11 has been one of a long line of reasons many users refuse to upgrade to the new OS, even though it has been out for 4 years (since November 2021). After finally reaching an adoption rate of just over 50%, it has since dropped to 49.08%.

The most popular bypass for the internet-connected Microsoft account requirement was “oobe\bypassnro” which, when typed into the command prompt during the Windows 11 setup experience, would enable a button that let you skip connecting to the internet.

Unfortunately, Microsoft is removing that trick, but user @witherornot1337 on X found that typing “start ms-cxh:localonly” into the command prompt during the Windows 11 setup experience will allow you to create a local account directly without needing to skip connecting to the internet first.
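For reference, both tricks are typed into a command prompt opened at the Windows 11 out-of-box setup screen; a sketch of the sequence (the first command is the older bypass being removed):

```bat
REM At the Windows 11 setup (OOBE) screen, press Shift + F10 to open a command prompt.

REM Older bypass (being removed by Microsoft) -- enables the "skip internet" button:
oobe\bypassnro

REM Newer trick -- jumps straight to local account creation:
start ms-cxh:localonly
```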

Or you could, you know, actually give customers what they want instead of the kind of backwards thinking that gave us the universally hated Windows 8.

Links:

Reset a forgotten Windows 7 password


https://youtu.be/RCInsJ6BLjY?si=1a2wUVjed_kpSpm0

TipsNNTricks shows how to bypass the login password without a recovery CD or any extra software. It does require physical access to the system (or a way to trigger a recovery boot), but it really helps if you’ve found an old hard drive or system and can’t remember the password from eons ago.

You first boot into recovery mode. You then gain access to the drive by opening a debug message, which opens Notepad. This lets you do File…Open and browse all the files on the C: drive. Rename ‘c:\windows\system32\sethc.exe’ to something else (.bak or whatever), then make a copy of cmd.exe named sethc.exe in the same directory as the original sethc.exe.

When Windows reboots, press the Shift key 5 times at the login screen to trigger Sticky Keys (sethc.exe), and it will open a command prompt instead. Then use net user to reset the password for your accounts and you can log in. Clever!
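The whole sequence, typed at the recovery command prompt and then at the login screen, looks roughly like this (the account name and new password are placeholders):

```bat
REM From the recovery-mode command prompt (the drive letter may differ):
ren c:\windows\system32\sethc.exe sethc.bak
copy c:\windows\system32\cmd.exe c:\windows\system32\sethc.exe

REM Reboot, press Shift 5 times at the login screen, then in the cmd window:
net user YourAccountName NewPassword123
```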




Install Windows 11 with a local account


Hate that Windows 11 requires an internet connection and registering a Microsoft Account?

The most popular bypass was “oobe\bypassnro” which, when typed into the command prompt (opened with Shift + F10) during the Windows 11 setup experience, would enable a button that let you skip connecting to the internet and the Microsoft account requirement.

@witherornot1337 on X found that typing “start ms-cxh:localonly” into the command prompt during the Windows 11 setup experience will let you create a local account directly, without needing to skip connecting to the internet first.

https://www.windowscentral.com/software-apps/windows-11/an-even-better-microsoft-account-bypass-for-windows-11-has-already-been-discovered#

Choosing something: the 37% rule


It was the year 1960, and a brainteaser was formulated as “The Secretary Problem”. You need to hire a secretary; there are n applicants to be interviewed. You meet each of them in a random order. You can rank them according to suitability, but once you reject an applicant, they cannot be recalled. How can you maximize the probability of picking the best person for the job?

Other versions of this include the “fiancé problem” (same idea, but you’re looking for a fiancé instead of a secretary) and the “googol game” – in which you are flipping slips of paper to reveal numbers until you decide you’ve probably found the largest of all.

The answer is… surprisingly predictable, it turns out.

“This basic problem has a remarkably simple solution,” wrote mathematician and statistician Thomas S Ferguson in 1989. “First, one shows that attention can be restricted to the class of rules that for some integer r > 1 rejects the first r – 1 applicants, and then chooses the next applicant who is best in the relative ranking of the observed applicants.”

So, when faced with a stream of random choices and wanting to pick the best, the first thing you do is reject everyone – up to a point. Once you reach that point, just accept the next applicant, suitor, or slip of paper that beats everything you’ve seen so far.

The statistics are fascinating: you reject the first 37% of applicants, then take the next one that’s better than everything you’ve seen in the rejected pool.

This works whether you’re choosing apartments, job candidates, or potential life partners.
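The rule is easy to check empirically. Here’s a small Monte Carlo sketch (not from the article; the candidate and trial counts are arbitrary) showing that the 37% cutoff picks the single best candidate roughly 37% of the time – about 1/e:

```python
import random

def secretary_success_rate(n=100, trials=20000, cutoff_frac=0.37):
    """Estimate how often the 37% rule selects the single best of n
    candidates seen in random order."""
    cutoff = int(n * cutoff_frac)
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)    # higher = better; n-1 is the best
        best_rejected = max(ranks[:cutoff])   # look-only phase: reject, but remember
        # Take the first later candidate that beats everyone rejected;
        # if none does, we're stuck with the last one.
        chosen = next((r for r in ranks[cutoff:] if r > best_rejected), ranks[-1])
        wins += (chosen == n - 1)
    return wins / trials

print(secretary_success_rate())  # roughly 0.37
```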

Article:

Loading Collada files for Maya and 3DSMax


Collada was an interchange file format for 3D applications that started around 2004 and largely died out around 2016. I actually worked in a group with Remi Arnaud when it was being used for a project at Intel.

It was a sound idea. With lots of 3D packages and engines out there, getting files from one tool or engine to another was never easy. Since every authoring tool and game uses different structures for storing mesh, material, animation data, etc., the Collada format tried to define an open-standard format to store these relationships in an XML-style text file. This allowed maximum flexibility in defining relationships, but had the unfortunate side effect of generating sometimes gigantic files that were extremely slow to load.

While it was an extremely flexible format for exchanging data between packages or game engines, once you got there it was dramatically faster to use a native binary format. Loading or saving a block of content via an XML-based format often took 10-100x longer than a binary version. The speed alone meant it wasn’t practical for any realtime purposes.
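To give a flavor of the verbosity: a heavily trimmed sketch of what a .dae file looks like (element names are from the COLLADA 1.4 schema; real files nest far deeper and carry huge float arrays as plain text):

```xml
<?xml version="1.0" encoding="utf-8"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <library_geometries>
    <geometry id="tri" name="tri">
      <mesh>
        <source id="tri-positions">
          <!-- every vertex is stored as whitespace-separated text -->
          <float_array id="tri-positions-array" count="9">0 0 0 1 0 0 0 1 0</float_array>
        </source>
        <!-- vertices, triangles, normals, UVs, and materials all follow as more XML -->
      </mesh>
    </geometry>
  </library_geometries>
</COLLADA>
```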

Additionally, supporting the entire Collada spec would mean supporting every kind of data relationship – even ones the tool or game didn’t need. In practice, loaders often implemented only the desired features – which meant you were almost back where you started: custom loaders and savers with limited features, except the Collada files were gigantic and slow to load/save. A real problem when your primary cost is the speed of your content development.

Collada’s practical use was therefore primarily in one-off transfers between tools. As time went on, and tools and engines consolidated on a few efficient binary formats, formats such as Collada became less and less useful. By the early 2010s, development on it had largely died. The last loaders were apparently updated in 2018, and the GitHub site that hosts the binary versions is somewhat broken.

At any rate, if you do need to load an old Collada file (.dae, etc) then you’ll need a copy of 3D Studio Max or Maya, and a plugin loader. You can download one of the last collada loaders here.

Install the plugin (make sure Maya is closed) and then start your tool (Maya in my case).

Ensure the Collada plugin is loaded. Go to Windows -> Settings/Preferences -> Plug-in Manager in Maya and ensure the fbxmaya, FBX, or ColladaMaya plugins are loaded and/or set to auto load:

When you want to import a Collada file, go to File -> Import, select the fbx/collada file you want to load, and it should load right up.

Links:

Old versions of Long Dark


The Long Dark was a great game I started playing during early access and really enjoyed. The lonely and desolate wilderness feel really worked well with the struggle against very simple but brutal natural elements.

The game has been in development longer than some teenagers have even been alive – and has consequently changed a lot over that time. Kudos to the Long Dark team for making a time capsule that lets you go back to those early drops by entering a release code in Steam.

While one should ALWAYS be cautious of trainers and save-game editors – some on the list do contain viruses, so it’s a good idea to scan them with a virus scanner and only run them in a virtual machine – here are some of the older trainers for these early drops on GameCopyWorld.

Installing Black Forest Flux.1


Stable Diffusion really opened the world’s eyes to what is possible with generative AI. Stable Diffusion 2 and 3… well… did not go so well. For a while now, Stable Diffusion 1.5 has been your best bet for locally generated AI art, but it is really showing its age.

Now there is a new player in open source generative AI you can run locally. The developers from Stability.ai have founded Black Forest Labs and released their open source tool: Flux.1

While there are plenty of online generative AIs like Midjourney, Adobe Firefly, and others, they usually require payment or give only limited usage. What’s great about Flux.1 is that it allows completely local installation and usage.

Like many open source packages, there are free and paid versions. The paid Pro version gives the most impressive results but is only available via their API (no purely local generation); the dev version can be used locally by developers but not for commercial use; and the free schnell version is for personal use. Both the dev and schnell versions are available for local install and use.

So, let’s get started with the schnell version – the instructions are the same for dev, except you use different model/weight files.

Instructions for installing Flux.1 on an NVIDIA-based Windows 10/11 system:

  1. Prerequisites:
    • Ensure you have Python installed (I used 3.12.5)
    • Ensure you have pip installed (I used pip 24.2)
    • Ensure you have Git installed and working
    • You might want to enable Windows Long Path support, as Python sometimes requires it for dependent packages. Be sure to reboot your system after enabling it.
    • A supported graphics card.
    • 32GB of system RAM (though again, you can use the smaller model if you have less RAM)
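If you need Long Path support, it can be enabled from an elevated command prompt with the registry value pip’s error message points at (reboot afterwards):

```bat
REM Run from an elevated (administrator) command prompt, then reboot:
reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f
```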
  2. Open a command prompt and make a local working root directory somewhere, I’ll use c:\depot\
  3. We’re going to follow the instructions on the ComfyUI git page.
    • Clone the ComfyUI project
C:\depot> git clone https://github.com/comfyanonymous/ComfyUI.git
  4. Install pytorch

Nvidia users should install stable pytorch using this command:

C:\depot> pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121

This is the command to install pytorch nightly instead which might have performance improvements:

C:\depot>pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124
  5. Change directory into ComfyUI and ensure the requirements.txt file is there.
  6. Use pip to install all the ComfyUI requirements:
C:\depot\ComfyUI>pip install -r requirements.txt
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: torch in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from -r requirements.txt (line 1)) (2.4.0+cu121)
Collecting torchsde (from -r requirements.txt (line 2))
Downloading torchsde-0.2.6-py3-none-any.whl.metadata (5.3 kB)
Requirement already satisfied: torchvision in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from -r requirements.txt (line 3)) (0.19.0+cu121)
Requirement already satisfied: torchaudio in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from -r requirements.txt (line 4)) (2.4.0+cu121)
Collecting einops (from -r requirements.txt (line 5))
Downloading einops-0.8.0-py3-none-any.whl.metadata (12 kB)
Collecting transformers>=4.28.1 (from -r requirements.txt (line 6))
Downloading transformers-4.44.0-py3-none-any.whl.metadata (43 kB)
Collecting tokenizers>=0.13.3 (from -r requirements.txt (line 7))
Downloading tokenizers-0.20.0-cp312-none-win_amd64.whl.metadata (6.9 kB)
Collecting sentencepiece (from -r requirements.txt (line 8))
Downloading sentencepiece-0.2.0-cp312-cp312-win_amd64.whl.metadata (8.3 kB)
Collecting safetensors>=0.4.2 (from -r requirements.txt (line 9))
Downloading safetensors-0.4.4-cp312-none-win_amd64.whl.metadata (3.9 kB)
Collecting aiohttp (from -r requirements.txt (line 10))
Downloading aiohttp-3.10.2-cp312-cp312-win_amd64.whl.metadata (7.8 kB)
Collecting pyyaml (from -r requirements.txt (line 11))
Downloading PyYAML-6.0.2-cp312-cp312-win_amd64.whl.metadata (2.1 kB)
Requirement already satisfied: Pillow in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from -r requirements.txt (line 12)) (10.4.0)
Collecting scipy (from -r requirements.txt (line 13))
Downloading scipy-1.14.0-cp312-cp312-win_amd64.whl.metadata (60 kB)
Collecting tqdm (from -r requirements.txt (line 14))
Downloading tqdm-4.66.5-py3-none-any.whl.metadata (57 kB)
Collecting psutil (from -r requirements.txt (line 15))
Downloading psutil-6.0.0-cp37-abi3-win_amd64.whl.metadata (22 kB)
Collecting kornia>=0.7.1 (from -r requirements.txt (line 18))
Downloading kornia-0.7.3-py2.py3-none-any.whl.metadata (7.7 kB)
Collecting spandrel (from -r requirements.txt (line 19))
Downloading spandrel-0.3.4-py3-none-any.whl.metadata (14 kB)
Collecting soundfile (from -r requirements.txt (line 20))
Downloading soundfile-0.12.1-py2.py3-none-win_amd64.whl.metadata (14 kB)
Requirement already satisfied: filelock in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from torch->-r requirements.txt (line 1)) (3.15.4)
Requirement already satisfied: typing-extensions>=4.8.0 in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from torch->-r requirements.txt (line 1)) (4.12.2)
Requirement already satisfied: sympy in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from torch->-r requirements.txt (line 1)) (1.13.1)
Requirement already satisfied: networkx in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from torch->-r requirements.txt (line 1)) (3.3)
Requirement already satisfied: jinja2 in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from torch->-r requirements.txt (line 1)) (3.1.4)
Requirement already satisfied: fsspec in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from torch->-r requirements.txt (line 1)) (2024.6.1)
Requirement already satisfied: setuptools in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from torch->-r requirements.txt (line 1)) (72.1.0)
Requirement already satisfied: numpy>=1.19 in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from torchsde->-r requirements.txt (line 2)) (2.0.1)
Collecting trampoline>=0.1.2 (from torchsde->-r requirements.txt (line 2))
Downloading trampoline-0.1.2-py3-none-any.whl.metadata (10 kB)
Collecting huggingface-hub<1.0,>=0.23.2 (from transformers>=4.28.1->-r requirements.txt (line 6))
Downloading huggingface_hub-0.24.5-py3-none-any.whl.metadata (13 kB)
Collecting packaging>=20.0 (from transformers>=4.28.1->-r requirements.txt (line 6))
Downloading packaging-24.1-py3-none-any.whl.metadata (3.2 kB)
Collecting regex!=2019.12.17 (from transformers>=4.28.1->-r requirements.txt (line 6))
Downloading regex-2024.7.24-cp312-cp312-win_amd64.whl.metadata (41 kB)
Collecting requests (from transformers>=4.28.1->-r requirements.txt (line 6))
Downloading requests-2.32.3-py3-none-any.whl.metadata (4.6 kB)
Collecting tokenizers>=0.13.3 (from -r requirements.txt (line 7))
Downloading tokenizers-0.19.1-cp312-none-win_amd64.whl.metadata (6.9 kB)
Collecting aiohappyeyeballs>=2.3.0 (from aiohttp->-r requirements.txt (line 10))
Downloading aiohappyeyeballs-2.3.5-py3-none-any.whl.metadata (5.8 kB)
Collecting aiosignal>=1.1.2 (from aiohttp->-r requirements.txt (line 10))
Downloading aiosignal-1.3.1-py3-none-any.whl.metadata (4.0 kB)
Collecting attrs>=17.3.0 (from aiohttp->-r requirements.txt (line 10))
Downloading attrs-24.2.0-py3-none-any.whl.metadata (11 kB)
Collecting frozenlist>=1.1.1 (from aiohttp->-r requirements.txt (line 10))
Downloading frozenlist-1.4.1-cp312-cp312-win_amd64.whl.metadata (12 kB)
Collecting multidict<7.0,>=4.5 (from aiohttp->-r requirements.txt (line 10))
Downloading multidict-6.0.5-cp312-cp312-win_amd64.whl.metadata (4.3 kB)
Collecting yarl<2.0,>=1.0 (from aiohttp->-r requirements.txt (line 10))
Downloading yarl-1.9.4-cp312-cp312-win_amd64.whl.metadata (32 kB)
Collecting colorama (from tqdm->-r requirements.txt (line 14))
Downloading colorama-0.4.6-py2.py3-none-any.whl.metadata (17 kB)
Collecting kornia-rs>=0.1.0 (from kornia>=0.7.1->-r requirements.txt (line 18))
Downloading kornia_rs-0.1.5-cp312-none-win_amd64.whl.metadata (8.9 kB)
Collecting cffi>=1.0 (from soundfile->-r requirements.txt (line 20))
Downloading cffi-1.17.0-cp312-cp312-win_amd64.whl.metadata (1.6 kB)
Collecting pycparser (from cffi>=1.0->soundfile->-r requirements.txt (line 20))
Downloading pycparser-2.22-py3-none-any.whl.metadata (943 bytes)
Collecting idna>=2.0 (from yarl<2.0,>=1.0->aiohttp->-r requirements.txt (line 10))
Downloading idna-3.7-py3-none-any.whl.metadata (9.9 kB)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from jinja2->torch->-r requirements.txt (line 1)) (2.1.5)
Collecting charset-normalizer<4,>=2 (from requests->transformers>=4.28.1->-r requirements.txt (line 6))
Downloading charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl.metadata (34 kB)
Collecting urllib3<3,>=1.21.1 (from requests->transformers>=4.28.1->-r requirements.txt (line 6))
Downloading urllib3-2.2.2-py3-none-any.whl.metadata (6.4 kB)
Collecting certifi>=2017.4.17 (from requests->transformers>=4.28.1->-r requirements.txt (line 6))
Downloading certifi-2024.7.4-py3-none-any.whl.metadata (2.2 kB)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in c:\users\matt\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (from sympy->torch->-r requirements.txt (line 1)) (1.3.0)
Downloading torchsde-0.2.6-py3-none-any.whl (61 kB)
Downloading einops-0.8.0-py3-none-any.whl (43 kB)
Downloading transformers-4.44.0-py3-none-any.whl (9.5 MB)
---------------------------------------- 9.5/9.5 MB ? eta 0:00:00
Downloading tokenizers-0.19.1-cp312-none-win_amd64.whl (2.2 MB)
---------------------------------------- 2.2/2.2 MB 3.9 MB/s eta 0:00:00
Downloading sentencepiece-0.2.0-cp312-cp312-win_amd64.whl (991 kB)
---------------------------------------- 992.0/992.0 kB 2.3 MB/s eta 0:00:00
Downloading safetensors-0.4.4-cp312-none-win_amd64.whl (286 kB)
Downloading aiohttp-3.10.2-cp312-cp312-win_amd64.whl (376 kB)
Downloading PyYAML-6.0.2-cp312-cp312-win_amd64.whl (156 kB)
Downloading scipy-1.14.0-cp312-cp312-win_amd64.whl (44.5 MB)
---------------------------------------- 44.5/44.5 MB 2.9 MB/s eta 0:00:00
Downloading tqdm-4.66.5-py3-none-any.whl (78 kB)
Downloading psutil-6.0.0-cp37-abi3-win_amd64.whl (257 kB)
Downloading kornia-0.7.3-py2.py3-none-any.whl (833 kB)
---------------------------------------- 833.3/833.3 kB 1.7 MB/s eta 0:00:00
Downloading spandrel-0.3.4-py3-none-any.whl (268 kB)
Downloading soundfile-0.12.1-py2.py3-none-win_amd64.whl (1.0 MB)
---------------------------------------- 1.0/1.0 MB 7.9 MB/s eta 0:00:00
Downloading aiohappyeyeballs-2.3.5-py3-none-any.whl (12 kB)
Downloading aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Downloading attrs-24.2.0-py3-none-any.whl (63 kB)
Downloading cffi-1.17.0-cp312-cp312-win_amd64.whl (181 kB)
Downloading frozenlist-1.4.1-cp312-cp312-win_amd64.whl (50 kB)
Downloading huggingface_hub-0.24.5-py3-none-any.whl (417 kB)
Downloading kornia_rs-0.1.5-cp312-none-win_amd64.whl (1.3 MB)
---------------------------------------- 1.3/1.3 MB 6.5 MB/s eta 0:00:00
Downloading multidict-6.0.5-cp312-cp312-win_amd64.whl (27 kB)
Downloading packaging-24.1-py3-none-any.whl (53 kB)
Downloading regex-2024.7.24-cp312-cp312-win_amd64.whl (269 kB)
Downloading trampoline-0.1.2-py3-none-any.whl (5.2 kB)
Downloading yarl-1.9.4-cp312-cp312-win_amd64.whl (76 kB)
Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Downloading requests-2.32.3-py3-none-any.whl (64 kB)
Downloading certifi-2024.7.4-py3-none-any.whl (162 kB)
Downloading charset_normalizer-3.3.2-cp312-cp312-win_amd64.whl (100 kB)
Downloading idna-3.7-py3-none-any.whl (66 kB)
Downloading urllib3-2.2.2-py3-none-any.whl (121 kB)
Downloading pycparser-2.22-py3-none-any.whl (117 kB)
Installing collected packages: trampoline, sentencepiece, urllib3, scipy, safetensors, regex, pyyaml, pycparser, psutil, packaging, multidict, kornia-rs, idna, frozenlist, einops, colorama, charset-normalizer, certifi, attrs, aiohappyeyeballs, yarl, tqdm, requests, cffi, aiosignal, torchsde, soundfile, kornia, huggingface-hub, aiohttp, tokenizers, spandrel, transformers
WARNING: The script normalizer.exe is installed in 'C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script tqdm.exe is installed in 'C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script huggingface-cli.exe is installed in 'C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\models\deprecated\trajectory_transformer\convert_trajectory_transformer_original_pytorch_checkpoint_to_pytorch.py'
HINT: This error might have occurred since this system does not have Windows Long Path support enabled. You can find information on how to enable this at https://pip.pypa.io/warnings/enable-long-paths

c:\depot\ComfyUI>
  7. Download and install the model data files in the correct folders

After you have ComfyUI downloaded, you need to get the model files and put them in the right places. Model files are found here; download them and put them inside the proper ComfyUI\models\ subfolders.

You have a few options. First, pick whether you’re using the non-commercial Dev version or the Schnell version. After that, each has the option of a single easy-to-use checkpoint package file, or the individual model data files. I’ll be using the Schnell ones; just grab the Dev ones from the Dev branch if you want those instead.

If you’re running out of memory, you can replace \clip\t5xxl_fp16.safetensors with t5xxl_fp8_e4m3fn.safetensors, located here.

Schnell checkpoint file:

File | Download link | Copy location
flux1-dev-fp8.safetensors | https://huggingface.co/Comfy-Org/flux1-dev/blob/main/flux1-dev-fp8.safetensors | ComfyUI\models\checkpoints

Schnell individual files:

File | Download link | Copy location
t5xxl_fp16.safetensors | https://huggingface.co/comfyanonymous/flux_text_encoders/tree/main | ComfyUI\models\clip\
ae.safetensors | https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/ae.safetensors | ComfyUI\models\vae\
flux1-schnell.safetensors | https://huggingface.co/black-forest-labs/FLUX.1-schnell/blob/main/flux1-schnell.safetensors | ComfyUI\models\unet\
  8. Start up the engine by running python on main.py
C:\depot\ComfyUI>python main.py

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.1 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.

Traceback (most recent call last):  File "C:\depot\ComfyUI\main.py", line 83, in <module>
    import comfy.utils
  File "C:\depot\ComfyUI\comfy\utils.py", line 20, in <module>
    import torch
  File "C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\__init__.py", line 2120, in <module>
    from torch._higher_order_ops import cond
  File "C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\_higher_order_ops\__init__.py", line 1, in <module>
    from .cond import cond
  File "C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\_higher_order_ops\cond.py", line 5, in <module>
    import torch._subclasses.functional_tensor
  File "C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\_subclasses\functional_tensor.py", line 42, in <module>
    class FunctionalTensor(torch.Tensor):
  File "C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\_subclasses\functional_tensor.py", line 258, in FunctionalTensor
    cpu = _conversion_method_template(device=torch.device("cpu"))
C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\_subclasses\functional_tensor.py:258: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_numpy.cpp:84.)
  cpu = _conversion_method_template(device=torch.device("cpu"))
Total VRAM 24576 MB, total RAM 32492 MB
pytorch version: 2.4.0+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using pytorch cross attention
C:\depot\ComfyUI\comfy\extra_samplers\uni_pc.py:19: SyntaxWarning: invalid escape sequence '\h'
  """Create a wrapper class for the forward SDE (VP type).
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
[Prompt Server] web root: C:\depot\ComfyUI\web
C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  @torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)

Import times for custom nodes:
   0.0 seconds: C:\depot\ComfyUI\custom_nodes\websocket_image_save.py

Starting server

To see the GUI go to: http://127.0.0.1:8188
  9. Open your web browser and go to http://127.0.0.1:8188
  10. Click on the ‘Queue Prompt’ button to execute the current prompt

Technically it queues up the work and you should see progress in the command window where you launched python main.py

got prompt
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE
Model doesn't have a device attribute.
C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Model doesn't have a device attribute.
loaded straight to GPU
Requested to load Flux
Loading 1 new model
Requested to load FluxClipModel_
Loading 1 new model
C:\depot\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00,  1.18s/it]
Requested to load AutoencodingEngine
Loading 1 new model
Prompt executed in 23.65 seconds
  11. When it completes, you should see your image. You can then save your image or tweak the parameters.

Debugging help:

  1. numpy is not available

On my first runs, I got this in the console when I queued up a request:

got prompt
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE
Model doesn't have a device attribute.
C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Model doesn't have a device attribute.
loaded straight to GPU
Requested to load Flux
Loading 1 new model
Requested to load FluxClipModel_
Loading 1 new model
C:\depot\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
  out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00,  1.19s/it]
Requested to load AutoencodingEngine
Loading 1 new model
!!! Exception during processing!!! Numpy is not available
Traceback (most recent call last):
  File "C:\depot\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\depot\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\depot\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\depot\ComfyUI\nodes.py", line 1445, in save_images
    i = 255. * image.cpu().numpy()
               ^^^^^^^^^^^^^^^^^^^
RuntimeError: Numpy is not available

Prompt executed in 26.44 seconds

It turns out that I, and others, had the wrong version of numpy. I fixed it by exiting the server (Ctrl-C) and then installing numpy version 1.26.4:

C:\depot\ComfyUI>pip install numpy==1.26.4
Defaulting to user installation because normal site-packages is not writeable
Collecting numpy==1.26.4
  Downloading numpy-1.26.4-cp312-cp312-win_amd64.whl.metadata (61 kB)
Downloading numpy-1.26.4-cp312-cp312-win_amd64.whl (15.5 MB)
   ---------------------------------------- 15.5/15.5 MB 57.4 MB/s eta 0:00:00
Installing collected packages: numpy
  Attempting uninstall: numpy
    Found existing installation: numpy 2.0.1
    Uninstalling numpy-2.0.1:
      Successfully uninstalled numpy-2.0.1
Successfully installed numpy-1.26.4

C:\depot\ComfyUI>

  2. Uninstalling all pip packages, clearing your pip cache, then re-installing the requirements

The first time I installed, I got an error when downloading the numpy library during the step where you pip install the requirements. To clear the pip cache, uninstall all pip packages, and then re-install all the requirements, I did the following:

C:\depot\ComfyUI> pip uninstall -r requirements.txt -y 
C:\depot\ComfyUI> python -m pip cache purge

Then I re-ran all the pip installation commands.

Links:

Other generative AI installation guides:

I have previously posted instructions on how to install Stable Diffusion 2 (as well as Stable Diffusion 1.5 and 1.4), plus some other package installs.

Attaching a ST-225 hard drive


Here’s a collection of all the tools you’ll need to set up an old MFM-style hard drive in an XT/286/386/486 computer.

Hardware you’ll need:

Software

Informational links:
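As one concrete example (an assumption on my part – the exact entry point depends on your controller’s BIOS): many Western Digital-style MFM controllers of the ST-225 era expose their low-level format routine in the controller BIOS at C800:5, which you invoke from DOS DEBUG:

```
A:\> DEBUG
-g=c800:5
```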

Blue-screen Windows on purpose


I wrote a while back on how to crash Linux/cause a Linux kernel panic in order to test how your program handles a crash – but can you cause a Windows blue-screen programmatically?

Raymond Chen of The Old New Thing describes a variety of methods to crash Windows on purpose. He also cautions against ad-hoc methods like killing winlogon.

Methods you can use to cause a Windows Blue-screen:

  1. Windows allows you to configure a specific keyboard combination to cause a crash. You set some registry keys and can then crash the system by holding right CTRL and pressing the SCROLL LOCK key twice. You can also customize the key sequence by registering custom keyboard scan codes. If you have a kernel debugger attached, it will be triggered after the crash dump is written.
  2. The best way to trigger an artificial kernel crash is to use NotMyFault, which is part of the Microsoft Windows Sysinternals tools.
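A sketch of the registry setup for method 1 (these are the documented CrashOnCtrlScroll values; a reboot is required before the hotkey works):

```bat
REM For PS/2 keyboards:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f
REM For USB keyboards:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters" /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f
REM Reboot, then hold the right CTRL key and press SCROLL LOCK twice.
```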