The motorcycle race on the Isle of Man is legendary.
Peter Hickman just obliterated the lap record onboard his FHO Racing BMW M 1000 RR, averaging 136.358 mph. Ride along for the whole 17 minutes of utter madness. You couldn’t pay me enough to even try this at half the speed – in a car.
The Long Dark was a great game I started playing during early access and really enjoyed. The lonely, desolate wilderness feel really worked well with the struggle against very simple but brutal natural elements.
The game has been in development longer than some teenagers have been alive – and has consequently changed a lot over that time. Kudos to the Long Dark team for making a time capsule that lets you go back to those early drops by entering a release code in Steam.
While one should ALWAYS be cautious of trainers and save game editors (some on the list do have viruses, so it’s a good idea to scan them with a virus scanner and only run them in a virtual machine), here are some of the older trainers for these early drops on GameCopyWorld.
If you’re looking for a unique keyboard, Drop makes some of the most interesting ones. This one is kind of unique: a keyboard with the letters printed on the edges of the keys, not the tops. It’s the CSTM80 mechanical keyboard. It’s pretty chonky in thickness and not cheap at $149, but it could be an interesting addition to your custom setup.
They also make a ton of other keyboards and devices that incorporate them, and they offer the parts so you can build your own custom creation.
By studying real humans completing tasks (such as playing chess or solving a maze), researchers have worked out a way to model human behavior by calculating a person’s ‘inference budget’. Most humans think for some time, then act; how long they think before acting is their inference budget. The researchers found they could measure an individual’s budget simply by watching how long that person thought about a problem before acting.
“At the end of the day, we saw that the depth of the planning, or how long someone thinks about the problem, is a really good proxy of how humans behave,”
The next step was to run their own model on the same problem presented to the person. Then, by watching how long the person being monitored took to solve it, they could make very accurate inferences about when the human stopped planning and what the person would do next. That value could then be used to predict how that person would react when solving similar problems.
The researchers tested their approach on three different tasks – inferring navigation goals from previous routes, guessing someone’s communicative intent from their verbal cues, and predicting subsequent moves in human-human chess matches – and it beat current models.
If we know that a human is about to make a mistake, having seen how they have behaved before, the AI agent could step in and offer a better way to do it. Or the agent could adapt to the weaknesses that its human collaborators have.
In an example from their paper, a person is given different rewards for reaching a blue or an orange star, and the path to the blue star is always easier than the path to the orange star. As the complexity of the maze grows, the person starts showing a bias towards the easier path in some cases. The point at which they switch from choosing the higher reward to choosing the easier, lower reward reveals the person’s inference budget. When the system determines a problem will be harder than the person’s inference budget allows, it might offer a hint.
Links:
Research paper: “Modeling Boundedly Rational Agents With Latent Inference Budgets” by Athul Paul Jacob, Abhishek Gupta and Jacob Andreas, ICLR 2024. OpenReview
Conner Slevin, a local resident paralyzed in an accident in 2020, is suing his former attorney Jessica Molligan, who he claims made legal arrangements, sent communications, and negotiated settlements without his knowledge. Molligan allegedly filed dozens of ADA-violation lawsuits in his name against local Portland businesses, with the real goal of making herself rich.
In many cases, the property owners said they didn’t know there was an ADA compliance issue until they received a demand letter from the Portland lawyer. The initial letter didn’t specify what needed to be fixed but proposed a settlement agreement. Molligan wouldn’t sue if the owner agreed to make repairs, bring the property into compliance, and pay attorney’s fees of roughly $10,000 or more.
The only problem is that Slevin didn’t know she was doing any of this in his name – and now she’s the one being sued.
Synthet has a set of fun music/mixing tutorials in which he teaches his various editing and tweaking techniques, using those very techniques to do the teaching. They’re really creative and enjoyable. Give one a listen:
Welcome to 6:47 of completely bonkers industrial music played by the experimental music group Einstürzende Neubauten. Formed in West Berlin in 1980, here they are in 1984 playing on piles of junk in a piece they called ‘Autobahn’. Scraping steel, chainsaws, grinders, and screaming. Just what you’d expect from a 1980s German punk/industrial/what-the-heck-was-that band.
Stable Diffusion really opened the world to what is possible with generative AI. Stable Diffusion 2 and 3 …well…did not go so well. For a while now, Stable Diffusion 1.5 has been your best bet for locally generated AI art, but it is really showing its age.
Now there is a new player in open source generative AI you can run locally. The developers from Stability.ai have founded Black Forest Labs and released their open source tool: Flux.1
While there are plenty of online generative AIs like Midjourney, Adobe Firefly and others, they usually require payment or give only limited free usage. What’s great about Flux.1 is that it allows completely local installation and usage.
Like many open source packages, there are free and paid versions. The paid Pro version gives the most impressive results but is only available via their API (no purely local generation); the Dev version can be used locally by developers but not for commercial use; and the free Schnell version is for personal use. Both the Dev and Schnell versions are available for local install and use.
So, let’s get started with the Schnell version – the instructions are the same for Dev except you use different model/weight files.
Instructions for installing Flux.1 on an NVIDIA-based Windows 10/11 system:
You might want to enable Windows Long Path support, as Python sometimes requires it for dependent packages. Be sure to reboot your system after enabling it.
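One way to enable it (my assumption of the simplest route, not necessarily how it was done here) is from an elevated command prompt:
C:\>reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v LongPathsEnabled /t REG_DWORD /d 1 /f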
Supported graphics card
32GB of system RAM (though again, you can use the smaller model if you have less RAM)
Open a command prompt and make a local working root directory somewhere; I’ll use c:\depot\
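For example:
C:\>mkdir c:\depot
C:\>cd c:\depot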
You have a few options. First, you need to pick whether you’re using the non-commercial Dev version or the Schnell version. After that, each has the option of a single easy-to-use checkpoint package file, or the individual model data files. I’ll be using the Schnell ones, but you just need to get the Dev ones from the Dev branch if you want those instead.
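The rest of the setup assumes ComfyUI has been cloned into c:\depot and its Python requirements installed with pip; a minimal sketch of those steps (my assumptions, not necessarily the exact commands used here):
C:\depot>git clone https://github.com/comfyanonymous/ComfyUI.git
C:\depot>cd ComfyUI
C:\depot\ComfyUI>pip install -r requirements.txt
rem assumption: the single checkpoint package file goes in ComfyUI\models\checkpoints,
rem while the individual model/weight files go in models\unet, models\clip and models\vae
Then launch the server: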
C:\depot\ComfyUI>python main.py
A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.0.1 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.
Traceback (most recent call last): File "C:\depot\ComfyUI\main.py", line 83, in <module>
import comfy.utils
File "C:\depot\ComfyUI\comfy\utils.py", line 20, in <module>
import torch
File "C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\__init__.py", line 2120, in <module>
from torch._higher_order_ops import cond
File "C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\_higher_order_ops\__init__.py", line 1, in <module>
from .cond import cond
File "C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\_higher_order_ops\cond.py", line 5, in <module>
import torch._subclasses.functional_tensor
File "C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\_subclasses\functional_tensor.py", line 42, in <module>
class FunctionalTensor(torch.Tensor):
File "C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\_subclasses\functional_tensor.py", line 258, in FunctionalTensor
cpu = _conversion_method_template(device=torch.device("cpu"))
C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\_subclasses\functional_tensor.py:258: UserWarning: Failed to initialize NumPy: _ARRAY_API not found (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\utils\tensor_numpy.cpp:84.)
cpu = _conversion_method_template(device=torch.device("cpu"))
Total VRAM 24576 MB, total RAM 32492 MB
pytorch version: 2.4.0+cu121
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using pytorch cross attention
C:\depot\ComfyUI\comfy\extra_samplers\uni_pc.py:19: SyntaxWarning: invalid escape sequence '\h'
"""Create a wrapper class for the forward SDE (VP type).
****** User settings have been changed to be stored on the server instead of browser storage. ******
****** For multi-user setups add the --multi-user CLI argument to enable multiple user profiles. ******
[Prompt Server] web root: C:\depot\ComfyUI\web
C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\kornia\feature\lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
Import times for custom nodes:
0.0 seconds: C:\depot\ComfyUI\custom_nodes\websocket_image_save.py
Starting server
To see the GUI go to: http://127.0.0.1:8188
Open your web browser and go to http://127.0.0.1:8188
Click on the ‘Queue Prompt’ button to execute the current prompt.
Technically it queues up the work, and you should see progress in the command window where you launched python main.py:
got prompt
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE
Model doesn't have a device attribute.
C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
Model doesn't have a device attribute.
loaded straight to GPU
Requested to load Flux
Loading 1 new model
Requested to load FluxClipModel_
Loading 1 new model
C:\depot\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00, 1.18s/it]
Requested to load AutoencodingEngine
Loading 1 new model
Prompt executed in 23.65 seconds
When it completes, you should see your image. You can then save it or tweak the parameters.
Debugging help:
numpy is not available
On my first runs, I got this in the console when I queued up a request:
got prompt
model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16
model_type FLOW
Using pytorch attention in VAE
Using pytorch attention in VAE
Model doesn't have a device attribute.
C:\Users\matt\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
Model doesn't have a device attribute.
loaded straight to GPU
Requested to load Flux
Loading 1 new model
Requested to load FluxClipModel_
Loading 1 new model
C:\depot\ComfyUI\comfy\ldm\modules\attention.py:407: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:555.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
100%|████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:04<00:00, 1.19s/it]
Requested to load AutoencodingEngine
Loading 1 new model
!!! Exception during processing!!! Numpy is not available
Traceback (most recent call last):
File "C:\depot\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\depot\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\depot\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\depot\ComfyUI\nodes.py", line 1445, in save_images
i = 255. * image.cpu().numpy()
^^^^^^^^^^^^^^^^^^^
RuntimeError: Numpy is not available
Prompt executed in 26.44 seconds
As the startup warning above suggests, the fix is to downgrade numpy below 2.0:
C:\depot\ComfyUI>pip install numpy==1.26.4
Defaulting to user installation because normal site-packages is not writeable
Collecting numpy==1.26.4
Downloading numpy-1.26.4-cp312-cp312-win_amd64.whl.metadata (61 kB)
Downloading numpy-1.26.4-cp312-cp312-win_amd64.whl (15.5 MB)
---------------------------------------- 15.5/15.5 MB 57.4 MB/s eta 0:00:00
Installing collected packages: numpy
Attempting uninstall: numpy
Found existing installation: numpy 2.0.1
Uninstalling numpy-2.0.1:
Successfully uninstalled numpy-2.0.1
Successfully installed numpy-1.26.4
C:\depot\ComfyUI>
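With numpy downgraded, relaunch the server and queue the prompt again:
C:\depot\ComfyUI>python main.py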
Uninstalling all pip/Python packages, clearing your pip cache, then re-installing the requirements
The first time I installed, I got an error when downloading the numpy library during the step in which you pip install the requirements. To clear the pip cache, uninstall all pip packages, and then re-install all the requirements again, I did the following:
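A minimal sketch of that sequence (assuming you’re in the ComfyUI directory and the packages are listed in requirements.txt):
C:\depot\ComfyUI>pip freeze > packages.txt
C:\depot\ComfyUI>pip uninstall -y -r packages.txt
C:\depot\ComfyUI>pip cache purge
C:\depot\ComfyUI>pip install -r requirements.txt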