Anadol’s data projections

Refik Anadol makes projection-mapping and LED-screen art. His distinctive approach is using massive data sets, churned through various AI algorithms, as his visualization source.

I think one of his unique additions to the space is visualizing the latent space generated during model training.

Some of his projects:

The Field, The Cube, The Ladder, The Horse

The first personality tests appeared in the 1920s and were intended to aid in personnel selection, particularly in the military. Since then, a number of personality tests have been developed and optimized for different purposes. Some of these—such as the MBTI and the Keirsey Temperament Sorter—really can help people understand themselves better, but others are made more for fun.

One of the first I heard of was the field, cube, ladder, and horse test. Ask those with you to sit quietly and imagine each of these things as you read the questions. Don’t tell them how many questions there are, and give them time after each question to form a clear mental picture before moving on to the next one.

  1. Think of an open field. How big is this field? What is it filled with? What are the surroundings like?
  2. Think of a cube. How big is the cube? What is it made of, and what is the surface like? What color is it? Where in the field is it, and how is it positioned (e.g. on the ground, floating, etc.)? Is it transparent? If so, can you see inside?
  3. Think of a ladder. How long is this ladder, and where is it located in your field? What’s the distance between the ladder and the cube?
  4. Think of a horse. What color is the horse? What is the horse doing, and where is it in relation to your cube?

How to interpret the questions:

  1. The field represents your mind. Its size represents your knowledge of the world and how expansive your personality is. The condition of the field (dry, grassy, or well-trimmed) is what your personality looks like at first glance.
  2. The cube represents you. The size of the cube is your ego. The surface of the cube represents what is visibly observable about your personality, or maybe it is what you want others to think about you. The texture of the cube (e.g. smooth, rough, bumpy, etc.) represents your nature.
  3. The ladder represents two different aspects of your life: your goals and your friendships (though I originally heard it as friendships only). The size, location, and material of your ladder can also tell you how close you are to your friends. You guessed it: the closer the ladder is to the cube, and the sturdier it is, the stronger your friendships.
  4. The horse represents your ideal partner/sexuality. It could be playing, running around, or grazing right next to your cube or clear across the field.

See the links below for additional guides on how to interpret the results as well as some other tests you might try.

Links:

Install Stable Dreamfusion on Windows

I wrote about Stable Dreamfusion previously. Stable Dreamfusion first uses normal Stable Diffusion text prompts to generate 2D images of the desired object, then uses those 2D images to guide generation of a 3D mesh.

A hamburger

The authors seemed to be using NVIDIA A100 cards on an Ubuntu system. I wanted to see if I could get this working locally on my home Windows PC, and it turns out I could.

System configuration I am using for this tutorial:

  • NVIDIA GeForce RTX 3090
  • Intel 12th gen processor
  • Windows 10

Setting Stable Dreamfusion up locally:

Step 1: Update your Windows and drivers

  1. Update Windows
  2. Ensure you have the latest NVIDIA driver installed.

Step 2: Install Windows Subsystem for Linux (WSL)

  1. Install Windows Subsystem for Linux (WSL). WSL installation is a simple command-line install, and you’ll need to reboot afterward. Make sure you get Ubuntu 22.04 (the default as of February 2023), since that is what Stable Dreamfusion likes. WSL currently installs the latest Ubuntu distro by default, so this works:
    wsl --install
    If you want to be sure you get Ubuntu 22.04, use this command line instead:
    wsl --install -d Ubuntu-22.04
  2. After installing WSL, Windows will ask to reboot.
  3. Upon reboot, the WSL will complete installation and ask you to create a user account.
  4. Start Ubuntu 22.04 on WSL by clicking the Windows Start menu and typing ‘Ubuntu’, or by typing ‘ubuntu’ at a command prompt. (A quick way to verify the install is shown below.)
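
To verify, you can list your installed distros and which WSL version they run under from a Windows command prompt (these are standard wsl.exe flags):

wsl --list --verbose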

Step 2b (optional): Install Ubuntu wherever you want on your Windows system. By default, WSL puts the image under your C:\Users directory, which is kind of annoying. One way to relocate it is sketched below.
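
A minimal sketch of the export/import approach, assuming you want the distro under D:\wsl (a hypothetical location; use any drive and folder you like). Note that --unregister deletes the original copy, so keep the exported tar until the import succeeds:

# D:\wsl is just an example location; adjust to taste
wsl --export Ubuntu-22.04 D:\wsl\ubuntu-22.04.tar
wsl --unregister Ubuntu-22.04
wsl --import Ubuntu-22.04 D:\wsl\Ubuntu-22.04 D:\wsl\ubuntu-22.04.tar

One caveat: an imported distro logs you in as root by default; you can restore your normal account by adding a [user] section with default=<your username> to /etc/wsl.conf inside the distro and restarting WSL.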

Step 3: Install dependent packages on Ubuntu

  1. If you don’t have Ubuntu started, start Ubuntu 22.04 on WSL by clicking the Windows Start menu and typing ‘Ubuntu’ (or by typing ‘ubuntu’ at a command prompt). A new shell terminal should appear.
  2. Install the NVIDIA CUDA toolkit on Ubuntu. On NVIDIA’s CUDA downloads page, select the Linux → x86_64 → WSL-Ubuntu target; you’ll choose one of the two installer options (deb local or deb network).
    • You will then get a set of install instructions at the bottom of the page (wget, apt-get, etc.). Simply copy the lines one by one into your Ubuntu terminal, and ensure each step passes without errors before continuing. (A sketch of the network install is shown after this list.)
    • The ‘sudo apt-get -y install cuda’ line installs a lot of packages and can take 10-15 minutes.
  3. Install python3 pip. This is required for the Dreamfusion requirements installation script.
    • sudo apt install python3-pip
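
For reference, the network-repo flavor of those instructions looked roughly like this when I set things up; the keyring filename and URL change over time, so copy the current commands from NVIDIA’s page rather than these:

# exact keyring version/URL may have changed; copy from NVIDIA’s page
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.0-1_all.deb
sudo dpkg -i cuda-keyring_1.0-1_all.deb
sudo apt-get update
sudo apt-get -y install cuda

Once it finishes, running nvidia-smi inside Ubuntu should report your GPU (the WSL driver passes it through from Windows).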

Step 4: Install Stable Dreamfusion and dependent packages

  1. You should now follow the install instructions found on the Dreamfusion page.
  2. Clone the project as directed (git clone https://github.com/ashawkey/stable-dreamfusion.git), then cd into stable-dreamfusion.
  3. Install with pip: install the prerequisites as directed on the Dreamfusion GitHub page:
    • pip install -r requirements.txt
    • I also installed both optional packages, nvdiffrast and CLIP.
    • Add this export line to your .bashrc to ensure python can find libcudnn:
      export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
  4. I did not install the optional build extensions.
  5. Exit and restart your shell so that all path changes take effect. (A quick GPU sanity check is shown below.)
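
Before kicking off a long run, it’s worth confirming that the PyTorch installed from requirements.txt can actually see your GPU through WSL:

python3 -c "import torch; print(torch.cuda.is_available())"

This should print True; if it prints False (or errors), revisit the CUDA install and the LD_LIBRARY_PATH line above.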

Step 5: Run a workload!

Follow the instructions in the USAGE section of the Dreamfusion README. Instead of ‘python’, use ‘python3’. There are a number of options you can specify, such as negative prompts and a GUI interface (which does not work under WSL).

The very first run will take a long time. It will download several gigabytes of pretrained model data, then train for 100 epochs, which can take up to an hour.

# Train a model from the text prompt with the default (-O) presets
$> python3 main.py --text "a hamburger" --workspace trial -O
# The same run, explicitly selecting the Stable Diffusion 1.5 backbone
$> python3 main.py --text "a hamburger" --workspace trial -O --sd_version 1.5
# Test mode: render turntable views of the trained model
$> python3 main.py --workspace trial -O --test
# Test mode plus export of a textured mesh
$> python3 main.py --workspace trial -O --test --save_mesh

Check Your Output:

Look in the results directory under the workspace name:

./stable-dreamfusion/<workspace name>/mesh/ #directory holds the .obj, .mtl, and .png files
./stable-dreamfusion/<workspace name>/results/ #directory holds an mp4 video that shows the object rotating

Copying them to Windows:
All Windows drives are pre-mounted at /mnt/<drive letter>/ for WSL.
Ex: /mnt/c/
So you can copy the output files to your Windows side by doing:
cp -rP ./<workspace name> /mnt/c/workdir/
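
Alternatively, WSL can open the current Linux directory directly in Windows Explorer, which makes drag-and-drop copying easy:

explorer.exe .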

Looking at the generated meshes with materials:

  1. Install Blender
  2. File->Import->Wavefront (.obj) (legacy)
  3. Or, use 3D Viewer (though it seems to have issues with material loading at times)

Fixes:

  1. You might get an error about a missing libcudnn_cnn_infer.so.8:
==> Start Training trial Epoch 1, lr=0.050000 …
0% 0/100 [00:00<?, ?it/s]Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory

Add this line to your .bashrc to ensure it can find libcudnn:
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH

  2. If you load the object in Blender but it doesn’t load the texture maps, try Alt-Z

Links:

Midjourney intro + prompt guide

Matt Wolfe briefly walks you through getting Midjourney set up (via Discord) and then gives you some great getting-started prompts to help you learn different styles and image generation capabilities.

He also recommends Guy Parsons, who shares lots of tips on building prompts and has a free e-book collecting some of his best.

Comparing AI art generators for common artist workloads

Gamefromscratch runs side-by-side tests of DALL-E 2, Stable Diffusion, and Midjourney on a variety of art generation tasks.

His conclusion is that they are not going to replace artists for all tasks, but for concept art, pixel art, and some other simple tasks, these AI generators can.

If you use social media, you should watch this

Smarter Every Day makes some amazing videos. Though a few years old now, this series on social media manipulation is more relevant than ever. It covers how foreign agents are already manipulating and infiltrating social media platforms like Reddit, Facebook, YouTube, and likely countless others. He also shares some good data on how the manipulators work, which echoes almost exactly the techniques Ryan Holiday used for over a decade and outlines in Trust Me, I’m Lying: Confessions of a Media Manipulator. All of this information is now about 5-10 years old, so you can imagine how much more sophisticated the manipulation has become since then.

If you use social media, this is critical information: it has become crystal clear (from the social media companies themselves, as well as congressional reports) that foreign entities have been doing this for almost a decade. And that was before AI-based chat and response bots, which will likely make this kind of manipulation massively more pervasive and harder to spot.

How close are we to a complete world transformation due to AI?

Tom Scott distills his encounter with an AI doing a job he used to do (almost as well as he did) and then reflects on why this could be a completely transformative development for the world, much like when the internet really took off in the late 90s. I think he’s probably right. As someone who has played with AI art generation and watched groundbreaking papers apply AI to even traditional rendering and modeling tasks in just the graphics world, I think we’re at the first part of his sigmoid curve.

This transformation is likely to be very different from the early internet upheavals he describes in the music industry, cellular phones, and commerce. Those were largely transformations of market form serving the same commercial and societal needs.

I think this is different in at least two ways. First, AI is bringing about a change in which thought, analysis, creativity, and the response to problems themselves are likely about to be abdicated (somewhat blindly, by the lazy or by those who don’t critically examine what is being generated). And we’ll be abdicating that power to systems that aren’t truly or fully understood, controlled, or protected.

With things like ChatGPT, we will very easily start abdicating the hard work of thinking itself. If we no longer craft the actual language of our responses, or do the hard logical work of building arguments for our daily actions and the policies we live by, we will never develop the critical thinking ability to even question what is generated; instead, it is generated for us. What would that do to us long term? Especially when we already see that ChatGPT and other AI systems can get things terribly wrong, and give us no clue that they are wrong.

Second, like all tools, these could be controlled or manipulated by nefarious agents. Our most deadly and horrific tools of destruction (nuclear bombs and sophisticated strategic weapons) are today largely contained within government military systems and gated by the highly specialized ability needed to build them.

AI can be wielded by anyone, anywhere in the world, with any motivation (political, personal, etc.). With just a small rack of commercially available servers, one can unleash infinitely scalable social media posting, auto-responding, narrative control, news story generation, and possibly subverted think-for-you systems upon the whole world.

We have known since at least 2019 that this is happening on all major social media platforms, despite the best efforts of some of the smartest people in Silicon Valley. Smarter Every Day did a series of stories on the problem. Research has shown again and again that these things are happening, that they are very, very easy to do, and that they are very, very hard to stop:

A few clever AI systems, likely costing less than a single cruise missile, could easily overwhelm social media forums, message boards, Wikipedia edits, generated news articles, etc., before we could ever hope to verify the claims or combat their ability to generate hundreds of thousands of responses, up/down votes, and planted web articles every hour. How could one even verify the claims if everything is suspect? Why WOULDN’T a country do this if it costs less than a single missile? Even better, what if the AI itself can be subverted to bias certain responses (which we have already seen, too)?

In the post-truth internet, people are well into putting their trust in anonymous influencer opinions and echo-chamber forum posts ahead of well-verified facts. What will this mean in an internet era in which ‘objective facts are less influential in shaping public opinion than appeals to emotion and personal belief’?

My, how far we’ve come from the idea that the internet would be a forum in which people share ideas and the best ones rise to the top. How dangerously naïve we were…

gigachad bari sax – where’s he now?

I remember clipping a video of a random busker playing some funky bari sax in the New York subway about ten years back. I wonder whatever happened to that guy?

There he is. Looks like Leo was recently at the Royal Albert Hall in London playing front man on the song Moanin’ at the BBC Proms.

Sausage Samba

The early internet produced some absolutely amazing creations. This is one of them: a fantastical mashup of Star Trek TNG and an ode to sausage by Friendly Rich. It’s been out for almost a decade and has only about 300k views, which is a crime and shows people still lack quality taste even in the modern, post-truth internet era.