Bet you never saw this video before
These are the deeds that pass with us to eternity. Helping another man when he is at his lowest point is a sign of a true man.
It’s no longer Tesla – it’s BYD from China. BYD sold 1.86 million cars in 2022, surpassing Tesla’s 1.3 million. This wasn’t unexpected – back in the 2008 financial crisis, Berkshire Hathaway invested a hefty $230 million in BYD stock. How has that gone? Charlie Munger of Berkshire has called it one of his best investments, saying BYD is doing ridiculously better than Tesla. (paywall free) That outlay has ballooned more than 1,570%, and even after the most recent selldown of its holdings in late 2022/early 2023, Berkshire’s remaining stake is still worth around $4.5 billion. BYD is looking to expand its reach in Japan and Europe.
BYD has made batteries for commercial and industrial purposes for years – and now it has developed its Blade Battery, which seems to handle puncture and temperature tests much better than current EV batteries, which have a bad tendency to catch fire and explode in accidents. Tesla plans to use a blade battery in upcoming models.
BYD hasn’t come to the US, but it is (obviously) selling quite well in China and Europe. Middle-class Chinese customers are flocking to its $14,500 and $29,000 price tags instead of Tesla and other EV makers. So what do these cars look like? Its more recent dual-motor flagship, the BYD Han, is quite nice. It sells for $42,000 in China and 70,000 euros in the EU. The interior and ride are about as nice as its 0-60 time of 3.9 seconds.
Some of the other offerings from BYD, and a whole host of other makers if you’re curious:
Refik Anadol makes projection mapping and LED screen art. His unique approach, however, is embracing massive data sets churned through various AI algorithms as his visualization source.
I think one of his unique additions to the space is visualizing the latent space generated during machine learning stages.
Some of his projects:
The first personality tests appeared in the 1920s and were intended to aid in personnel selection, particularly in the military. Since then, a number of personality tests have been developed and optimized for different purposes. Some of these—such as the MBTI and the Keirsey Temperament Sorter—really can help people understand themselves better, but others are made more for fun.
One of the first ones I heard of was the field, cube, ladder, and horse test. Ask those with you to sit quietly and then imagine each of these things as you ask the questions. Don’t tell them how many questions there are, and allow some time after each question so they can form a very good visual picture in their mind before moving on to the next one.
How to interpret the questions:
See the links below for additional guides on how to interpret the results as well as some other tests you might try.
Links:
I wrote about Stable Dreamfusion previously. Dreamfusion first takes normal Stable Diffusion text prompts to generate 2D images of the desired object. Stable Dreamfusion then uses those 2D images to generate 3D meshes.

The authors seem to have used NVIDIA A100 cards on an Ubuntu system. I wanted to see if I could get this working locally on my home Windows PC, and found that I could.
System configuration I am using for this tutorial:
Setting Stable Dreamfusion up locally:
Step 1: Update your Windows and drivers
Step 2: Install Windows Subsystem for Linux (WSL)
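If you haven’t set up WSL before, the modern install is a one-liner run from an elevated PowerShell or Command Prompt (shown here for the default Ubuntu distribution – adjust the distro name if you want something else):

```shell
# From an elevated PowerShell/Command Prompt on Windows 10/11:
wsl --install -d Ubuntu

# After the reboot, confirm the distro is running under WSL 2
# (WSL 2 is required for GPU/CUDA passthrough):
wsl --list --verbose
```

Note that GPU support inside WSL comes from the Windows-side NVIDIA driver, which is why Step 1 matters.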
Step 2b (optional): Install Ubuntu wherever you want on your Windows system. By default it installs the image on your C:\Users directory – which is kind of annoying.
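One way to relocate the distro is WSL’s built-in export/import. This is a sketch – the D:\wsl paths are just example locations, so substitute wherever you want the image to live:

```shell
# Export the freshly installed distro to a tarball on another drive
wsl --export Ubuntu D:\wsl\ubuntu-backup.tar

# Remove the original registration (this deletes the copy under C:\Users)
wsl --unregister Ubuntu

# Re-import it at the location you actually want
wsl --import Ubuntu D:\wsl\Ubuntu D:\wsl\ubuntu-backup.tar
```

After an import, WSL may log you in as root by default, so you may need to set your default user again.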
Step 3: Install dependent packages on Ubuntu
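The exact package list depends on your setup, but a typical starting point on a fresh Ubuntu image looks something like this (under WSL the CUDA driver pieces come from the Windows side, so you generally do not install a Linux display driver):

```shell
# Refresh the package index and bring the image up to date
sudo apt update && sudo apt upgrade -y

# Python toolchain plus build tools for compiling the CUDA extensions
sudo apt install -y python3 python3-pip python3-venv git build-essential
```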
Step 4: Install Stable Dreamfusion and dependent packages
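A minimal sketch of the clone-and-install step, assuming the ashawkey/stable-dreamfusion repository (check its README for the current dependency list, as it changes between releases):

```shell
# Grab the project and enter its directory
git clone https://github.com/ashawkey/stable-dreamfusion.git
cd stable-dreamfusion

# Install the Python dependencies the project lists
pip3 install -r requirements.txt
```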
Step 5: Run a workload!
Follow the instructions in the USAGE section of the Dreamfusion instructions, but use ‘python3’ instead of ‘python’. There are a number of options you can specify, such as negative prompts and the GUI interface (which does not work under WSL).
The very first run will take a long time. It will download several gigabytes of training data, then train for 100 epochs, which can take up to an hour.
$> python3 main.py --text "a hamburger" --workspace trial -O
$> python3 main.py --text "a hamburger" --workspace trial -O --sd_version 1.5
$> python3 main.py --workspace trial -O --test
$> python3 main.py --workspace trial -O --test --save_mesh
Check Your Output:
Look in the results directory under the workspace name:
./stable-dreamfusion/<workspace name>/mesh/ #directory holds the .obj, .mat, and .png files
./stable-dreamfusion/<workspace name>/results/ #directory holds an mp4 video that shows the object rotating
Copying them to Windows:
All Windows drives are pre-mounted at /mnt/<drive letter>/ for WSL.
Ex: /mnt/c/
So you can copy the output files to your Windows side by doing:
cp -rP ./<workspace name> /mnt/c/workdir/
Looking at the generated meshes with materials:
Fixes:
1. Training fails right after starting with a missing library error:
==> Start Training trial Epoch 1, lr=0.050000 …
0% 0/100 [00:00<?, ?it/s]Could not load library libcudnn_cnn_infer.so.8. Error: libcuda.so: cannot open shared object file: No such file or directory
Add this to your .bashrc to ensure it can find libcudnn:
export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
2. If you load the object in Blender but it doesn’t load the texture maps, try Alt-Z
Links:
Matt Wolfe briefly walks you through getting Midjourney set up (via Discord) and then gives you some great getting-started prompts to help you learn different styles and image generation capabilities.
He also recommends Guy Parsons who gives out lots of tips on building prompts and who has a free e-book with some of his best tips.
Gamefromscratch runs side-by-side tests on DALL-E 2, Stable Diffusion, and Midjourney for a variety of art generation tasks.
His conclusion is that they are not going to replace artists for all tasks, but for concept art, pixel art, and some other simple tasks these AI generators can.
Smarter Every Day does some amazing videos. While just a few years old, these videos on social media manipulation are more relevant today than ever. They cover how foreign agents are already manipulating and infiltrating social media platforms like Reddit, Facebook, YouTube, and likely countless others. He also shares some good data on how the manipulators work – which echoes almost exactly the techniques used for over a decade by Ryan Holiday, which he outlines in Trust Me, I’m Lying: Confessions of a Media Manipulator. All of this information is now about 5-10 years old – so you can imagine how much more sophisticated it has become since then.
If you use social media, this is critical information since it has become crystal clear (from the social media companies themselves, as well as congressional reports) that foreign entities have been doing this for almost a decade now. This was before AI based chat and response bots – which will likely make the volume and sophistication of these kinds of manipulators massively more pervasive and harder to spot.
In 2019, Walmart’s local Portland tech hub was about to hire dozens of new employees. Fast forward to 2023, and Walmart Labs is closing its Portland office. Employees can either move to its California or Arkansas offices – or take severance. Along with the move, employees will be expected to be in the office at least 2 days a week.
Tom Scott distills his encounter with AI doing a job he used to do (almost equally well) and then reflects on why this could be a completely transformative development for the world – much like when the internet really took off in the late ’90s. I think he’s probably right. As someone who has played with AI art generation and watched the groundbreaking papers using AI for even traditional rendering and modeling tasks in just the graphics world, I think we’re just at the first part of his sigmoid curve.
This transformation is likely to be very different from the early internet upheavals of the music industry, cellular phones, and stores/commerce that he describes. Those were largely transformations of market form serving the same commercial and societal needs.
I think this is different in at least two ways. First, AI is bringing about a change in which thought, analysis, creativity, and responses to problems are likely about to be abdicated (somewhat blindly, by the lazy or by those who aren’t critically looking at what is being generated). And we’ll be abdicating that power to systems that aren’t truly or fully understood, controlled, or protected.
With things like ChatGPT, we will very easily start abdicating the hard work of thinking itself. If we are no longer crafting the actual language of our responses, or doing the hard logical work of building arguments for our daily actions or the policies we live by, we will never develop the critical thinking ability to even question what is generated. Instead, it is generated for us. What would that do to us long term? Especially when we already see that ChatGPT and other AI systems can get things terribly wrong – and not give us the first clue that they are wrong.
Secondly, like all tools, these systems can be controlled or manipulated by nefarious agents. Our most deadly and horrific tools of destruction (nuclear bombs and sophisticated strategic weapons) are today largely contained within government military systems and by the highly specialized ability needed to build them.
AI can be wielded by anyone, anywhere in the world, with any motivation (political, personal, etc.). With just a small rack of commercially available servers, one has the ability to unleash the kind of infinitely scalable social media posting, auto-responding, narrative controlling, news story generating, and possibly subverted think-for-you devices upon the whole world.
We have known since at least 2019 that this is happening on all major social media platforms, despite the best efforts of some of the smartest people in Silicon Valley. Smarter Every Day did a series of stories on the problem. Research has proven again and again that these things are happening, are very, very easy to do, and are very, very hard to stop:
A few clever AI systems that would likely cost less than a single cruise missile could easily overwhelm social media forums, message boards, Wikipedia edits, generated news articles, etc. – before we could ever hope to verify the claims or combat their ability to generate hundreds of thousands of responses, up/down votes, and planted webpage articles every hour. How could one even verify the claims if everything is suspect? Why WOULDN’T a country do this if it cost less than a single missile? Even better, what if the AI itself can be subverted to bias certain responses (which we have already seen)?
In the post-truth internet, people are well into putting their trust in anonymous influencer opinions and echo-chamber forum posts before well-verified facts. What will this mean in an internet era in which ‘objective facts are less influential in shaping public opinion than appeals to emotion and personal belief’?
My, how far we’ve come from the idea that the internet would become a forum in which people share ideas and the best ones rise to the top. How dangerously naïve we were…