
Dall-E results

Michael Green did some experiments with Dall-E 2, and it’s pretty mind-blowing what it can produce. He tests it by asking it to reproduce various kinds of photographs in different artists’ styles.

These are just two of the images that are completely generated by AI:

https://twitter.com/triplux/status/1542529379485396995?s=20&t=CfkfkIvsM74LQgwpDvcrnA
Using Stable Diffusion for compression

Last week, Swiss software engineer Matthias Bühlmann discovered that the popular image synthesis model Stable Diffusion could compress existing 2D images with fewer visual artifacts than JPEG or WebP at high compression ratios, though there are some important limitations.

When Stable Diffusion analyzes and “compresses” images into weight form, they reside in what researchers call “latent space,” which is a way of saying that they exist as a sort of fuzzy potential that can be realized into images once they’re decoded. With Stable Diffusion 1.4, the weights file is roughly 4GB, but it represents knowledge about hundreds of millions of images.

While most people use Stable Diffusion with text prompts, Bühlmann cut out the text encoder and instead forced his images through Stable Diffusion’s image encoder process, which takes a low-precision 512×512 image and turns it into a higher-precision 64×64 latent space representation. At this point, the image exists at a much smaller data size than the original, but it can still be expanded (decoded) back into a 512×512 image with fairly good results.
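Plugging in the numbers shows why this counts as compression. Here's a back-of-the-envelope sketch; the 4-channel latent shape and the 8-bit quantization of each latent value are my assumptions based on how Stable Diffusion's autoencoder is commonly described, not figures quoted above:

```python
# Rough data-size arithmetic for the encode-to-latent-space trick.
# Channel counts and quantization below are assumptions, not quoted figures.
original_bytes = 512 * 512 * 3   # 512x512 RGB source image, 8 bits per channel
latent_bytes = 64 * 64 * 4       # 64x64 latent, 4 channels, quantized to 8 bits each

ratio = original_bytes / latent_bytes
print(f"{original_bytes} B -> {latent_bytes} B (~{ratio:.0f}x smaller)")
```

Under those assumptions the latent is roughly 48× smaller than the raw pixels, which is why decoding it back to a decent 512×512 image amounts to compression.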

Bühlmann’s method currently comes with significant limitations. It’s not good with faces or text, and in some cases it can inject detail features into the decoded image that were not present in the source image. (You probably don’t want your image compressor inventing details that don’t exist.) Also, decoding requires the 4GB Stable Diffusion weights file and the extra decoding time inherent to Stable Diffusion.

This isn’t the first time AI has been explored as a method of compression as much as generation. Daniel Holden of Ubisoft presented an astounding paper at GDC in 2018 about using neural nets to compress the animation data used in video game character animation.

Biggest discoveries of the year

Quanta Magazine makes a wonderful set of videos on mathematics, computer science, physics, biology, cosmology, and other science fields. They distill amazing discoveries into quick videos that often include interviews with the very scientists involved. One series I really like is their yearly summary videos, which sum up some of the biggest breakthroughs of the year.

The 2021 video has a really great interview on how we’re starting to formalize, and really understand, how the neural nets used in AI algorithms work. The researchers used a clever idea: start by analyzing how these networks behave when the network width is infinite.
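You can get a feel for the infinite-width idea in a few lines. This is my own minimal illustration, not the construction from the video: with 1/√width scaling, the output of a random one-hidden-layer tanh network over random draws of its weights looks more and more Gaussian as the width grows, which is what makes the infinite-width limit analytically tractable. Excess kurtosis is zero for a Gaussian, so watching it shrink shows the effect.

```python
import math
import random

def random_net_output(width, x=1.0, rng=random):
    # One forward pass of a random one-hidden-layer tanh network.
    # The 1/sqrt(width) scaling keeps the output variance fixed as width grows.
    s = sum(rng.gauss(0, 1) * math.tanh(rng.gauss(0, 1) * x) for _ in range(width))
    return s / math.sqrt(width)

def excess_kurtosis(width, samples=3000, seed=0):
    # Estimate excess kurtosis of the output over many random networks.
    # It is 0 for a Gaussian, so shrinking values mean "more Gaussian".
    rng = random.Random(seed)
    outs = [random_net_output(width, rng=rng) for _ in range(samples)]
    m = sum(outs) / samples
    var = sum((o - m) ** 2 for o in outs) / samples
    m4 = sum((o - m) ** 4 for o in outs) / samples
    return m4 / var**2 - 3.0

narrow, wide = excess_kurtosis(1), excess_kurtosis(300)
print(f"width 1: {narrow:.2f}, width 300: {wide:.2f}")
```

The width-300 network's output distribution sits much closer to Gaussian than the width-1 network's, which is the central-limit-style behavior the infinite-width analysis exploits.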

These videos are a great part of my effort to move away from emptier forms of social media consumption and to spend my time and energy more intentionally on creative, positive, constructive, uniting, uplifting, and educational efforts.

The 2020 video has a really good segment on the Lean mathematical proof assistant, which is building up a library of theorems and assists in proving new ones.
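For a flavor of what machine-checked proofs look like, here is a toy theorem in Lean 4 syntax. This is my own minimal example, not one from the video; it simply applies a lemma from Lean's standard library rather than proving commutativity from scratch:

```lean
-- A toy theorem: addition on the natural numbers is commutative.
-- Lean's library already proves this as `Nat.add_comm`; we just apply it,
-- and Lean mechanically checks that the proof term matches the statement.
theorem my_add_comm (a b : Nat) : a + b = b + a := Nat.add_comm a b
```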

Paravirtualization of GPU

It looks like NVIDIA now allows paravirtualization of a single GPU, sharing it between different virtual machines. This is really cool for AI work. Craft Computing shows how to set up the graphics cards and virtual machines.

Check out some of his other great videos as well. I like this one, where he finally figured out how to use ANY graphics card for 3D acceleration in a virtual machine:

Gödel’s incompleteness theorems

Gödel proved his two famous incompleteness theorems in 1931, and I would argue that they are still probably the most profound mathematical discoveries of the whole century.

Veritasium gives one of the best descriptions of these proofs, and the mathematical developments that led to them.

Gödel’s Incompleteness Theorems

Two of the greatest intellectual achievements of modern times might surprise you. Both were developed by Austrian mathematician and logician Kurt Gödel in 1931. They are called simply Gödel’s incompleteness theorems, and they apply to all of mathematics, formal logic, and even philosophy (epistemology in particular). The implications turned out to be deeply profound and have kept mathematics, logic, and philosophy in disarray ever since. Despite almost a century of attempts, no one has been able to disprove them. In fact, almost all attempts end up supporting, and even reinforcing and extending, them. They are now accepted as almost certainly true.

The theorems sound simple enough at first blush. The first incompleteness theorem states that in any consistent formal system (mathematics, logic, physics, etc.) in which a certain amount of arithmetic can be carried out, there are statements of the language of the system that can neither be proved nor disproved within that system. According to the second incompleteness theorem, such a formal system cannot prove that the system itself is consistent (assuming it is indeed consistent).
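In the standard notation of mathematical logic, the two theorems are usually written roughly as follows (this is the common textbook phrasing, not a quote from the video):

```latex
% First incompleteness theorem: for any consistent, effectively axiomatized
% theory T containing enough arithmetic, there is a sentence G_T with
T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T

% Second incompleteness theorem: such a theory cannot prove its own consistency
T \nvdash \mathrm{Con}(T)
```

Here $\nvdash$ means "does not prove," $G_T$ is the famous self-referential Gödel sentence constructed for $T$, and $\mathrm{Con}(T)$ is an arithmetic statement encoding "$T$ is consistent."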

What is so shocking about these two simple theorems? They prove something devastating: mathematics and logic are not complete. There will always be true statements that the system cannot prove. It means that some problems can NEVER be solved within a given system of mathematics or logic. You can even try making new systems of math and logic (algebra, calculus, etc.), but they ALL will have things they cannot prove. It means you might work on a mathematical, physics, or logic problem your whole life, and none of the systems we know about might be able to solve it, even though it might have a solution. There might even be some problems that, even if we made infinitely many logical or mathematical systems, we might STILL not be able to solve.

Veritasium did an absolutely fabulous video on the topic that’s worth a listen.

It blew my mind when I learned about Gödel’s incompleteness theorems in college. Knowing that our tools are limited is frightening at first. It completely unseats our certainty that mathematics or science as we know them today are sufficient. In fact, we know they are NOT sufficient, and we know that we’ll almost certainly have to keep making new logical systems for the rest of eternity. We can never have a grand unified theory of everything. There is no ‘bottom’ to reach.

Yet this opens the reality that there will ALWAYS be something new to learn and know. There will be countless other models that might work for problems we have but haven’t solved yet, even though each one will be flawed and incomplete in its own way.

Many purists find this knowledge disastrous: it rips the rug out from under anyone who asserts we can know everything. Others are excited that there will always be new developments. Others are left in awe that even our very universe and reality itself lack the limits we have. Still others have taken this as proof of the infinite. I know at least one mathematician who believed it gave us proof of God.

I do believe in God, without question. Many people forget that the vast majority of modern science was developed by believers in God who saw no conflict with discovering the properties of the physical world. The idea that faith and science are incompatible is a very modern and absolutely incorrect train of thought.

Instead, I see this reality as much like ourselves. None of us is perfect, yet each of us has a uniqueness that might express a great truth no one else in history has seen or could see. This is why life is so infinitely precious, and why it is a tragedy to all when even one life is lost. It is why it is a crime against all humanity when we decide that suffering is reason to end a life, or that a disadvantaged life is not worth living, when we have such contrary examples and saw exactly where that idea led in the early 20th century during World War II.

Teaching an old Vectrex new tricks

I’ve always wanted to own an old Vectrex, but the Vecfever cartridge amps that desire up to 11. Not only does it let you program your own Vectrex games, it also lets you play the ROMs of actual vector-based arcade classics like Star Wars, Battlezone, Tempest, Lunar Lander, and many others.

Check out this wonderful video (and many other fascinating retro computing videos) from the guys at The Cave.

You can also replace the CPU of an old Vectrex with PiTrex – a Raspberry Pi based emulator.

There are also a lot of replacement Vectrex parts available from tenpencearcade.

Controllers: http://tenpencearcade.co.uk/victors-t…