Demo scenes are not dead

Massive in the ’90s, the demoscene is not dead. The Revision 2024 demo party took place March 29th to April 1st in Saarbrücken, Germany.

There was music, seminars, videos, livestreams, a 5k run, and of course – amazing code demos. This included the 256-byte demo competition. One of the best entries was a post-apocalyptic black-and-white city rendered with just 256 bytes of code running on DOS.

Don’t pay for a VPN, make your own

There’s been some trouble lately with free VPN services collecting and selling user data. Others have suffered major leaks or attacks (such as the recent TunnelVision attack, which can route traffic around the VPN tunnel). So why not set up your own VPN and avoid those issues?

A few important reminders: VPNs do not make you anonymous. They only create a secure pipe between you and the VPN server. From that point on, your traffic can be collected and used – and many free VPN services do exactly that. Anonymity comes only from tools like the Tor network.

But instead of paying a VPN service fee, or potentially having your data collected and sold, you might set up your own VPN server on a cheap cloud host or a Raspberry Pi.

  1. Create an account on a cloud hosting provider like DigitalOcean
  2. Download Algo VPN on your local computer, unzip it
  3. Install the dependencies using the commands in the Algo documentation
  4. Run the installation wizard
  5. Double click on the configuration profiles in the configs directory
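
Step 5’s profiles are just text files. As an illustration, a WireGuard client profile (one of the formats Algo can generate) looks roughly like the following – every key, address, and endpoint below is a placeholder, not output from a real Algo run:

```ini
[Interface]
# The client's private key and tunnel address (placeholders)
PrivateKey = <client-private-key>
Address = 10.49.0.2/32
DNS = 10.49.0.1

[Peer]
# Your server's public key and public address (placeholders)
PublicKey = <server-public-key>
Endpoint = 203.0.113.10:51820
# Route all traffic through the tunnel
AllowedIPs = 0.0.0.0/0, ::/0
```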

It’s important to note that there are some limitations. This setup is good if you need a secure connection from where you are to the location of the server (ex: you’re in China and need access to blocked US services). Again, this doesn’t make you anonymous: your traffic exits the VPN server in the clear, just as it would from any other connection.

Paid VPN services also often offer servers in many countries so you can appear to be browsing from a specific country. A self-hosted VPN only appears to be wherever your server is hosted.

DREM – MFM/RLL hard drive and Floppy emulators

Connecting old floppy disk drives to modern hardware is not easy, and resurrecting old MFM and RLL hard drives is even harder. The traditional method is simply to find an old PC with the legacy controller hardware to read the drives. But now there are a few solutions.

DREM:

DREM is built on a high-performance FPGA platform and does not require a PC for any file encoding operations. It is equipped with a VGA output, PS/2 keyboard input, and file manager software, so a user can browse the SD card and insert DSK images into virtual drives.

DREM uses DSK disk image files, which contain the raw dump of a disk. The raw image consists of a sector-by-sector binary copy of the source medium.
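
Since a raw image is just sectors laid end to end, locating any sector is pure arithmetic. A minimal sketch in Python – the default geometry values below describe a standard 1.44 MB floppy, not anything DREM-specific:

```python
def chs_offset(cyl, head, sector, heads=2, spt=18, bps=512):
    """Byte offset of a sector within a raw, sector-by-sector disk image.

    CHS sector numbers are 1-based; cylinders and heads are 0-based.
    Defaults describe a 1.44 MB floppy: 80 cyls x 2 heads x 18 sectors x 512 B.
    """
    return ((cyl * heads + head) * spt + (sector - 1)) * bps

def read_sector(image_path, cyl, head, sector, bps=512):
    """Return one sector's bytes from a raw DSK-style image file."""
    with open(image_path, "rb") as f:
        f.seek(chs_offset(cyl, head, sector, bps=bps))
        return f.read(bps)
```

As a sanity check, the offset of the last sector plus one sector length, `chs_offset(79, 1, 18) + 512`, equals the full 1,474,560-byte size of a 1.44 MB image.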

If you’re just looking for floppy emulation, I recommend GreaseWeazle or other solutions.

https://www.drem.info/drem

MFM Board Emulator:

Also available, though not quite as polished, is the pdp8online MFM board emulator.

Holodeck flooring

Lanny Smoot is a Disney Research Fellow who is being inducted into the National Inventors Hall of Fame.

Here he is showing off his HoloTile floor, which lets multiple people walk on it in any direction while it automatically keeps each of them centered. Definitely something that could be used for VR.

Admitting your mistakes

Speaking at QCon back in 2009, Tony Hoare admitted to probably one of the biggest mistakes of his career – one every programmer knows all too well: the invention of NULL, simply because ‘it was so easy to implement’.

I call it my billion-dollar mistake. It was the invention of the null reference in 1965.

At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
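
The cost Hoare describes is easy to reproduce in any language with nullable references. A small illustration in Python, where the same hazard appears as `None` – the `find_user` function and its data are made up for the example:

```python
from typing import Optional

USERS = {1: "ada", 2: "grace"}  # hypothetical data

def find_user(user_id: int) -> Optional[str]:
    """Return the username, or None if absent -- the 'null reference' pattern."""
    return USERS.get(user_id)

# The billion-dollar mistake in miniature: this line runs fine until the
# missing case finally occurs, then crashes at runtime:
# find_user(3).upper()  # AttributeError: 'NoneType' object has no attribute 'upper'

# Nothing in the language forces the caller to remember this check:
name = find_user(3)
greeting = name.upper() if name is not None else "<unknown>"
```

Languages that encode absence in the type system (Rust’s `Option`, Haskell’s `Maybe`) make the compiler demand that check, which is exactly the safety Hoare says the null reference gave up.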

Extracting BitLocker keys in just a few seconds

Stacksmashing demonstrates that the communication between the CPU and a discrete TPM is unencrypted and can be sniffed by attaching wires to the traces between them. This is not new, but he has now published the source and board designs that make it easy – on older systems with this long-known flaw of exposed traces.

This isn’t really new info, and it requires several things to line up: physical access to the device and a non-integrated TPM with exposed traces. Modern CPUs aren’t easily exploitable this way because the TPM is now integrated into the die. Attacks like this were fairly common in the early days – at one point, just plugging a FireWire cable into a Mac let you read encryption keys out of memory on a sleeping or running machine.

Additionally, BitLocker using a TPM without a PIN was cracked years ago using fairly common electronic components. It has long been understood that any secure BitLocker deployment uses a TPM plus a PIN.

A reminder that security is only as good as its weakest link.

Links:

  • https://www.tomshardware.com/pc-components/cpus/youtuber-breaks-bitlocker-encryption-in-less-than-43-seconds-with-sub-dollar10-raspberry-pi-pico
  • https://www.zdnet.com/article/new-bitlocker-attack-puts-laptops-storing-sensitive-data-at-risk/
  • https://github.com/stacksmashing/pico-tpmsniffer

Reverse engineering game code from Yar’s Revenge explosion

It’s long been known that the graphical explosions and the glitchy “neutral zone” safe area in Yar’s Revenge are actually the game’s own binary code drawn on screen. Retro Game Mechanics Explained wondered if it was possible to reverse-engineer the code from this display.
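
The trick behind that display is conceptually simple: read program bytes straight out of ROM and draw each byte as eight pixels. A minimal sketch of that mapping in Python – the byte values are arbitrary stand-ins, not actual Yar’s Revenge ROM:

```python
def bytes_to_bitmap(data):
    """Render each byte as a row of 8 pixels (MSB first) -- code as graphics."""
    return [[(byte >> (7 - bit)) & 1 for bit in range(8)] for byte in data]

def render(data, on="#", off="."):
    """ASCII view of the bitmap, one byte per row."""
    return "\n".join("".join(on if px else off for px in row)
                     for row in bytes_to_bitmap(data))

# Arbitrary 'code' bytes -- each one becomes one stripe of the pattern.
print(render(b"\xA5\x5A\xFF\x00"))
# #.#..#.#
# .#.##.#.
# ########
# ........
```

Reversing that mapping – reading opcode bytes back out of the on-screen pattern – is essentially what the video does.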

He does an AMAZING job analyzing and dissecting the graphical patterns to determine not only how it works, but what the underlying code might have been.

It’s a wonderful bit of reverse engineering and definitely worth a watch.

Attacking AI with Adversarial Machine Learning

Adversarial machine learning is a branch of machine learning that studies how carefully crafted, deceptive inputs can trick AI models.

Adversarial attacks are now getting more and more research attention, but they had humble beginnings. The first attempts were by protest activists using very simple defacing and face-painting techniques. Dubbed CV Dazzle, the approach sought to thwart early computer-vision detection routines by painting geometric patterns over faces and objects.

These worked on very early computer vision algorithms, but are largely ineffective against modern CV systems. The creators of this kind of face painting were largely artists who now describe the effort more as a political and fashion statement than as an effective countermeasure.

More effective approaches

It turns out that you can often fool algorithms in ways not actually visible to human viewers. This paper shows that you can cause models to consistently misclassify adversarially modified images by applying small but intentionally worst-case perturbations to examples from the dataset. The perturbed input makes the model output an incorrect answer with high confidence. In the paper’s well-known panda example, the image is combined with a perturbation to produce an output image that looks fine visually, but the model classifies it as something else entirely – and does so with high confidence.
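
The perturbation technique from that line of work, the fast gradient sign method, is tiny to state: step each input value by ±ε in whichever direction increases the loss. A sketch on a toy logistic-regression model with NumPy – the weights and input here are synthetic stand-ins, not a real image classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """Fast gradient sign method against a logistic model p = sigmoid(w.x + b).

    For binary cross-entropy loss, the gradient w.r.t. the input is (p - y) * w.
    Stepping eps in the sign of that gradient maximally increases the loss per
    unit of max-norm perturbation.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)
x = 0.5 * w                       # an input the model scores confidently as class 1
y = 1.0
p_clean = sigmoid(w @ x)
x_adv = fgsm(x, w, b=0.0, y=y, eps=0.25)
p_adv = sigmoid(w @ x_adv)
print(f"clean: {p_clean:.3f}  adversarial: {p_adv:.3f}")  # confidence drops
```

The per-pixel change is bounded by ε, which is why the adversarial image can look unchanged to a human while the model’s output shifts dramatically.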

This isn’t the only technique; there are many more. One of them, the Generative Adversarial Network (GAN), is actually used to improve AI models: one network tries to fool another, and that feedback helps train the target model to be more robust – like working out at a gym, or practicing the same thing with many variations.

Nightshade and Glaze

This kind of attack isn’t just academic. Some artists see themselves in a running battle with generative AI algorithms.

Nightshade is a tool artists can use to alter the pixels of an image in a way that fools AI computer-vision models but leaves the image unchanged to human eyes. If such images are scraped into a training set, they can be misclassified, progressively poisoning the trained model.

Glaze is a tool that prevents style mimicry. It computes a set of minimal changes that appear invisible to human eyes but look to AI models like a dramatically different art style. For example, a human sees a charcoal portrait, but an AI model might see the glazed version as a modern abstract portrait. When someone then prompts the model to generate art mimicking the charcoal artist, they get something quite different from what they expected.

The AI Arms Race is On

As with anything, we’re now in an arms race with lots of papers written about the various problems of adversarial attacks and how to protect your models and training data from them. Viso.ai has a good overview of the space that will get you started.
