Extracting Bitlocker keys in just a few seconds

Stacksmashing demonstrates that the communication between the CPU and a discrete TPM is unencrypted and can be snooped by attaching wires to the traces between them. This isn’t new, but he now provides all the source code and board designs to make it easy – at least on old systems with the long-known security flaw of exposed traces.

This isn’t really new info. It requires several things to line up: physical access to the device and a non-integrated TPM with exposed traces. Modern CPUs don’t have this easily exploitable design, since the TPM is now integrated into the die. Attacks like this were fairly common in the early days – at one point, just plugging a FireWire cable into a sleeping or running Mac let you read encryption keys straight out of memory.

Additionally, BitLocker using a TPM without a PIN was cracked years ago using fairly common electronic components. Any secure BitLocker deployment has long been understood to require both a TPM and a PIN.
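
Conceptually, once the bus traffic has been sniffed, recovering the key comes down to scanning the captured byte stream for the framing that precedes the key material. A minimal sketch – the marker bytes and 32-byte key length here are purely illustrative assumptions, not the real TPM wire format:

```python
def find_key_material(capture, marker, key_len=32):
    """Scan a sniffed bus capture for `marker` and return the
    key_len bytes that follow it, or None if it never appears."""
    idx = capture.find(marker)
    if idx == -1:
        return None
    start = idx + len(marker)
    return capture[start:start + key_len]

# Toy capture: filler traffic, a hypothetical marker, then 32 key bytes.
MARKER = b"\x2c\x00\x00\x00"   # made-up framing, NOT the real TPM header
key = bytes(range(32))
capture = b"\xff" * 64 + MARKER + key + b"\x00" * 16
```

The scanning step really is this simple in practice, because the key crosses the bus in the clear – the hard part is the hardware side of sampling the traces.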

A reminder that security is only as good as its weakest link.

Links:

  • https://www.tomshardware.com/pc-components/cpus/youtuber-breaks-bitlocker-encryption-in-less-than-43-seconds-with-sub-dollar10-raspberry-pi-pico
  • https://www.zdnet.com/article/new-bitlocker-attack-puts-laptops-storing-sensitive-data-at-risk/
  • https://github.com/stacksmashing/pico-tpmsniffer
Reverse engineering game code from Yar’s Revenge explosion

It’s long been known that the graphical explosions and the glitchy “safe zone” area in Yar’s Revenge are actually the game’s binary code rendered as pixels. Retro Game Mechanics Explained wondered if it was possible to reverse-engineer the code from this display.

He does an AMAZING job analyzing and dissecting the graphical patterns to determine not only how it works, but what the underlying code might have been.
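
In principle, reading the code back out is the display mapping run in reverse: each run of on/off pixels packs back into a byte of ROM. A minimal sketch, assuming a straightforward MSB-first, 8-pixels-per-byte mapping (the real display logic in the video is more involved):

```python
def pixels_to_bytes(pixels):
    """Pack a flat list of on/off pixels (1/0) back into bytes,
    MSB first - the reverse of drawing each ROM byte as 8 pixels."""
    assert len(pixels) % 8 == 0
    out = bytearray()
    for i in range(0, len(pixels), 8):
        byte = 0
        for bit in pixels[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

# 0xA9 0x00 is the 6502 instruction "LDA #$00"; rendered on screen it
# would appear as this run of lit/unlit pixels:
code = pixels_to_bytes([1,0,1,0,1,0,0,1,  0,0,0,0,0,0,0,0])
assert code == b"\xa9\x00"
```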

It’s a wonderful bit of reverse engineering and definitely worth a watch.

Attacking AI with Adversarial Machine Learning

Adversarial machine learning is the branch of machine learning that studies how carefully crafted, deceptive inputs can trick AI models into failing.

Adversarial attacks are starting to get more and more research attention, but they had humble beginnings. The first attempts came from protest activists who used very simple defacement and face-painting techniques. Dubbed CV Dazzle, the approach sought to thwart early computer-vision detection routines by painting geometric patterns over faces and objects.

These worked on very early computer-vision algorithms but are largely ineffective against modern CV systems. The creators of this kind of face painting were largely artists, who now describe the effort more as a political and fashion statement than as an effective countermeasure.

More effective approaches

It turns out you can often fool algorithms in ways that aren’t visible to human observers at all. This paper shows that you can cause AI models to consistently misclassify adversarially modified images by applying small but intentionally worst-case perturbations to examples from the dataset. The perturbed input causes the model to output an incorrect answer with high confidence. For example, the panda picture below is combined with a perturbation to produce an image that looks fine to a human but is recognized by the model as something incorrect – and incorrect at high confidence.
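
The core trick in that paper (the fast gradient sign method) is simple: take the gradient of the loss with respect to the input, then step the input by a small ε in the direction of the gradient’s sign. A minimal sketch against a toy logistic-regression “model” – the model and all names here are illustrative, not the paper’s ImageNet setup:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast Gradient Sign Method against a toy logistic-regression model:
    nudge the input by eps in the direction of the loss gradient's sign."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # sigmoid output
    grad_x = (p - y) * w                     # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.0
x = rng.normal(size=8)
y = 1.0 if x @ w + b > 0 else 0.0            # the model's own label for x

x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
# Each feature moves by at most eps, yet the logit is pushed toward
# the opposite class:
assert (x_adv @ w + b < x @ w + b) == (y == 1.0)
```

The per-feature change is bounded by ε, which is why the perturbed image can look unchanged to a human while the model’s output flips.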

This isn’t the only technique – there are many more. One of them, the Generative Adversarial Network (GAN), is actually used to improve AI models: one network attempts to fool the model, and those attempts are then used to train it to be more robust – like working out at a gym, or practicing the same thing with many variations.

Nightshade and Glaze

This kind of attack isn’t just academic. Some artists see themselves in an ongoing battle with generative AI algorithms.

Nightshade is a tool artists can use to alter the pixels of an image in a way that fools AI and computer-vision systems but leaves the image unchanged to human eyes. If such images are scraped into an AI model’s training set, they can be misclassified, which progressively corrupts the trained model.

Glaze is a tool that prevents style mimicry. It computes a set of minimal changes that appear unchanged to human eyes but look like a dramatically different art style to AI models. For example, a human might see a charcoal portrait, while an AI model sees the glazed version as a modern abstract portrait. When someone then prompts the model to generate art mimicking the charcoal artist, they get something quite different from what they expected.

The AI Arms Race is On

As with anything, we’re now in an arms race with lots of papers written about the various problems of adversarial attacks and how to protect your models and training data from them. Viso.ai has a good overview of the space that will get you started.

Motion capture artist

曦曦鱼SAKANA shows off some of the amazing skills you need as a motion capture artist working on video games. She seems to have mastered both male and female (and zombie!) walks, along with lots of interesting and really unique swaggers and variations.

One rail train – the self-balancing monorail from 1910

Primal Space (which has some fantastic videos with 3D model recreations) shows us the innovative Brennan gyroscopic monorail designed in the early 1900s.

Louis Brennan wondered if he could help the spread of rail by making it half as expensive – needing only one rail instead of two. But how do you balance tons of train on a single rail?

In the end, he designed a monorail that defied conventional limitations by balancing on a single rail, leaning into corners without external input, and remaining stable (no hunting oscillation) even when stationary – all through the use of two extremely clever interconnected gyroscopes.

What seems to have largely done in the idea is that each car in the train needed its own gyroscope motor and assembly. It makes me wonder whether an interconnected air system, like the one in modern train brake systems, could power the gyroscopes and reduce that space requirement. The design also had the unfortunate problem of falling over if the gyroscopes stopped, malfunctioned, or ran out of fuel while the car wasn’t parked on its supports. And it didn’t remove the need to design and acquire right-of-way to lay the tracks in the first place.

Still – it’s quite amazing to see this thing in action, all done mechanically and before computers.

Procedurally generated VR city

Vuntra City is a procedural VR city generator in Unreal Engine 5, developed by a single person over the last few years. I know, I know – procedurally generated content has some serious shortcomings. Too many games use procedural content as thinly veiled programmer art designed to fill space rather than be part of the experience.

The author does a great job recognizing those traditional limitations and attempting to fix them. Probably the best observations they make are not on the technical side, but the aesthetic side.

It turns out they arrived at an excellent solution with just some good observations and shockingly simple engineering. As an engineer, I see far, far too many projects over-complicate things that could be done much more simply. Simplicity is how you know you’re on the right track; complexity leads to tears.

After two years of experimenting, they have a really interesting solution. Check out the VuntraCity YouTube channel to see videos of how they experimented with different techniques and solutions. I particularly liked how they used a plain old treemap layout to break up boring city grid structures. Combining it with a caching and pooled-allocation system is nothing new, but it was a good little optimization.
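
A slice-and-dice treemap is a simple way to get that irregular, non-grid look: split a rectangle into strips proportional to a set of weights, then split the resulting strips again in the other direction. A minimal sketch with axis-aligned blocks – this is the general technique, not the author’s actual code:

```python
def slice_rects(rect, weights, vertical):
    """Split rect (x, y, w, h) into strips proportional to weights,
    either vertically (side by side) or horizontally (stacked)."""
    x, y, w, h = rect
    total = float(sum(weights))
    rects = []
    for wt in weights:
        f = wt / total
        if vertical:
            rects.append((x, y, w * f, h))
            x += w * f
        else:
            rects.append((x, y, w, h * f))
            y += h * f
    return rects

# Carve a 100x100 "city block" into three lots, then split the middle
# lot the other way - irregular parcels instead of a uniform grid.
lots = slice_rects((0, 0, 100, 100), [1, 2, 1], vertical=True)
parcels = slice_rects(lots[1], [3, 1], vertical=False)
```

Varying the weights (and recursing a few levels deep) produces buildings of different footprints while still filling the block exactly.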

RowHammer attacks have a new friend – RowPress

Rowhammer is a DRAM memory security vulnerability discovered in June 2014 (paper here). It demonstrates a security problem in which programs can modify memory they should not have access to. The paper notes how DRAM memory cells interact electrically by leaking charge, potentially changing the contents of nearby memory rows that were not addressed in the original memory access. This circumvention of the isolation between DRAM cells results from the high cell density in modern DRAM, and can be triggered by specially crafted memory access patterns that rapidly activate the same rows numerous times.

The row hammer effect has been used in privilege-escalation security exploits (paper here). Google’s Project Zero demonstrated two working privilege-escalation exploits based on the row hammer effect in 2015. Since then, there has been a back-and-forth war of fixes and new exploits – some even involving ways to circumvent ECC (error-correcting) DRAM.

Now fast forward to today, and there is another way to flip bits – RowPress (paper here). Instead of ‘hammering’ neighboring rows with certain access patterns, this method manipulates how long the aggressor row is left open when reading it. When a computer accesses a chunk of memory, it opens the row holding the desired data and transfers it to the CPU. The researchers show you can use clever methods to manipulate how long that row is left open. When an aggressor row is held open for the optimal amount of time, you can affect nearby victim rows:

We show that keeping a DRAM row (i.e., aggressor row) open for a long period of time (i.e., a large aggressor row on time, tAggON) disturbs physically nearby DRAM rows. Doing so induces bitflips in the victim row without requiring (tens of) thousands of activations to the aggressor row. We characterize RowPress in 164 off-the-shelf DDR4 DRAM chips from all three major manufacturers and find that RowPress significantly amplifies DRAM’s vulnerability to read-disturb attacks (i.e., greatly reduces the minimum number of total aggressor row activations to cause at least one bitflip, ACmin).

The methods they use are VERY clever. They started on an FPGA-based test bed to validate the idea, then moved to PCs. This required deep knowledge of memory hardware and clever manipulation of the memory controller and cache systems (section 6.2 of the paper). This summary from the comments was great:

With respect to knowing how physical memory maps to their process memory, they allocated a 1GB hugepage and use a technique called DRAMA to determine the row-column mapping.

To keep their target row open, they take advantage of the fact (new to me) that multiple cache blocks will live on the same physical row, which means that repeated accesses to those blocks can influence the memory controller to keep that row open. They also empty the processor cache between each iteration so that they can be sure that they will hit the actual RAM.

To bypass the target row refresh (TRR) mechanisms that have been implemented to counter traditional RowHammer attacks, they also toggle a large number of dummy rows so that the TRR will pick up on those rather than the actual aggressor rows, since TRR implementations apparently have a small number of candidate aggressor rows.
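
The DRAMA-style mapping functions mentioned above are typically XOR combinations of physical address bits. A minimal sketch with made-up masks – real masks vary per platform and are recovered empirically, e.g. via DRAMA’s timing measurements:

```python
def xor_bits(addr, mask):
    """XOR together the physical-address bits selected by mask."""
    return bin(addr & mask).count("1") & 1

# Hypothetical addressing functions: each bank-select bit is the XOR
# of a pair of physical address bits. These masks are illustrative,
# not from any real memory controller.
BANK_MASKS = [0x2040, 0x1020, 0x8100]

def bank_of(addr):
    """Combine the XOR functions into a bank index (0..7 here)."""
    return sum(xor_bits(addr, m) << i for i, m in enumerate(BANK_MASKS))

# Two addresses differing only in bits outside every mask land in the
# same bank - candidates for aggressor/victim rows in the same bank.
assert bank_of(0x12345000) == bank_of(0x12345008)
```

Once you can compute which bank (and row) a virtual page maps to, you can deliberately pick aggressor and victim addresses that are physical neighbors – the prerequisite for both RowHammer and RowPress.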

Running almost any Unreal game in VR

Waifu Enjoyer shows off UEVR. UEVR lets you play just about any Unreal Engine 4 & 5 game in VR – even if it wasn’t made for VR. It does this by hooking into the game’s DirectX rendering and overriding it to render for a VR headset.

Read more on the UEVR project page here.
