After analyzing nearly 10 years of CVEs, Google researchers calculated that at least 40% of security exploits in C++ stemmed from spatial memory safety violations, such as writing to an out-of-bounds memory location.
Google researchers showed they were able to “retrofit” spatial safety onto their C++ codebases, and to do it with a surprisingly low impact on performance. They used straightforward strategies such as bounds checking buffers and data structures – as is done in other languages – and released a new, safer Hardened libc++.
The results show up in this chart of segfaults across their entire fleet of computers before and after the improvements. Their internal red-team testing results also improved markedly: the effort uncovered over 1,000 bugs and should prevent an estimated 1,000–2,000 new bugs each year at current development rates.
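To make the idea concrete, here is a toy sketch of the kind of check a hardened standard library inserts (my own illustration, not Google's actual Hardened libc++ code): every indexed access is verified against the container's bounds before memory is touched, so an out-of-bounds write becomes a clean trap instead of silent corruption.

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstddef>

// A hypothetical bounds-checked array, illustrating the hardening technique.
template <typename T, std::size_t N>
struct checked_array {
    T data[N];

    T& operator[](std::size_t i) {
        if (i >= N) {                      // the "hardening" check
            std::fprintf(stderr, "bounds violation: %zu >= %zu\n", i, N);
            std::abort();                  // fail fast instead of corrupting memory
        }
        return data[i];
    }
    std::size_t size() const { return N; }
};
```

The surprising result from the paper is that checks like this, applied fleet-wide, cost far less performance than most C++ programmers assume.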
AMD has just unveiled Frame Latency Meter (FLM), which lets you measure input-to-display latency. Normally, this was done with a high-speed camera, a mouse, and an FPS game with a visible muzzle flash: the camera would capture the moment the mouse was clicked, and you would count the frames until the muzzle flash or other on-screen reaction appeared.
This utility requires no special equipment and works with any AMD, Nvidia, or Intel GPU that supports DirectX 11 or newer. For capturing frames, AMD GPUs use the Advanced Media Framework (AMF), while other GPUs fall back to DirectX Graphics Infrastructure (DXGI) capture. FLM can generate detailed latency and effective frame-rate statistics, which can be exported to CSV files for further analysis.
The way it works is clever: FLM measures latency by continuously capturing frames and comparing each one to the previous frame within a selected region. It then generates a mouse movement event using standard Windows functionality and waits for the frame contents to change. The time between the mouse movement and the detected frame change is recorded as the latency.
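The frame-comparison core of that loop can be sketched in a few lines (names and thresholds here are my own, not AMD's API): sum the per-pixel differences inside the region of interest and call it "changed" once the sum crosses a threshold.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Simplified grayscale frame, standing in for a captured desktop frame.
struct Frame {
    int width;
    std::vector<std::uint8_t> pixels;
};

// Returns true if the selected region differs "enough" between two frames.
bool region_changed(const Frame& prev, const Frame& cur,
                    int x0, int y0, int x1, int y1, long threshold) {
    long diff = 0;
    for (int y = y0; y < y1; ++y)
        for (int x = x0; x < x1; ++x)
            diff += std::abs(int(cur.pixels[y * cur.width + x]) -
                             int(prev.pixels[y * prev.width + x]));
    return diff > threshold;
}

// The measurement loop itself (pseudo-steps, omitting the Win32 details):
//   1. inject a mouse move (e.g. via SendInput) and record t0
//   2. keep capturing frames until region_changed(...) fires
//   3. report (now - t0) as the latency sample
```

The threshold exists so that video noise and subtle dithering don't register as a response to the injected input.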
FLM is available as a free download for Windows 10 and 11 users via GPUOpen or the official GitHub repository.
A Microsoft engineer grew suspicious of performance problems while optimizing his code. After digging in, he discovered that XZ Utils, a simple data compression library, contained a hidden backdoor. What made this discovery noteworthy is that the innocuous-looking compression library is used in tons of open-source projects and Linux distributions.
The analysis of how the code got into XZ Utils uncovered a fiendishly sophisticated operation. The project was understaffed, with only one primary maintainer, who was increasingly catching flak for falling behind – a growing problem across open-source projects. An eager developer named Jia Tan had been contributing to XZ since at least late 2021 and built trust with the community of developers working on it. Eventually Tan ascended to co-maintainer of the project, which allowed him to add code without needing his contributions approved.
Tan got there through what now appears to be a coordinated set of accounts and discussions aimed at installing him as co-owner. Various accounts appeared and began complaining about the pace of updates, missing features, and unanswered questions. These coordinated complaints, alongside Tan’s steady contributions, appear designed to pressure the owner into elevating Tan to co-owner. Whether the work of one person or several, this mechanism is known as ‘persona management’ – something that was proposed as far back as 2010.
“I think the multiple green accounts seeming to coordinate on specific goals at key times fits the pattern of using networks of sock accounts for social engineering that we’ve seen all over social media,” said Molly, the EFF system administrator. “It’s very possible that the rogue dev, hacking group, or state sponsor employed this tactic as part of their plan to introduce the back door. Of course, it’s also possible these are just coincidences.”
The code introduced was sophisticated enough that analysis of its precise functionality and capability is still ongoing.
The National Counterintelligence and Security Center defines this kind of attack as a ‘supply chain attack’, and open-source projects are particularly susceptible to it.
It’s definitely worth reading the article, because these kinds of sophisticated social-engineering attacks are now a reality.
The Sony Trinitron KX-45ED1, aka the PVM-4300, is thought to be the largest CRT TV ever sold to consumers. It has a 43-inch visible diagonal on its 45-inch tube and weighs in at almost 440 lbs; the stand alone weighs over 170 lbs. It cost $40,000 in 1989 (about $100K today, adjusted for inflation).
Long since thought gone, one extremely rare example was saved by Shank Mods from an untimely end. It was being kept on the second floor of an Osaka noodle shop called Chikuma Soba – a building due for demolition in just a few weeks.
It was moved from the soba shop, crated up, and shipped to the US. While it mostly worked, it did need servicing: the alignment was off, the tube had some cataracts, and the dynamic convergence amplifier circuit had failed. They worked through all of these and now have a very nice display.
The video describes the incredible journey and is definitely worth a watch
Andreas from Insomniac Games made an Amiga 500 demo in 2019 as part of his work with The Black Lotus demo group. He presented not only the Eon Amiga 500 demo, but tons of great technical information about the four years it took to develop it.
Old demoscene programmers hold amazing amounts of wisdom. I’ve found their lessons hold true when solving core pieces of logic (though they don’t carry over to larger, complete-system development):
Work backwards from the desired outcome to discover your constraints. Don’t just brute-force it. Instead ask: what must be in place for us to get peak performance from the key component we depend on (rendering, disk loading, etc.)? Then work from that constraint.
Do everything you can at compile time, not run time. Pre-compute tons of things – even the build-up of data structures in memory: run the build-up once, then save and reload that blob automatically.
Over-generalizing early is a trap many devs fall into. Solve the problem in front of you, and trust that you can delete the code and do something else if it sucks – that’s cheaper and faster than trying to anticipate everything ahead of time. Do the simplest thing that will work.
If you end up with a small runtime table/code that doesn’t require runtime checks because you can prove it can’t go wrong, you’re doing something right.
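The "do it at compile time" lesson above maps neatly onto modern C++, where a whole lookup table can be computed by the compiler and baked into the binary as data (a toy illustration with a made-up table, not code from the actual demo):

```cpp
#include <array>
#include <cstdint>

// Build a 256-entry lookup table (squares mod 256, standing in for e.g. a
// sine table) entirely at compile time with a constexpr function.
constexpr std::array<std::uint8_t, 256> make_table() {
    std::array<std::uint8_t, 256> t{};
    for (int i = 0; i < 256; ++i)
        t[i] = static_cast<std::uint8_t>((i * i) & 0xFF);
    return t;
}

constexpr auto kTable = make_table();  // computed by the compiler, zero runtime cost

static_assert(kTable[3] == 9, "even the verification happens at compile time");
```

On an Amiga the equivalent trick was done with build tools or by saving out pre-initialized memory blobs, but the principle is identical: pay once at build time, never at run time.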
The actual Amiga is painfully slow and limited for development, so they took an Amiga emulator and hacked it up to debug on instead. Using call traps to signal the emulator, they added memory protection, fast-forward, debug triggers, symbol loading, cycle-accurate profiling, single-stepping, high-resolution timers, and more. It also allows perfect input playback.
Modern threaded producer/consumer components (disk loading, data transfer, decompressors, etc.) often just throw things into buffers and YOLO; there’s no explicit backpressure to show you where you’re wasting time or space. Running on this kind of constrained hardware or simulator reveals how much time a poorly designed algorithm or ill-chosen constraint is actually wasting.
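Making backpressure explicit doesn't take much. A minimal sketch (my own illustration, not from the talk) is a bounded queue whose producer must check for failure, so a full buffer shows up as a measurable stall rather than silent unbounded growth:

```cpp
#include <cstddef>
#include <deque>

// A bounded, single-threaded queue with explicit backpressure. A real
// multi-threaded version would add a mutex and condition variables, but the
// design point is the same: try_push can fail, and that failure is the signal.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t cap) : cap_(cap) {}

    bool try_push(const T& v) {        // fails when the consumer is behind
        if (q_.size() >= cap_) return false;
        q_.push_back(v);
        return true;
    }
    bool try_pop(T& out) {             // fails when the producer is behind
        if (q_.empty()) return false;
        out = q_.front();
        q_.pop_front();
        return true;
    }
    std::size_t size() const { return q_.size(); }

private:
    std::size_t cap_;
    std::deque<T> q_;
};
```

Counting how often `try_push` and `try_pop` fail tells you immediately which side of the pipeline is the bottleneck – exactly the visibility the unbounded-buffer-and-YOLO approach throws away.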
In a Hackaday article, Japhy Riddle tries to re-create the look of old CRT sub-pixels – the individual red, green, and blue phosphors that make up a single pixel. His approach is basically to fake it with Photoshop, but old systems like the Apple II, debayering, and even modern text anti-aliasing genuinely rely on some of these techniques.
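The core of the trick is simple enough to sketch in code (my own toy version, not Riddle's Photoshop workflow): blow each source pixel up into a 3-wide cell of red, green, and blue stripes, like a Trinitron-style aperture grille. Real CRT emulation layers scanlines, bloom, and mask patterns on top of this.

```cpp
#include <cstdint>
#include <vector>

struct RGB { std::uint8_t r, g, b; };

// Expand one row of pixels into phosphor-stripe sub-pixels: each input pixel
// becomes three output pixels carrying only its R, G, or B component.
std::vector<RGB> to_subpixels(const std::vector<RGB>& row) {
    std::vector<RGB> out;
    out.reserve(row.size() * 3);
    for (const RGB& p : row) {
        out.push_back({p.r, 0, 0});   // red phosphor stripe
        out.push_back({0, p.g, 0});   // green phosphor stripe
        out.push_back({0, 0, p.b});   // blue phosphor stripe
    }
    return out;
}
```

Viewed from far enough away, the stripes blend back into the original color – which is also why sub-pixel text anti-aliasing works.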
When teaching myself to program as a kid, my first language was BASIC, learned from type-in programs. After that, I made the very unorthodox choice to learn assembly. I wrote a small database, a TSR (terminate-and-stay-resident program), and a couple of other small creations.
Looks like GreatCorn went one better by writing his own game in x86 assembly.
RESound: Interactive Sound Rendering for Dynamic Virtual Environments
About 15 years ago, people noticed that rendering virtual scenes with ray tracing was a lot like how sound propagates through an environment. Light rays travel through open spaces, hit objects and then reflect, refract, and bend. Sound waves follow many of the same principles.
What if you used the same ray-casting methods to simulate sound traveling through an environment? Instead of the standard audio hacks that make something sound like it’s in a tiled bathroom or a big orchestra hall, you could simulate it accurately – reducing artist time. Simply play the sound and let the algorithm figure out how it should sound.
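A tiny taste of the geometric-acoustics idea (an illustrative sketch, not the RESound algorithm itself): a reflection off a flat wall can be handled by mirroring the source across the wall, so the reflected path's length directly gives the echo's arrival delay.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

double dist(Vec2 a, Vec2 b) { return std::hypot(a.x - b.x, a.y - b.y); }

// Delay (in seconds) of the first-order reflection off the wall y = 0,
// computed with the classic image-source trick: mirror the source across
// the wall and measure the straight-line distance to the listener.
double reflection_delay(Vec2 src, Vec2 listener, double speed_of_sound = 343.0) {
    Vec2 image{src.x, -src.y};                 // mirrored "image source"
    return dist(image, listener) / speed_of_sound;
}
```

Cast enough of these rays, accumulate each path's delay and attenuation, and you get an impulse response for the room – the same way a ray tracer accumulates light paths into an image.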
Not sure what other research has happened since. It was too computationally expensive for real time back then, but it was a cool idea, and maybe we have the compute for it with today’s GPUs.
Daniel Holden from Ubisoft gave this great talk at GDC 2018 on how data-driven analysis of their character animation control system evolved into an AI system that vastly reduced the complexity and manpower involved in building an animation system for character control.