The world has grown very familiar with retro hardware re-creations, game emulation, re-releases, speedruns, new games for old platforms, and a steady stream of new exploits, tools, and discoveries. The nitty-gritty work behind all of this, however, is a labor of love. For those who dig into the binaries, there are tricky copyright concerns to manage, only scraps of information about old hardware and software, highly optimized (and often tricky) code that is tough to read, and almost no financial gain outside of commercial re-releases.
Made Up of Wires walks us through a live bit of decompiling the PS1 classic Castlevania: Symphony of the Night to give you a taste of the work involved. It's not really that different from any other reverse engineering, but it's surprisingly accessible, since these old games were relatively small and simple.
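To give a flavor of what that work looks like, here's a hypothetical example of "matching" decompilation (not taken from the video): the goal is to write C that the era's compiler turns back into byte-identical MIPS machine code. The struct layout, offsets, function name, and assembly below are all invented for illustration.

```c
/* Given disassembled MIPS like this (offsets and labels made up):
 *
 *     lw    $v0, 0x10($a0)     # load entity->hp
 *     slti  $v1, $v0, 1        # v1 = (hp < 1)
 *     beqz  $v1, .Lreturn      # still alive? skip the store
 *      nop                     # branch delay slot
 *     sb    $zero, 0x14($a0)   # entity->alive = 0
 * .Lreturn:
 *     jr    $ra
 *      nop
 *
 * the decompiler's job is to reconstruct C that compiles back to the
 * exact same bytes:
 */
typedef struct Entity {
    char pad[0x10];
    int  hp;          /* offset 0x10 */
    char alive;       /* offset 0x14 */
} Entity;

void Entity_CheckDead(Entity *e) {
    if (e->hp < 1) {
        e->alive = 0;
    }
}
```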
Back in the day, I worked on a little project called Larrabee, which later turned into the Intel Xeon Phi coprocessor. It was an ambitious and exciting platform: a fully general-purpose x86 design with 512-bit-wide vector instructions, letting it operate like a streaming GPU architecture.
It turned out that getting performance out of this hardware was difficult. To reach its full potential, you had to use the vector units; without them, it was like writing a single-threaded app on an 8-core system. Single-SIMD-lane operation just wasn't going to cut it, as a 2017 International Journal of Parallel Programming article put it:
“Our results show that, although the Xeon Phi delivers a relatively good speedup in comparison with a shared-memory architecture in terms of scalability, the relatively low computing power of its computational units when specific vectorization and SIMD instructions are not fully exploited makes this first generation of Xeon Phi architectures not competitive”
The paper, and the host of others linked on the page as references, are a good read and give some hints as to why fixed-function GPUs have an advantage when it comes to raw streaming throughput. Hint: cache and data-flow behavior is as important as, if not more important than, vectorization on such architectures.
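To make the "use the vector units" point concrete, here's a minimal sketch in C contrasting a scalar loop with a 512-bit vectorized one. (Caveat: the first-generation Xeon Phi used its own IMCI instruction set; the AVX-512 intrinsics shown here arrived with later parts, but the idea is the same: process 16 floats per instruction instead of one.)

```c
/* Compile with -mavx512f on a machine that supports AVX-512. */
#include <immintrin.h>
#include <stddef.h>

/* Scalar version: one float per iteration, a single SIMD lane in use. */
void add_scalar(const float *a, const float *b, float *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = a[i] + b[i];
}

/* Vector version: 16 floats per iteration, the full 512-bit unit. */
void add_avx512(const float *a, const float *b, float *out, size_t n) {
    size_t i = 0;
    for (; i + 16 <= n; i += 16) {
        __m512 va = _mm512_loadu_ps(a + i);
        __m512 vb = _mm512_loadu_ps(b + i);
        _mm512_storeu_ps(out + i, _mm512_add_ps(va, vb));
    }
    for (; i < n; i++)      /* scalar tail for the remainder */
        out[i] = a[i] + b[i];
}
```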
Ever want to know what it's like to work in a game studio? Double Fine has released a 33-episode series called PsychOdyssey, which follows the development of Psychonauts 2 over seven years.
Acerola created a graphics-to-text shader. He covers a number of interesting techniques beyond just ASCII lookups: edge detection, depth color falloff, bloom and tone mapping, color tones, color quantization, and other filters. Definitely worth a watch, and you can check out a good amount of his code on his GitHub repo.
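The core of any such effect is the luminance-to-glyph lookup. Here's a minimal CPU-side sketch in C of just that step; the ramp, weights, and function names are illustrative, not Acerola's code (his version runs in a shader and samples a glyph texture atlas rather than printing characters):

```c
#include <stdio.h>

/* Darkest-to-brightest character ramp: denser glyphs cover more pixels. */
static const char RAMP[] = " .:-=+*#%@";
#define RAMP_LEN (sizeof(RAMP) - 1)   /* exclude the terminating '\0' */

/* Map linear RGB in [0,1] to an ASCII glyph via perceptual luminance. */
static char shade(float r, float g, float b) {
    float lum = 0.2126f * r + 0.7152f * g + 0.0722f * b;  /* Rec. 709 weights */
    int idx = (int)(lum * (RAMP_LEN - 1) + 0.5f);
    if (idx < 0) idx = 0;
    if (idx > (int)RAMP_LEN - 1) idx = (int)RAMP_LEN - 1;
    return RAMP[idx];
}

int main(void) {
    /* Render a horizontal gradient as a quick smoke test. */
    for (int x = 0; x < 64; x++)
        putchar(shade(x / 63.0f, x / 63.0f, x / 63.0f));
    putchar('\n');
    return 0;
}
```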
I wrote a while back on how to crash Linux and cause a kernel panic in order to test how your program handles a crash, but can you cause a Windows blue screen programmatically?
Methods you can use to cause a Windows blue screen:
Windows lets you configure a specific keyboard combination to cause a crash: you set a couple of registry keys, and you can then crash the system by holding the right CTRL key and pressing SCROLL LOCK twice. You can also customize the key sequence by registering custom keyboard scan codes. If you have a kernel debugger attached, it will be triggered after the crash dump is written.
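As a minimal sketch, this is what enabling the documented CrashOnCtrlScroll setting looks like in C for USB keyboards (the kbdhid service; PS/2 keyboards use i8042prt instead). It needs administrator rights and a reboot to take effect, and the resulting bug check is MANUALLY_INITIATED_CRASH (0xE2):

```c
/* Link with advapi32.lib. Run elevated; takes effect after a reboot. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    HKEY key;
    DWORD enable = 1;
    LONG rc = RegCreateKeyExA(
        HKEY_LOCAL_MACHINE,
        "SYSTEM\\CurrentControlSet\\Services\\kbdhid\\Parameters",
        0, NULL, 0, KEY_SET_VALUE, NULL, &key, NULL);
    if (rc != ERROR_SUCCESS) {
        fprintf(stderr, "RegCreateKeyExA failed: %ld\n", rc);
        return 1;
    }
    /* CrashOnCtrlScroll = 1 arms right CTRL + SCROLL LOCK x2. */
    rc = RegSetValueExA(key, "CrashOnCtrlScroll", 0, REG_DWORD,
                        (const BYTE *)&enable, sizeof(enable));
    RegCloseKey(key);
    return rc == ERROR_SUCCESS ? 0 : 1;
}
```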
The best way to trigger an artificial kernel crash, though, is to use NotMyFault, part of the Windows Sysinternals tools.
Massive in the '90s, the demoscene is not dead. The Revision 2024 demoparty just took place March 29th to April 1st in Saarbrücken, Germany.
There was music, seminars, videos, livestreams, a 5k run, and of course, amazing code demos, including a competition of 256-byte demos here. One of the best was a post-apocalyptic black-and-white city created by Gopher in just 256 bytes of code running on DOS.
Speaking at QCon back in 2009, Tony Hoare admitted to probably one of the biggest mistakes of his career, and one that every programmer knows all too well: the invention of NULL, because "it was so easy to implement":
I call it my billion-dollar mistake. It was the invention of the null reference in 1965.
At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
Vercidium (Patreon) experimented with using DirectX fences to render constantly updating geometry smoothly, at a high FPS, and without GPU upload hitches or stuttering.
It’s an interesting technique and he has some other good videos too.
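The core idea is sketched below in C with stubbed-out GPU calls. The wrapper names are hypothetical, standing in for the real API's fence primitives (in D3D12, ID3D12Fence's Signal and SetEventOnCompletion): keep several copies of the dynamic buffer in flight, and use a fence per slot so the CPU never overwrites memory the GPU is still reading.

```c
#include <stdint.h>
#include <stdio.h>

/* Stubs simulating the GPU so the sketch compiles and runs; real code
 * would call the graphics API's fence functions here. */
static uint64_t gpu_completed = 0;                /* last fence the "GPU" reached */
static void gpu_signal_fence(uint64_t v) { gpu_completed = v; }  /* pretend instant completion */
static void gpu_wait_fence(uint64_t v) {
    while (gpu_completed < v) { /* real code: block on an event instead of spinning */ }
}
static void upload_geometry(int slot) { printf("upload into slot %d\n", slot); }
static void draw_from(int slot)       { printf("draw from slot %d\n", slot); }

enum { FRAMES_IN_FLIGHT = 3 };                    /* triple-buffer the dynamic geometry */

int main(void) {
    uint64_t fence_value = 0;
    uint64_t submitted[FRAMES_IN_FLIGHT] = {0};

    for (uint64_t frame = 0; frame < 9; frame++) {
        int slot = (int)(frame % FRAMES_IN_FLIGHT);

        /* Don't overwrite a buffer the GPU may still be reading: wait on
         * the fence covering the last frame that used this slot. This is
         * what prevents upload stutter from becoming corrupted geometry. */
        gpu_wait_fence(submitted[slot]);

        upload_geometry(slot);                    /* safe to write now */
        draw_from(slot);

        submitted[slot] = ++fence_value;          /* fence that covers this slot */
        gpu_signal_fence(fence_value);            /* signaled when the GPU finishes */
    }
    return 0;
}
```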