Best AI for coding
David Gewirtz reviews a bunch of AI tools for coding and does some ranking.

Article:
not-matthias has blogged about his journey writing a kernel driver in Rust.
He has a few other fun hacky posts about reverse engineering a digital jukebox in a bar to vote for his songs.
The authors of the 3D Math Primer for Graphics and Game Development have made their intro book available for free online. It's aimed at those getting started, but nothing beats free, and the authors have done a number of GDC talks.
Google published a report on its effort to migrate code to the latest dependencies – an often thankless task fraught with risk. Google's code migrations included: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda time library with Java's standard java.time package. The 32-bit IDs were particularly rough because they were often generically defined types that were not easily searchable.
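To see why, here's a tiny hypothetical sketch (invented names, with C++ used for illustration, not Google's actual code) of an ID whose width hides behind a generic alias that a plain text search will mostly miss:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: the ID width lives behind an alias, so almost no call
// site literally mentions "32" and a grep-style search misses them.
using AdId = std::int32_t;  // the only line that says how wide an ID is

struct Campaign {
    AdId id;                         // looks width-agnostic at the use site
    std::vector<AdId> creative_ids;  // containers of the alias, same problem
};

AdId NextId(AdId current) { return current + 1; }

// Widening the alias is one line...
// using AdId = std::int64_t;
// ...but everything that assumed 32 bits (serialized formats, hashes,
// overflow checks, cross-language boundaries) still has to be found.
```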
They used a collection of AI tools as well as manual code reviews and touch-ups to achieve their goal. They emphasize that LLMs should be viewed as complementary to traditional migration techniques that rely on Abstract Syntax Trees (ASTs), grep-like searches, Kythe, and custom scripts because LLMs can be very expensive.
The results?
With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-to-JUnit4 transition. Approximately 87 percent of the AI-generated code was committed with no changes. For the Joda-to-java.time switch, the authors estimate a time saving of 89 percent compared to the projected manual change time.
Links:
Mr-Figs asked a great question on the Reddit gamedev forum: how do you handle building the custom tools you need to make a game?
It used to be that building a game also meant building all the authoring tools to go along with it. With the advent and spread of game engines like Unity, Unreal, Godot (and literally hundreds of others) along with amazing tools like Photoshop and Blender, the need to make your own tooling has dramatically decreased. Almost to the point that in a majority of cases, you probably don’t need to write tools.
Even if you find you can't use an existing tool as-is, others suggest using ChatGPT to extend an existing tool, or to extend the engine you're using via its SDK. Let AI do the work for you, since tools are not shipping code and don't need to be overly performant.
Strict_Bench_6264 wrote up a whole blog article to describe what he learned:
After analyzing nearly 10 years of CVEs, Google researchers calculated that at least 40% of memory safety exploits in C++ involved spatial safety violations, such as writing to an out-of-bounds memory location.
Google researchers showed they were able to "retrofit" spatial safety onto their C++ codebases, and to do it with a surprisingly low impact on performance. They used straightforward strategies such as bounds-checking buffers and data structures, as is done in other languages, and released a new, safer hardened libc++.
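The core trick is simply making indexed access checked. Here's a minimal sketch of the idea using std::vector::at as a stand-in; the hardened libc++ work adds this kind of check to operator[] and friends when hardening is enabled, which isn't shown here:

```cpp
#include <cstdio>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> buf(4, 0);

    // The classic spatial-safety bug: writing one element past the end.
    // In a normal build this is undefined behavior and may silently corrupt
    // whatever lives next to the vector's storage. With a bounds-checked
    // (hardened) standard library, the indexed access itself traps instead.
    // buf[4] = 42;

    // A portable way to get a checked access today: at() validates the index
    // and throws rather than corrupting adjacent memory.
    try {
        buf.at(4) = 42;
    } catch (const std::out_of_range& e) {
        std::fprintf(stderr, "caught out-of-bounds write: %s\n", e.what());
    }
    return 0;
}
```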
The results show up in a chart of segfaults across their entire fleet of computers before and after the improvements. Their internal red team testing results also improved markedly, uncovering over 1,000 bugs and likely preventing 1,000-2,000 new bugs each year at the current development rate.
Here’s a blog post about their results.
Articles:
Andreas from Insomniac Games made an Amiga 500 demo in 2019 as part of his work with The Black Lotus demo group. He presented not only the Eon Amiga 500 demo, but also tons of great technical information about the four years it took to develop.
Old demo scene programmers hold amazing amounts of wisdom. I've found these lessons hold when solving core pieces of logic (though they don't always apply to larger, complete system development):
Work backwards from the desired outcome to discover your constraints. Don't just brute force. Instead, ask what must be in place to get peak performance from the key component you depend on (rendering, disk loading, etc.), then work from that constraint.
Do everything you can at compile time, not run time. Pre-compute tons of things – even the build-up of the data structures in memory. Just run it once, then save and reload that blob automatically (a small compile-time sketch appears at the end of this item).
Over-generalizing early is a trap many devs fall into. Solve the problem in front of you. Trust that you can delete the code and do something else if it sucks. It’s cheaper and faster than trying to anticipate things ahead of time. Do the simplest thing that will work and if it sucks come back and delete it.
If you end up with a small runtime table/code that doesn’t require runtime checks because you can prove it can’t go wrong, you’re doing something right.
For development, the actual Amiga is super slow and limited, so they took an Amiga emulator and hacked it up to debug on instead. Using call traps to hook into the emulator, they added memory protection, fast-forward, debug triggers, symbol loading, cycle-accurate profiling, single-stepping, high-resolution timers, etc. It also allows perfect input playback.
Modern threading and producer/consumer components (disk loading, data transfer, decompressors, etc.) often just throw things into buffers and YOLO. There's no clear backpressure to show you where you're wasting time or space. Running on this kind of constrained hardware/emulator shows how much time is wasted by poorly designed algorithms and constraints.
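To make the backpressure point concrete, here's a minimal sketch of my own (not from the talk) of a bounded producer/consumer queue. When the buffer fills up, the producer visibly blocks, so wasted time shows up in a profile instead of hiding inside an ever-growing buffer:

```cpp
#include <condition_variable>
#include <cstddef>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// Minimal bounded queue: push() blocks when full, pop() blocks when empty.
// The blocking is the backpressure: a stalled producer or starved consumer
// shows up immediately in a profile instead of hiding behind a huge buffer.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t capacity) : capacity_(capacity) {}

    void push(T value) {
        std::unique_lock<std::mutex> lock(m_);
        not_full_.wait(lock, [&] { return q_.size() < capacity_; });
        q_.push(std::move(value));
        not_empty_.notify_one();
    }

    T pop() {
        std::unique_lock<std::mutex> lock(m_);
        not_empty_.wait(lock, [&] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        not_full_.notify_one();
        return value;
    }

private:
    std::size_t capacity_;
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable not_full_, not_empty_;
};

int main() {
    BoundedQueue<int> chunks(4);  // e.g. decompressed disk blocks in flight

    std::thread producer([&] {
        for (int i = 0; i < 16; ++i) chunks.push(i);  // blocks once 4 are queued
        chunks.push(-1);                               // sentinel: no more data
    });

    for (int v = chunks.pop(); v != -1; v = chunks.pop())
        std::printf("consumed chunk %d\n", v);

    producer.join();
    return 0;
}
```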
Presented at the Handmade Cities event in Seattle: https://handmadecities.com/
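And to close out this item, the compile-time sketch mentioned above: a minimal modern C++ (C++17) constexpr illustration of my own, not the demo's actual Amiga-era tooling, that bakes a lookup table into the binary so the runtime only indexes precomputed data:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Compile-time sine approximation (Taylor series). All of this runs inside
// the compiler; nothing below main() computes trig at run time.
constexpr double TaylorSin(double x) {
    double term = x, sum = x;
    for (int n = 1; n < 12; ++n) {
        term *= -x * x / ((2.0 * n) * (2.0 * n + 1.0));
        sum += term;
    }
    return sum;
}

constexpr std::size_t kSteps = 256;

// An 8.8 fixed-point sine table baked into the binary as read-only data.
constexpr std::array<std::int16_t, kSteps> MakeSineTable() {
    std::array<std::int16_t, kSteps> table{};
    for (std::size_t i = 0; i < kSteps; ++i) {
        const double angle = (2.0 * 3.14159265358979323846 * i) / kSteps;
        table[i] = static_cast<std::int16_t>(TaylorSin(angle) * 256.0);
    }
    return table;
}

constexpr auto kSineTable = MakeSineTable();

int main() {
    // The runtime just indexes precomputed data: no trig, no allocation,
    // and nothing left to validate, because the table was built at compile time.
    for (std::size_t i = 0; i < kSteps; i += 64)
        std::printf("sin[%zu] = %d\n", i, static_cast<int>(kSineTable[i]));
    return 0;
}
```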
When teaching myself to program as a kid, my first language was BASIC, learned from type-in programs. After that, I made the very unorthodox choice to learn assembly. I wrote a small database, a TSR (terminate-and-stay-resident program), and a couple of other small creations.
Looks like GreatCorn went one better by writing his own game in x86 assembly.
Daniel Holden from Ubisoft gave this great talk at GDC 2018 on how data-driven analysis of their character animation control system turned into an AI system that vastly reduced the complexity and manpower involved in building an animation system for character control.
A reasonably good, simple discussion of the pros and cons of monoliths and microservices.
Some interesting comments:
Here are the links referenced: