LLMs are creating a new Y2K
In just the last six months, LLMs have begun finding bugs and security issues in code that nobody found in literally decades of investigation.
More terrifying? This isn’t your game code – it’s code that runs everything. Bugs in the Linux kernel that have been there since 2003, found by an LLM in minutes. Bugs in packages that have been around for 25 years. Bugs so bad they give you access to almost anything. The rate of bug discovery by LLMs is now on an exponential curve that is unlikely to stop anytime soon. In fact, one researcher says he knows of hundreds of kernel bugs he cannot file because he simply can’t keep up with verifying them at the rate the LLMs are finding them.
Based on current AI progression, he believes LLMs will be better at finding bugs than the top 1–2% of vulnerability researchers in less than a year – and that power will likely be available not just via GPUs in the cloud, but in models you can run on your own laptop. With this facing the world of computing, it’s highly likely bad actors could hijack or outright destroy banking systems, nearly any online service, and critical infrastructure. Almost everything could be open to hackers in just months.
We’re facing a real Y2K situation. It’s happened so fast that almost everyone is in denial or simply doesn’t know it’s happening.
AI’s pursuit of Trackmania’s records
Yosh has been covering the increasing use of AI to chase better and faster racing times for track A01. The conclusion: AI was able to beat the human world record, but it still lags behind the hand-tuned TAS (tool-assisted speedrun). They’re not done trying yet – we’ll have to watch and see how many more records fall to AI and which do not.
AI is transforming software security in realtime
Just a few months ago in 2025, the curl team was being inundated with an explosion of AI-slop bug reports. The share of reports describing real issues dropped from about 15% to less than 5%. Many were almost nonsensical submissions generated by AI in attempts to collect bug bounties.
That has all changed.
Daniel Stenberg now reports that they’re getting really good security reports – almost all produced with the help of AI. There are so many that the team is having trouble keeping up. Combine that with what is coming from systems like Claude Mythos, which are finding thousands of zero-day bugs in every OS and browser, and the software world is likely about to see a massive security upheaval that will leave the development world dazed.
The revolution that is coming at all levels of software is clearly starting to rival what happened in the dot-com era.
AI able to escape any current security box, crack any system
The new release of Claude Mythos has been put on hold because it has turned out to be incredibly effective at finding security vulnerabilities. So good, in fact, that it was able to escape its container and email its creator when it did so. The researcher had encouraged Mythos to find a way to send a message if it could escape. “The researcher found out about this success by receiving an unexpected email from the model while eating a sandwich in a park,” Anthropic wrote. The model was also able to find a 27-year-old vulnerability in OpenBSD – an operating system with a reputation as one of the most security-hardened in the world.
But it gets worse.
“Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit,” Anthropic’s Frontier Red Team wrote in a blog post. “In other cases, we’ve had researchers develop scaffolds that allow Mythos Preview to turn vulnerabilities into exploits without any human intervention.”
This is profound. These weren’t security experts using AI to find exploits – they were just regular engineers. As it turns out, the AI is so good it has found thousands of exploitable zero-day bugs in every major operating system and browser. It’s so concerning Anthropic has rightly paused release of Mythos until they can work with the various OS and browser vendors to fix their issues (Project Glasswing).
We sit on a moment that looks almost exactly like this scene from Sneakers:
Right after this scene, Crease says, “There’s not a government on this planet that wouldn’t kill every one of us to get this thing.”
In a world where financial systems, infrastructure, and militaries all run on computers – who wouldn’t? If something like this got into the wrong hands, they could easily attack and destroy ownership and financial records, take down power and water systems, shut down airline flight computers, fabricate any police report, create or change any social media post, alter medical records and prescriptions – almost anything.
Hook up Claude + Figma via MCP
Things are moving fast. Create your UI in Figma, then let Claude have at it via the Figma MCP hookup. Or maybe just use Google’s own ‘vibe design’ tool, Stitch.
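For the curious, here is a minimal sketch of what wiring a Figma MCP server into Claude Desktop’s claude_desktop_config.json might look like. The server package shown (figma-developer-mcp, a community server) and its flags are assumptions based on common community setups – check the docs for whichever server you use, and substitute your real Figma API token.

```json
{
  "mcpServers": {
    "figma": {
      "command": "npx",
      "args": [
        "-y",
        "figma-developer-mcp",
        "--figma-api-key=YOUR_FIGMA_API_KEY",
        "--stdio"
      ]
    }
  }
}
```

After restarting Claude Desktop, the Figma tools should show up in Claude’s tool list; from there you can paste a Figma file URL into a chat and ask Claude to build the UI from it.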
Replace your whole dev team with AI
Front-end and back-end developers, security experts, social media engagement managers, and marketing: replace everyone with an AI-powered bot, and even create and train your own code GPT on your internal codebase.
How easy mass AI surveillance is – today
If armies of AI spies are not a thing now, they soon will be.
Imagine having thousands and thousands of agents watching every single one of your government and company officials morning, noon, and night. Now, imagine they never get tired and only cost as much as a high-end PC.
Using the data most companies already collect on users, plus a few planted agent employees to provide key access, your leaders’ data is ripe for picking and exploitation. Low-tier hackers may hack you to ransom your data – the big players will quietly watch you to see if you’re vulnerable, in ways never before possible.
Here’s just one simple example of what it might look like:
AI logistics callers
Yes – this is an AI bot calling to arrange a trucker to deliver a load.
Taco Bell rethinks AI order taker
Taco Bell has experimented with an AI order taker since 2023, but several embarrassing videos of people using the service have surfaced – such as one customer who orders a Mountain Dew but keeps getting asked what he’d like to drink with that.
This isn’t unlike McDonald’s, which tried, and then turned off, its own ordering bots.

