
When to make your own tools

Mr-Figs asked a great question on the Reddit gamedev forum: how do you handle making the tools you need to make a game?

It used to be that building a game also meant building all the authoring tools to go along with it. With the advent and spread of game engines like Unity, Unreal, Godot (and literally hundreds of others) along with amazing tools like Photoshop and Blender, the need to make your own tooling has dramatically decreased. Almost to the point that in a majority of cases, you probably don’t need to write tools.

Even if you find you can’t use an existing tool, others suggest using ChatGPT to extend an existing tool, or to extend your engine’s tooling via its SDK. Let AI do the work for you, since tools are not shipping code and don’t need to be especially performant.

Strict_Bench_6264 wrote a whole blog article describing what he learned:

3D CPU rendering with AVX-512

AVX-512 was created as part of the Intel Larrabee project, which I worked on, and has since made its way into client and high-end desktop systems.

Dannotech demonstrates it with some hardcore CPU graphics rendering – albeit on a 36-core Xeon W9-3475X. He also has other videos with interesting experiments.

Or how about some AVX-512 ray marching?
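To make the technique concrete, here is a minimal scalar sketch of sphere tracing (ray marching against a signed distance field). This is illustrative only, not Dannotech’s code; an AVX-512 version would step 16 such rays per iteration using 512-bit float lanes instead of one ray at a time.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float length(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

// Signed distance to a unit sphere at the origin.
static float sdSphere(Vec3 p) { return length(p) - 1.0f; }

// March a ray from 'origin' along unit 'dir'; return distance to the hit,
// or -1 if nothing is hit within maxT.
float rayMarch(Vec3 origin, Vec3 dir, float maxT = 20.0f) {
    float t = 0.0f;
    for (int i = 0; i < 128; ++i) {
        Vec3 p = add(origin, scale(dir, t));
        float d = sdSphere(p);
        if (d < 1e-4f) return t;   // close enough to the surface: hit
        t += d;                    // the SDF guarantees this step can't overshoot
        if (t > maxT) break;
    }
    return -1.0f;                  // miss
}
```

The inner loop is embarrassingly data-parallel – every ray runs the same steps on different data – which is exactly why it maps so well onto AVX-512’s 16-wide float registers.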

Conway’s law

Conway’s Law: a principle, often cited in software development, stating that the design of a system will inevitably reflect the communication structures of the organization that created it.

Technical success is not just about having the smartest people or the best design. Conway’s law was originally a sociological observation about how teams work. Technical goals can become unachievable, or velocity can dramatically slow or collapse, if the wrong structure, roles, and responsibilities are in place.

Therefore, it’s important to set up an organization’s structure, team roles, and responsibilities (or reorganize existing ones) to support the desired design goals.

This law has been interpreted to work both ways. You can affect design through organizational structure, or affect organizational structure by changing design.

Long term OLED burnin testing results

OLED displays are renowned for their vibrant colors, punchy contrast, perfectly dark blacks, wide viewing angles, and fast response times – making them great for gaming systems. The cons are that they are dramatically more expensive than LCDs and tend to have a shorter lifespan as the organic elements degrade. Most importantly, though, like old plasma TVs, they are notably susceptible to burn-in. But how much burn-in?

TechSpot has done a good job stress-testing some popular OLED displays over the last nine months. They have a great breakdown with lots of analysis and a pretty good testing scheme.

They also do a good job of explaining things. They note that burn-in with OLEDs is directly related to hours of usage and is cumulative. Mixing in dynamic content between periods of static content usually won’t improve the burn-in results – it’s all about the cumulative number of hours displaying the same static content on screen. Running at a lower brightness and using dark mode will extend the lifespan, because burn-in is correlated with brightness output. The safety features in most OLEDs also seem to really help.

Conclusion: they give a relatively positive update on burn-in after nine months of heavy static-content usage (around 2,000 to 2,300 hours of total use). They report visible signs of burn-in, but the degradation between their six-month and nine-month reports has been relatively minimal.

The upshot is that for gaming and content consumption (watching movies, etc.) you should be fine. For those using it for work with lots of static content, they note there are times you can see burn-in in apps that have large sections of the same color. The taskbar at the bottom has also proven problematic, as has the side-by-side window arrangement many people use on large monitors.

But if we’re honest, we were expecting to see more burn-in after 9 months. The levels we’re seeing right now are still very tolerable, and with realistic, sensible usage, we think most people won’t run into proper burn-in problems within the first 12 to 18 months of usage on this sort of QD-OLED panel. Maybe some light burn-in here or there, a few edge cases where you’ll notice it, but nothing that ruins the experience.

I do agree with this statement, though:

Getting two good years of usage out of an OLED, though… that’s probably not going to cut it when we’re talking about high-end, $1,000 monitors. 

All that said, OLED is probably not for me right now. If I just played games or watched movies it might be OK, but I work with too much static productivity content all day, and I really love the flat, large Asus ROG 38″ 4K HDR 144Hz display I currently have (which was on a smoking sale for $499). I will probably keep the display for multiple years as I waterfall monitors down to other systems. The cost for a similar OLED is about $900–$1,200 right now – making it about 2x more expensive.

Still playing old games? Most other people are too

Valve’s Year In Review for Steam revealed interesting statistics about its player base.

Only 15 percent of playtime went to games released in 2024. 47 percent went to games from the past one to seven years (called “recent favorites”), and 37 percent to titles from eight or more years ago (“classics”).

While that seems low, it’s actually in line with historical trends. In fact, 2023 was particularly bad, with players spending only 9% of their time on games released in 2023. In 2022, players spent 17% of their time on games released that year.

The ‘recent favorites’ shares for 2022 and 2023 were 19 percent and 52 percent, respectively, and the ‘classics’ shares for each year were 62 percent and 38 percent.

Year | Time on that year’s releases | Recent Favorites (1–7 years) | Classics (8+ years)
2024 | 15%                          | 47%                          | 37%
2023 |  9%                          | 52%                          | 38%
2022 | 17%                          | 19%                          | 62%

No more software engineer hiring in 2025?

Generative AI is reaching greater and greater heights. Google has shown it can migrate software up to 80–90% faster using AI-assisted coding tools. More and more companies are finding that AI-assisted software development dramatically helps with certain tasks, and they predict AI-assisted development will soon sweep the industry.

Now the CEOs of Salesforce, Microsoft, Replit, Meta, and others are all saying they may not be hiring many more software engineers very soon.

Salesforce CEO Marc Benioff has gone so far as to say Salesforce is looking at essentially freezing hiring for software engineers in 2025, owing to agentic AI:

I think in engineering this year at Salesforce, we’re seriously debating maybe weren’t gonna’ hire anybody this year because we’ve seen such incredible productivity gains because of the agents that work side-by-side with our engineers, making them more productive. And we can all agree, software engineering has become a lot more productive in the last two years with these basically new models.

Mark Zuckerberg also piped in with similar sentiments:

In 2025, AI systems at Meta and other companies will be capable of writing code like mid-level engineers. At first, it’s costly, but the systems will become more efficient as time passes. Eventually, AI engineers will build most of the code and AI in apps, replacing human engineers.

Microsoft now lets users create new agents through Copilot Studio and integrate them into Copilot. Microsoft’s CEO said:

The way to conceptualize the world going forward is everyone of us doing knowledge work will use Copilot to do our work and we will create a swarm of agents to help us with our work and drive the productivity of our organizations.

Replit’s CEO Amjad Masad went even further by saying that ‘We don’t care about professional coders anymore’. They have grown their revenue five-fold in the last 6 months thanks to artificial-intelligence capabilities that enabled their new product called “Agent,” a tool that can write a working software application with nothing but a natural language prompt. “It was a huge hit,” Masad said.

So what’s a soon-to-be-out-of-work software engineer to do?

A recent World Economic Forum (WEF) report and AI expert Tak Lo both estimate that about 92 million workers are about to be displaced by AI, but claim 170 million new jobs will be created.

I always approach these reports skeptically. Just because you claim AI will create new jobs doesn’t mean those who lost theirs have the skills or capability to take one – jobs that often require very different skills or appeal to very different people. In fact, the report predicts 39% of current skill sets will become outdated by 2030.

The report indicates that manual labor jobs will be the safest: construction, farming, general labor, medicine/nursing, truck driving, etc. It claims that knowledge workers should be largely safe, but that’s clearly not what technology leaders are saying – implying this rosy report is probably not right.

Cognitive decline?

Additionally, a study by Uplevel found that AI coding assistants haven’t improved developer productivity – and that developers’ code has become buggier.

Research is also starting to show that AI may be contributing to a decline of critical thinking skills:

The effects of AI on cognitive development are already being identified in schools across the United States. In a report titled, “Generative AI Can Harm Learning”, researchers at the University of Pennsylvania found that students who relied on AI for practice problems performed worse on tests compared to students who completed assignments without AI assistance. This suggests that the use of AI in academic settings is not just an issue of convenience, but may be contributing to a decline in critical thinking skills.

Greg Isenberg had an interesting interaction with a young Stanford grad who claimed he was forgetting words now due to his near constant use of chatGPT to finish his thoughts. https://twitter.com/gregisenberg/status/1869202002783207622

The new skills

There’s also a great article by Dev that outlines the mindset and skills developers will need to develop to stay relevant.

Google implements Spatial Memory Safety in C++

After analyzing nearly 10 years of CVEs, Google researchers calculated that at least 40% of memory-safety exploits in C++ were spatial – e.g., writing to an out-of-bounds memory location.

Google researchers showed they were able to “retrofit” spatial safety onto their C++ codebases, and to do it with a surprisingly low impact on performance. They used straightforward strategies such as bounds-checking buffers and data structures – as is done in other languages – and released a new, safer hardened libc++.
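The core idea is simple enough to sketch. The toy container below (illustrative only – not the actual libc++ implementation) validates every index before the memory access, so an out-of-bounds write becomes a deterministic trap instead of silent memory corruption:

```cpp
#include <cstdio>
#include <cstdlib>
#include <cstddef>

// A fixed-size array whose operator[] performs the "spatial safety" check
// that a hardened standard library adds to its containers.
template <typename T, std::size_t N>
class CheckedArray {
    T data_[N] = {};
public:
    T& operator[](std::size_t i) {
        if (i >= N) {   // the bounds check a hardened build inserts
            std::fprintf(stderr, "out-of-bounds index %zu (size %zu)\n", i, N);
            std::abort();   // fail fast rather than corrupt memory
        }
        return data_[i];
    }
    std::size_t size() const { return N; }
};
```

In Google’s case the check lives inside the standard library itself, so existing code gets the protection just by recompiling – which is why the per-access cost (a predictable compare-and-branch) ends up so low.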

The results show up in this chart of segfaults across their entire fleet of computers before and after the improvements. Their internal red-team testing results also improved markedly: the checks uncovered over 1,000 bugs and will likely prevent 1,000–2,000 new bugs each year at the current development rate.

Here’s a blog post about their results.

Input to display latency metering tool

AMD has just unveiled Frame Latency Meter (FLM), which lets you measure input-to-display latency. Normally, this was done with a high-speed camera, a mouse, and an FPS game with a visible muzzle flash. The camera would capture the moment the mouse was clicked, and you would count the frames until the muzzle flash or other on-screen reaction appeared.

This utility does not require any special equipment and works with any AMD, Nvidia, or Intel GPU that supports DirectX 11 or newer. For capturing frames, AMD GPUs use the Advanced Media Framework (AMF), while other GPUs use the DirectX Graphics Infrastructure (DXGI). FLM can generate detailed latency and effective frame-rate statistics, which can be exported to CSV files for further analysis.

The way it works is clever: FLM measures latency by continuously capturing frames and comparing each one to the previous frame within a selected region. It then generates a mouse movement event using standard Windows functionality and waits for the frame contents to change. The time between the mouse movement and the detected frame change is recorded as the latency.
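That measurement loop can be sketched in a few lines. This is a simplified illustration of the idea, not FLM’s actual code: the input injection and frame capture are abstracted into callbacks (the real tool synthesizes a Windows mouse event and captures frames via AMF or DXGI):

```cpp
#include <chrono>
#include <functional>

using Clock = std::chrono::steady_clock;

// Inject an input event, then poll captured frames until the watched
// region's content changes; the elapsed time is the measured latency.
double measureLatencyMs(const std::function<void()>& injectInput,
                        const std::function<unsigned()>& frameChecksum) {
    unsigned before = frameChecksum();   // baseline content of the region
    auto t0 = Clock::now();
    injectInput();                       // e.g. a synthetic mouse movement
    while (frameChecksum() == before) {
        // the real tool samples at capture rate; we busy-poll for brevity
    }
    auto t1 = Clock::now();
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}
```

Comparing a checksum of a small screen region, rather than whole frames, keeps the per-frame cost low enough that the measurement itself doesn’t distort the result.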

FLM is available as a free download for Windows 10 and 11 users via GPUOpen or the official GitHub repository.

Hackers are targeting open-source

A Microsoft engineer became suspicious of performance problems while optimizing his code. After digging in, he discovered that a simple data compression library called XZ Utils contained a secret backdoor. What made this discovery noteworthy is that the innocuous-looking compression library is used in tons of open-source projects and Linux distributions.

The analysis of how the code got into XZ Utils uncovered a fiendishly sophisticated operation. XZ Utils was understaffed, with only one primary maintainer, who was increasingly catching flak for falling behind – a growing problem with open-source projects. An eager developer named Jia Tan had been contributing to the XZ project since at least late 2021 and built trust with the community of developers working on it. Eventually Tan ascended to co-maintainer, which allowed him to add code without needing his contributions approved.

Tan got there through what now appears to be a coordinated set of accounts and discussions aimed at installing him as co-owner. Various accounts appeared and started complaining about the speed of updates, missing features, and unanswered questions. These coordinated complaints, alongside Tan’s contributions, appear to have created pressure on the owner to elevate Tan to co-owner. Whether this was done by one person or several, the mechanism is known as ‘persona management’ – something that’s been proposed as far back as 2010.

“I think the multiple green accounts seeming to coordinate on specific goals at key times fits the pattern of using networks of sock accounts for social engineering that we’ve seen all over social media,” said Molly, the EFF system administrator. “It’s very possible that the rogue dev, hacking group, or state sponsor employed this tactic as part of their plan to introduce the back door. Of course, it’s also possible these are just coincidences.”

The code introduced was sophisticated enough that analysis of its precise functionality and capability is still ongoing.

The National Counterintelligence and Security Center has defined this kind of attack as a ‘supply chain attack,’ and open-source projects are particularly susceptible to it.

It’s definitely worth reading the article, because these kinds of sophisticated social attacks are now clearly a reality.
