A whole game in a QR code and Crinkler – a demoscene compressor

MattKC asked himself whether he could fit a whole game into a QR code. He actually succeeded, but with some fascinating turns along the way, including tweaking linker settings and creating a window in assembly.

One of his other adventures is compressing the executable using an old demoscene tool: Crinkler. Crinkler is not your normal RAR, ZIP, or other self-extracting executable compressor. Crinkler replaces the linker used to generate the executable with a combined linker and compressor. The result is an EXE file that drops nothing to disk and decompresses itself directly in memory, much like a traditional executable packer.

It also uses context modelling, which achieves a far better compression ratio than most other compressors. The disadvantage of context modelling is that it is extremely slow and needs quite a lot of memory for decompression, but this is not usually a problem with 4k demos.
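To get a feel for why context modelling compresses so well, here is a minimal Python sketch (purely illustrative; Crinkler's actual model is a far more sophisticated bit-level context mixer): an adaptive order-1 model that estimates the ideal arithmetic-coding cost of a byte stream. Data whose next byte is predictable from the previous one costs far fewer bits.

```python
import math

def estimate_bits(data: bytes, order: int = 1) -> float:
    """Estimate the compressed size (in bits) of `data` under a simple
    adaptive order-`order` byte context model with Laplace smoothing."""
    counts = {}           # context bytes -> {next byte: count}
    total_bits = 0.0
    context = b"\x00" * order
    for byte in data:
        table = counts.setdefault(context, {})
        seen = sum(table.values())
        # Laplace-smoothed probability over 256 possible next bytes.
        p = (table.get(byte, 0) + 1) / (seen + 256)
        total_bits += -math.log2(p)   # ideal arithmetic-coder cost
        table[byte] = table.get(byte, 0) + 1
        context = (context + bytes([byte]))[-order:]
    return total_bits
```

Running this on a highly repetitive input (say, `b"ab" * 100`) yields a cost well below the raw 1600 bits, because after a few observations the model assigns high probability to the byte that always follows the current context.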

Give his adventure a watch below.

Super Nintendo programming series

Retro Game Mechanics Explained is a great series on retro game console programming. If you ever wanted to know how the cake is baked, this is a great channel.

One of the best series so far is how to program the SNES. His 16-part series covers background effects, lag and blanking, DMA and HDMA, memory mapping, color math, hardware registers, background modes 0-6, and the infamous mode 7. It is one of the better explanations of mode 7 that I have seen (though folks with a more formal background in graphics might explain it with affine transforms alone).

He also covers individual games and topics, such as how the Atari 2600 'raced the beam', Atari QuadraScan, Pokémon sprite decompression, the Pac-Man arcade's famous kill screen, Mario's wrong warp, and many other fun topics.

Coding Co-pilot

And just like that, programmers were replaced by machine learning and pressing tab.

GitHub Copilot is a development plugin that uses AI to auto-complete what you're coding. The AI was trained using public GitHub projects as its learning source. You start coding, press tab, and it suggests what it thinks you want next based on the patterns it matches in what you appear to be developing.

Nick Chapsas tries out a number of programming tasks: basic data structures, creating an API, a calculator, and even a fully implemented FizzBuzz. It does *shockingly* well.

I think this is the next obvious level of the auto-completion we've had for years, and it will almost certainly come to mainline development tools in the next 5 years. It does, however, bring up some interesting legal points if someone unknowingly auto-completes a blob of code from a GPL or closed-source project. This treads the fine line between auto-generated code and outright copying. My guess is that using IP-violation code-scanning tools to detect problems will become even more important.

Bitcoin, block chain currencies, and quantum computing

With bitcoin hitting all-time highs and lows, it's interesting to hear self-described pundits go on and on about the promises of crypto-currency. Surprisingly, one thing you don't hear about is that the life of these currencies might be very limited now that quantum computers are becoming a reality.

Quantum computers are excellent at solving certain mathematically hard problems – the very problems underlying almost all of cryptography and block-chain algorithms. In October 2019, Google announced it had achieved quantum supremacy on a certain class of problems. So what does this mean for crypto-currencies?

I found this very succinct and excellent examination of the impact of quantum computers on the security of the Bitcoin blockchain. The results are not encouraging. All coins in p2pk addresses and any reused p2pkh addresses are vulnerable. This means one should definitely follow the best practice of not re-using p2pkh addresses.

Interestingly enough, the most vulnerable coins are the ones in p2pk addresses. The coins in this address range were some of the earliest ever mined, and the ones still there are largely considered to belong to people who have long since lost their keys. This means they could easily be claimed by anyone with a sufficiently large quantum computer – about 2 million bitcoins worth almost 70 BILLION dollars (assuming bitcoin is worth the current market price of $35,000).

Not only that: if 25% of a currency is vulnerable to being quietly captured by a single actor with a quantum computer, that actor gains a tremendous amount of power to manipulate the currency.

So, unused p2pkh coins are safe, right? Not really. The moment you want to transfer coins from such a “safe” address, you reveal the public key, making the address vulnerable. From that moment until your transaction is “mined”, an attacker who possesses a quantum computer gets a window of opportunity to steal your coins. In such an attack, the adversary will first derive your private key from the public key and then initiate a competing transaction to their own address. They will try to get priority over the original transaction by offering a higher mining fee.

The time for mining a transaction is about 10 minutes, while calculations show that a quantum computer would take about 30 minutes to break a Bitcoin key. So, as long as that holds, your bitcoin transaction is …probably… safe. But that won't last forever. It is almost certain that quantum computing will make crypto-currencies worthless at some point – maybe even in our lifetime, at the rate quantum computing is advancing.

Sorting

Computer scientists spend a lot of time thinking about the optimal way of doing things. This video stacks up 79 different sorting algorithms, from smallest to largest input, and compares the number of writes, comparisons, auxiliary array usage, and more.
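As a toy version of the kind of bookkeeping the video does, here is a small Python sketch (my own illustrative code, not from the video) that instruments two classic sorts and counts their comparisons and writes, making the asymptotic gap visible:

```python
def bubble_sort(a):
    """Bubble sort; returns (sorted list, #comparisons, #writes)."""
    a = list(a)
    compares = writes = 0
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            compares += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                writes += 2          # a swap writes two slots
    return a, compares, writes

def merge_sort(a):
    """Top-down merge sort; returns (sorted list, #comparisons, #writes)."""
    stats = {"compares": 0, "writes": 0}
    def sort(xs):
        if len(xs) <= 1:
            return xs
        mid = len(xs) // 2
        left, right = sort(xs[:mid]), sort(xs[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            stats["compares"] += 1
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
            stats["writes"] += 1     # each merged element is one write
        merged.extend(left[i:]); merged.extend(right[j:])
        stats["writes"] += (len(left) - i) + (len(right) - j)
        return merged
    out = sort(list(a))
    return out, stats["compares"], stats["writes"]
```

On a reversed list of 64 elements, bubble sort performs 2016 comparisons while merge sort stays in the hundreds – the O(n²) versus O(n log n) difference the visualizations make so hypnotic.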

And it’s pretty hypnotic.

Out of Source Builds

Build systems are certainly not the sexy part of software development. However, no part of the development process impacts your team as much as its build system. Build systems that perform poorly, regularly break, or generate inconsistent or confusing output files are one of the fastest ways to introduce bugs, slow releases, and bring a whole team to a screeching halt. That's why automated, reproducible builds are a keystone of agile development.

Out-of-source builds are one way to improve your build system. Out-of-source building is a practice that keeps the generated/compiled intermediate and binary files out of the source directories. Traditionally, most build systems would generate object, binary, and intermediate files right next to their source files. This leads to a confusing hierarchy of files that makes getting a consistent picture of your build and source nearly impossible on big projects.

It turns out CMake can help you create out-of-source builds with just a few little tricks. Unfortunately, there were few examples, and many of them were overly complex. So, to help get folks started, I wrote up a very simple sample. It's the perfect starting point for your next project, and it works with both Linux and Visual Studio builds.

https://github.com/mattfife/BasicOutOfSourceCMake
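The core of such a setup is small. A minimal sketch of what a CMakeLists.txt might contain (illustrative only, not taken from the linked repository): refuse in-source configuration, then route all outputs under the build directory.

```cmake
cmake_minimum_required(VERSION 3.15)
project(Example CXX)

# Refuse to configure inside the source tree, keeping it pristine.
if(CMAKE_SOURCE_DIR STREQUAL CMAKE_BINARY_DIR)
  message(FATAL_ERROR
    "In-source builds are not allowed. Configure from a separate "
    "directory, e.g.: cmake -S . -B build")
endif()

# All binaries and libraries land under the build directory.
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/lib)

add_executable(example src/main.cpp)
```

With this in place, `cmake -S . -B build && cmake --build build` leaves the source tree untouched; deleting `build/` gives you a guaranteed-clean rebuild.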

Rapidly Exploring Random Tree

Algorithm of the day: the rapidly exploring random tree (RRT) is an algorithm designed to efficiently search non-convex spaces by randomly building a space-filling tree. The tree is constructed incrementally from samples drawn randomly from the search space and is inherently biased to grow towards large unsearched areas of the problem. RRTs easily handle problems with obstacles and differential constraints and have been widely used in autonomous robotic motion planning.
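The whole loop fits in a few dozen lines. Here is a minimal 2-D RRT sketch in Python (all names and parameters are illustrative; a real planner would use a spatial index for the nearest-neighbor search and a proper collision checker instead of a point test):

```python
import math
import random

def rrt(start, goal, is_free, bounds,
        step=0.5, goal_tol=0.5, max_iters=5000, seed=0):
    """Grow a rapidly exploring random tree from `start` toward `goal`.
    `is_free(p)` reports whether point p is collision-free; `bounds` is
    ((xmin, xmax), (ymin, ymax)). Returns a path as a list of points,
    or None if no path was found within max_iters."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Sample a random point; occasionally bias toward the goal.
        if rng.random() < 0.05:
            sample = goal
        else:
            sample = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        # Find the nearest existing tree node (linear scan for simplicity).
        i_near = min(range(len(nodes)),
                     key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer a fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        parent[len(nodes)] = i_near
        nodes.append(new)
        if math.dist(new, goal) <= goal_tol:
            # Walk parent pointers back to the root to recover the path.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None
```

The fixed-step "steer" plus uniform sampling is what biases growth into large unexplored regions: the bigger an empty area, the more samples land in it, so the nearest frontier node keeps getting extended that way.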