Valve Steam Deck

Coming in December 2021: the Steam Deck. You can use it as a handheld, or plug in any standard external display, keyboard, and mouse and use it almost as a full desktop.

Hardware:

  • Custom AMD based APU
    • CPU: Zen 2 4c/8t, 2.4-3.5GHz (up to 448 GFLOPS FP32)
    • GPU: 8 RDNA 2 CUs, 1.0-1.6GHz (up to 1.6 TFLOPS FP32) – see the peak-throughput sketch after this list
    • APU power: 4-15W
  • Display:
    • 7″ 1280×800 (16:10 aspect ratio) 60Hz 400 nits Touchscreen IPS
  • 16 GB LPDDR5 on-board RAM (5500 MT/s dual-channel)
  • Storage (varies by model purchased):
    • 64 GB eMMC (PCIe Gen 2 x1) – $399
    • 256 GB NVMe SSD (PCIe Gen 3 x4) – $529
    • 512 GB high-speed NVMe SSD (PCIe Gen 3 x4) – $649
    • All models use socketed M.2 2230 modules (not intended for end-user replacement)
    • All models include high-speed microSD card slot
  • Interface controls
    • Gamepad Controls:
      • A B X Y buttons
      • D-pad
      • L & R analog triggers
      • L & R bumpers
      • View & Menu buttons
      • 4x assignable grip buttons
    • Thumbsticks: 2x full-size analog sticks with capacitive touch
    • HD Haptics
    • Trackpads:
      • 2x 32.5mm square trackpads with haptic feedback
      • 55% better latency compared to Steam Controller
      • Pressure-sensitivity for configurable click strength
    • Gyro: 6-Axis IMU
  • Dual-band Wi-Fi radio, 2.4GHz and 5GHz, 2×2 MIMO, IEEE 802.11a/b/g/n/ac + Bluetooth 5.0
  • Ports/expansion:
    • UHS-I supports SD, SDXC and SDHC
    • USB-C with DisplayPort 1.4 Alt-mode support; up to 8K @60Hz or 4K @120Hz, USB 3.2 Gen 2
  • 40Whr battery + 45W USB Type-C PD3.0 power supply
  • Weight: Approx. 669 grams
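
Those peak GFLOPS figures follow from simple arithmetic. Here is a quick sanity-check sketch; the per-cycle throughput constants are my assumptions about Zen 2 and RDNA 2, not numbers from Valve's spec sheet:

```python
# Rough derivation of the quoted peak FP32 numbers.
# Assumptions (not from Valve): a Zen 2 core can retire 32 FP32 FLOPs per cycle
# (two 256-bit FMA units), and an RDNA 2 CU has 64 lanes each doing one
# fused multiply-add (2 FLOPs) per cycle.

cpu_gflops = 4 * 3.5 * 32          # cores * GHz * FLOPs/cycle  -> 448 GFLOPS
gpu_gflops = 8 * 64 * 2 * 1.6      # CUs * lanes * FMA * GHz    -> ~1638 GFLOPS

print(f"CPU peak: {cpu_gflops:.0f} GFLOPS")
print(f"GPU peak: {gpu_gflops:.0f} GFLOPS ({gpu_gflops / 1000:.1f} TFLOPS)")
```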

Software:

  • SteamOS – customized for the platform, but you can load other game stores or even wipe the device and install another OS like Windows.
  • Cloud saves work – play on your PC, save, then load the game on the Steam Deck and pick up right where you left off.
  • Suspend and resume a game – but only one at a time

Fix for your pasta box being half full of air

Bulky pasta types (such as farfalle and fusilli) require more packaging, which makes them trickier to transport and leads to more waste (and boxes that seem to be half full of air). Scientists tackled the problem by designing flat pastas that transform into 3D shapes when cooked. They do this by simply scoring the flat dough with specific grooved patterns, whose depth and spacing determine what shape the pasta takes when boiled. They can not only recreate classic pasta shapes (even spirals), but new shapes as well.

They fed their data into computer models, which they hope will allow them to automate the technique and make it easier for food manufacturers to produce and deliver a loaded menu of morphing pastas.

Goodbye grocery stores and warehouse jobs

Ocado’s grocery warehouses in the UK don’t have people in them filling your orders. Instead, thousands of mechanical boxes move all over “the hive” – a grid in which each box contains a specific product – picking up your items and delivering them to the shipping stations at the edges. A grocery order can be filled in 5 hours.

They even have special robots for tricky packing situations, since you don’t want a heavy item like a gallon of milk in the same bag as all your soft chips.

Honestly, this is almost certainly what we’ll all be doing very soon; in 10 years we won’t be making trips to the grocery store. Since COVID, many of us are already using automated checkout, in-store shoppers, and web ordering with pickup.

One of the use cases we talked about with autonomous cars was that they could drive themselves to the store, be loaded up, and then drive back to your place.

Read more about Ocado’s technology here: https://www.ocadogroup.com/technology/technology-pioneers

Bitcoin, block chain currencies, and quantum computing

With bitcoin hitting all-time highs and lows, it’s interesting to hear self-described pundits go on and on about the promises of crypto-currency. Surprisingly, one thing you don’t hear about is that the life of these currencies might be very limited now that quantum computers are becoming a reality.

Quantum computers are excellent at solving certain mathematically difficult problems – the very problems that underlie almost all cryptography and block-chain algorithms. In October 2019, Google announced they had achieved quantum supremacy on a certain class of problems. So what does this mean for crypto-currencies?

I found this very succinct and excellent examination of the impact of quantum computers on the security of the Bitcoin blockchain. The results are not encouraging. All coins in p2pk addresses and any reused p2pkh addresses are vulnerable. This means one should definitely follow the best practice of not re-using p2pkh addresses.
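
To see why un-reused p2pkh addresses fare better, here is a minimal sketch of the difference (simplified: real addresses add a version byte and Base58Check encoding, the key below is a made-up placeholder, and hashlib's ripemd160 requires OpenSSL support). A p2pk output puts the public key itself on-chain, where a large quantum computer running Shor's algorithm could attack it at any time; a p2pkh output only publishes a hash of the key until the coins are spent.

```python
import hashlib

def hash160(data: bytes) -> bytes:
    """RIPEMD160(SHA256(data)) - the only thing a p2pkh output reveals on-chain."""
    sha = hashlib.sha256(data).digest()
    return hashlib.new("ripemd160", sha).digest()   # needs OpenSSL's ripemd160

# Placeholder 33-byte compressed public key, purely for illustration.
pubkey = bytes.fromhex("02" + "11" * 32)

# p2pk: the locking script contains the public key itself.
p2pk_reveals = pubkey

# p2pkh: the locking script contains only hash160(pubkey); the public key
# is first revealed in the transaction that spends from the address.
p2pkh_reveals = hash160(pubkey)

print("p2pk reveals :", p2pk_reveals.hex())
print("p2pkh reveals:", p2pkh_reveals.hex())
```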

Interestingly enough, the most vulnerable coins are the ones in p2pk addresses. The coins in this address range were some of the earliest coins mined, and the ones still there are largely considered to belong to people who have long since lost their keys. This means they could be claimed by anyone with a sufficiently large quantum computer – roughly 2 million bitcoins worth almost 70 BILLION dollars (assuming bitcoin is worth the current market price of $35,000).

Not only that: if 25% of a currency can be quietly captured by a single actor with a quantum computer, that represents a tremendous amount of power to manipulate the currency.

So, unused p2pkh coins are safe, right? Not really. The moment you want to transfer coins from such a “safe” address, you reveal the public key, making the address vulnerable. From that moment until your transaction is mined, an attacker who possesses a quantum computer has a window of opportunity to steal your coins. In such an attack, the adversary first derives your private key from the public key and then initiates a competing transaction to their own address, trying to gain priority over the original transaction by offering a higher mining fee.

The time for mining a transaction is about 10 minutes, while calculations show that a quantum computer would take about 30 minutes to break a Bitcoin key. So, as long as that holds, your bitcoin transaction is …probably… safe. But that won’t last forever. It is almost a certainty that quantum computing will make crypto-currencies worthless at some point – maybe even in our lifetime, at the rate quantum computing is advancing.
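
To put a rough number on that “probably”, assume (this is a simplification of mine, not the article’s model) that block arrivals behave like a Poisson process with a 10-minute mean, and ignore the further race to get the attacker’s competing transaction mined first:

```python
import math

block_interval_min = 10   # average time for a transaction to be mined into a block
attack_time_min = 30      # assumed time for a quantum computer to break one key

# Probability the transaction is still unconfirmed when the attack completes,
# modelling block arrivals as a Poisson process.
p_exposed = math.exp(-attack_time_min / block_interval_min)
print(f"Still exposed after {attack_time_min} min: ~{p_exposed:.0%}")   # ~5%
```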

Sorting

Computer scientists spend a lot of time thinking about the optimal way of doing things. This guy stacks up 79 different ways of sorting things from smallest to largest and compares the number of writes, the number of comparisons, auxiliary array use, and so on.

And it’s pretty hypnotic.
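
To get a feel for what those visualizations are counting, here’s a tiny sketch that instruments two textbook sorts and tallies comparisons and element writes (the specific algorithms are just illustrative stand-ins):

```python
import random

def bubble_sort(a, stats):
    a = list(a)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            stats["compares"] += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                stats["writes"] += 2              # two elements moved
    return a

def insertion_sort(a, stats):
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            stats["compares"] += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]                       # shift the larger element right
            stats["writes"] += 1
            j -= 1
        a[j + 1] = key
        stats["writes"] += 1
    return a

data = random.sample(range(1000), 200)
for sort in (bubble_sort, insertion_sort):
    stats = {"compares": 0, "writes": 0}
    sort(data, stats)
    print(sort.__name__, stats)
```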

Out of Source Builds

Build systems are certainly not the sexy part of software development. However, no part of the development process impacts your team as much as its build system. Build systems that perform poorly, regularly break, or generate inconsistent or confusing output files are one of the fastest ways to introduce bugs, slow releases, and bring a whole team to a screeching halt. That’s why automated, reproducible builds are a keystone of agile development.

Out-of-source builds are one way to improve your build system. Out-of-source building is a practice that keeps the generated/compiled intermediate and binary files out of the source file directories. Traditionally, most build systems would generate object, binary, and intermediate files right next to their source files. This leads to a confusing hierarchy of files that makes getting a consistent picture of your build and source nearly impossible on big projects.

It turns out CMake can help you create out-of-source builds with just a few little tricks. Unfortunately, there are few examples, and many of them are overly complex. So, to help get folks started, I wrote up a very simple sample. It’s the perfect starting point for your next project, and it works with both Linux and Visual Studio builds.

https://github.com/mattfife/BasicOutOfSourceCMake
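
The core of the idea is just “configure and build into a separate directory”. As a rough sketch (assuming CMake 3.13+ on your PATH; the directory layout is a placeholder), every generated file lands in build/, which you can delete or .gitignore without ever touching the source tree:

```python
import pathlib
import subprocess

source_dir = pathlib.Path(".").resolve()   # holds CMakeLists.txt and the sources
build_dir = source_dir / "build"           # all intermediate and binary files go here
build_dir.mkdir(exist_ok=True)

# Configure and build out-of-source; the same two commands work for
# Linux makefiles and Visual Studio generators alike.
subprocess.run(["cmake", "-S", str(source_dir), "-B", str(build_dir)], check=True)
subprocess.run(["cmake", "--build", str(build_dir), "--config", "Release"], check=True)
```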

How your games get made

Here’s some fascinating footage from Resident Evil 8 – The Village. It shows just how far game development has come from its early days. We used to have little 8×8-pixel characters; now we have fully live-acted scenes.

When I was getting into computer science and gaming in the ’80s and ’90s, programmers were the rockstars of game development. They were the only ones talented enough to get games done with the limited resources of early computers. They were the ones who developed all the key innovations and gameplay mechanics. They even created most or all of the art, characters, movement, stories, etc. In the early days, animations were hand-edited pixel sprites done one frame at a time.

Starting in the early 2000s, indie developers began to crop up. Technology finally reached a technical and price point where more and more people could start making games – often using basic engines like Shockwave 3D. There was also a slow but steady stream of indie developers leaving big studios or boring day jobs, sometimes spending years on a hobby game. Sadly, in a world in which most games flop, many would find their game wasn’t actually fun or didn’t sell, and had to go back to their day jobs after running through all their money. So the mantra became “fail as quickly as possible”: do just enough to prove out your game idea and quickly discard the ideas that don’t work. Many people suggested the idea of ‘A game a week’, in which you develop a game in one week, laser-focusing on the gameplay and the fun. The gameplay idea was then either good enough to continue, or you moved on to the next idea. In this way, you never lose more than a week on a bad idea. This was the first push away from games that simply showed off what was technically possible, toward a focus on gameplay itself before the technical questions.

As the decade continued, technology improved and so did the engines people used. Unity, Unreal, and many other game engines became more powerful, easily licensable, and accessible to beginners. The engines worked on many platforms, making them much more attractive than the cost, difficulty, and time of developing your own engine for every platform you wanted to support. With tools open to designers and artists with just a little technical know-how, the focus became making a game FUN before developing the graphics engine pipeline. This reached its crowning moment when Journey – largely created and developed by designers – won 2012 Game of the Year. The democratization of game development by engines that could be picked up by non-programmers flipped the game dev world on its head.

“Make games, not engines” is the new mantra. Content and design are now king. The vast majority of game development staff, and cost, is now content: music, art, modeling, actors, and design. Programmers are usually a tiny minority at most game studios, and they often work together as a core engine team that moves from one game to the next.

But as they would say on Reading Rainbow, you don’t have to take MY word for it:
https://gist.github.com/raysan5/909dc6cf33ed40223eb0dfe625c0de74

Optimal Battleship

Nick Berry, president of DataGenetics, meticulously analyzes different strategies for playing the classic board game Battleship (he has also done Chutes & Ladders, Candyland, and Risk).

It’s a great example of how computer scientists often work. He explores a host of techniques and analyzes the results by calculating how often you’ll get a perfect game, median number of guesses, and how bad it gets in the worst case.

He examines 4 major strategies:

  1. Pure random searching
  2. Hunt and Target – Hunt randomly until you get a hit, then proceed methodically to sink the hit ship.
  3. Hunt and Target with parity – since the minimum length of a ship is 2 units, you only need to search half the squares, in a checkerboard pattern
  4. Hunt and Target with parity combined with a probability density function.

His fourth approach is the most fascinating. The system calculates every possible configuration of the remaining ships, and then sums up, for each square, the probability that a ship covers it. At the beginning all squares are roughly equally probable (with a slight bias toward the center, which more placements cover), but as more and more guesses are made, the number of possible configurations decreases. If you recalculate these probabilities after every shot, always firing at the square with the highest probability, you get significantly better results.
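
Here’s a minimal hunt-mode sketch of that idea (a simplification of the approach described above: it counts each remaining ship’s legal placements independently and rejects any placement that overlaps a known miss; the full version also restricts placements to ones that cover un-sunk hits):

```python
SIZE = 10
SHIPS = [5, 4, 3, 3, 2]                # lengths of the remaining (unsunk) ships

def density(misses, hits):
    """For every square, count how many legal ship placements cover it."""
    heat = [[0] * SIZE for _ in range(SIZE)]
    for length in SHIPS:
        for r in range(SIZE):
            for c in range(SIZE):
                for dr, dc in ((0, 1), (1, 0)):              # horizontal, vertical
                    cells = [(r + dr * i, c + dc * i) for i in range(length)]
                    if cells[-1][0] >= SIZE or cells[-1][1] >= SIZE:
                        continue                             # runs off the board
                    if any(cell in misses for cell in cells):
                        continue                             # contradicts a known miss
                    for rr, cc in cells:
                        heat[rr][cc] += 1
    for rr, cc in misses | hits:                             # never re-target known squares
        heat[rr][cc] = 0
    return heat

# Example turn: two recorded misses, no hits yet - fire at the densest square.
heat = density(misses={(0, 0), (5, 5)}, hits=set())
best = max(((r, c) for r in range(SIZE) for c in range(SIZE)),
           key=lambda rc: heat[rc[0]][rc[1]])
print("Next shot:", best)
```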

How much better? Purely random guessing gives you a median of 97 moves. Using parity with the hunt+target method averages 64 moves. And using the probability density function drops that to a staggering 42 moves on average.

As it turns out, I previously discussed this kind of probability density function when speedrunners used the same technique to beat the Sploosh Kaboom minigame in The Legend of Zelda: The Wind Waker.