Next question: can you go from development to deployment completely online? I’m having flashbacks to 1970s-era dumb terminals again…
If you’re interested in learning intermediate-level shader programming, The Art of Code YouTube channel has some really awesome videos to get you going.
As a good example, here’s a really great introduction to programming a shader for a ray marching renderer:
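The core idea from those videos translates outside a shader too. Here’s a minimal CPU-side ray marcher in Python – a toy illustration of the same loop a GLSL fragment shader runs per pixel (the sphere scene and all names are my own, not from the video):

```python
# Minimal ray-marching sketch: march a ray forward until it hits a sphere's
# surface, as defined by a signed distance function (SDF).
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to the sphere's surface."""
    dx, dy, dz = (p[i] - center[i] for i in range(3))
    return math.sqrt(dx*dx + dy*dy + dz*dz) - radius

def ray_march(origin, direction, max_steps=100, eps=1e-4, max_dist=100.0):
    """Step along the ray by the SDF distance; return hit distance or None."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(origin[i] + direction[i] * t for i in range(3))
        d = sphere_sdf(p)
        if d < eps:      # close enough: we hit the surface
            return t
        t += d           # safe to advance by the distance to the nearest surface
        if t > max_dist: # ray escaped the scene
            return None
    return None

# A ray straight down +z from the origin should hit the sphere at t = 4.0
# (sphere centered at z = 5 with radius 1).
hit = ray_march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

The sphere-tracing trick is the `t += d` line: the SDF guarantees no surface is closer than `d`, so each step is as large as safely possible.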
With bitcoin hitting all-time highs and lows, it’s interesting to hear self-described pundits go on and on about the promises of crypto-currency. Surprisingly, one thing you don’t hear about is that the life of these currencies might be very limited now that quantum computers are becoming a reality.
Quantum computers are excellent at solving the mathematically difficult problems that underlie almost all cryptography and block-chain algorithms. In October 2019, Google announced they had achieved quantum supremacy on a certain class of problems. So what does this mean for crypto-currencies?
I found this very succinct and excellent examination of the impact of quantum computers on the security of the Bitcoin blockchain. The results are not encouraging. All coins in p2pk addresses and any reused p2pkh addresses are vulnerable. This means one should definitely follow the best practice of not re-using p2pkh addresses.
Interestingly enough, the most vulnerable coins are the ones in p2pk addresses. The coins in this address range were some of the earliest mined, and the ones still there are largely considered to belong to people who have long since lost their keys. This means they could easily be claimed by anyone with a sufficiently large quantum computer – roughly 2 million bitcoins worth almost 70 billion dollars (at the current market price of $35,000 per coin).
Not only that: if 25% of a currency is vulnerable to being quietly captured by a single attacker with a quantum computer, it represents a tremendous amount of power to manipulate the currency.
So, unused p2pkh coins are safe, right? Not really. The moment you want to transfer coins from such a “safe” address, you reveal the public key, making the address vulnerable. From that moment until your transaction is mined, an attacker with a quantum computer has a window of opportunity to steal your coins. In such an attack, the adversary first derives your private key from the public key, then initiates a competing transaction to their own address, trying to gain priority over the original by offering a higher mining fee.
Mining a transaction takes about 10 minutes, while calculations show that a quantum computer would take about 30 minutes to break a Bitcoin key. So, as long as that holds, your bitcoin transaction is …probably… safe. But that won’t last forever. It is almost a certainty that quantum computing will make crypto-currencies worthless at some point – maybe even in our lifetime, at the rate quantum computing is advancing.
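A quick back-of-the-envelope check of the figures above (the 10- and 30-minute numbers, the 2 million coins, and the $35,000 price all come from the estimates discussed here, not from any calculation of mine):

```python
# The race: the attack only works if a quantum computer can derive the
# private key faster than the honest transaction gets mined. With today's
# estimates, the attacker loses the race.
mining_time_min = 10      # typical Bitcoin confirmation time (estimate above)
qc_break_time_min = 30    # estimated time to break one key (estimate above)

transaction_is_safe = qc_break_time_min > mining_time_min

# The exposed p2pk haul mentioned earlier: ~2 million coins at $35,000 each.
exposed_value_usd = 2_000_000 * 35_000   # 70 billion dollars
```

The safety margin is only a factor of three, which is why the window closes quickly as quantum hardware improves.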
Computer scientists spend a lot of time thinking about the optimal way of doing things. This video stacks up 79 different ways of sorting things from smallest to largest and compares the number of writes, comparisons, auxiliary array use, and more.
And it’s pretty hypnotic.
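For a sense of what’s being tallied, here’s a toy Python sketch that instruments two classic sorts to count comparisons and writes – my own illustration of the bookkeeping, not the video’s code:

```python
# Count comparisons and writes for two classic sorts on the same input,
# the kind of per-algorithm stats the visualization displays.

def bubble_sort(a):
    a = list(a)
    compares = writes = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            compares += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                writes += 2          # a swap touches two slots
    return a, compares, writes

def insertion_sort(a):
    a = list(a)
    compares = writes = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            compares += 1
            if a[j] > key:
                a[j + 1] = a[j]      # shift right
                writes += 1
                j -= 1
            else:
                break
        a[j + 1] = key
        writes += 1
    return a, compares, writes

data = [5, 2, 9, 1, 7, 3]
b_sorted, b_cmp, b_wr = bubble_sort(data)
i_sorted, i_cmp, i_wr = insertion_sort(data)
```

Both produce the same sorted output, but the counters differ – which is exactly the comparison the visualization makes across all 79 algorithms.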
Build systems are certainly not the sexy part of software development. However, no part of the development process impacts your team so much as its build system. Build systems that perform poorly, regularly break, or generate inconsistent or confusing output files are one of the fastest ways to introduce bugs, slow releases, and bring a whole team to a screeching halt. That’s why automated, reproducible builds are a keystone of agile development.
Out-of-source builds are one way to improve your build system. Out-of-source building is a practice that keeps the generated/compiled intermediate and binary files out of the source file directories. Traditionally, most build systems generated object, binary, and intermediate files mixed right next to their source files. This leads to a confusing hierarchy of files that makes getting a consistent picture of your build and source nearly impossible on big projects.
It turns out CMake can help you create out-of-source builds with just a few little tricks. Unfortunately, there are few examples, and many of them are overly complex. So, to help get folks started, I wrote up a very simple sample. It’s the perfect starting point for your next project, and it works with both Linux and Visual Studio builds.
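The essence of an out-of-source CMake setup is small. A hypothetical minimal sketch (a generic illustration, not the actual sample) looks like this:

```cmake
# CMakeLists.txt - a minimal project; nothing here writes into the source tree.
cmake_minimum_required(VERSION 3.10)
project(HelloOutOfSource CXX)
add_executable(hello src/main.cpp)

# Build out-of-source by configuring from a separate directory:
#   mkdir build && cd build
#   cmake ..          # generates Makefiles / .sln here, not in the source tree
#   cmake --build .   # all objects and binaries land under build/
```

Because every generated file lands under `build/`, a clean rebuild is just deleting that one directory, and the source tree stays pristine for version control.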
Algorithm of the day: Rapidly exploring random trees (RRT) is an algorithm designed to efficiently search non-convex spaces by randomly building a space-filling tree. The tree is constructed incrementally from samples drawn randomly from the search space and is inherently biased to grow towards large unsearched areas of the problem space. RRTs easily handle problems with obstacles and differential constraints and have been widely used in autonomous robotic motion planning.
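A bare-bones 2D RRT fits in a few lines. Here’s a sketch of the core grow-toward-sample loop – obstacle and goal checks, which a real planner needs, are omitted:

```python
# Minimal 2D RRT sketch: repeatedly sample a random point, find the nearest
# tree node, and step toward the sample. Real planners add collision checks
# against obstacles and a goal-reached test.
import math
import random

def rrt(start, bounds, steps=200, step_size=0.5, seed=42):
    random.seed(seed)         # deterministic for the example
    nodes = [start]           # tree stored as a list of (x, y) points
    parents = {0: None}       # node index -> parent index
    for _ in range(steps):
        # Uniform sampling is what biases growth into unexplored space:
        # big empty regions attract proportionally more samples.
        sample = (random.uniform(*bounds[0]), random.uniform(*bounds[1]))
        # Nearest existing tree node to the sample.
        ni = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        nx, ny = nodes[ni]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        # Step a fixed distance from the nearest node toward the sample.
        new = (nx + (sample[0] - nx) / d * step_size,
               ny + (sample[1] - ny) / d * step_size)
        parents[len(nodes)] = ni
        nodes.append(new)
    return nodes, parents

nodes, parents = rrt(start=(0.0, 0.0), bounds=((-5, 5), (-5, 5)))
```

Following `parents` back from any node to the root recovers a path, which is how the planner extracts a route once a node lands near the goal.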
Traveling through hyperspace ain’t like dusting crops, boy! Without precise calculations we could fly right through a star or bounce too close to a supernova and that’d end your trip real quick, wouldn’t it?
– Han Solo, Star Wars Episode IV: A New Hope
Moving to Vulkan and DirectX 12 isn’t like going from DX9 to DX11, or OpenGL 3.0 to OpenGL 4.0. These new APIs hand developers quite a bit of work that used to be done by the graphics driver. This gives devs more control, but it also makes things a lot trickier.
Microsoft has produced a good set of videos teaching some of the unique and tricky parts of DirectX 12 to those with a graphics background. They cover a number of subtle topics and usage patterns that aren’t immediately apparent from the docs.
Presentation modes in Windows 10
This video has terrible audio quality, but it does a great job covering the various flip modes and delays that they introduce:
This is one of the big concepts that trips people up and causes a lot of confusion.
A few years back, I had the rare treat of seeing an Apollo DSKY control pad used to control the lunar landing computer. I always wanted to know how it worked.
I can wonder no more, because Robert Wills introduces the amazing hardware and software that made up the Apollo Guidance Computer, walks you through the actual landing procedure step-by-step, and talks about the pioneering design principles that were used to make the landing software robust against any failure. He also explains the problems that occurred during the Apollo 11 landing, and shows you how the Apollo Guidance Computer played its part in saving the mission.
If you feel that isn’t cool enough – why not download the software and look at the original printouts yourself? https://www.ibiblio.org/apollo/Luminary.html#Source_Code_and_Binary
If you want more information about the computer, its programming language, the algorithms, and the entire trip, watch this:
Finally, an early NASA technician came across a pile of salvage that he recognized as old Apollo equipment. He bought the 2 tons of material and, in the following years, realized he had an actual Apollo Guidance Computer (likely used in a lab for testing) – and then got it working again!
He recently did another talk on the topic with updated details.
Mathematicians are a fascinating breed. They scan problems and new fields of study for discoveries, then plug away at a single problem or set of problems for amazing amounts of time, attacking from every direction with every mathematical tool they have. They use intuition and experience to find patterns, similarities to other problems, and even brute-force methods. The goal is to seek out patterns, make sense of those patterns by stating conjectures, and then prove those conjectures into theorems. This often takes mathematicians years or decades – if they ever solve the problem at all. If nothing else, mathematicians are a persistently curious lot.
The Ramanujan Machine
With all this potential tedium, is there a way to speed some of this up? Could one automate some of the work? AI algorithms are amazing at pattern matching, so what if we use machine learning to start the ball rolling? Enter the Ramanujan Machine – named after the famous Indian mathematician who saw patterns where others did not (and had no fewer than two movies made about him). This kind of software may be transformative to how mathematics is done – and some are raising questions about what it means for the field.
The concern is that the Ramanujan Machine does much more than just pattern match. The machine consists of a set of algorithms that seek out conjectures, or mathematical conclusions that are likely true but have not been proved. Researchers have already used machine learning to turn conjectures into theorems on a limited basis — a process called automated theorem proving. The goal of the Ramanujan Machine is more ambitious. It tries to identify promising conjectures in the first place.
The algorithms in the Ramanujan Machine scan large numbers of potential equations in search of patterns that might indicate formulas to express them. The programs first scan a limited number of digits, perhaps five or ten, record any matches, and expand on those to see if the patterns continue. When a promising pattern appears, the conjecture becomes available for an attempt at a proof.
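The digit-matching step can be illustrated in a few lines. Here’s a toy version in Python that “rediscovers” the classic continued fraction for e – my own drastically simplified illustration of the idea, not the real machine’s algorithm:

```python
# Toy version of the digit-matching filter: evaluate a candidate continued
# fraction and check whether its leading digits match a known constant.
# The candidate here is the classic expansion of e: [2; 1, 2, 1, 1, 4, 1, 1, 6, ...]
import math

def contfrac_value(terms):
    """Evaluate a finite continued fraction [a0; a1, a2, ...] from the back."""
    value = float(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1.0 / value
    return value

def e_terms(n):
    """First n terms of e's continued fraction: 2, then repeating 1, 2k, 1."""
    terms = [2]
    k = 1
    while len(terms) < n:
        terms += [1, 2 * k, 1]
        k += 1
    return terms[:n]

def digits_match(x, y, ndigits):
    """The cheap first-pass filter: compare only the leading digits."""
    return abs(x - y) < 10 ** (-ndigits)

candidate = contfrac_value(e_terms(20))
is_promising = digits_match(candidate, math.e, 10)  # matches e to 10 digits
```

The real machine runs this kind of cheap numeric filter over huge families of candidate formulas; only the survivors get promoted to conjectures worth proving.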
So far, the Ramanujan Machine has generated more than 100 intriguing conjectures – and several dozen have been proved.
The question for the field is now: what does this tool mean for us?
I have already written about the problem of scientific discovery and epistemology. Machines can now pattern match and come up with equations and descriptions of physical realities, but at what point can we say that we ‘know’ something?
If a machine observes a system and spits out an answer/mathematical description, we often do not know how it arrived at that answer. Can we really say we ‘know’ a thing and are accurately describing it? Without understanding the interplay of the underlying principles that got us to that answer, it might only hold for that set of inputs.
Some would argue that’s how we’ve always done science. Despite our best efforts, science pushes ever forward and sometimes refutes past theories. We have seen this most dramatically in medical discoveries, and regularly in the fields of cosmology and quantum mechanics. In mathematics, however, this is not so. Proven theorems have held for millennia.
So where does this leave us?
Honestly, I think software like the Ramanujan Machine is the next logical step in mathematics and the pure sciences. Just as the calculator became a tool that helped transform math 100 years ago, AI-enhanced pattern matching is the next logical tool in the toolbox. Instead of relying on intuition and years of grunt work, its unbiased and methodical approach could help us see patterns we have missed – and do it massively faster. After all, a correctly formulated mathematical proof is a proof no matter what its source.
While it likely cannot replace a well-trained expert, it certainly could help augment their efforts. Speeding up our rate of discoveries by orders of magnitude sounds like a very solid contribution to me.
Try out the machine here: https://www.ramanujanmachine.com/
Read more here: https://www.livescience.com/ramanujan-machine-created.html
Or even download the code here: https://github.com/ShaharGottlieb/MasseyRamanujan/