Earth shattering…

While it might not look like much, this software program, called the Eureka machine, takes standard video input, observes the behavior of a system, and, with no prior knowledge of the system's physics, kinetics, etc., generates equations that accurately describe what is going on.

From simple video input and a little massaging, the program was able to generate the Hamiltonian equation for the notoriously difficult double-pendulum problem in about 30 minutes, and in another case a Lagrangian equation describing a double harmonic oscillator, all in very short periods of time:
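To get a flavor of what this kind of program does (the general technique is called symbolic regression), here is a toy sketch in Python. A real system evolves candidate expressions automatically; the hand-written candidate list and the data here are made up purely for illustration, and this is in no way the actual algorithm:

```python
import math

# Toy "symbolic regression": score a few candidate formulas against
# observed data and keep whichever fits best. A real system would
# generate and evolve these candidates itself (e.g. via genetic
# programming) rather than picking from a hand-written list.

# Observations from an "unknown" system -- here secretly y = x**2 + 1.
data = [(x, x**2 + 1) for x in [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]]

candidates = {
    "y = 2*x":      lambda x: 2 * x,
    "y = x**2 + 1": lambda x: x**2 + 1,
    "y = exp(x)":   lambda x: math.exp(x),
}

def mse(f):
    """Mean squared error of candidate f over the observations."""
    return sum((f(x) - y) ** 2 for x, y in data) / len(data)

# Keep the candidate with the lowest error against the data.
best = min(candidates, key=lambda name: mse(candidates[name]))
print(best)  # prints "y = x**2 + 1"
```

The interesting part, of course, is that nothing in the search cares whether the winning formula *means* anything physically; it only cares that the numbers line up.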



While this is very cool, and to some degree just an expansion of what we have been able to do with neural-net programs that 'learn' by trying out techniques and checking their results against reality, the program's ability to generate equations takes this all a step further.

To give an example of what this brings about: the researchers recently applied the algorithm to some complex data collected on cell interactions. While the scientists had struggled to make any meaning of the patterns, the program was able to come up with a formula that accurately described how these cells worked. But this presented a new problem. While the equation seemed to match exactly what was going on, the scientists who fed in the data couldn't figure out what physical components the equation's variables related to. They made the decision NOT to publish the equation in any papers, accurate modeling or not, because they didn't actually understand how the equation modeled the system. While not surprising from a program that simply generates an equation from data, it's the first time that these computers might actually be out-matching us at modeling systems. And since they are unfettered by the need to make the variables correspond to real-world phenomena, they are free to generate equations whose variables aren't necessarily based on the underlying physical phenomena. THIS is the interesting part.

It seems (rightly) that just modeling the situation isn't sufficient to say you understand it. Does understanding a phenomenon require understanding the underlying principles? Should it? Sure, you might be able to come up with an equation that models what's going on for the cases you have, but without understanding the principles behind it, you're just putting your faith in the generated equations. But is this what we do today?

I was taught since 6th-grade science class that every scientific principle is only one repeatable counterexample away from being refuted at any time. History is full of such events, including the most deeply held principles, such as Newton's laws of motion. Depending on the scale at which they are used, they either work very well or, in the quantum and astrophysical realms, fall apart completely. Those laws have been getting 'touch-ups' for years. While Newton certainly isn't categorically wrong, it's clear we didn't (and still don't) have all the corners fleshed out.

So we find the crux of the matter: why shouldn't the equations generated by this program be just as deserving of our trust as Newton's? I'd say the key lies in several requirements: rigorous review, numerous experimentally repeatable verifications, and, apparently, that the equation be explainable with principles and terms that we DO understand. The first part is very understandable. No scientific statement worth its salt should be accepted without extensive peer review, repetition of the experiment by others in different conditions, public discussion, and confirmation via different methods. This program required user intervention to strike a balance between absolute accuracy and 'simplicity', which means it took numerous iterations, and a little prior knowledge, to get it to generate equations that corresponded to principles we understand. This implies it could generate different equations for the same phenomenon. More on this later…
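That accuracy-versus-simplicity balance can be made concrete. One common way to formalize it (a hypothetical sketch, not how this particular program works; the candidate formulas, error values, and penalty weight below are all invented for illustration) is to score each candidate by its fit error plus a penalty on its complexity:

```python
# Toy accuracy-vs-simplicity tradeoff: each candidate carries a fit
# error (lower = more accurate) and a complexity (number of terms/nodes
# in the expression). All values here are made up for illustration.

candidates = [
    # (formula, fit error, complexity)
    ("y = a*x",                      0.40,  3),
    ("y = a*x**2 + b",               0.05,  7),
    ("y = a*x**2 + b*sin(c*x) + d",  0.04, 15),
]

def score(error, complexity, weight=0.02):
    """Lower is better; `weight` trades accuracy against simplicity."""
    return error + weight * complexity

best = min(candidates, key=lambda c: score(c[1], c[2]))
print(best[0])  # prints "y = a*x**2 + b"
```

Notice that changing `weight` changes the winner: a tiny weight favors the sprawling high-accuracy formula, a large one favors the crude linear fit. That tuning knob is exactly the kind of user intervention described above, and it's why the same data can yield different 'best' equations.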

But the second requirement, the one the jury appears to be out on, is that one needs to be able to explain WHY the equation works, or at least express it in terms we do understand. In other words, just pulling the 'answer' out of the back of the book isn't real understanding. The right answer by itself doesn't seem to be sufficient for science to classify it as real knowledge. For science, we apparently also need to be able to explain why it's right. Only then can we actually say we have a decent understanding of something.

The unanswered question is whether that requirement of being built on understood principles needlessly inhibits us. What if we just 'went with the flow' and let machines like this generate those horribly difficult equations for us? What would that look like, and what would it imply? The equations the software generated didn't always correspond to previously known or modeled phenomena, and needed to be 'guided' by the user toward answers in the form they wanted. But this implies that, in other circumstances, the computer might be revealing a different *kind* of thinking that we could backtrace. What if those equations are just another 'culture' or 'language' that sees the same reality in a different, but no less valid, way that we could explore and understand? I think that could be an interesting discussion for another entry.

This instance reminds us that there are very important philosophical principles behind what is considered scientifically known and what is not. These principles have real and interesting effects, and depending on when and where you lived, there were (and are) very different requirements for what counts as knowledge.

In case you're interested: philosophically, this question of what it means to know is called Epistemology, and it might be worth a look. (Is my philosophy undergrad work showing?)
