Atomic Accidents by James Mahaffey

This has to be one of the most interesting reads (audiobook listen in my case) I’ve had on the subject of nuclear history. I also think it should be read by every engineer of any background.

Why? Jim Mahaffey holds a Ph.D. in nuclear physics and has spent over 25 years on government, military, and civilian nuclear projects, yet he presents a narrative that isn’t mired in the usual pro/anti-nuclear rhetoric. I found myself captivated for three reasons. First, some of these events and details I had never heard of before – like the fact that there was a cave full of natural radioactive ore that sickened some hunters who wandered into it. Second, and related, he knows exactly what he’s talking about. He cites the chemistry, the physics, and even the patent information for everything involved. There is no hyperbole; his information comes from actual studies, chemistry, nuclear physics, and hard scientific data. At times I wondered if he wasn’t leaking secrets. Finally, he does all this with a captivating sense of storytelling and a fantastically dry sense of humor. I found myself sitting in my car just to hear a story finish out – such as when he tells the story of a cable tray fire that breaks out during one particular accident:

“The fire continued to grow so the supervisor ran down the hall and grabbed a larger fire extinguisher. He emptied it into the blaze, but the fire was unimpressed.”

The thing that makes this book great is that he isn’t arguing for or against nuclear power. He explains the chemistry and physics of what is going on so well that it strips away the fear and terror we often associate with nuclear reactions, so all you are left with are the accidents. This is unbiased storytelling that does what it should: it doesn’t tell you what to think – it presents the data and narrates the story so that YOU think. What would I have done? What should be done?

This is why I suggest every engineer read this book – even if you aren’t interested in nuclear accidents, or nuclear power scares you. It’s not really about that. It’s about the difficulty of engineering – especially engineering where failure means serious consequences. It’s about the mental traps we as engineers fall into. We can be extremely intelligent and well-versed, yet get taken out by a simple rat chewing a cable. The fact that the book deals with energies that can, and have, killed people crystallizes the importance of each design decision. You’ll often find yourself connecting the designers’ thinking with your own engineering principles – and spotting the weak points.

Highly recommend.

Spoilers below on what I ‘learned’ – don’t read on if you want to draw your own conclusions.

After listening to it on audiobook over the last week or so, I found myself thinking about these topics:

  1. Anything that is foolproof is not. All mechanical systems have a useful lifetime and/or fail at some point. Maintenance doesn’t always happen when it should. Failures sometimes take time for people to notice – especially in a backup system that is rarely used. People plug things in backward by accident, read the wrong gauge, use the wrong lubricant, etc. You cannot assume everything will be maintained as it was when it was perfect and new and the designer was right there watching it. Plants become unprofitable and corners get cut. Staff turns over all the time and information is often lost.
  2. The weakest system, not the strongest, is all that is needed to start a problem.
  3. When working around dangerous forces, you cannot have an ignorant workforce. People must understand WHY each procedure is there, or they’ll come up with shortcuts that may unexpectedly get themselves or others killed because they’re trying to save time/effort/money/etc.
  4. We should probably not run experiments on commercial reactors/preparation plants. That should only happen in the lab. But there are many ambitious, very smart people who want to make a name for themselves and do things they should not because they believe they are smart enough.
  5. When something must run constantly over a long time – given a long enough timeline – EVERY possible thing will happen. Every possible combination of failures will also happen. You simply can’t imagine it all.
  6. Even when you imagine and prepare for the worst, it can be worse.
  7. Luck plays a big part in disasters. Given the exact same plants and the same accident, one will be OK and the other will not because of luck in the smallest detail or the timing of how something happens.
  8. Real disasters usually involve two or more very unexpected, different things going bad simultaneously or failing in quick succession.
  9. Individual systems that are failsafe on their own can react in unexpected combinations when more than a few things start failing at once. You must look at how the system as a WHOLE handles an event that causes serious single and multiple system failures (i.e. a delivery truck loses its brakes, knocks over a power pole, and then hits the turbine building. It starts a fire that shuts down the turbines. With the external electrical wires down, the lights in the building go off. Unfortunately, that happens to be where the firefighting equipment is – which nobody can now find in the dark. You now have a nuclear meltdown because you couldn’t put out a fire because the lights went out.)
  10. You must design things so that, after such an event, they end in a state you can recover from without endangering lives. When the explosion/leak does happen, is there a way to clean it up?
  11. It is often the discounted/seemingly unimportant support systems that cause the accident to become a disaster. While massive amounts of effort are spent understanding nuclear forces and fission, most reactor accidents are caused by things like pump bearing failures, valves that get stuck, emergency generators that don’t kick on because the wiring went bad, or running ill-advised tests.
  12. An unexpected chain of failures is sometimes what causes a disaster (ex: a failing data cable starts generating error messages; the error log starts filling the available hard drive; the drive fills and the computer crashes because nobody thought there would be so many messages; that computer was also controlling some other subsystem, which now fails too; that causes a bigger failure that leads to a disaster – see the sketch after this list).
  13. An alarm that goes off all the time is as useless as if it hadn’t been installed at all.
  14. When what you’re doing trips a failsafe, that is not a good thing. It’s an indication that you’re operating outside the defined parameters of the design – that you were one step from disaster. It’s critical to examine exactly what happened and prevent it from happening again. Procedures or actions that regularly trip a failsafe indicate imminent disaster.
  15. The amount of thought and the number of fail-safes you build must be directly proportional to how bad it will be if things go wrong. Dropping a jar of tomato juice requires water, a mop and bucket, and a new jar of tomato juice. Dropping a jar of fission products all over the floor is a whole other matter: isolation, cleanup, personal safety, disposal, and deciding what to do with the materials, clothes, and items used in the cleanup all require complete thought and handling.
  16. One must stay a little paranoid and constantly vigilant when working with systems that involve deadly/dangerous forces. A regular schedule of checking and re-checking is the only way to know if things are working as they should. You cannot be laissez-faire, waiting for a failure before you fix something. Even little anomalies are indications that must be checked out. One must stay curious, check regularly, and fix things that don’t even seem important at first. See #2.
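
Because #12 describes a software failure chain, here is a minimal sketch of one cheap defense – my own illustration in Python, not anything from the book, and all the names in it are hypothetical: a size-bounded rotating log so a chattering fault can’t fill the disk, plus a rate-limit filter so it can’t drown out every other message (see #13).

```python
# A hypothetical sketch (mine, not the book's): capping the damage from
# the failure chain in lesson #12, where an error log fills the disk.
import logging
import logging.handlers
import time


class RateLimitFilter(logging.Filter):
    """Suppress repeats of the same message within `interval` seconds,
    so one chattering fault can't drown out every other alarm (#13)."""

    def __init__(self, interval=60.0):
        super().__init__()
        self.interval = interval
        self._last_seen = {}  # message -> time it was last emitted

    def filter(self, record):
        now = time.monotonic()
        last = self._last_seen.get(record.msg)
        if last is not None and now - last < self.interval:
            return False  # drop the duplicate
        self._last_seen[record.msg] = now
        return True


log = logging.getLogger("plant.sensors")  # hypothetical subsystem name
log.setLevel(logging.INFO)
log.addFilter(RateLimitFilter(interval=60.0))

# RotatingFileHandler caps total disk use at maxBytes * (backupCount + 1),
# so no flood of messages can ever fill the drive and crash the host.
handler = logging.handlers.RotatingFileHandler(
    "sensors.log", maxBytes=1_000_000, backupCount=5
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
log.addHandler(handler)

# Simulate the failing data cable: a flood of identical errors.
# With bounded rotation and rate limiting it stays an annoyance,
# not the first link in a disaster chain.
for _ in range(100_000):
    log.error("checksum failure on data cable 7")
```

Fittingly for #11, both defenses live in the boring support layer – the log handler – rather than in anything glamorous.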

Here’s a great example of #16:

A liquid storage tank of olive oil that develops a tiny leak could be bad. You are losing product and money, and you might cause an injury if someone slips. It is even worse if the leak is almost undetectable and drips into the floor drains for weeks, months, or even a year. It gets even worse when the oil coagulates in the underfloor drains and almost completely plugs them up. Then the 50,000-gallon vat of spaghetti sauce spills, and the drains that should have saved everyone don’t work. The spill goes everywhere and gets on everyone. It gets even worse when the sauce ruins tens of thousands of dollars of product that was sitting on the floor and destroys the electronic machinery drenched in it. It gets even worse when it runs into the basement, filling it two feet deep with spaghetti sauce. Because of one dripping olive oil vessel, you now have tens of thousands of dollars in lost product, people drenched in goo, machinery destroyed, and a basement flooded with a sauce that may take hundreds of thousands of dollars to clean up. On top of that, you can’t make one more jar of product until it’s all cleaned up.

Now imagine that it was nuclear fission products that dripped – and that when enough collects, it becomes an unshielded reactor in the floor that kills anyone who comes near it.

This is why every engineer should read this book: to understand how failures really happen, and why making an individual system foolproof and bolting on safety systems can be a false comfort. The nuclear world is full of extremely smart people who bragged that nothing could go wrong. Humility and diligence are the values we should cultivate.
