In the weeks since the Deepwater Horizon explosion, the political debate has fallen into predictably partisan and often puerile categories. Conservatives say this is Obama’s Katrina. Liberals say the spill is proof the government should have more control over industry.
But the real issue has to do with risk assessment. It has to do with the bloody crossroads where complex technical systems meet human psychology.
Over the past few decades, we've come to depend on an ever-expanding array of intricate high-tech systems. These hardware and software systems are the guts of financial markets, energy exploration, space exploration, air travel, defense programs and modern production plants.
These systems, which allow us to live as well as we do, are too complex for any single person to understand. Yet every day, individuals are asked to monitor the health of these networks, weigh the risks of a system failure and take appropriate measures to reduce those risks.
If there is one thing we’ve learned, it is that humans are not great at measuring and responding to risk when placed in situations too complicated to understand.
In the first place, people have trouble imagining how small failings can combine to lead to catastrophic disasters. At the Three Mile Island nuclear facility, a series of small subsystems happened to fail at the same time. It was the interplay between these seemingly minor events that led to an unanticipated systemic crash.
Second, people have a tendency to get acclimated to risk. As the physicist Richard Feynman wrote in a report on the Challenger disaster, as years went by, NASA officials got used to living with small failures. If faulty O-rings didn't produce a catastrophe last time, they figured, they probably wouldn't this time.
Feynman compared this to playing Russian roulette. Success in the last round is not a good predictor of success this time. Nonetheless, when things seem to be going well, people unconsciously adjust their definition of acceptable risk.
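The arithmetic behind Feynman's analogy is worth spelling out. In a hypothetical six-chamber Russian roulette (the numbers here are illustrative, not drawn from the column or from Feynman's report), each spin carries the same one-in-six risk no matter how many safe rounds preceded it, yet the odds of surviving every round shrink quickly:

```python
from fractions import Fraction

# Each independent round has the same survival probability: 5/6.
per_round_survival = Fraction(5, 6)

for rounds in (1, 5, 10, 20):
    # Probability of surviving ALL rounds = (5/6) ** rounds.
    survival = per_round_survival ** rounds
    print(f"{rounds:2d} rounds: {float(survival):.3f} chance of no disaster")

# 1 rounds: 0.833 chance of no disaster
# 5 rounds: 0.402 chance of no disaster
# 10 rounds: 0.162 chance of no disaster
# 20 rounds: 0.026 chance of no disaster
```

A run of ten safe rounds tells you nothing about the eleventh, which is exactly the trap: the per-round risk never fell, only the observers' sense of it did.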