Chemicals & Materials Now!

From basic to specialty, and everything in between


Meteors and Black Swans: Worst Case Scenarios

Posted on November 10th, 2016 in Chemical Manufacturing Excellence


“No matter how bad things are, you can always make things worse.” – Randy Pausch

We really need to stop talking about “worst-case scenarios” when we do a process hazard analysis (PHA) or a quantitative risk assessment (QRA). It’s a nonsense term.  More often than not, it is an obstacle to good analysis, not an aid.

What does “worst-case scenario” mean?

The phrase itself has no actionable meaning. There are several definitions available, all very similar: “the most unpleasant or serious thing that could happen in a situation,”[1] “the worst possible future outcome,”[2] and “any situation or conclusion which could not be any worse; the worst possible outcome.”[3]  These definitions all have three things in common.  First, they all speculate about outcomes that may happen in the future.  Second, they all suggest that the analyst will recognize every possible outcome and will know that nothing worse could happen.  Third, they all presume that there is agreement on which of those possible outcomes is the worst.

It is possible to look at a finite set of potential outcomes and rank them from best to worst. There is nothing about the term “worst-case scenario,” however, that suggests that analysts are being asked to rank a finite, predefined set of potential outcomes.  Instead, a PHA or a QRA team is asked to speculate what the worst case would be.  You needn’t have spent much time with a bunch of engineers to know that they have incredibly active imaginations.  No matter how bad someone imagines the outcome to be, someone else can imagine something worse.  At what point do we reach “the worst”?  Without trying very hard, all hazards become fatal hazards with the potential for multiple fatalities.

Consider office supplies. In one scenario, someone goes to the office supply cabinet and gets a box of paper clips.  On the way back to their cubicle, they drop the box of paperclips, which spills open, scattering one hundred #1 paper clips across the freshly polished linoleum floor.  Before they get a chance to pick them up, someone else comes walking down the hallway, fails to see the spilled paper clips on the floor, slips and falls, hits the back of their head on the corner of a desk, and dies.  Clearly, paper clips are a fatal hazard.

Credible worst-case scenarios

“But that’s not credible!” So goes the argument.  What’s not credible?  That paper clips could be spilled on the floor?  That someone could slip on paper clips spilled on the floor?  That when someone slips, they could fall in such a way as to die?  We have all witnessed spilled paper clips and we have all witnessed people falling.  Every year, the Bureau of Labor Statistics reports fatalities, including fatalities from falls on the same level.  In 2014, the last year for which the BLS has reported data, there were 138 fatalities from falling on the same level—almost 3% of all workplace fatalities.  That is more than the number who died from fires and explosions combined.  So what exactly does “credible” mean?

For some, credible means “possible” or “not impossible.” By this definition, a scenario like “the gravity field failed” is impossible, and so not one we have to worry about.  On the other hand, a facility being swarmed by insects is not impossible, and hence credible by this definition.

For some, credible means something that has happened, as in “if it happened once, it could happen again.” I have worked with a plant that was once struck by an aircraft.  As unlikely as that is to happen again, personnel at that plant consider any hazard review that does not consider being struck by an airplane as incomplete.

I have also worked with a plant where a shift leader once inadvertently drained flammable solvent into a storm water sewer. Meanwhile, an electrical crew was removing downed wires over a quarter mile away.  The wires scraped across the edge of a sewer culvert, creating a spark that ignited the solvent floating on the storm water.  A flame front accelerated down the sewer and blew a series of manhole covers into the air.  One of those manhole covers struck the lone operator in the area when it came down, seriously injuring him.  Had anyone suggested that this scenario was worthy of serious consideration before it happened, they would have been laughed out of the room.  Now that it has actually happened, by this definition, it is a credible scenario.

The EPA and Worst Case

When the EPA promulgated the Risk Management Program (RMP) rule, 40 CFR 68, it required that the owner or operator of a stationary source consider a “worst-case release scenario.” To address questions about what exactly the “worst case” was, the EPA went to great pains to define the conditions of a worst case.  Any reasonable person could look at the EPA worst case and imagine a release that would be even worse.  At the same time, there is almost universal agreement that the EPA worst cases are not credible by any definition.

The RMP worst-case scenarios are not intended to be credible. They are intended to ensure that event consequences posed by different stationary sources are compared on an equal basis.  They are intended to ensure that hazards are not trivialized.  There is a reason that the guidance for the Risk Management Plan is called the “Guidance for Off-Site Consequence Analysis” and not the “Guidance for Off-Site Risk Analysis.”

Even the EPA tacitly acknowledged this by developing a framework for developing alternate release scenarios, scenarios that are “more likely to occur than the worst-case release scenario.”

The effect is that stationary sources covered by the RMP rule develop the worst case release scenarios required by the EPA, but those scenarios are never used in any serious assessment of the hazards of a site, or in any evaluation of the risks of those hazards.

Black Swans

Nassim Taleb’s Black Swan Theory has become a favorite of some risk analysts. It can be used to explain or excuse all sorts of catastrophic events.  When Taleb repurposed the term “black swan” in 2007, he described it this way: “First, it is an outlier, as it lies outside the realm of regular expectations, because nothing in the past can convincingly point to its possibility. Second, it carries an extreme impact. Third, in spite of its outlier status, human nature makes us concoct explanations for its occurrence after the fact, making it explainable and predictable.”[4]  In other words, it is not simply rare but considered outside the realm of possibility.  It is not credible by any definition of the word.  The fact that we can construct an explanation after the event, in hindsight, does not mean that it was in any way predictable.

As a result, the black swan theory is an interesting construct, but has no practical place in risk assessments. By their very nature, black swans are not something we can assess prospectively.  We can only study them retrospectively.


Catastrophic meteor strikes are not black swans. They are quite rare, but rarity is not what qualifies an event as a black swan.  While not impossible, a black swan is inconceivable.  Meteor strikes are not inconceivable.  There is ample evidence from the past to convincingly point to their possibility.

Meteor shower

Photo Credit: NASA Meteoroid Environment Office/Bill Cooke

Most bolides (meteors, asteroids) that enter the earth’s atmosphere explode in the upper atmosphere, where the solids are vaporized, posing no threat to human activity on the earth’s surface. However, bolides larger than 20 m across (about the size of a barn) can and have come close enough to the surface that either the airburst or the crater has an effect.  The energy released by a 20 m bolide is at least 10 times greater than that of the atomic bomb dropped on Hiroshima.

Can they happen? Yes.  Have they happened?  Yes.  In fact, as recently as 2013, the Chelyabinsk meteor that exploded over Russia had an estimated diameter of approximately 20 m; the airburst was about 500 kilotons, around 30 times greater than that of the Hiroshima bomb.  Clearly, 20 m bolides are credible.
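A back-of-the-envelope kinetic energy calculation shows why these figures are plausible. This is only a sketch: the density, entry speed, and Hiroshima yield below are commonly cited approximations, not figures from this post.

```python
import math

# Assumed typical values (approximations, not from the post)
DIAMETER_M = 20.0           # Chelyabinsk-class stony bolide
DENSITY_KG_M3 = 3000.0      # typical density of a stony asteroid
ENTRY_SPEED_M_S = 19_000.0  # Chelyabinsk entry speed, ~19 km/s
TNT_J_PER_KT = 4.184e12     # energy of one kiloton of TNT
HIROSHIMA_KT = 15.0         # commonly cited Hiroshima yield

# Kinetic energy of a sphere of rock at entry speed
radius = DIAMETER_M / 2
mass = DENSITY_KG_M3 * (4 / 3) * math.pi * radius**3
kinetic_energy_j = 0.5 * mass * ENTRY_SPEED_M_S**2
yield_kt = kinetic_energy_j / TNT_J_PER_KT

print(f"mass: {mass:.3g} kg")
print(f"yield: {yield_kt:.0f} kt TNT, ~{yield_kt / HIROSHIMA_KT:.0f}x Hiroshima")
```

With these assumed inputs, the result lands in the same ballpark as the ~500 kiloton Chelyabinsk estimate, and well above 10 times the Hiroshima yield.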

Risk: Consequence × Likelihood

When we talk about risk, it is important to keep in mind that risk has two components, consequence and likelihood. Consequences themselves have two elements: events, which in turn lead to impacts.  Events include fires, explosions, and toxic releases.  While they are the incidents we try to prevent, events are not what people care about.  People care about impacts, such as schedule delays, fish kills, community hospitalizations, and workplace fatalities.  Imagine calling the spouse of a worker and simply informing them that there has been a fire at the plant; the first thing they will want to know is whether the worker was hurt.

Likelihood is best expressed as a frequency or a rate. When likelihood is expressed as a probability, it is only meaningful when a specific span of time is included.  Once the span of time is included, the probability is once again being expressed as a frequency or a rate.  When someone describes the likelihood as one in a million, you have to ask, “One in a million what?”
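The dependence on the time span can be made concrete. For a rare random event occurring at a constant rate λ, the probability of seeing at least one occurrence in a span t is 1 − e^(−λt), so quoting a probability without a time span says nothing. A minimal sketch, where the “one in a million per year” rate is an illustrative assumption, not a figure from the post:

```python
import math

def prob_at_least_one(rate_per_year: float, years: float) -> float:
    """Probability of at least one occurrence of a Poisson
    event with the given constant annual rate."""
    return 1 - math.exp(-rate_per_year * years)

rate = 1e-6  # "one in a million" -- per YEAR, an assumed illustration
for years in (1, 30, 1000):
    p = prob_at_least_one(rate, years)
    print(f"P(>=1 event in {years:>4} yr) = {p:.6f}")
```

The same “one in a million” rate yields very different probabilities over a career, a plant lifetime, or a millennium, which is exactly why the time span must be stated.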

An analysis that only addresses consequences does not address risk. In the absence of likelihood, a consequence is meaningless. Worst-case scenarios describe consequences, not likelihoods.  Because they only describe consequences, they do not describe risk.  Without an associated likelihood, a “worst-case scenario” is meaningless in terms of risk assessment.  Otherwise, every plant in the world would have an anti-meteor task force.

Why doesn’t every plant in the world have an anti-meteor task force? Bolides that are 20 m across or bigger strike the earth about once every 40 years. Typically, a once-every-40-year event will get the attention of a HazOp team.  However, the event usually will not affect the entire world—the affected area is more like 100 square miles.  The surface of the earth is about 200 million square miles, meaning that the likelihood of a bolide strike affecting a particular plant is about 0.0000000125 (1.25 × 10⁻⁸) per year.  Not credible.
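The arithmetic in the paragraph above, spelled out. The figures are taken from the post itself; whether 100 square miles is the right affected area is, of course, an assumption.

```python
strikes_per_year = 1 / 40      # a >=20 m bolide hits Earth about once every 40 years
affected_area_sq_mi = 100.0    # area affected by one strike (post's figure)
earth_surface_sq_mi = 200e6    # approximate surface area of the Earth

# Frequency with which any ONE particular plant falls inside the affected area
per_plant_frequency = strikes_per_year * affected_area_sq_mi / earth_surface_sq_mi
print(f"{per_plant_frequency:.3e} per year")  # prints "1.250e-08 per year"
```

At roughly one chance in eighty million per year, the event falls far below any frequency cutoff a HazOp team would use, which is the post’s point: the consequence alone does not make it worth analyzing.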

Doing Good Analysis

Technical people are admired for the rigor of their analyses. The term “worst case,” however, lacks rigor.  Let’s stop using it.

Modifying the term to “worst credible case” is no better. It simply shifts the lack of rigor from “worst” to “credible”.  For it to mean anything, the term “credible” must be defined.

Neither “could happen” nor “has happened” adequately defines “credible.” Neither takes likelihood into account.  A risk assessment, however, requires that we consider likelihood as well as consequence if it is to be meaningful.  When a consequence is identified, the likelihood associated with it must be the likelihood of that particular consequence actually occurring.

Make sure that there is solid agreement on what needs to be considered in a risk assessment. Personally, I am partial to considering the consequences that are most likely—what is in the middle of the distribution—and their associated likelihoods before I feel the need to venture further down the distribution tail.  However, even when venturing further out along the distribution, it is important to adjust the estimates of likelihood accordingly.

Mostly, though, let us make sure the analyses we do and the conclusions we draw have the rigor they deserve.

[1]Cambridge Dictionary, Cambridge University Press, 2016.  Accessed 27-Sep-2016 at

[2] McGraw-Hill’s Dictionary of American Slang and Colloquial Expressions, McGraw-Hill Companies, Inc., 2006.  Accessed 27-Sep-2016 at

[3] Wiktionary, last modified 25-Sep-2016.  Accessed 27-Sep-2016 at

[4] Taleb, Nassim Nicholas, The Black Swan: The Impact of the Highly Improbable, Random House, 2007, reviewed by N.N.Taleb in the New York Times, 22-Apr-2007.  Accessed on 04-Oct-2016 at


All opinions shared in this post are the author’s own.
