Bayesian Theory in Geological Estimates of Success for Petroleum Prospects
In oil exploration, geologists assign a probability of commercial success to each prospect. Over time, the cumulative experience of success and failure provides a means to review the accuracy of the predictions, provided that the predictions were made using the same methodology.
The following chart shows 74 exploration prospects drilled between 1996 and 2000. The probability of success is plotted on the vertical axis, and prospects are shown in rank order according to geological chance of success. Successes are color-coded in red, and failures in blue. The chart demonstrates that geologists' estimates have merit: successful prospects generally occur on the left side of the chart. But let's take a closer look. Are the estimates quantitatively correct? Can the estimates be improved with the Bayesian technique?
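One way to approach the first question is a simple calibration check: group prospects by their estimated chance of success and compare the average estimate in each group to the observed success rate. The sketch below illustrates the idea in Python with placeholder data, since the actual 74-prospect dataset is not reproduced here; only the method, not the numbers, should be taken from it.

```python
# Calibration-check sketch using placeholder data (the real 74-prospect
# dataset is not reproduced here). Each prospect has a geologist's estimated
# chance of success and a drilled outcome: 1 = success, 0 = failure.
import numpy as np

rng = np.random.default_rng(0)
estimates = rng.uniform(0.05, 0.60, size=74)          # hypothetical estimates
outcomes = (rng.random(74) < estimates).astype(int)   # hypothetical outcomes

# Compare the mean estimated chance to the observed success rate in each bin.
edges = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 1.0]
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (estimates >= lo) & (estimates < hi)
    if mask.any():
        print(f"estimates {lo:.1f}-{hi:.1f}: "
              f"mean estimate {estimates[mask].mean():.2f}, "
              f"observed rate {outcomes[mask].mean():.2f}, n={mask.sum()}")
```

If the geologists are well calibrated, the mean estimate and the observed success rate should agree, within sampling noise, in every bin.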
Compare, for example, the estimates of Space Shuttle failure probability that Richard Feynman recorded in his report on the Challenger accident:

(Engineers at Rocketdyne, the manufacturer, estimate the total probability [of catastrophic failure] as 1/10,000. Engineers at Marshall estimate it as 1/300, while NASA management, to whom these engineers report, claims it is 1/100,000. An independent engineer consulting for NASA thought 1 or 2 per 100 a reasonable estimate.)
The actual rate of failure was 2 disasters out of 135 missions, or about 1/67.
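A quick way to weigh these estimates against the record is to ask how probable two or more failures in 135 missions would be under each quoted per-mission probability. The sketch below assumes independent missions with a constant failure probability, which is a simplification, and uses only the figures quoted above.

```python
# P(at least 2 failures in 135 missions) under each quoted per-mission
# failure probability, assuming independent missions (a simplification).
from scipy.stats import binom

estimates = {
    "NASA management (1/100,000)": 1 / 100_000,
    "Rocketdyne engineers (1/10,000)": 1 / 10_000,
    "Marshall engineers (1/300)": 1 / 300,
    "independent consultant (1/100)": 1 / 100,
}
for label, p in estimates.items():
    prob = binom.sf(1, 135, p)  # P(X >= 2) for X ~ Binomial(n=135, p)
    print(f"{label}: P(>=2 failures in 135 missions) = {prob:.2g}")
```

Under the management estimate, the observed record would be essentially impossible; under the consultant's 1 or 2 per 100, it is unremarkable.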
How is it possible that such wildly differing estimates existed for the safety of such an important project?
Part of the problem is sampling. For rare events, we must make a large number of observations to detect and quantify the possibility. If we walk across a lake on thin ice ten times and do not fall through, we can conclude that the chance of falling through is probably less than about 1 in 10. It does not mean that walking on thin ice is safe, or that we can safely cross the lake 100 times. For rare events, we simply cannot gain enough experience to adequately grasp the true probability. This is particularly problematic for risky and dangerous events.
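A small Bayesian calculation makes the point concrete. Assuming a uniform prior on the per-crossing probability of falling through (my assumption, not part of the original example), ten safe crossings still leave a wide range of plausible values:

```python
# Ten crossings, zero falls. With a uniform Beta(1, 1) prior on the
# per-crossing probability p of falling through, the posterior is
# Beta(1 + 0, 1 + 10) = Beta(1, 11).
from scipy.stats import beta

posterior = beta(1, 11)
print(f"posterior mean:           {posterior.mean():.3f}")    # ~0.083
print(f"95% credible upper bound: {posterior.ppf(0.95):.3f}")  # ~0.24
print(f"P(p < 0.10):              {posterior.cdf(0.10):.3f}")  # ~0.69
```

Even after ten uneventful crossings, the posterior still allows roughly a one-in-four chance per crossing at the 95% level, and the claim "less than 1 in 10" carries only about 69% posterior probability.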
There are many other facets to the problem, including the self-interest of management and various other sources of bias, which Feynman discusses in his report.
But the simple lesson I take away is that people cannot intuitively grasp risk in the range of low-probability events.