Jeffrey Friedman

On a daily basis, we weigh one decision against another using a layman’s sense of probability and statistics. Unfortunately, such estimates are often skewed by misinformation or overconfidence; this hardly matters when deciding which new ice cream flavor to try, but it has dramatic consequences when the stakes include military intervention, American troops, and innocent foreign nationals. One might hope that such decisions are made more precisely, or weighed more heavily, than comparatively trivial matters; however, Jeffrey Friedman, assistant professor of government at Dartmouth College, maintains that they are not.

Friedman studies probability assessments, particularly those employed by the US government in national security decision making, and yesterday evening he presented this work at an event co-sponsored by the Dean of Faculty’s Office and the Government Department. Visiting Assistant Professor of Government Ivan Rasmussen, who met Friedman while the two were working together on an MIT National Security Analysis, gave the introduction.

Friedman began by describing the decision-making process before the Bay of Pigs Invasion. Wishing to avoid the mathematical “a three-out-of-ten chance,” the Joint Chiefs of Staff described the plan as having “a fair chance of success.” Allen Dulles, Director of the CIA, believed that this unclear communication about the invasion’s probability of success contributed to its defeat, and the episode opened a schism between the Pentagon and the White House that has yet to be mended.

This problem of rhetoric has not been resolved over the past five decades for several reasons. The most intangible is that, unlike quantifiable probabilities grounded in actuarial data, national security offers no concrete probabilities to draw on. Furthermore, because such decisions carry such high stakes, the “pathologies of probability assessment,” as Friedman refers to them, manifest as three main obstacles for decision makers.

The first of these is imprecision, such as the “fair chance of success” offered before the Bay of Pigs. The second is relative probability, likely the most pernicious of the three, with a tremendous capacity to bias decision making. It is commonplace in national security decisions, and it is often exploited by pharmaceutical companies, who claim, for example, that “this medicine reduces the risk of heart attack” without providing comparison data, such as that the risk falls from 1 in 1,000 to 1 in 1,005. Relative probability can determine which option is best, but not whether the benefits of that option outweigh its costs. The final obstacle is assessing an option’s necessity but not its sufficiency.
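To make the arithmetic behind that hypothetical concrete, the short sketch below (using the illustrative numbers above, not figures from the talk) contrasts the relative claim with the tiny absolute change it conceals.

```python
# Relative vs. absolute risk, using the illustrative numbers above.
baseline_risk = 1 / 1000  # risk of heart attack without the medicine
treated_risk = 1 / 1005   # risk with the medicine

absolute_reduction = baseline_risk - treated_risk        # ~0.000005, i.e. 0.0005 percentage points
relative_reduction = absolute_reduction / baseline_risk  # ~0.5% of an already small risk

print(f"Absolute reduction: {absolute_reduction:.4%}")  # the risk barely moves
print(f"Relative reduction: {relative_reduction:.1%}")  # the number a marketer would quote
```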

The National Intelligence Estimates (NIEs), federal documents outlining assessments of national security issues, attempt to stem the possibility of miscommunication by providing scales and guidelines for terms such as “rare, unlikely, likely, very likely, and almost certain,” which are actually encouraged in intelligence reports despite varying widely in meaning from person to person. The most recent guidelines are “sort of using numbers, sort of not, [to try and] stop miscommunication, but in a very ineffective way,” Friedman stated. The underlying problem, he explained, is that many people conflate the likelihood of something happening with their confidence in that likelihood.

“Probability assessments cannot be avoided,” Friedman said, “but vagueness is a choice. There is a precise and direct way of dealing with these things, but receiving feedback is crucial.” Probability assessment is a skill that can be cultivated, but it requires going through a process of calibration. One way of doing this is to take several tests from several sources and receive a Brier score, a proper scoring rule that measures the accuracy of probabilistic predictions.
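The Brier score itself is simply the mean squared difference between stated probabilities and what actually happened, with lower being better. As a purely illustrative sketch (not anything presented at the talk), the snippet below scores a hypothetical overconfident forecaster against an unwavering 50-percent guess, which always scores 0.25.

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between predicted probabilities and outcomes (1 = happened, 0 = did not)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts, for illustration only.
outcomes = [1, 0, 0, 1, 0]
overconfident = [0.9, 0.8, 0.9, 0.2, 0.7]  # bold but poorly calibrated
coin_flip = [0.5] * len(outcomes)          # constant 50-percent guess

print(brier_score(overconfident, outcomes))  # 0.518 -- worse than flipping a coin
print(brier_score(coin_flip, outcomes))      # 0.25
```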

At first, 85% of people would have performed better had they guessed via coin flip, but after receiving feedback, they get closer and closer to the line of best fit. Philip Tetlock, author of Superforecasting, first noticed this tendency and explained it by saying, “Most people, receiving no concrete feedback, are unaware of just how overconfident they are.”
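What moving toward the line of best fit means here is calibration: when a forecaster says 90 percent, the event should happen about 90 percent of the time. The sketch below, with made-up numbers, shows the kind of check involved.

```python
from collections import defaultdict

def calibration(forecasts, outcomes):
    """For each stated probability, report how often those events actually occurred."""
    groups = defaultdict(list)
    for f, o in zip(forecasts, outcomes):
        groups[f].append(o)
    return {p: sum(obs) / len(obs) for p, obs in sorted(groups.items())}

# Made-up example: events called "90% likely" occur only 60% of the time -- overconfidence.
forecasts = [0.9] * 10 + [0.6] * 10
outcomes  = [1] * 6 + [0] * 4 + [1] * 6 + [0] * 4

for stated, observed in calibration(forecasts, outcomes).items():
    print(f"stated {stated:.0%} -> observed {observed:.0%}")
```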

Interestingly, numeracy does not affect this ability, nor does national origin appear to. However, “people do seem less susceptible to biases when they process something in a second language, because of something to do with active translation of the terms,” Friedman stated. “There is also a statistically significant loss of precision when probabilities are rounded,” he added, “and a similar tendency for estimates to cluster in the middle when people use words.” To mitigate the effects of overconfidence, Friedman suggests being like the Wise Fool, who knows nothing but that they know nothing, and thus anchors on 50 percent rather than zero or 100.
