Richard M. Adler

The Perils of Predictions

Updated: Mar 20, 2020

Critical decisions in business and government often go awry, thanks to the Law of Unintended Consequences (LUC). Unanticipated events occur. Stakeholders, competitors, and other parties affected by our decisions behave in unexpected ways. Implementations of decisions don’t proceed as planned, or they fail to produce the results that we believed they would. In short, the world does not follow the path that we predicted.


Decision outcomes could be improved if we could increase the reliability of our predictions about future events and conditions. More accurate predictions would reduce the number of errors committed when we analyze problems and opportunities, formulate and evaluate decision options, and execute our chosen alternative. Unfortunately, reliably predicting the future states of economic markets or societies is far easier said than done.


Numerous cognitive biases distort our predictive judgments. People tend to extrapolate from unsuitable precedents; we rely on statistically invalid samples or draw unwarranted inferences from vivid but only weakly relevant evidence or analogs. In addition, our intuitions about dynamics—how situations are likely to change over time—are notably deficient. Most people can’t anticipate how quantities like money, temperature, or inventories accumulate or deplete at specified rates of flow, especially when effects are delayed from their causes or build up non-linearly (e.g., compound interest, global warming, or matching changing demand with lagging order fulfillment across product supply chains). Even if we could compensate for flawed intuitions, bounded rationality constrains our ability to predict future events and conditions. We are rarely able to gather complete situational data, and available social scientific laws lack the horsepower to forecast market or societal behaviors with any precision.
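To make the point about non-linear accumulation concrete, here is a minimal sketch in Python (not from the book; the balance, rate, and horizon are invented purely for illustration) contrasting the linear extrapolation most people intuitively make with actual compound growth:

```python
# Illustrative only: linear intuition vs. compound growth for a hypothetical
# savings balance. All figures are invented for the example.

principal = 10_000.0   # hypothetical starting balance
annual_rate = 0.07     # hypothetical 7% annual return
years = 30

# "7% a year for 30 years" read linearly: roughly triple the money.
linear_guess = principal * (1 + annual_rate * years)

# Compounding the same rate: closer to seven and a half times the money.
compounded = principal * (1 + annual_rate) ** years

print(f"Linear intuition after {years} years: {linear_guess:,.0f}")  # ~31,000
print(f"Compound growth after {years} years:  {compounded:,.0f}")    # ~76,123
```

The same widening gap between intuition and actual dynamics appears wherever stocks accumulate non-linearly or respond to their inflows only after a delay.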


Sadly, organizations can’t buy their way out of these difficulties by hiring expert consultants. Studies by Philip Tetlock showed that economic and political predictions by stockbrokers and pundits are no more accurate than those of non-experts. Tetlock analyzed over 82,000 predictions about future events made by several hundred seasoned professionals. Regardless of specialty or experience, these experts failed at both long- and short-term forecasting. Events that experts declared “impossible” occurred 15% of the time, while events they deemed “certain” failed to occur more than a quarter of the time. Tetlock also found that virtually none of the experts who offered opinions about the future of the Soviet Union anticipated its swift collapse as Gorbachev attempted to reform the country. A subsequent Tetlock study of expert predictions of financial exchange rates confirmed the mediocre performance found in his earlier work.


Tetlock found that generalists outperformed specialists, particularly in long-term predictions. Specialists, whom he labeled “hedgehogs,” focus on one problem or topic, study it intensively, and develop theories about how things work, to which they rigidly adhere. When failed predictions surprise them, they tend to look for circumstances that create “exceptions” to their approach, interpreting any new facts to fit their worldviews. Generalists (or “foxes”) know “many little things” and are less dogmatic. When their predictions fail, generalists are more likely to admit error and adjust their beliefs. In the later exchange-rate study, foxes again outperformed hedgehogs, both individually and in groups (since foxes collaborate better).


Of course, businesses make successful predictions routinely, such as forecasting demand or sales growth. This suggests that predictability does not hinge solely on the fact that the future is uncertain. Nor does Tetlock’s research prove that experts can never make reliable predictions. We recognize expertise in many fields: chess grandmasters, professional athletes, soldiers, first responders, doctors, and lawyers accumulate knowledge, skills, and specialized intuitions over many years of practice, and they anticipate and respond effectively to complex situations all of the time. Instead, as Duncan Watts puts it, “The real problem of prediction, in other words, is not that we are universally good or bad at it, but rather that we are bad at distinguishing predictions that we can make reliably from those that we can’t.” That is, predictability is more of a “landscape” that varies according to the kind and level of uncertainty in the situations we encounter.


Tetlock’s research offers strong evidence that expert predictions about the economic, social, and political backdrops for critical decisions in business and government are largely unreliable. His findings are reinforced by cognitive psychologists who study expert intuitions and decision-making. Daniel Kahneman and Robin Hogarth have hypothesized that several factors are likely to impede or preclude the type of learning required to develop robust predictive expertise: fields that are highly dynamic and continually evolving; that require open-ended predictions about human behavior; that offer a restricted number and variety of experiences (i.e., trials); and that provide limited opportunity for immediate feedback. For example, employers can never assess performance outcomes for candidates they do not hire, which disrupts the formation of solid predictive expertise in recruiting. Critical decisions in economic markets, large organizations, and societies manifest most or all of these markers.


So where does this leave leaders looking to improve the quality and outcomes of their critical decisions? Relying on groups of foxes to make predictions does not look all that promising. Granted, foxes appear to learn from mistakes better than hedgehog experts. That said, they don’t outperform hedgehogs by wide margins in making correct predictions.


In my book, Bending the Law of Unintended Consequences, I propose an alternative strategy. Leaders should strive to avoid decision options whose success depends on a single set of “point” predictions about future events or conditions. LUC foretells the brittleness of such choices. Instead, leaders should concede the fallibility of experts and rely on them solely to generate a broad range of plausible alternative predictions about future events and conditions. Decision options should then be formulated to take that range of possible futures into account, and evaluated with respect to how they are likely to perform across it. The “best” decision is the option that avoids “train wreck” outcomes and produces the most attractive results relative to the other alternatives across that diverse set of futures. I call this approach a “decision test drive,” because it resembles the process that consumers use to select a car or truck to buy. The test drive method doesn’t guarantee an “optimal” choice or a rosy outcome, but it produces a satisfactory result that is robust in the face of our uncertainty about the future.
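To illustrate the shape of this evaluation, here is a minimal sketch of the underlying idea (my own illustration, not the book’s method; the options, scenario payoffs, and “train wreck” threshold are invented): score each option across several plausible futures, screen out any option whose worst case is unacceptable, and rank the survivors by overall performance.

```python
# Hypothetical sketch of scoring decision options across plausible futures.
# Payoff numbers and the "train wreck" threshold are invented for illustration.

options = {
    "option_a": {"boom": 9, "baseline": 6, "recession": -8},  # great upside, fragile
    "option_b": {"boom": 6, "baseline": 5, "recession": 2},   # modest but robust
    "option_c": {"boom": 7, "baseline": 4, "recession": 0},
}

TRAIN_WRECK = -5  # outcomes below this are unacceptable in any future

def evaluate(options):
    # Screen out brittle choices: any option with a worst case at or below the threshold.
    survivors = {
        name: payoffs for name, payoffs in options.items()
        if min(payoffs.values()) > TRAIN_WRECK
    }
    # Rank the remaining options by average performance across all futures.
    return sorted(
        survivors,
        key=lambda name: sum(survivors[name].values()) / len(survivors[name]),
        reverse=True,
    )

print(evaluate(options))  # ['option_b', 'option_c'] -- option_a is screened out
```

The point of the sketch is the screening-then-ranking logic: a choice like option_a, which looks best in the favorable future, is discarded because one plausible future wrecks it, while the robust option_b survives and ranks first.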


Selected Reading:

See Thinking, Fast and Slow by Daniel Kahneman and The Undoing Project by Michael Lewis to learn more about cognitive biases, The Sciences of the Artificial by Herbert Simon on bounded rationality, and David Epstein’s recent article “The Peculiar Blindness of Experts” in The Atlantic on Tetlock’s research. Tetlock borrowed the distinction between the two cognitive styles from philosopher Isaiah Berlin’s essay “The Hedgehog and the Fox.”
