Richard M. Adler

Debiasing Critical Decisions: Necessary but not Sufficient

Updated: Nov 7, 2019


A new buzzword has recently entered the lexicon of management consultants – “debiasing.”


This term refers to techniques for combating the cognitive biases that distort people’s judgments and choices. Cognitive biases are elements of human thought – intuitions, beliefs, values, and emotions – that cause people to deviate from making decisions based purely on logical reasoning, facts, and scientific knowledge. For example, people tend to ignore or discount evidence that doesn’t support their beliefs or plans (confirmation bias), while investors hold on to losing stocks too long (loss aversion).


The theory of cognitive biases was developed by two cognitive psychologists, Amos Tversky and Daniel Kahneman. The pair argued that our minds apply heuristic intuitions automatically to deal with uncertainty as we interact with our environments. These mental shortcuts or rules of thumb enable us to size up other people, situations, and risks quickly, using minimal information and mental effort. We are generally unaware that we employ heuristics because they operate below the conscious level. Heuristic reasoning is generally very successful in everyday life. However, Tversky and Kahneman’s experiments highlighted situations where heuristics tend to produce erroneous judgments or choices. For example, we rely on memories of events and people that are familiar, vivid, or recent to guide judgments rather than less noteworthy or less frequently accessed memories that might be more relevant. People also depend on occupational stereotypes and other apparent resemblances rather than proper sampling methods to classify things or estimate magnitudes. Our intuitions are particularly unreliable in the context of critical decisions, which involve complex situations and extended durations. Tversky and Kahneman assign most of the blame for decisions going awry to cognitive biases, as opposed to random errors caused by fatigue, distractions, and variations in knowledge and skill. (For more information, see The Undoing Project by Michael Lewis, Thinking, Fast and Slow by Daniel Kahneman, and Misbehaving by Richard Thaler.)


Debiasing techniques act as countermeasures to heuristic intuitions, overriding distorted judgments and choices after the fact with more deliberate reasoning. For example, businesses can compensate for employees’ bias toward acting in their own self-interest by ensuring that personal incentives align closely with corporate performance goals. “Pre-mortems” are exercises in which leaders imagine that a decision turned out to be a disaster and then try to reconstruct what could have gone wrong. This technique counteracts biases such as over-optimism about the future and over-confidence in the company’s ability to execute complicated plans in fluid situations. Other techniques compensate for biases relating to gathering and interpreting situational data, pursuing overly aggressive or overly conservative strategies, and groupthink.


Consultants such as McKinsey & Company champion (and market) debiasing as the key to reducing errors in business decision-making. This advice is certainly sound, but debiasing is only half of the solution for improving decisions and their outcomes.


Suppose that decision-makers debiased perfectly and compensated fully for errors induced by cognitive biases. They would still have to contend with a second problem called bounded rationality: constraints on human capabilities to reason deliberately about complex social or business situations. The information that we can obtain about critical decisions is generally incomplete and imprecise (e.g., market data, competitor strategies, and customer preferences). Our (social) scientific knowledge is imperfect and incapable of reliably predicting individual, group, or societal behaviors. This second constraint is aggravated by our inherent uncertainty about future trends, forces, and events.


Bounded rationality restricts our ability to assess our current situation, craft a complete set of decision options, and anticipate and compare their consequences in an accurate, timely, and affordable manner. This theory was developed by Herbert Simon, an economist, cognitive psychologist, and computer scientist. He concluded that bounded rationality condemns people to make fallible decisions that are at best satisfactory (what Simon called “satisficing”) rather than optimal.


Thus, decisions turn out badly for two distinct reasons. Cognitive biases cause individuals (and groups) to depart from the idealized rational decision-making assumed by traditional economics. Bounded rationality ensures that our brains, science, and technology lack the “horsepower” to make and execute critical business decisions optimally, so that they produce the consequences we intend. In short, debiasing is necessary to improve decision-making, but not sufficient.


So, what should decision-makers do? In my book, Bending the Law of Unintended Consequences, I propose a method for “test driving” critical decisions. It allows businesses to practice decisions before committing to them, much as consumers test drive cars before buying them. Decision test drives exploit advanced modeling and “what-if” simulation techniques to push back against bounded rationality: they help leaders define more decision options and broader ranges of assumptions about the future, and then analyze projected outcomes more thoroughly than alternative methods. Test drive models incorporate expert knowledge about particular kinds of critical decisions (e.g., what data should be gathered, what assumptions need to be made, and what dynamics drive decision outcomes). This “baked-in” knowledge promotes deliberate reasoning that counters many types of biased judgments and choices in decision analysis. In short, critical decision-making requires a coordinated approach that attacks the bounds of rationality and cognitive biases at the same time.
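To make the simulation idea concrete, here is a minimal sketch of what a “what-if” test drive might look like in code. It samples uncertain assumptions (market size, market share) from ranges rather than single-point forecasts and compares several decision options across thousands of simulated futures. The option names, the profit model, and the assumption ranges are purely illustrative and are not drawn from the book’s methodology.

```python
import random

# Illustrative "what-if" decision test drive (hypothetical model and numbers).
# Each decision option carries an assumed cost and a range of possible
# market-share gains; assumptions about the future are ranges, not forecasts.
OPTIONS = {
    "aggressive_expansion": {"cost": 40.0, "share_gain": (0.05, 0.20)},
    "incremental_growth":   {"cost": 15.0, "share_gain": (0.02, 0.08)},
    "hold_position":        {"cost": 2.0,  "share_gain": (-0.02, 0.01)},
}
MARKET_SIZE = (80.0, 160.0)   # total addressable revenue ($M), assumed range
BASE_SHARE = (0.10, 0.18)     # current market share, assumed range

def simulate_outcome(option, rng):
    """Project one possible payoff for an option under sampled assumptions."""
    market = rng.uniform(*MARKET_SIZE)
    share = rng.uniform(*BASE_SHARE) + rng.uniform(*option["share_gain"])
    return market * max(share, 0.0) - option["cost"]

def test_drive(n_runs=10_000, seed=42):
    """Compare options across many simulated futures instead of one forecast."""
    rng = random.Random(seed)
    for name, option in OPTIONS.items():
        outcomes = sorted(simulate_outcome(option, rng) for _ in range(n_runs))
        median = outcomes[n_runs // 2]
        downside = outcomes[int(n_runs * 0.05)]  # 5th-percentile outcome
        print(f"{name:22s} median={median:7.1f}  5%-worst-case={downside:7.1f}")

if __name__ == "__main__":
    test_drive()
```

Even a toy version like this illustrates the point: by examining a distribution of projected outcomes rather than a single best guess, decision-makers can weigh an option’s typical result against its downside exposure before committing to it.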
