Why do critical decisions in business and government go awry so frequently? Two psychologists, Amos Tversky and Daniel Kahneman, proposed a compelling answer to this question: people make flawed decisions because cognitive biases distort their judgments and choices. (My book, Bending the Law of Unintended Consequences, argues that biases are only half of the story, but they are important causes of failed decisions nonetheless.)
Traditional economics rests on the idealization of human beings as perfectly rational decision-makers. This model—rational actor theory—holds that people can specify their preferences for a set of decision options in a manner that is complete and consistent. Preferences can be expressed as numbers called utilities, which quantify the value or net benefits that the decision-maker assigns to the outcomes for each of his or her options. The theory further posits that people act based on full and relevant information and that individuals (or groups) always select the decision alternative that maximizes their utility.
The good news about rational actor theory is that it enables modeling of all manner of decisions—societal, organizational, and individual—that concern the social sciences, law, and moral philosophy. The bad news is that the theory is demonstrably false: people deviate from its core assumptions in myriad ways when they make decisions. Tversky and Kahneman’s research explains how and why many of these departures occur. Economists have long recognized that people fail to live up to the exemplar of perfect rationality. However, they maintained that the theory remained a viable approximation for analyzing decision-making behavior: any observed deviations from “rational” judgments or choices originate in random errors across individuals and specific situations, and so can be discounted for modeling purposes.
Tversky and Kahneman didn’t deny that people make mistakes in their judgments and choices owing to distractions, fatigue, gaps in knowledge, and so on. However, the pair argued that the primary cause for deviation from the rational ideal was more fundamental than occasional lapses and variations in performance: our cognitive processes display structural properties that systematically dispose us to err in specifiable ways and situations. These distortions are sufficiently pervasive, severe, and consistent to reject rational actor theory as a workable approximation for modeling real-world decision behaviors. Tversky and Kahneman’s work catalyzed the development of behavioral economics, which offers an alternative, more realistic approach for modeling actual decision-making.
Tversky was born in British Palestine to Russian Zionist émigré parents. Kahneman was a European Jew who immigrated to Palestine from France as a child shortly after the end of World War II. Both gravitated independently to the study of psychology in Israel. Both also joined the Israeli military and participated in several of the wars that followed the creation of the Israeli state. Tversky enlisted as a paratrooper and became a decorated officer, while Kahneman provided non-combat support as a psychologist. But the two did not begin working together until both were faculty members at the Hebrew University of Jerusalem. Given their differences in personality and temperament, their partnership was unexpected. Tversky was outgoing, confident, and courageous, often intimidating even friends and colleagues with his keen intelligence and insights. Kahneman was similarly brilliant but socially retiring and sometimes insecure and moody.
Prior to their collaboration, Tversky co-authored a major reference work on measurement in psychology while Kahneman studied perception and its interaction with reasoning. Both also dabbled in the study of human judgment. During his initial stint in the Israeli military, Kahneman studied the problems that interviewers experienced in evaluating recruits and assigning them to suitable services and positions. He traced these problems to stereotypical judgments about personality and education, and to officers’ over-confidence in their interviewing and predictive abilities. In 1954, he developed a standardized personality test that minimized the impact of these negative influences; it proved strikingly effective at predicting success in personnel assignments (and very unpopular among interviewers!) and is still in use today. While at the University of Michigan, Tversky developed a novel theory of similarity judgments: how people compare things such as people, cities, or colors to determine their degree of resemblance.
Thus primed, Tversky and Kahneman began their collaboration in the early 1970s by studying errors in judgments involving uncertainty, such as predicting the likelihoods of events and estimating unknown values. Kahneman was highly skeptical of the prevailing theory that people resorted to intuitive variants of formal statistical (Bayesian) reasoning to estimate probabilities. He convinced Tversky, and the two designed an experimental questionnaire to test the theory. They found that test subjects made consistent kinds of errors in estimating probabilities and mean values. More surprisingly, they found little variation in results across students, psychologists, and more expert subjects like statisticians and mathematical psychologists, despite large differences in analytical training and sophistication. Subjects’ errors on the test did not match Bayesian reasoning, but they could be explained by the subjects assuming that random samples are highly similar to the populations from which they were drawn. This notion of resemblance echoed Kahneman’s earlier experience with Israeli military interviewers, who assigned jobs by matching their intuitive impressions of draftees’ personalities and skills to occupational stereotypes. The pair explored various aspects of similarity intuitions, which they christened representativeness. People are insensitive to sample sizes when making estimates. They also tend to ignore available statistical information in favor of stereotypes, are influenced by supplemental data even when it is extraneous, and generally misjudge the true nature of randomness.
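The insensitivity to sample size is easy to verify by simulation. The sketch below is a hypothetical illustration (not an experiment from their papers) in the spirit of a classic Tversky–Kahneman question: a small hospital and a large hospital both record the days on which more than 60 percent of births are boys. Intuition driven by representativeness says the two hospitals should report similar counts; basic statistics says the small hospital's samples stray from the 50/50 population rate far more often.

```python
import random

random.seed(42)

def fraction_of_extreme_days(births_per_day, days=50_000):
    """Simulate `days` days of births (P(boy) = 0.5) and return the
    fraction of days on which more than 60% of births were boys."""
    extreme = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.6:
            extreme += 1
    return extreme / days

small = fraction_of_extreme_days(15)   # small hospital: ~15 births/day
large = fraction_of_extreme_days(45)   # large hospital: ~45 births/day

# The small hospital records roughly twice as many >60%-boys days:
# small samples resemble their parent population far less reliably.
print(f"small hospital: {small:.3f}, large hospital: {large:.3f}")
```

The exact birth counts here are arbitrary; any small-versus-large pairing shows the same effect, which is precisely what subjects relying on representativeness fail to anticipate.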
Tversky and Kahneman conducted follow-on experiments to investigate judgments about the frequency and subjective probability of events. They found that people’s estimates depend strongly on availability—the ease of recalling events or facts from long-term memory—whether they are actually relevant or not. Events that are vivid are more accessible for recall than less striking ones. For example, highly dramatic events such as terrorist attacks or airplane crashes dominate our estimates of danger, regardless of their actual frequencies of occurrence relative to other accidents. Tversky and Kahneman also identified a third type of intuition that guides quantitative judgments such as predicting values of stock or settling on acceptable prices. They posited that we first arrive at a starting point or anchor, usually based on available data about the situation or an initial calculation, which we then adjust to get a final answer. For example, managers and salespeople often shape negotiations to their advantage by preemptively setting low anchors (for offering salaries) or high anchors (for selling houses or cars). Customers and employees tend to make counter-offers by adjusting from those anchors rather than their own starting points or market baselines.
Tversky and Kahneman argued that representativeness, availability, and anchoring/adjustment are forms of intuitive reasoning that our minds apply automatically as we interact with our environments. These mental rules of thumb, or heuristics, enable us to size up people, situations, and risks very quickly, with minimal information, mental effort, or delay. We are generally unaware that we employ heuristics when making judgments or choices because they operate reflexively, below conscious awareness. Heuristic reasoning is generally very successful in everyday life. However, Tversky and Kahneman argued that their experiments highlighted situations where heuristic shortcuts produce predominantly erroneous results, such as when the data set at hand doesn’t constitute a statistically valid sample. They labeled the decision flaws induced by heuristics gone bad “cognitive biases” to differentiate them from random performance errors. They published their findings for a broad audience in an influential 1974 paper in Science.
Tversky and Kahneman subsequently shifted their research focus to how our attitudes towards risk bias our decisions under uncertainty. This was a natural progression for their inquiry: decisions to take action (or refrain from it) presuppose assessments of the current situation and predictions about the future. These judgments, in turn, drive how we estimate risks for alternate courses of action. And our assessment of risk is a pivotal factor when we evaluate competing alternatives in order to select a preferred decision option. The pair demonstrated that we tend to be averse to risk in situations where we perceive ourselves to be facing potential gains, such as choosing between a guaranteed payout and a gamble with a somewhat higher expected value. However, we tend to seek rather than avoid risk when we face alternatives that entail likely losses, such as holding a stock whose price has declined rather than selling it, or refusing to shut down a failed project. In fact, they demonstrated that you can change people’s perception of risk simply by changing the wording, or framing, of the situation to emphasize loss or gain. The pair quantified the degree to which our preferences vary across these two perspectives and developed a predictive model for analyzing decisions called prospect theory. Prospect theory is important because it quantifies the impact of emotions, such as fear and regret, on preferences in a standardized, “built-in” way that applies to all decisions. By contrast, the leading competing approach, expected utility theory, requires a cumbersome process of interrogating decision-makers about numerous risk-reward trade-offs in order to quantify the impact of risk tolerance on utilities for a specific decision.
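The "built-in" quantification can be sketched concretely. Prospect theory evaluates outcomes with a value function that is concave for gains, convex for losses, and steeper for losses than for gains. The minimal sketch below uses the parameter estimates Tversky and Kahneman published in their 1992 follow-up work (α = β = 0.88, λ = 2.25); these numbers come from that later paper, not from the discussion above.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect theory value function (1992 parameter estimates):
    concave over gains, convex and steeper (by factor lam) over losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Loss aversion: a $100 loss looms larger than a $100 gain.
gain = prospect_value(100)    # ≈ 57.5
loss = prospect_value(-100)   # ≈ -129.5

# Risk attitudes flip with framing:
#   gains:  a sure $50 beats a 50/50 shot at $100 (risk-averse)
#   losses: a 50/50 shot at -$100 beats a sure -$50 (risk-seeking)
prefers_sure_gain = prospect_value(50) > 0.5 * prospect_value(100)
prefers_loss_gamble = 0.5 * prospect_value(-100) > prospect_value(-50)
```

Both gambles have the same expected dollar value as their sure counterparts, yet the curvature of the value function alone reproduces the gain/loss asymmetry described above, with no per-decision elicitation of risk tolerance required.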
Tversky and Kahneman’s partnership fractured over personal differences not long after they relocated to the United States from Israel and completed their work on prospect theory in the early 1980s. Tversky died of cancer in 1996 at age 59. In 2002, Kahneman was awarded the Nobel Prize in Economics for his work (with Tversky) on cognitive biases and prospect theory. He is the only psychologist to receive this honor.
Tversky and Kahneman’s collaboration was noteworthy for its duration and productivity and for the degree to which their intellects and styles complemented one another. Tversky was extremely focused and adept at formal quantitative methods. Kahneman was more qualitatively oriented, wide-ranging in his interests, and had a penchant for practical applications of psychology. Most important, they shared a common research approach based on skepticism, targeting problematic features of existing theories and investigating psychological phenomena by studying failures rather than successes. For example, Kahneman observed that to study how memory works, you want to study forgetting. This contrarian attitude was particularly well-suited for uncovering our deviations from perfect rationality when making judgments and choices.
Suggested Reading
See Michael Lewis’s The Undoing Project for a highly approachable account of Tversky and Kahneman’s pioneering collaboration. Kahneman’s Thinking, Fast and Slow provides a detailed explanation of cognitive biases and how intuitive thinking figures in current theories of human cognition. Richard Thaler, one of the founders of behavioral economics, describes the field’s history from a biographical perspective in Misbehaving.