by Richard M. Adler

Test-Drive Your Critical Decisions

Organizations make critical decisions to address strategic threats and opportunities or enterprise-level operational issues (e.g., financing, organizational design, core processes and infrastructure). Critical decisions impact—and are impacted by—internal and external stakeholder groups with diverse needs and agendas. Their effects extend over months, years, or even decades, often spreading beyond internal boundaries into markets or societies. Additionally, they carry high stakes: poor outcomes can damage or destroy organizations, as well as the careers of decision-makers.


Unfortunately, critical decisions go awry with depressing frequency. New business ventures and products flounder. Mergers and acquisitions run aground, destroying rather than boosting shareholder value. Government laws, regulations, and policies fail to alleviate the social and economic problems they target. Diplomatic and military interventions aggravate rather than resolve tensions or crises. These costly and painful failures can be explained by the Law of Unintended Consequences, which states that decisions to intervene in complex situations create unanticipated and often undesirable outcomes.


Two potent forces power the Law—cognitive biases and bounded rationality. Biases refer to intuitions, beliefs, feelings, and emotions that distort critical judgments and choices (e.g., over-confidence, peer pressure, fear). For example, we tend to interpret situations and predict outcomes of decision options using unreliable “shortcuts” such as stereotypes, rules of thumb, flawed analogies, and vivid but weakly relevant evidence. We must also contend with constraints on our capacities to reason deliberately about critical decisions: incomplete and imprecise information about complex situations and stakeholders, innate uncertainty about future events and forces, and imperfect social scientific knowledge for predicting outcomes.


The Law of Unintended Consequences is a congenital affliction—part of the human condition. You cannot violate or break it. That said, you can bend it (with some effort), and thereby reduce the frequency and severity of its ill effects.


Decision-makers can bend the Law using a method that derives from the familiar process of test driving cars. Consumers rarely buy cars or trucks based solely on showroom inspections or reviews. Instead, they test drive vehicles of interest to help identify the one that best meets their wants and needs. Test drives enable buyers to experience first-hand the performance, handling, and comfort of candidate vehicles before buying them. Granted, road tests are imperfect predictors of success: the vehicle you purchase may contain hidden defects or age badly. Nevertheless, test drives offer a valuable means to reduce the risk of mistakes and disappointment for costly purchases.


By analogy, test driving a critical decision offers insights into its consequences—whether a prospective course of action is likely to meet your organization’s wants, needs, and expectations—before committing to it. Of course, a vehicle is a physical object whereas decisions consist of actions, so decision “test drives” must be virtual rather than tangible. Instead of road testing cars or trucks of interest, you simulate the execution of multiple decision options. And rather than performing various driving maneuvers on different roads, your simulations must explore the likely outcomes of decision alternatives against the backdrop of diverse plausible futures.

Diversity is crucial because the Law rears its ugly head when you bet on decisions that are brittle: that is, their success depends on a particular future coming to pass. It’s much more feasible to formulate a range of plausible futures than to try to predict the correct one. And anticipating how decision options might play out across those futures dramatically reduces vulnerability to unintended consequences. Additionally, comparing likely outcomes across plausible futures highlights the strengths and weaknesses of competing alternatives. This allows you to improve promising options by eliminating under-performing decision elements and adding stronger ones from other alternatives. Decision test drives thereby reduce your risk of poor outcomes and stakeholder disappointments.


The test drive method hinges on improving your anticipation of outcomes—what could happen to you (and key stakeholders) if you do X and the world evolves along path Y? Enhanced anticipation enables you to avoid decision options that produce undesirable consequences, or at least to refine your options to mitigate them. To be effective, decision test drives must account for three distinct types of dynamics, or factors that drive how situations evolve over time:

1. What actions is your organization currently taking & how will they be altered as you execute your critical decision?

2. How might environmental conditions change over time while you implement your decision?

3. What activities are key stakeholders currently performing and how might they modify their behaviors to respond to situational changes—and your decision?
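One way to picture how these three dynamics interact is a single simulation step that updates the situation based on your own actions, drifting environmental conditions, and adaptive stakeholder behavior. The sketch below is purely illustrative: the state variables, coefficients, and thresholds are invented assumptions, not part of the method's actual models.

```python
# Hedged sketch: one time step combining the three dynamics.
# All variable names and coefficients are hypothetical.
from dataclasses import dataclass

@dataclass
class State:
    """Snapshot of the simulated situation at one time step."""
    market_share: float   # our organization's position
    demand_growth: float  # an environmental condition
    rival_spend: float    # a stakeholder's (competitor's) behavior

def step(state: State, our_action: float) -> State:
    # Dynamic 1: our organization's action (e.g., marketing spend) lifts share.
    share = state.market_share + 0.02 * our_action
    # Dynamic 2: environmental conditions evolve regardless of our decision.
    growth = state.demand_growth * 0.98          # demand growth slowly decays
    # Dynamic 3: a stakeholder adapts—the rival raises spend if it is losing.
    rival = state.rival_spend * (1.10 if share > 0.5 else 1.0)
    share -= 0.01 * rival                        # rival spend erodes our share
    return State(min(max(share, 0.0), 1.0), growth, rival)

state = State(market_share=0.40, demand_growth=0.05, rival_spend=1.0)
for month in range(12):
    state = step(state, our_action=1.0)
```

Running the loop for different action levels and initial conditions produces the kind of trajectory comparisons the test drive method relies on.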

Thus, anticipating decision outcomes is like plotting a sailboat’s course across a lake or river. Experienced sailors don’t aim directly for their desired destination on the opposite shore; rather, they plan courses that factor in the influences of prevailing winds and currents. Leaders, similarly, must account for all three drivers of change from their current situation when evaluating decision alternatives.

To illustrate, consider a test drive of competitive marketing strategies for prescription drugs. Drug companies spend on the order of $2B to develop and bring a new drug to market. Amazingly, they spend equal or greater amounts to market and sell these drugs following FDA approval. Surprisingly few tools exist to help brand managers model and analyze these critical investment decisions for their drugs. And those tools rarely take into account the adaptive behaviors of competitors: if a marketing and sales strategy proves successful and gains market share, rivals will become aware of their losses and respond by changing their strategies to recover their positions.

Our test drive model simulates the performance of drug company strategies for their prescription drugs in particular markets, such as treatments for acid reflux disease or specific cancers. To improve anticipation of outcomes and combat the Law, it incorporates the following important dynamics:

  • A predictive model that projects the overall growth of a drug market and relative market shares of competing drugs over time (using a statistical regression equation featuring attributes such as order of market entry, price/Rx, # of adverse side effects, and brand awareness for each drug in a given market)

  • Assumptions about events and trends that will—or might—impact the drug market of interest over the duration of the simulated decision: expiring patents, changes in regulations governing healthcare insurance payers, the annual rates of inflation and of growth in the population of patients requiring treatment using the drugs of interest

  • Marketing and sales strategies for branded drugs in the target market, including both the alternatives being considered by a drug brand manager and what they know about competitor strategies. Each such strategy is specified in terms of three types of decision rules:

- Pricing rules, which map out scheduled prescription price actions over time

- “Marketing mix” rules, which define the spend rates over time for three “channels”: sales calls to physicians (aka “detailing & sampling”), direct-to-consumer ads, and rebates to healthcare payers

- Competitive behaviors, captured as adaptive (agent-based) decision rules.

Competitor rules capture how drug brand managers adjust their marketing mixes in response to changes in their market positions. These rules are modeled as stimulus-response statements, where the “if” clauses describe triggering conditions and the “then” clauses specify actions, as illustrated below. Each competitor is assigned a set of rules modeling its adaptive responses to various changes in market position, which it senses over a period of months. Brand managers can infer these rules about rivals’ behavior patterns from business intelligence collected in-house or purchased from third-party drug-market data subscription services.

[Figure: example decision rule for adaptive competitor behavior]
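A stimulus-response rule of this kind can be sketched as a small function: the “if” clause tests the triggering condition and the “then” clause adjusts the marketing mix. The channel names, threshold, and percentages below are invented for illustration.

```python
# Hypothetical adaptive competitor rule (all names and thresholds invented):
# IF market share dropped more than 2 points over 3 months,
# THEN raise detailing & sampling spend 15% and direct-to-consumer ads 10%.
def rival_rule(share_change_3mo: float, mix: dict) -> dict:
    if share_change_3mo < -0.02:          # triggering condition ("if")
        mix = dict(mix)                   # copy so the original is untouched
        mix["detailing"] *= 1.15          # responsive actions ("then")
        mix["dtc_ads"] *= 1.10
    return mix

mix = {"detailing": 10.0, "dtc_ads": 5.0, "rebates": 2.0}
updated = rival_rule(share_change_3mo=-0.03, mix=mix)
```

In the simulation, each competitor agent carries a set of such rules and evaluates them at every time step against its sensed market position.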

The test drive method makes all of this information directly actionable by simulating the outcomes of a brand manager’s decision options against a range of plausible futures, defined by assumptions about events and trends that will (or might) impact the market and by potential competitor responses. The brand manager can then compare projected market outcomes to identify the most attractive option, guided by the following heuristic rule: select the decision option that avoids “train wreck” results and produces the most attractive outcomes across the range of futures. This choice is likely non-optimal (from the perspective of economists or operations researchers), but it is robust in the face of uncertainty and the Law of Unintended Consequences.
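The heuristic rule can be made concrete: discard any option that produces a disastrous result in even one plausible future, then rank the survivors by their outcomes across all futures. The following sketch uses invented option names and payoff numbers, and averages outcomes as one simple way to rank survivors.

```python
# Hedged sketch of the robust-choice heuristic. Payoffs are illustrative
# (e.g., projected net value of a strategy in each plausible future).
TRAIN_WRECK = 0.0   # floor below which an outcome counts as a "train wreck"

def choose_robust(outcomes: dict) -> str:
    """outcomes maps option name -> list of projected results, one per future."""
    # Keep only options with no disastrous outcome in any future...
    viable = {opt: res for opt, res in outcomes.items()
              if min(res) > TRAIN_WRECK}
    # ...then pick the survivor with the best average outcome.
    return max(viable, key=lambda opt: sum(viable[opt]) / len(viable[opt]))

projected = {
    "aggressive_pricing": [9.0, 4.0, -2.0],  # great upside, one train wreck
    "balanced_mix":       [6.0, 5.0, 3.0],   # solid in every future
    "status_quo":         [4.0, 3.0, 2.0],
}
best = choose_robust(projected)
```

Here the aggressive option is eliminated despite its highest single-future payoff, which is precisely the brittleness the heuristic guards against.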


Regrettably, applying the test driving method at the point of decision isn’t sufficient to defend against the Law, because critical decisions often fail afterwards, during implementation: unanticipated events occur; execution doesn’t proceed according to plan or fails to produce expected results; and stakeholders behave in unanticipated ways.


Fortunately, the test drive method is easily reconfigured to help anticipate the outcomes of your decision mid-execution. Here’s how it works. First, discard all scenarios of futures that feature decision options you didn’t adopt. Clearly, there’s no point in simulating and analyzing them again. Leverage the information you’ve acquired during implementation to prune any of your original possible futures that no longer appear plausible. Replace them with new sets of assumptions that reflect your current uncertainties about the future. Next, update the description of your initial situation and chosen decision to reflect planned actions that you’ve already executed, performance results to date, and any other relevant changes in situational conditions. Simulate the remaining actions in your decision against these updated plausible futures. If the projected outcomes all remain favorable, you can relax; if not, you need to diagnose emerging problems and adjust your decision mid-course to avoid or mitigate them. In essence, the test drive morphs into a monitoring tool, much like the radar-based Early Warning Systems used to detect missile attacks during the Cold War. This transition is not surprising: there’s no point in test driving a car once you’ve bought it!
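The monitoring reconfiguration described above follows a simple loop: prune implausible futures, replay the remaining planned actions from the observed current state, and flag the decision if any projection falls below an acceptable floor. The sketch below is a toy version of that check; the state variable, acceptability threshold, and "drag" futures are invented for illustration.

```python
# Hypothetical mid-course check (all numbers and names are illustrative).
def midcourse_check(current_share, remaining_spend, futures):
    """Re-simulate the remaining plan from the observed state against the
    pruned set of plausible futures (each modeled as a share 'drag').
    Returns True if every projection stays acceptable (share >= 0.30)."""
    projections = [current_share + 0.02 * remaining_spend - drag
                   for drag in futures]
    return all(p >= 0.30 for p in projections)

# Eighteen months in: share has slipped to 0.33, six months of spend remain,
# and the original futures have been pruned to three still-plausible drags.
futures = [0.01, 0.03, 0.05]
ok = midcourse_check(current_share=0.33, remaining_spend=3.0, futures=futures)
```

If the check returns False, that is the early-warning signal to diagnose the emerging problem and adjust the decision mid-course.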


At first glance, the decision test drive method looks like an intimidating amount of work. Fortunately, most of the “heavy lifting” is already completed before you start, thanks to two supporting tools: a test drive software platform and decision modeling templates created in collaboration with subject matter experts. The platform consists of a modeling framework, simulation engine, and comparative analytics tools. Templates provide best practice “skeletons” for making particular types of decisions (e.g., competitive marketing strategy, managing enterprise risks, and enabling transformational change). Each template specifies the minimal set of data inputs to gather; performance metrics for evaluating outcomes; decision-specific events, trends, and forces; and the components for formulating competent decision options. These templates defend against many important cognitive biases, while the software platform pushes back against the bounds of rationality to improve anticipation of outcomes of critical decisions in complex situations. The resulting level of effort required to test drive decisions is comparable to using a traditional spreadsheet model.


In summary, the test drive method uses simulations to project and explore the likely outcomes of decision options across a range of plausible futures. This process uncovers unintended consequences, enabling organizations to avoid or correct costly mistakes prior to execution and improve outcomes. In essence, it enables leaders to practice critical decisions and improve them by learning from virtual rather than real mistakes. The method also uncovers emerging problems as decisions are being executed, allowing for prompt mid-course adjustments. Finally, test drive simulations provide an audit trail that contributes to institutional memory and provides a basis for learning and improving decision-making in the future.

©2019 by Richard M. Adler
