By Richard M. Adler

Knowledge Management Strategies for Critical Decision-Making

Updated: Dec 4, 2019

According to humorist Garrison Keillor, the children in his fictitious town of Lake Wobegon are all above average. Businesses undoubtedly wish that their decision-makers were similarly gifted. Unfortunately, statistics in the real world don’t work this way. Levels of knowledge and skills generally vary, often dramatically, which entails that some individuals are better at decision-making than others. How can businesses manage this variance of competency to maximize the likelihood of sound decisions and positive outcomes?


A decision is critical if it relates to enterprise-wide operations or strategies, plays out over months or years, and produces outcomes that affect the well-being or survival of the organization.

Critical decision-making is a knowledge-intensive process. All stages of this process are informed by various kinds of expert knowledge:

  • What data should be collected and what isn’t relevant for a particular type of critical decision

  • How to interpret relevant information about critical situations (i.e., sensemaking)

  • What constitutes a “complete” decision option to respond to the challenges at hand

  • How to project the likely outcomes for decision options

  • How to compare projected outcomes to identify the “best” decision option

  • What metrics to watch when monitoring execution results.
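
To make the shape of this process concrete, here is a minimal sketch in Python (the enumeration and its names are mine, invented for illustration) that lists the stages and, in comments, the expert knowledge each one draws on:

```python
from enum import Enum, auto

class DecisionStage(Enum):
    """Stages of a critical decision, each informed by expert knowledge."""
    COLLECT_DATA = auto()       # what data is relevant (and what isn't) for this decision type
    MAKE_SENSE = auto()         # how to interpret information about the situation
    FRAME_OPTIONS = auto()      # what a "complete" decision option looks like
    PROJECT_OUTCOMES = auto()   # how to project likely outcomes for each option
    COMPARE_OUTCOMES = auto()   # how to identify the "best" option
    MONITOR_EXECUTION = auto()  # which metrics to watch during execution
```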

Unfortunately, much of this performance knowledge is not immediately accessible to businesses facing critical decisions. The desired experience and skills may be distributed across many workers and locations, or they may reside outside the company in universities or consulting firms. Moreover, companies often make some types of critical decisions so infrequently that they find it hard to build a critical mass of expert knowledge in-house and retain it over time. Other relevant data and knowledge are generally dispersed across databases, spreadsheets and other analytical tools, and documents such as proposals, product designs, methodologies, and reports.


Knowledge Management (KM) targets many of these accessibility problems. In Working Knowledge, Tom Davenport and Larry Prusak define KM as “the process of capturing, distributing, and effectively using knowledge.” Thus, KM helps organizations identify what they know and how they know it, and expedites workers’ access to these resources on demand. The primary KM approach to managing human expertise is to inventory who knows what within an organization (e.g., by mining resumes and performance reviews and conducting interviews or surveys). These results are stored in online directories of individual and group skills, work experiences, and social networks. Similarly, KM manages information and know-how in digital formats by storing and indexing documents in centralized repositories for rapid retrieval through search engines and database query tools. Thus, traditional KM doesn’t actually manage knowledge about how to perform decision-making activities. Instead, it productizes information about expertise and other knowledge assets, and enables workers to locate those data quickly and uniformly across the enterprise.
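
As a rough illustration of the “who knows what” directory idea, here is a minimal Python sketch. The class and field names are hypothetical, and a real KM system would sit atop a proper database and search engine rather than an in-memory list:

```python
from dataclasses import dataclass, field

@dataclass
class ExpertProfile:
    """One entry in a KM 'who knows what' directory."""
    name: str
    skills: set[str] = field(default_factory=set)
    experiences: list[str] = field(default_factory=list)

class ExpertiseDirectory:
    """Minimal in-memory index of expertise, searchable by skill."""
    def __init__(self) -> None:
        self._profiles: list[ExpertProfile] = []

    def add(self, profile: ExpertProfile) -> None:
        self._profiles.append(profile)

    def find_by_skill(self, skill: str) -> list[ExpertProfile]:
        return [p for p in self._profiles if skill in p.skills]

# Usage: locate everyone tagged with pricing expertise.
directory = ExpertiseDirectory()
directory.add(ExpertProfile("A. Chen", {"pricing", "forecasting"}))
directory.add(ExpertProfile("B. Osei", {"logistics"}))
print([p.name for p in directory.find_by_skill("pricing")])  # ['A. Chen']
```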


Expediting access to relevant expertise and assets is certainly beneficial to decision-makers. But traditional KM leaves important gaps. Experts, by their nature, tend to be in short supply; their capacity is limited and demand for their knowledge often runs high. They may be unavailable to support a critical decision, or calling them away from their current task to help with an urgent decision robs Peter to pay Paul. In addition, the assets curated in KM repositories generally serve as templates or exemplars, but they rarely articulate performance know-how in ways that enable non-expert workers to apply it on their own in novel situations. Finally, how-to knowledge that is embedded in spreadsheet macros or other software code is difficult to locate, decipher, and adapt to new critical situations.


My book, Bending the Law of Unintended Consequences, describes a method for “test driving” critical decisions that addresses these problems. This method employs “what-if” simulations to practice decisions, projecting their outcomes against a range of assumptions about possible futures. It also provides guidelines for identifying decision options that are robust in the face of uncertainty and for refining those options through virtual trial-and-error.
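
To give a flavor of the “what-if” idea, here is a deliberately tiny Python sketch: project each option across many sampled futures and prefer options whose worst outcomes remain tolerable. The outcome formula, option names, and parameters are all invented for illustration; actual test drive models are far richer simulations:

```python
import random

def project_outcome(option: str, assumptions: dict) -> float:
    """Toy outcome model: score an option under one set of assumptions.
    A real test drive model would be a full simulation, not a formula."""
    growth = assumptions["market_growth"]
    rivalry = assumptions["competitor_response"]
    aggressiveness = {"expand": 1.5, "hold": 1.0, "retrench": 0.6}[option]
    return aggressiveness * (growth - rivalry)

def test_drive(options, n_futures=1000, seed=7):
    """Project each option across many sampled futures; favor options
    whose *worst* outcomes remain acceptable (robustness, not just the mean)."""
    rng = random.Random(seed)
    results = {}
    for option in options:
        outcomes = []
        for _ in range(n_futures):
            assumptions = {
                "market_growth": rng.uniform(-0.05, 0.15),
                "competitor_response": rng.uniform(0.0, 0.10),
            }
            outcomes.append(project_outcome(option, assumptions))
        results[option] = (min(outcomes), sum(outcomes) / n_futures)
    return results

for option, (worst, mean) in test_drive(["expand", "hold", "retrench"]).items():
    print(f"{option:9s} worst={worst:+.3f} mean={mean:+.3f}")
```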


The test drive method also embodies a KM strategy: it captures and packages “how-to” knowledge as software and models. Employees who are competent, but not experts, can leverage these tools for particular decision-making tasks. The test drive method partitions performance knowledge about critical decision-making into three tiers of knowledge categories and worker roles, shown in Figure 1 below.

Figure 1. Categories and Roles for Critical Decision-Making Knowledge


Tier 1 captures expertise about decision support in general. Software support tools are necessary to make it practical and affordable to develop and deliver test drive models tailored to particular types of critical decisions. Think of these tools as spreadsheet engines on steroids. Out of the box, spreadsheet programs are both powerful and useless. They offer diverse modeling, simulation, and analysis capabilities, but these tools only produce value when they are leveraged to build models that solve particular business or engineering tasks like projecting sales, tracking cash flows, or optimizing product designs.


Developers who lack access to reusable decision support tools must build decision models from scratch. But custom programming (and testing and maintenance) is expensive, time-consuming, and highly variable in quality. So, Tier 1 decision support knowledge is packaged as a generalized engine or platform for conducting test drives of critical decision models. This platform provides:

  • Editors and databases for specifying and managing data about decisions

  • Simulation tools that project outcomes for decision options

  • Analytical tools that produce summary reports and graphs for comparing projected outcomes.

These sophisticated suites of tools must either be purchased or developed and maintained by senior software architects and developers.
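
One way to picture the shape of such a platform is as a set of abstract interfaces mirroring the three capability groups above. This is a sketch with invented names, not the actual architecture of any particular product:

```python
from abc import ABC, abstractmethod

class Scenario(ABC):
    """A decision scenario the platform can simulate (hypothetical interface)."""

class DecisionPlatform(ABC):
    """Generalized Tier 1 engine: the same three capability groups as above."""

    @abstractmethod
    def edit_scenario(self, scenario_id: str, updates: dict) -> Scenario:
        """Editors/databases: specify and manage data about decisions."""

    @abstractmethod
    def simulate(self, scenario: Scenario, horizon_months: int) -> dict:
        """Simulation tools: project outcomes for a decision option."""

    @abstractmethod
    def compare(self, projections: list[dict]) -> str:
        """Analytics: summarize and rank projected outcomes."""
```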


Tier 2 consists of test drive models for particular types of critical decisions. Each model defines different types of entities, attributes, relationships, and dynamics specific to the type of decision of interest, using the modeling and simulation tools provided by a Tier 1 tool set. These building blocks are assembled into templates called dynamic scenarios for representing and analyzing decisions. Some models will be “vertical,” or specific to particular industry or government sectors and decisions such as marketing pharmaceuticals or counter-terrorism investment strategies. Other models will be “horizontal,” cutting across industry verticals, such as our test drive model for enabling organizational change. Software developers with application-level skills collaborate with subject matter experts in particular sectors and types of critical decisions to design, build, and validate these decision models.
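
In code, a Tier 2 template might bundle these building blocks along the following lines. The sketch assumes simple Python dataclasses with invented field names; an actual model would define far richer semantics for relationships and dynamics:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A modeled actor or asset (e.g., a market, a competitor, a workforce)."""
    name: str
    attributes: dict[str, float] = field(default_factory=dict)

@dataclass
class DynamicScenarioTemplate:
    """Tier 2 building blocks assembled for one *type* of decision.
    Field names are illustrative, not an actual schema."""
    decision_type: str  # e.g., "organizational change"
    entities: list[Entity] = field(default_factory=list)
    # (subject, relation, object) triples, e.g., ("workforce", "reports_to", "management")
    relationships: list[tuple[str, str, str]] = field(default_factory=list)
    dynamics: dict[str, str] = field(default_factory=dict)  # named behavior rules
```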


Tier 3 captures and exploits data and knowledge about specific critical decisions confronting particular organizations. Work at this level consists of populating the Tier 2 templates, running simulations of those dynamic scenarios, analyzing their projected outcomes, and iterating this test drive process as required to identify and refine a robust decision option. A dynamic scenario specifies three kinds of information: what is known about the current situation, one decision option, and one set of “what-if” assumptions about how the business environment and relevant parties of interest such as stakeholders and competitors might behave while the decision is being implemented. Work roles at this level consist of decision-makers supported by “power users” of the test drive decision model, such as analysts or consultants trained to use the model.
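
Here is a minimal sketch of the Tier 3 workflow, assuming the simulation engine is supplied by Tier 1 and the template by Tier 2 (all parameter names are illustrative):

```python
def run_test_drive(template, situation, options, assumption_sets, simulate):
    """Tier 3 workflow sketch: populate the template, simulate every
    (option, assumptions) pairing, and collect outcomes for analysis."""
    outcomes = {}
    for option in options:
        for i, assumptions in enumerate(assumption_sets):
            scenario = {  # one dynamic scenario instance
                "template": template,
                "situation": situation,
                "option": option,
                "assumptions": assumptions,
            }
            outcomes[(option, i)] = simulate(scenario)
    return outcomes  # analysts review these and iteratively refine the options
```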


From a KM perspective, Tier 2, decision-specific expertise, is the most interesting of the three categories. The goal at this level is to capture and package the knowledge enabling expert performance in critical decision-making within an interactive software program. Users of such decision support systems can be competent practitioners, but need not be experts. Dynamic scenarios supply expert knowledge to users about metrics, inputs, decision options, and “what-if” assumptions. They act as templates that extend or amplify an individual’s knowledge about the minimal set of data required to describe a decision and its context, what constitutes a complete decision, how to measure performance, what kinds of variations in future conditions should be explored, and how decisions and background conditions are likely to evolve over time. Codifying all of this knowledge is crucial because of the scarcity, demand, limited bandwidth, and cost of expertise.


In effect, decision test drive models provide knowledge “engines” that standardize performance and raise the lowest common denominator of skill. The test drive method can’t ensure that all decisions will be above average. However, it does help companies marshal available knowledge to raise the average level of decision quality and outcomes.
