Decision Intel
The Bias Genome · Quarterly

Which cognitive biases predict
which kinds of strategic failure.

A public map drawn from 146 real strategic decisions — 131 failures and 15 successes — across 12 industries. Methodology open. Data citable. Refreshed as consenting customer orgs opt in.

Baseline failure rate in this dataset: 90%. Every “failure lift” below is multiplied against this baseline.

  • Named biases: 20 (DI-B-001 → DI-B-020)
  • Case studies: 146, hand-curated
  • Industries: 12 with meaningful coverage
  • Data source: seed, going live on customer opt-in
  • Refreshed: 2026-04-16 (ISO date)
  • Most dangerous: Groupthink (DI-B-004) · 1.1x failure lift vs baseline · n=63
  • Most prevalent: Overconfidence Bias (DI-B-007) · appears in 87 of 146 cases (60%)
  • Most costly when uncaught: Availability Heuristic (DI-B-003) · avg impact score 92 across failures · n=10
  • Most underestimated: Recency Bias (DI-B-015) · 1.1x failure lift despite low prevalence (10%)

The risk landscape.

Every bias is plotted on two axes: how often it appears, and how much it lifts the failure rate. The top-right quadrant is where your audit energy pays off.

[Chart: The risk landscape · Prevalence × Failure lift · 21 biases]
Axes: prevalence (share of cases containing the bias, 0–60%) vs. failure lift vs baseline (0.0x–2.0x; baseline 1.0x). Bubble size reflects sample size: trust the dot as it gets bigger. Quadrants are directional: treat small-n points as signal, not statistic.
Quadrants: Common & dangerous (prioritize these) · Rare but deadly (watch for n growth) · Common, containable (surfaced often, usually caught) · Low concern (rare and non-dangerous).
Labeled bubbles: Groupthink, Authority Bias, Confirmation Bias, Overconfidence Bias, Planning Fallacy. Max lift in view: 2.2x · max prevalence: 60%.
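The quadrant logic described above can be sketched in a few lines. This is an illustrative reconstruction, not Decision Intel's exact rule: the 20% prevalence cutoff is an assumption, and the "dangerous" boundary is taken as lifts above the 1.0x baseline.

```python
# Illustrative sketch of the risk-landscape quadrant assignment.
# The 20% prevalence cutoff is an assumption for illustration;
# "dangerous" here means failure lift above the 1.0x baseline.

def quadrant(prevalence: float, failure_lift: float,
             prevalence_cutoff: float = 0.20) -> str:
    """Classify a bias into one of the four risk-landscape quadrants."""
    common = prevalence >= prevalence_cutoff
    dangerous = failure_lift > 1.0  # lifts the failure rate above baseline
    if common and dangerous:
        return "Common & dangerous"   # prioritize these
    if dangerous:
        return "Rare but deadly"      # watch for n growth
    if common:
        return "Common, containable"  # surfaced often, usually caught
    return "Low concern"

# Example: Groupthink at 43% prevalence and 1.1x lift
print(quadrant(0.43, 1.1))  # → Common & dangerous
```

Under these assumed cutoffs, Recency Bias (10% prevalence, 1.1x lift) lands in "Rare but deadly", which matches its "most underestimated" billing above.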

The leaderboard.

Sorted by failure lift — how much more often a decision fails when this bias is present, relative to the baseline. Filter by industry to narrow the slice.

Scored across all 146 seed cases. n = sample size; ⚠ marks biases with n<3 (directional only).
#  · Bias · Failure lift · Prevalence · n · Insight
01 · Groupthink (DI-B-004) · 1.1x · 43% · 63 · modest failure lift vs baseline; often paired in Echo Chamber; peaks in financial services
02 · Recency Bias (DI-B-015) · 1.1x · 10% · 15 · modest failure lift vs baseline; peaks in financial services
03 · Cognitive Misering (DI-B-016) · 1.1x · 23% · 33 · modest failure lift vs baseline; peaks in financial services
04 · Gambler's Fallacy (DI-B-018) · 1.1x · 2% · 3 · modest failure lift vs baseline; peaks in financial services
05 · Halo Effect (DI-B-017) · 1.1x · 12% · 18 · modest failure lift vs baseline; peaks in financial services
06 · Selective Perception (DI-B-014) · 1.1x · 11% · 16 · modest failure lift vs baseline; peaks in financial services
07 · Bandwagon Effect (DI-B-006) · 1.1x · 12% · 17 · modest failure lift vs baseline; peaks in technology
08 · Zeigarnik Effect (DI-B-019) · 1.1x · 6% · 9 · modest failure lift vs baseline; peaks in aerospace
09 · Availability Heuristic (DI-B-003) · 1.1x · 7% · 10 · modest failure lift vs baseline; peaks in aerospace
10 · Optimism Bias · 1.1x · 16% · 24 · modest failure lift vs baseline; peaks in government
11 · Paradox of Choice (DI-B-020) · 1.1x · 2% · 3 · modest failure lift vs baseline; peaks in government
12 · Hindsight Bias (DI-B-008) · 1.1x · 10% · 15 · modest failure lift vs baseline; peaks in financial services
13 · Authority Bias (DI-B-005) · 1.1x · 42% · 62 · modest failure lift vs baseline; often paired in Yes Committee; peaks in financial services
14 · Framing Effect (DI-B-013) · 1.1x · 16% · 23 · modest failure lift vs baseline; peaks in technology
15 · Confirmation Bias (DI-B-001) · 1.1x · 50% · 73 · modest failure lift vs baseline; often paired in Echo Chamber; peaks in technology
16 · Overconfidence Bias (DI-B-007) · 1.0x · 60% · 87 · no clear failure signal; often paired in Optimism Trap; peaks in technology
17 · Planning Fallacy (DI-B-009) · 1.0x · 31% · 45 · no clear failure signal; often paired in Blind Sprint; peaks in technology
18 · Sunk Cost Fallacy (DI-B-011) · 1.0x · 23% · 34 · no clear failure signal; often paired in Sunk Ship; peaks in government
19 · Status Quo Bias (DI-B-012) · 0.9x · 36% · 52 · no clear failure signal; often paired in Status Quo Lock; peaks in technology
20 · Anchoring Bias (DI-B-002) · 0.9x · 52% · 76 · no clear failure signal; often paired in Optimism Trap; peaks in technology
21 · Loss Aversion (DI-B-010) · 0.9x · 28% · 41 · no clear failure signal; often paired in Status Quo Lock; peaks in technology

Toxic combinations.

Named patterns where two biases compound. Detecting a compound pattern in a live memo is 8x harder than spotting either bias alone: the problem our toxic-combination engine was built for.

[Network diagram: Toxic network · How the biases combine]
Inner ring: biases that participate in multiple toxic patterns (Confirmation Bias, Groupthink, Overconfidence Bias, and Loss Aversion each appear in two). Also connected: Sunk Cost Fallacy, Anchoring Bias, Status Quo Bias, Authority Bias, Planning Fallacy. 9 biases · 7 named patterns · 7 toxic edges.
Echo Chamber · Confirmation Bias + Groupthink · n=54

Confirmation bias amplified by unchallenged consensus. Teams hear what they already believe.

Appears in: The Coca-Cola Company (1985) · U.S. Navy (1988) · NASA / Perkin-Elmer (1990)

Optimism Trap · Anchoring Bias + Overconfidence Bias · n=52

Favorable initial estimates become reference points; downside scenarios are discounted.

Appears in: Long-Term Capital Management (1998) · Blockbuster (2000) · AOL Time Warner (2000)

Status Quo Lock · Status Quo Bias + Loss Aversion · n=37

The fear of loss from any change outweighs the documented cost of inaction.

Appears in: Johnson & Johnson (1970) · Eastman Kodak (1975) · Xerox (1979)

Yes Committee · Groupthink + Authority Bias · n=33

Deference to authority suppresses dissent; decisions ratified rather than debated.

Appears in: Continental Illinois National Bank (1984) · NASA (1986) · Barings Bank (1995)

Blind Sprint · Overconfidence Bias + Planning Fallacy · n=30

Overconfidence meets systematic underestimation of time and complexity.

Appears in: Continental Illinois National Bank (1984) · The Coca-Cola Company (1985) · U.S. Navy (1988)

Sunk Ship · Sunk Cost Fallacy + Confirmation Bias · n=24

Past investment justifies continued commitment — the 'we're too deep to stop' pattern.

Appears in: Eastman Kodak (1975) · Xerox (1979) · US Department of Defense / Lockheed Martin (2001)

Doubling Down · Sunk Cost Fallacy + Loss Aversion · n=3

Escalating commitment to a losing course to avoid realizing the loss.

Appears in: Barings Bank (1995) · Long-Term Capital Management (1998) · Amaranth Advisors (2006)
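The pattern counts above (e.g. Echo Chamber at n=54) can be derived by counting cases in which both biases of a named pair co-occur. A minimal sketch, assuming each case is represented as a set of bias names; the case records here are illustrative, not the Decision Intel dataset:

```python
from itertools import combinations
from collections import Counter

# Sketch: derive pattern n-counts by counting how often each bias
# pair appears together in the same case. Case data is illustrative.

def pair_counts(cases: list[set[str]]) -> Counter:
    """Count co-occurrences of every bias pair across all cases."""
    counts: Counter = Counter()
    for biases in cases:
        # sort so ("A", "B") and ("B", "A") collapse to one key
        for pair in combinations(sorted(biases), 2):
            counts[pair] += 1
    return counts

cases = [
    {"Confirmation Bias", "Groupthink"},
    {"Confirmation Bias", "Groupthink", "Authority Bias"},
    {"Sunk Cost Fallacy", "Loss Aversion"},
]
counts = pair_counts(cases)
print(counts[("Confirmation Bias", "Groupthink")])  # → 2
```

A real engine would then map high-count pairs onto the named patterns (Echo Chamber, Yes Committee, and so on) and flag cases containing both members as being at compound risk.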
Method · How this dataset is built
  • Each case is a real, documented strategic decision drawn from SEC filings, NTSB reports, FDA actions, post-mortems, or academic case studies.
  • Biases are assigned per-case by applying the Decision Intel taxonomy (DI-B-001 → DI-B-020). Every named bias links to peer-reviewed academic sources at /taxonomy.
  • Failure lift = failure rate among cases with this bias ÷ baseline failure rate across the full dataset (90%).
  • Sample-size gate: headline rankings require n ≥ 5. Rows with n < 3 are shown dimmed with a ⚠ — they are directional only.
  • Honest selection bias: famous strategic failures dominate the public record. Industries with small coverage (aerospace, entertainment) should be read as signal, not statistic.
  • As consenting customer orgs opt into anonymized outcome sharing (see Settings → Privacy when logged in), their data supplements this seed. The live genome endpoint at /api/intelligence/bias-genome will take over once n ≥ 3 consenting orgs have reported outcomes.
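The failure-lift formula and sample-size gate above can be sketched as follows. This is a minimal illustration under assumed field names (each case as a pair of its bias set and a failed/succeeded flag), not the production scoring code:

```python
# Sketch of: failure lift = failure rate among cases with this bias
# ÷ baseline failure rate (90%), gated so headline figures need n >= 5.
# Case records are illustrative, not the real seed dataset.

BASELINE = 0.90  # baseline failure rate across the full dataset

def failure_lift(cases, bias, min_n=5):
    """Return (lift, n); lift is None below the headline gate."""
    outcomes = [failed for biases, failed in cases if bias in biases]
    n = len(outcomes)
    if n < min_n:
        return None, n  # below the gate: directional only
    lift = (sum(outcomes) / n) / BASELINE
    return round(lift, 2), n

# 9 failures and 1 success → 90% failure rate, i.e. 1.0x the baseline
cases = [({"Groupthink"}, True)] * 9 + [({"Groupthink"}, False)]
print(failure_lift(cases, "Groupthink"))  # → (1.0, 10)
```

Sorting biases by this lift (descending) reproduces the leaderboard ordering; prevalence is simply n divided by the total case count.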
Want to see this on your own memo?

Run your next strategic memo through the same taxonomy.

Upload takes 60 seconds. Your data stays yours — anonymized aggregation is opt-in.

Audit your memo · See the full taxonomy
Seed snapshot · 2026-04-16 · Live updates begin when ≥ 3 customer orgs have opted into anonymized outcome sharing.