Decision Intel
Inside the engine

How we audit a strategic memo.

Twelve specialized agents. Thirty cognitive biases. A ten-pattern interaction model grounded in Kahneman, Klein, and Tetlock. Under sixty seconds.

This is a general-but-detailed walk-through of our methodology. It omits proprietary weights and prompts by design — everything you see here is public-safe, citable, and reproducible against the academic record.

Try it on a memo · See the taxonomy · Read the research
12-node pipeline · under 60 seconds
What happens when you press audit.
01 · Preprocessing · 3 nodes
Redact · Structure · Contextualize
02 · Analysis (parallel) · 7 nodes
Seven specialized agents run at once
03 · Synthesis · 2 nodes
Reconcile · Score deterministically
The pipeline

Twelve specialized agents. Three zones.

Every memo passes through a sequential preprocessing chain, then a parallel fan-out of seven analysis agents that reason over the same shared context, then a two-step synthesis that reconciles the signals and computes a deterministic score. Click any node to see what it does.
Preprocessing → Analysis (7 parallel agents) → Synthesis → Output (DQI · 0–100 · A–F)
GDPR Anonymizer · PII redacted before any LLM see…
Data Structurer · Parses sections, speakers, and…
Intelligence Gatherer · Extracts topic, industry, and r…
Bias Detective · Detects 30+ cognitive biases wi…
Verification · Fact-checks claims and maps com…
Simulation · Five steering-committee persona…
Forgotten Questions · Surfaces the questions the memo…
Noise Judge · Three independent judges, measu…
Deep Analysis · Linguistic, logical, and strate…
RPD Recognition · Pattern-matches against a histo…
Meta Judge · Reconciles the seven parallel s…
Risk Scorer · Computes the final DQI — determ…
01 · Preprocessing → 02 · Analysis (parallel) → 03 · Synthesis. Each zone runs in order. Inside Analysis, all seven agents run simultaneously against the same shared context.
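The three-zone flow described above can be sketched in a few lines of Python. This is an illustrative orchestration only: the agent names mirror the node cards, but `run_agent`, the context fields, and the placeholder score are invented for the sketch, not the production implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical agent identifiers; real prompts and weights are proprietary.
ANALYSIS_AGENTS = [
    "bias_detective", "verification", "simulation", "forgotten_questions",
    "noise_judge", "deep_analysis", "rpd_recognition",
]

def run_agent(name: str, context: dict) -> dict:
    # Stand-in for a real LLM-backed agent call.
    return {"agent": name, "signal": f"{name} read {context['memo_id']}"}

def run_pipeline(memo: str) -> dict:
    # 1. Sequential preprocessing: redact, structure, contextualize (elided).
    context = {"memo_id": "demo", "text": memo}
    # 2. Parallel fan-out: all seven agents see the same shared context.
    with ThreadPoolExecutor(max_workers=len(ANALYSIS_AGENTS)) as pool:
        signals = list(pool.map(lambda n: run_agent(n, context), ANALYSIS_AGENTS))
    # 3. Synthesis: reconcile signals, then score deterministically (placeholder).
    return {"signals": signals, "dqi": 78}
```

The design point the sketch preserves: preprocessing and synthesis are strictly sequential, while the seven analysis agents share one immutable context and never wait on each other.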
Bias detection

Thirty-plus cognitive biases. Every detection citable.

Our taxonomy is published openly at /taxonomy (DI-B-001 through DI-B-020), extended with eleven strategy-specific biases drawn from Stanford VC and PE decision research. Every detection comes back with an excerpt, a severity, and a confidence score.
30+ cognitive biases
20 general (DI-B-001–020)
11 strategy-specific
0 detections without an excerpt
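The per-detection contract (taxonomy ID, excerpt, severity, confidence) can be written down as a record type. A minimal sketch; the field names and the severity scale here are assumptions for illustration, not the production schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class BiasDetection:
    bias_id: str       # taxonomy ID, e.g. "DI-B-001"
    excerpt: str       # verbatim passage that triggered the detection
    severity: str      # assumed scale: "low" / "medium" / "high"
    confidence: float  # 0.0–1.0

d = BiasDetection("DI-B-001", "Every signal confirms our thesis.", "high", 0.91)
```

Making the excerpt a required field encodes the guarantee above: zero detections without an excerpt.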
DI-B-001
Confirmation Bias

Selectively citing evidence that supports the dominant hypothesis and dismissing evidence that would reverse the call.

Kodak · 1975–2012 · See the case
DI-B-004
Groupthink

Suppressing dissent to maintain group harmony — the memo reads like unanimous consensus where there should be friction.

U.S. Government · 1961 · See the case
DI-B-005
Authority Bias

Senior-voice framing replaces independent judgment. The argument defers to position, not evidence.

Yale University · 1963 · See the case
DI-B-007
Overconfidence

Stated certainty far exceeds what the evidence supports. Confidence language without commensurate calibration.

LTCM · 1998 · See the case
DI-B-009
Planning Fallacy

Timelines and costs estimated bottom-up instead of against comparable reference classes. Understated by design.

NSW Government · 1957–1973 · See the case
DI-B-012
Status Quo Bias

Preference for the current path dressed up as strategic discipline. Inaction gets the benefit of every doubt.

Blockbuster · 2000–2010 · See the case
See all 20 biases with academic citations
Toxic combinations

Individual biases are features. Combinations are catastrophic.

Our twenty-by-twenty interaction matrix assigns a score to every pair of biases. Context amplifiers multiply the score when monetary stakes are high, dissent is absent, or time pressure is active. False-positive damping kicks in when a pattern is flagged but the outcome succeeded. Over time, each organization calibrates its own weights from its own outcomes, which is why this section of the engine is the hardest to replicate.
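The amplify-then-damp logic reads naturally as a small pure function. A sketch with invented multipliers (1.5×, 1.3×, 1.2×, and 0.5× damping); the real weights are calibrated per organization and proprietary.

```python
def pair_score(base: float, *, high_stakes: bool, no_dissent: bool,
               time_pressure: bool, outcome_succeeded: bool) -> float:
    """Illustrative score for one bias pair; multipliers are assumptions."""
    score = base
    # Context amplifiers multiply the score when a risk condition is active.
    for active, factor in ((high_stakes, 1.5), (no_dissent, 1.3), (time_pressure, 1.2)):
        if active:
            score *= factor
    # False-positive damping: pattern flagged, but the decision worked out.
    if outcome_succeeded:
        score *= 0.5
    return round(score, 2)
```

Note the ordering: amplifiers compound multiplicatively, and damping is applied last so a successful outcome discounts the fully amplified score, not the base.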
Toxic network
How the biases combine
Inner ring: biases that participate in multiple toxic patterns. Edge color = pattern. Hover a pattern below to isolate its edges.
Confirmation Bias (2) · Groupthink (2) · Overconfidence Bias (2) · Loss Aversion (2) · Sunk Cost Fallacy · Anchoring Bias · Status Quo Bias · Authority Bias · Planning Fallacy → Decisions at compound risk
9 biases · 7 named patterns · 7 toxic edges · Hub numbers = pattern participation count.
Decision Quality Index

A FICO score for decisions. Zero to a hundred, A through F.

The final DQI is a weighted composite across six components. The weights are fixed and transparent; the scores inside each component are computed deterministically from the earlier pipeline outputs. Same inputs always produce the same DQI.
DQI v2.0.0 · six weighted components
How a memo becomes a score between 0 and 100.
Bias Load · 28%

Severity-weighted count of detected cognitive biases, normalized to document complexity.

Noise Level · 18%

Inter-judge variance from the three-judge noise panel. Low variance = stable reasoning.

Evidence Quality · 18%

Share of quantitative claims that verify against grounded search, plus source reliability.

Process Maturity · 13%

Was a prior submitted, outcomes tracked, dissent present, right committee size?

Compliance Risk · 13%

Inverse of the seven-framework regulatory exposure score from the Verification node.

Historical Alignment · 10%

Pattern match against 146 historical cases. Prior failure signatures drag the score down.

Grade scale
A · 85+

Board-ready. Strong reasoning across the stack.

B · 70–84

Mostly solid. Address the flagged biases before the vote.

C · 55–69

Mixed. Several reasoning gaps need explicit treatment.

D · 40–54

Weak. Rework required before the committee reviews.

F · 0–39

Critical. Reset the memo before re-circulating.
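Because the weights are fixed and the component scores deterministic, the composite reduces to a few lines. A sketch that assumes each component arrives pre-normalized to 0–100 with higher meaning better (i.e. Bias Load and Compliance Risk already inverted); the weights come from the component list and the cut-offs from the grade scale above.

```python
# Fixed, transparent weights (they sum to 1.00).
WEIGHTS = {
    "bias_load": 0.28, "noise_level": 0.18, "evidence_quality": 0.18,
    "process_maturity": 0.13, "compliance_risk": 0.13, "historical_alignment": 0.10,
}

def dqi(components: dict[str, float]) -> tuple[int, str]:
    """Weighted composite of six 0–100 scores; same inputs, same DQI."""
    score = round(sum(WEIGHTS[k] * components[k] for k in WEIGHTS))
    for floor, grade in ((85, "A"), (70, "B"), (55, "C"), (40, "D")):
        if score >= floor:
            return score, grade
    return score, "F"
```

The determinism claim falls out of the structure: there is no sampling anywhere in the composite, so re-running a memo cannot move its grade.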

Sample score
Enron (Aug 2001)
38 · D

Groupthink + authority + off-balance-sheet masking.

Sample score
Apple iPhone (Jan 2007)
86 · A

Explicit risks, dissent tracked, reference class cited.

Sample score
WeWork S-1 (Aug 2019)
24 · F

Narrative fallacy, founder halo, undefined unit economics.

Two parallel stability checks

Stable reasoning. Surviving dissent.

Two of the seven parallel analysis agents answer complementary questions about the same memo. Would a second read reach the same conclusion? And would it survive the real room?
Noise decomposition · three independent judges
Is the reasoning stable under rewording?
Low noise · Reliable
Mean
78/100
Std Dev
4

Three judges converge on the same score. The reasoning holds up — the memo says what it means.

High noise · Unstable
Mean
62/100
Std Dev
22

Same memo, same prompt, three different reads. Something is ambiguous — rewrite before the board sees it.
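The stable/unstable split above is plain summary statistics over the three judges' scores. A sketch; the 10-point standard-deviation threshold is an assumption for illustration, not the published cut-off.

```python
from statistics import mean, stdev

def noise_check(judge_scores: list[float], max_std: float = 10.0) -> dict:
    """Three independent judges score the same memo; high variance flags ambiguity."""
    m, s = mean(judge_scores), stdev(judge_scores)
    return {"mean": round(m), "std": round(s), "stable": s <= max_std}
```

On the numbers above: scores like 76/78/80 give mean 78 and std dev 2 (reliable), while 40/62/84 give mean 62 and std dev 22 (rewrite before the board sees it).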

Boardroom simulation · five role-primed personas
Would this memo survive the room?
CF
Skeptical CFO · Capital discipline
Counter-case not stress-tested.
REVISE
CE
Ambitious CEO · Growth bias
Timing fits the cycle.
APPROVE
BC
Board Chair · Governance
Dissent absent from the memo.
REVISE
OP
Operator · Execution risk
Delivery path undefined.
REJECT
CO
Compliance Officer · Regulatory exposure
Frameworks handled.
APPROVE
Overall verdict · MIXED
2 approve · 2 revise · 1 reject · 3 dissent tracked
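One minimal way to reconcile the five persona verdicts into an overall call: unanimity wins outright, a strict majority leads, anything else is MIXED. This rule is an assumption for illustration; the real reconciliation inside the Meta Judge is proprietary.

```python
from collections import Counter

def overall_verdict(votes: list[str]) -> str:
    """Illustrative reconciliation of persona votes (assumed rule, not the product's)."""
    tally = Counter(votes)
    top, n = tally.most_common(1)[0]
    if n == len(votes):
        return top                      # unanimous
    return top if n > len(votes) / 2 else "MIXED"
```

With the split shown above (2 approve, 2 revise, 1 reject), no verdict clears a majority, so the panel reports MIXED.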
Academic foundation

Standing on shoulders.

None of this methodology is invented in a vacuum. Every node in the pipeline cites a specific academic lineage — and every detected bias on your memo links back to the peer-reviewed paper that first named it.
DK
Daniel Kahneman, Olivier Sibony, Cass Sunstein
2021
Noise: A Flaw in Human Judgment

The three-judge jury and inter-judge variance scoring inside the Noise Judge node come directly from this framework.

Powers · Noise Judge
GK
Gary Klein
1998
Sources of Power: How People Make Decisions

Recognition-Primed Decision theory grounds the RPD Recognition node — pattern matching against a labeled historical library.

Powers · RPD Recognition
KT
Daniel Kahneman & Amos Tversky
1974 / 1979
Prospect Theory & Judgment under Uncertainty

The foundational taxonomy for framing, loss aversion, anchoring, and availability biases detected by the Bias Detective.

Powers · Bias Detective
PT
Philip Tetlock
2005 / 2015
Superforecasting & Expert Political Judgment

Calibration methodology for the outcome flywheel and the Forgotten Questions node — what are you not asking?

Powers · Forgotten Questions
AD
Annie Duke
2018
Thinking in Bets

Probabilistic decision framing behind the blind-prior capture and per-analysis confidence model.

Powers · Blind priors
IS
Ilya Strebulaev
ongoing
Stanford VC Initiative — Corporate Decision Research

Source for the 11 strategy-specific biases (e.g., entry-price anchor, thesis confirmation, winner’s curse, management halo).

Powers · 11 strategy biases
Security · privacy · compliance

The same standard we audit your memos with, we hold ourselves to.

  • GDPR anonymizer is the first node, not an afterthought. PII is redacted before any analysis LLM sees the document. If anonymization fails, the pipeline short-circuits to the risk scorer rather than transmitting raw content.
  • AES-256-GCM document encryption at rest; per-record keys; rotating encryption envelopes.
  • Seven-framework compliance mapping built into the Verification node — FCA Consumer Duty, SOX, Basel III, EU AI Act, SEC Reg D, GDPR, plus an internal framework. The same detection runs on your memos and on our own shipping policies.
  • Anonymized aggregation is opt-in. Your data never contributes to the public Bias Genome or cross-org causal weights unless the org admin flips the switch inside Settings → Privacy.

Full data-handling and sub-processor list: /privacy.

Your turn

Run the engine on your next strategic memo.

Upload takes 60 seconds. The twelve-node pipeline you just read about runs on your document and returns a DQI, flagged biases, toxic combinations, and the questions the memo didn't ask.

Audit your memo · See the proof