Regulatory alignment
Every Decision Provenance Record maps onto the 11 internationally recognised AI governance principles codified by AI Verify, Singapore IMDA’s governance framework, which is cross-aligned with the EU AI Act and the OECD AI Principles. The mapping below is the reference implementation a Fortune 500 procurement team can paste into a vendor risk assessment.
AI Verify is a self-assessment governance framework under the AI Verify Foundation, a subsidiary of Singapore’s Infocomm Media Development Authority (IMDA). It does not certify products. “Aligned with” is the accurate claim; we state this openly.
Each entry below names the AI Verify principle, defines it in one sentence, describes the mechanism inside Decision Intel that satisfies it, and points to the specific Decision Provenance Record field that makes the mechanism verifiable.
Transparency
The AI system discloses information about itself to relevant stakeholders.
Every audit ships with the SHA-256 fingerprint of the exact prompt version used, plus the model lineage — which Gemini tier ran on which of the 12 pipeline nodes, with temperature and top-p per node. Nothing about the model or the prompt is hidden.
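A minimal sketch of how the fingerprint and lineage could be assembled, in TypeScript. The interface and field names are illustrative assumptions, not the shipped DPR schema:

```ts
import { createHash } from "node:crypto";

// Illustrative shapes only -- the shipped DPR fields may differ.
interface ModelLineageEntry {
  node: string;        // one of the 12 pipeline nodes
  model: string;       // e.g. which Gemini tier ran this node
  temperature: number;
  topP: number;
}

// SHA-256 fingerprint of the exact prompt text that ran.
function promptFingerprint(promptText: string): string {
  return createHash("sha256").update(promptText, "utf8").digest("hex");
}

const lineage: ModelLineageEntry[] = [
  { node: "bias-detective", model: "gemini-pro", temperature: 0.2, topP: 0.9 },
  // ...one entry per pipeline node
];

const dprExcerpt = {
  promptFingerprint: promptFingerprint("You are a bias auditor..."),
  modelLineage: lineage,
};
```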
Explainability
The AI system’s outputs can be understood in human terms.
Every flagged bias carries a stable taxonomy ID (DI-B-001 through DI-B-020, plus 11 strategy-specific biases) and a primary academic reference in APA style, with a DOI where available. A General Counsel reading the DPR can trace every flag back to its peer-reviewed source.
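A matching sketch of a taxonomy entry. The field names and the ID assignment are hypothetical; the sample reference is the canonical availability-heuristic paper:

```ts
// Illustrative taxonomy entry -- field names are assumptions, not the shipped schema.
interface BiasTaxonomyEntry {
  id: string;           // stable taxonomy ID, e.g. "DI-B-001"
  name: string;
  apaReference: string; // primary peer-reviewed source, APA style
  doi?: string;         // present where available
}

const availabilityBias: BiasTaxonomyEntry = {
  id: "DI-B-004", // hypothetical ID assignment
  name: "Availability bias",
  apaReference:
    "Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: " +
    "Heuristics and biases. Science, 185(4157), 1124-1131.",
  doi: "10.1126/science.185.4157.1124",
};
```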
Repeatability/Reproducibility
The AI system’s behavior can be reproduced given the same inputs.
Input-document hash + prompt fingerprint + model lineage together make every analysis reproducible from the same inputs. The risk-scorer node is deterministic (not LLM-generated), so the final score is stable for identical inputs.
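Under the same illustrative-schema assumption, the three ingredients fold into a single reproducibility key:

```ts
import { createHash } from "node:crypto";

// Identical input hash + prompt fingerprint + lineage => identical key.
// Key order in the object literal is fixed, so the JSON -- and the hash --
// is deterministic for the same inputs.
function reproducibilityKey(
  inputDocumentSha256: string,
  promptFingerprint: string,
  modelLineage: unknown,
): string {
  const canonical = JSON.stringify({
    input: inputDocumentSha256,
    prompt: promptFingerprint,
    lineage: modelLineage,
  });
  return createHash("sha256").update(canonical, "utf8").digest("hex");
}
```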
Safety
The AI system behaves safely during deployment.
GDPR anonymiser runs as the first node of the pipeline — no analysis LLM ever sees raw PII. Content is encrypted at rest with AES-256-GCM and a keyVersion rotation protocol. The three-judge noise jury (bias detective + noise judge + statistical jury) bounds individual-model failures.
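A minimal AES-256-GCM encrypt-at-rest sketch with the keyVersion stamp. Key management and the row shape here are assumptions:

```ts
import { createCipheriv, randomBytes } from "node:crypto";

function encryptRow(plaintext: string, key: Buffer, keyVersion: number) {
  const iv = randomBytes(12); // 96-bit nonce, the standard size for GCM
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([
    cipher.update(plaintext, "utf8"),
    cipher.final(),
  ]);
  return {
    keyVersion, // stamps the row with the key that wrote it
    iv: iv.toString("base64"),
    tag: cipher.getAuthTag().toString("base64"), // GCM integrity tag
    ciphertext: ciphertext.toString("base64"),
  };
}
```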
Security
The AI system resists unauthorised access and tampering.
TLS 1.2+ in transit, AES-256-GCM at rest with keyVersion rotation, Supabase SOC 2-adjacent infrastructure, CSRF protection via middleware, signed cryptographic fingerprints on the DPR itself. Every encrypted row carries a keyVersion stamp so keys rotate without bricking historical data.
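The matching rotation-aware decrypt, assuming a keyring that maps version numbers to 32-byte keys; this is what lets keys rotate without bricking historical rows:

```ts
import { createDecipheriv } from "node:crypto";

function decryptRow(
  row: { keyVersion: number; iv: string; tag: string; ciphertext: string },
  keyring: Map<number, Buffer>, // keyVersion -> key; key sourcing out of scope
): string {
  const key = keyring.get(row.keyVersion);
  if (!key) throw new Error(`no key for version ${row.keyVersion}`);
  const decipher = createDecipheriv(
    "aes-256-gcm",
    key,
    Buffer.from(row.iv, "base64"),
  );
  decipher.setAuthTag(Buffer.from(row.tag, "base64")); // verify integrity
  return Buffer.concat([
    decipher.update(Buffer.from(row.ciphertext, "base64")),
    decipher.final(), // throws if the tag does not match (tampering)
  ]).toString("utf8");
}
```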
Robustness
The AI system remains reliable under perturbation or partial failure.
Three-judge noise jury arbitrated by a meta-judge — individual LLM failures do not cascade. Model routing classifies errors as transient vs permanent and fails over to a second provider (Anthropic Claude) when thresholds are exceeded. Exponential-backoff retries + atomic rate limiting on every call.
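A sketch of the transient-vs-permanent split with exponential backoff and failover; the classification rule and retry budget here are illustrative:

```ts
type Provider = (input: string) => Promise<string>;

// Transient errors (rate limits, timeouts, brief outages) are worth retrying;
// permanent ones (bad request, auth failure) should fail over immediately.
function isTransient(err: unknown): boolean {
  const msg = err instanceof Error ? err.message : String(err);
  return /429|503|timeout/i.test(msg); // illustrative rule, not the real classifier
}

async function callWithFailover(
  primary: Provider,
  fallback: Provider, // e.g. the Anthropic Claude failover
  input: string,
  maxRetries = 3,
): Promise<string> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await primary(input);
    } catch (err) {
      if (!isTransient(err)) break; // permanent: stop retrying the primary
      await new Promise((r) => setTimeout(r, 2 ** attempt * 250)); // backoff
    }
  }
  return fallback(input); // threshold exceeded or permanent failure
}
```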
Fairness
The AI system mitigates unintended discrimination across groups.
The taxonomy of 30+ cognitive biases covers multiple fairness-relevant biases (authority bias, in-group favouritism, halo effect, availability bias). Cross-framework regulatory mapping includes GDPR Article 22 (safeguards on solely automated decision-making) and the EU AI Act’s high-risk fairness provisions. Recalibration learns per-org failure patterns, so fairness is auditable per customer.
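One way the fairness slice of that mapping could be represented; the ID-to-provision pairs are assumptions for illustration:

```ts
// Bias taxonomy IDs -> fairness-relevant provisions. Hypothetical pairings.
const FAIRNESS_MAPPING: Record<string, string[]> = {
  "DI-B-004": ["GDPR Art 22", "EU AI Act high-risk fairness provisions"],
  "DI-B-011": ["EU AI Act high-risk fairness provisions"],
};
```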
Data governance
The AI system handles data lawfully and in line with governance policy.
No-training contract with every AI processor engaged. Per-org data isolation. Signed Data Processing Addendum on every paid tier. The GDPR anonymiser redacts PII before any third-party LLM receives the content. Encryption keys rotate with a documented protocol.
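A deliberately minimal redaction sketch; a production GDPR anonymiser would layer NER and review on top of pattern matching like this:

```ts
// Redact obvious PII before any third-party LLM sees the text.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"],
  [/\+?\d[\d\s().-]{7,}\d/g, "[PHONE]"],
];

function redactPii(text: string): string {
  return PII_PATTERNS.reduce((t, [re, label]) => t.replace(re, label), text);
}
```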
Accountability
Responsibility for the AI system’s outputs is clear and documented.
Every DPR includes a reviewer counter-signature block for the CSO or General Counsel to sign on receipt. Immutable audit log captures every action — who exported, who viewed, who edited — with filters, date range, and CSV export for downstream compliance tooling. Chain-of-custody timestamp on the record.
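An illustrative entry shape for that log, with the CSV export mentioned above; field names are assumptions:

```ts
interface AuditLogEntry {
  actorId: string;                          // who
  action: "viewed" | "edited" | "exported"; // what
  recordId: string;                         // which DPR
  at: string;                               // ISO-8601 chain-of-custody timestamp
}

// Naive CSV export for downstream compliance tooling (fields assumed comma-free).
function toCsv(entries: AuditLogEntry[]): string {
  const header = "actorId,action,recordId,at";
  const rows = entries.map((e) => [e.actorId, e.action, e.recordId, e.at].join(","));
  return [header, ...rows].join("\n");
}
```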
Human agency and oversight
The AI system supports, rather than replaces, human judgment.
The Recognition-Rigor Framework (R²F) is designed around this principle. Kahneman’s rigor (debiasing) and Klein’s recognition (expert-intuition amplification) are both applied — but the CSO’s judgment stays in the centre, reinforced from both sides, never replaced. The DPR is the evidence of their oversight, not a substitute for it.
Inclusive growth, societal and environmental well-being
The AI system contributes to outcomes that are socially and environmentally positive.
Cross-framework regulatory mapping across seven frameworks (Basel III, EU AI Act, SEC Reg D, FCA Consumer Duty, SOX, GDPR Art 22, LPOA) aligns Decision Intel with societal governance objectives. Decision-quality audits reduce the strategic-decision failures that cascade into stakeholder harm. Cost-tier model routing reduces inference energy per audit where decision quality allows.
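A sketch of per-node cost-tier routing; the node names echo the pipeline described above, and the tier labels are illustrative:

```ts
// Route mechanical nodes to a lighter, cheaper tier; keep judgment-heavy
// nodes on the stronger tier. Names and assignments are assumptions.
const NODE_TIER: Record<string, "light" | "strong"> = {
  "gdpr-anonymiser": "light", // mechanical transform, lighter model suffices
  "bias-detective": "strong", // judgment-heavy analysis needs the stronger tier
};
```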
What alignment does — and does not — mean
AI Verify is a self-assessment governance framework. The AI Verify Foundation does not certify products. No “AI Verify certified” label exists. Claims of full compliance or certification would be inaccurate.
What Decision Intel claims: every field of the Decision Provenance Record maps onto one or more of the 11 principles codified by AI Verify. The mechanism that satisfies each principle is named above. A procurement team, General Counsel, or internal auditor can verify the mapping row by row against the product.
The AI Verify Foundation’s own FAQ states that the framework “does not guarantee that any AI system tested will be free from risks or biases or is completely safe.” Decision Intel makes the same disclaimer: a bias-audit tool is a control, not a guarantee.
The design-partner cohort gets the Decision Provenance Record bundled on every audit at $1,999/mo — 20% off the $2,499 Strategy list — so the mapping above stops being a reference doc and starts being the artifact your General Counsel forwards to the audit committee.