AI Risk Management Hub
Identify, score, and treat AI risk. Mapped to ISO 23894, NIST AI RMF, and EU AI Act. Built from 130+ authoritative sources.
Derrick D. Jackson | CISSP, CRISC, CCSP | Updated April 2026
What Is AI Risk Management?
AI risk management is the continuous process of identifying, scoring, treating, and monitoring risks that AI systems create for your organization, your customers, and society. It answers three questions: how dangerous is each AI system, what controls match that danger level, and how do you prove it to auditors, regulators, and your board?
Most organizations rate everything "medium risk" and call it governance.
Only 35% of organizations have a formal AI governance framework (Source: Consilien). The rest rely on ad-hoc reviews, inherited IT risk frameworks that don't account for AI-specific failures like hallucinations, bias, or drift, and vendor promises that no one validates. The gap between "we assessed the risk" and "we can prove it to a regulator" is where fines, liability, and trust failures live.
Common failure patterns:
- "Rate everything medium risk" -- no scoring methodology or evidence
- "Annual risk review" -- no continuous monitoring or drift detection triggers
- "IT owns risk" -- no cross-functional accountability or RACI
- "One-size-fits-all controls" -- same oversight for a chatbot and a credit model
What defensible looks like:
- 5x5 likelihood x impact matrix with 4 tolerance thresholds backed by evidence
- Continuous monitoring with drift detection + incident-triggered reassessment
- RACI-mapped ownership: C-Suite, Compliance, IT, Legal, BU Leads
- Proportionate controls: EU AI Act risk tier determines governance intensity
Your Risk Management Ecosystem
Six lanes of AI risk management -- each with dedicated guides, tools, and methodology.
Your AI Risk Assessment Roadmap
Five steps from "we don't know what AI we're running" to "defensible risk posture."
Step 1: Inventory Your AI Systems
Before you can score risk, you need to know what's running. Research shows 84% of internal audit departments lack an AI audit framework (ECIIA 2024), meaning most unsanctioned AI use goes undetected. That's Shadow AI, and it's your biggest blind spot. Start with your highest-risk departments -- HR, finance, customer-facing operations -- and document what's running, who owns it, and what data it touches.
Our 40-field tracking template covers everything from data access permissions to EU AI Act risk tier classification. Most organizations discover 2-3x more AI usage than they expected during their first inventory.
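A minimal sketch of what one inventory entry might look like as a data structure, assuming a simple Python record; the field names below are illustrative placeholders, not the actual 40-field template:

```python
from dataclasses import dataclass, field

# Hypothetical subset of an inventory record; the full tracking template
# covers 40 fields. Field names here are assumptions for illustration.
@dataclass
class AISystemRecord:
    name: str
    owner: str                  # accountable business owner
    department: str             # start with HR, finance, customer-facing ops
    data_touched: list = field(default_factory=list)  # e.g. ["PII", "payroll"]
    eu_ai_act_tier: str = "unclassified"  # assigned in Step 2
    sanctioned: bool = False              # False = Shadow AI candidate

inventory = [
    AISystemRecord("resume-screener", "HR Ops", "HR", ["PII"], sanctioned=False),
    AISystemRecord("support-chatbot", "CX Lead", "Customer Service", sanctioned=True),
]
shadow_ai = [r.name for r in inventory if not r.sanctioned]
```

Even a flat list like this surfaces the ownership and data-access questions the later steps depend on.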
Step 2: Classify by EU AI Act Risk Tier
For each inventoried system, determine its EU AI Act risk classification: Unacceptable (banned), High (Art. 9-15 compliance), Limited (transparency only), or Minimal (no obligations). This classification drives everything downstream -- it determines the intensity of your risk assessment, the documentation required, and whether a conformity assessment is mandatory.
Our Risk Tier Decision Tree walks through 7 questions to classify any AI system. Annex I and Annex III of the EU AI Act list the regulated products and use cases that are automatically classified as high-risk.
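As a rough illustration of how the four tiers triage a system (the real decision tree asks 7 questions; the boolean flags below are simplifying assumptions, not the Act's legal tests):

```python
# Simplified triage, assuming the caller has already answered the
# underlying legal questions. Not a substitute for the full decision tree.
def classify_tier(prohibited_practice: bool,
                  annex_iii_use_case: bool,
                  interacts_with_humans: bool) -> str:
    if prohibited_practice:       # e.g. social scoring
        return "Unacceptable (banned)"
    if annex_iii_use_case:        # e.g. hiring, credit, biometric ID
        return "High (Art. 9-15 compliance)"
    if interacts_with_humans:     # chatbots, synthetic media
        return "Limited (transparency only)"
    return "Minimal (no obligations)"
```

Order matters: a prohibited practice is banned regardless of any other answer, which is why it is checked first.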
Step 3: Score with the 5x5 Risk Matrix
Calculate a quantitative Risk Score by multiplying Likelihood (1-5: Rare to Almost Certain) by Impact (1-5: Negligible to Critical/Catastrophic). The resulting score (1-25) maps to four tolerance thresholds: Low (1-6, monitor), Medium (7-12, mitigation plan), High (13-18, senior oversight), Critical (19-25, immediate action or halt).
Impact should be assessed across seven dimensions: financial, operational, reputational, safety, ethical, legal, and fundamental rights. A system processing millions of SSNs scores differently than an internal chatbot answering HR questions.
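The scoring rule above can be sketched directly; the thresholds are the four tolerance bands from Step 3:

```python
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    """Likelihood and Impact each 1-5; score 1-25 maps to four bands."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be 1-5")
    score = likelihood * impact
    if score <= 6:
        band = "Low (monitor)"
    elif score <= 12:
        band = "Medium (mitigation plan)"
    elif score <= 18:
        band = "High (senior oversight)"
    else:
        band = "Critical (immediate action or halt)"
    return score, band
```

In practice the Impact input should be the worst case across the seven dimensions listed above, not an average.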
Step 4: Apply Proportionate Controls
Risk treatment follows four options from the NIST AI RMF: Mitigate (reduce likelihood or impact), Transfer (shift via insurance or indemnification), Avoid (halt or redesign), or Accept (document residual risk). The key insight: a low-risk internal chatbot doesn't need the same tollgates as a high-risk credit decisioning model.
For high-risk systems under the EU AI Act, Art. 9 requires a continuous, iterative risk management system covering the full lifecycle. For minimal-risk systems, monitoring may be sufficient.
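A hedged sketch of tier-proportionate control selection; the four treatment options and tier-driven intensity follow the text, but the specific control lists below are illustrative assumptions, not a complete control catalog:

```python
# The four NIST AI RMF treatment options named in the text.
TREATMENT_OPTIONS = {"mitigate", "transfer", "avoid", "accept"}

def required_controls(eu_tier: str) -> list[str]:
    # Control lists are illustrative placeholders, not exhaustive.
    if eu_tier == "unacceptable":
        return ["halt deployment"]       # Avoid is the only viable treatment
    if eu_tier == "high":
        return ["Art. 9 lifecycle risk management",
                "conformity assessment (Art. 43)",
                "human oversight (Art. 14)"]
    if eu_tier == "limited":
        return ["AI disclosure to users", "content labeling"]
    return ["baseline monitoring"]       # minimal risk
```

The point of the sketch is the asymmetry: governance effort scales with the tier, so the chatbot and the credit model never share a control set.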
Step 5: Monitor and Reassess
Risk assessment is not a one-time exercise. Models drift, regulations evolve, business contexts change. Continuous monitoring includes performance degradation alerts, bias drift detection, incident-triggered reassessment, and scheduled periodic reviews. EU AI Act Art. 72 requires post-market monitoring for high-risk systems. Art. 73 mandates serious incident reporting: 2 days for widespread infringements or serious operational disruption, 10 days for death, 15 days for other serious incidents.
The risk register must be updated iteratively: new risks added post-deployment, retired risks marked when systems are decommissioned, treatment effectiveness tracked over time.
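A minimal sketch of the reassessment triggers, assuming a single accuracy-style metric; the Art. 73 deadlines come from the text above, while the drift tolerance value is an arbitrary illustration:

```python
# Deadlines per EU AI Act Art. 73 as stated in the text (days to report).
ART_73_DEADLINE_DAYS = {
    "widespread_infringement": 2,   # or serious operational disruption
    "death": 10,
    "other_serious_incident": 15,
}

def needs_reassessment(baseline_metric: float,
                       current_metric: float,
                       incident_reported: bool,
                       drift_tolerance: float = 0.05) -> bool:
    # drift_tolerance is an assumed example threshold, not a standard value.
    drifted = abs(current_metric - baseline_metric) > drift_tolerance
    return drifted or incident_reported
```

Scheduled periodic reviews still apply on top of these event-driven triggers; drift and incidents only add reassessments, they never replace the calendar.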
EU AI Act Risk Classification
Four tiers determine your compliance obligations.
High Risk: Mandatory Requirements
- Art. 9: Continuous, iterative risk management system covering full lifecycle
- Art. 10: Data governance -- training data quality, representativeness, bias management
- Art. 11: Technical documentation to demonstrate compliance
- Art. 12: Automatic logging for traceability
- Art. 13: Transparency -- clear instructions on capabilities and limitations
- Art. 14: Human oversight -- intervention and override capability
- Art. 15: Accuracy, robustness, and cybersecurity throughout lifecycle
- Art. 17: Quality Management System (QMS)
- Art. 43: Mandatory conformity assessment before market placement
- Art. 72: Post-market monitoring system
- Art. 73: Serious incident reporting (2 days widespread infringement / 10 days death / 15 days other)
TJS Tools for High-Risk Compliance
- Risk Tier Decision Tree -- classify your systems
- Regulatory Mapping Cheat Sheet -- 40 fields x 4 frameworks
- Committee Implementation Guide -- 8-stage oversight framework
Limited Risk: Requirements
- Disclose to users they are interacting with an AI system
- Label AI-generated content (deepfakes, synthetic media)
- No conformity assessment required
Minimal Risk: Guidance
- No mandatory compliance requirements
- Voluntary codes of conduct encouraged
- Still recommended: basic risk documentation and monitoring per ISO 42001 best practice
The 5x5 AI Risk Matrix
Score = Likelihood x Impact. Each cell maps to a risk level and required response.
Source: NIST AI RMF MAP 5.1-5.2, ISO/IEC 23894:2023 Cl. 6.5-6.6
AI Harm Taxonomy
Five categories of potential harm. Every risk assessment should evaluate impact across all five.
Harm to Individuals
- Civil liberties and rights violations
- Physical or psychological safety threats
- Economic opportunity loss (hiring, credit)
- Privacy violations and data exposure
Harm to Groups
- Discrimination against population sub-groups
- Disparate impact across protected classes
- Community trust erosion
Harm to Society
- Democratic participation undermined
- Educational access affected
- Information ecosystem corrupted
- Public trust in institutions eroded
Harm to Organization
- Security breaches and data loss
- Monetary loss and regulatory fines
- Reputational damage
- Business operations disrupted
Harm to Ecosystem
- Global financial system instability
- Supply chain cascade failures
- Environmental resource depletion (compute)
- Cross-border regulatory spillover
NIST AI RMF Risk Functions
72 subcategories across 4 functions, each with risk-specific requirements.
GOVERN: Risk Oversight & Accountability
MAP: Risk Identification & Context
MEASURE: Risk Analysis & Quantification
MANAGE: Risk Treatment & Response
Framework Risk Crosswalk
How ISO 23894, ISO 42001, NIST AI RMF, and EU AI Act map to each risk management activity.
| Risk Activity | ISO 23894 | ISO 42001 | NIST AI RMF | EU AI Act |
|---|---|---|---|---|
| Scope, Context, Criteria | Cl. 6.3 | Cl. 6.1.1 | GOVERN 1.3 | Art. 9(2)(a) |
| Risk Identification | Cl. 6.4.2 | Cl. 6.1.2 | MAP 2.1-2.3 | Art. 9(2)(b) |
| Risk Analysis | Cl. 6.4.3 | Cl. 6.1.2 | MEASURE 1.1-2.13 | Art. 9(2)(c) |
| Risk Evaluation | Cl. 6.4.4 | Cl. 6.1.2 | MEASURE 3.1 | Art. 9(2)(d) |
| Risk Treatment | Cl. 6.5 | Cl. 8.4 | MANAGE 1.3-2.2 | Art. 9(4) |
| Risk Monitoring | Cl. 6.6 | Cl. 9.1 | MANAGE 3.1-4.1 | Art. 9(3), Art. 72 |
| Recording & Reporting | Cl. 6.7 | Cl. 7.4 | MEASURE 4.1 | Art. 13 |
| Third-Party Risk | -- | Annex A.10 | GOVERN 6.1-6.2 | Art. 25-27 |
| Incident Reporting | -- | Cl. 10.2 | MANAGE 3.1 | Art. 73 |
Source: NIST AI RMF to ISO/IEC 42001 Crosswalk + ISO/IEC 23894:2023 + EU AI Act Official Journal
AI Risk Management Toolkit
Practical tools derived from 130+ primary sources. Score, document, and report AI risk.
AI Threat Landscape
General AI threats and agentic AI risks. Agent-specific deep dives live in our Agentic AI Security Hub.