AI Risk Assessment Guide: 5x5 Matrix Methodology for AI Systems
A structured approach to identifying, analyzing, evaluating, and treating AI risks using a 5x5 likelihood-impact matrix aligned to ISO 23894, NIST AI RMF, and EU AI Act Art. 9
What Is AI Risk Assessment?
AI risk assessment is the structured process of identifying what can go wrong with an AI system, determining how likely each failure is and how severe the harm would be, and then deciding what to do about it. Unlike traditional IT risk assessments, AI-specific assessments must account for emergent behavior, training data bias, opacity of model decisions, and the potential for harm to individuals, groups, and society at large.
Every AI system your organization builds or buys needs a risk assessment. Not a checkbox exercise, but a genuine evaluation of who uses the system, what data it accesses, what it integrates into, what permissions it holds, and what happens when it fails. The AI governance frameworks from ISO, NIST, and the EU all mandate this process, and for good reason: you cannot govern what you have not assessed.
This guide walks through the complete methodology, from stakeholder roles through the 5x5 matrix to building a living risk register. Every step maps back to ISO 42001, NIST AI RMF, ISO 23894, and EU AI Act Art. 9.
Who Should Be Involved
AI risk assessment is not a solo exercise. It requires cross-functional input mapped to clear RACI roles.
C-Suite
A - Accountable for risk appetite and strategic direction. Sets the threshold for acceptable risk, allocates resources, and ensures AI risk management aligns with enterprise risk strategy. Signs off on high and critical risk decisions.
Compliance Officers
R - Responsible for regulatory mapping and policy enforcement. Maps each AI system to applicable regulations (EU AI Act, GDPR, sector-specific rules), validates that risk assessments meet compliance requirements, and flags gaps.
IT & Data Leaders
C - Consulted on technical implementation and monitoring. Provide insight into data pipelines, model architecture, integration points, and operational constraints. Own the technical controls that mitigate identified risks.
Legal & Risk
I - Informed on liability exposure and contract implications. Evaluate legal risk from AI outputs, review vendor agreements for AI provisions, assess intellectual property concerns, and advise on regulatory penalties.
The 5-Step Risk Assessment Process
A repeatable process that runs at ideation, before deployment, and on a recurring cycle post-launch.
Step 1: Identify Risks
Catalog every risk associated with the AI system. Go beyond technical failures to include ethical, legal, operational, and reputational risks. Interview stakeholders, review incident databases, and map the system's data flows and decision points. One way to capture these outputs in machine-readable form is sketched after the checklist below.
- Document the AI system's purpose, scope, and intended users
- Map all data inputs, processing logic, and output destinations
- Identify affected populations and potential for disparate impact
- Review similar system failures and near-misses from industry databases
- Catalog integration points, permissions, and access controls
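A minimal sketch of a Step 1 system profile, assuming a Python-based register tooling workflow. The structure and every field name here are illustrative, not a schema mandated by any framework:

```python
from dataclasses import dataclass, field

@dataclass
class SystemProfile:
    """Step 1 output: the facts a risk workshop needs on the table.

    Field names are illustrative; adapt them to your own register template."""
    name: str
    purpose: str
    intended_users: list[str]
    data_inputs: list[str]           # sources feeding the model
    output_destinations: list[str]   # systems or people consuming outputs
    affected_populations: list[str]  # groups exposed to disparate impact
    integrations: list[str]          # APIs, databases, downstream services
    permissions: list[str]           # credentials and scopes the system holds
    known_failure_modes: list[str] = field(default_factory=list)

# Hypothetical example mirroring the register entry later in this guide:
profile = SystemProfile(
    name="Customer Support Chatbot v3",
    purpose="Answer tier-1 support questions",
    intended_users=["support agents", "end customers"],
    data_inputs=["ticket history", "product docs"],
    output_destinations=["chat widget", "CRM notes"],
    affected_populations=["non-English speakers"],
    integrations=["CRM API"],
    permissions=["read:tickets", "write:crm_notes"],
)
```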
Step 2: Analyze Risks
For each identified risk, determine the likelihood of occurrence and the potential impact. Use the 5x5 matrix scales defined in Section 4. Consider both the inherent risk (before controls) and the residual risk (after controls). A minimal scoring sketch follows the list.
- Score likelihood on 1-5 scale using historical data and expert judgment
- Score impact across all 7 dimensions (financial, operational, reputational, safety, ethical, legal, fundamental rights)
- Document the rationale for each score to ensure auditability
- Distinguish between inherent risk and residual risk after existing controls
- Consider cascading effects where one failure triggers others
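A minimal sketch of the Step 2 arithmetic in Python. The dimension names follow this guide; the function names and the example scores are illustrative, not part of any standard:

```python
# The 7 impact dimensions defined in this guide.
DIMENSIONS = ("financial", "operational", "reputational", "safety",
              "ethical", "legal", "fundamental_rights")

def impact_score(dimension_scores: dict[str, int]) -> int:
    """Impact is the HIGHEST score (1-5) across all 7 dimensions."""
    missing = set(DIMENSIONS) - set(dimension_scores)
    if missing:
        raise ValueError(f"score every dimension; missing: {sorted(missing)}")
    return max(dimension_scores.values())

def risk_score(likelihood: int, impact: int) -> int:
    """Total score = Likelihood (1-5) x Impact (1-5), range 1-25."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    return likelihood * impact

# Inherent risk (before controls): the ethical dimension dominates.
inherent = risk_score(likelihood=3, impact=impact_score({
    "financial": 2, "operational": 2, "reputational": 3, "safety": 1,
    "ethical": 4, "legal": 3, "fundamental_rights": 3,
}))                                            # max dimension 4, so 3 x 4 = 12
# Residual risk (after existing controls reduce both factors):
residual = risk_score(likelihood=2, impact=3)  # 6
```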
Step 3: Evaluate Risks
Compare the calculated risk scores against your organization's tolerance thresholds. This step determines which risks need treatment, which require monitoring, and which can be accepted. Prioritize by score and by the criticality of the affected system. A threshold-mapping sketch follows the list.
- Apply the tolerance thresholds: Low (1-6), Medium (7-12), High (13-18), Critical (19-25)
- Rank risks to prioritize treatment efforts and resource allocation
- Flag any risk touching prohibited AI practices under EU AI Act Art. 5
- Present evaluation results to the appropriate decision authority
- Document the risk appetite statement and escalation triggers
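Encoding the tolerance bands in one place keeps every tool in the pipeline applying them identically. A minimal sketch using this guide's thresholds; the function name is illustrative:

```python
def tolerance_tier(score: int) -> str:
    """Map a 1-25 score to the tolerance bands defined in this guide."""
    if not 1 <= score <= 25:
        raise ValueError("score must be 1-25")
    if score <= 6:
        return "Low"
    if score <= 12:
        return "Medium"
    if score <= 18:
        return "High"
    return "Critical"

assert tolerance_tier(6) == "Low"
assert tolerance_tier(12) == "Medium"
assert tolerance_tier(15) == "High"
assert tolerance_tier(20) == "Critical"
```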
Step 4: Treat Risks
Select and implement the appropriate treatment for each risk based on its evaluation. Not every risk needs the same response. The 5 treatment options (mitigate, transfer, avoid, accept, disengage) give you a proportionate toolkit. A sketch of a treatment plan record follows the list.
- Select treatment option(s) per risk from the 5 categories
- Define specific controls, owners, timelines, and success criteria
- Ensure proportionality: treatment cost should not exceed risk exposure
- Map selected treatments to framework requirements (NIST MANAGE, ISO 42001 Cl. 8)
- Update the risk register with treatment plans and target residual risk scores
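A sketch of a treatment plan record that captures the bullets above: selected option, controls, owner, timeline, and target residual score. The structure and the example values (including the deadline) are illustrative, not a mandated schema:

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"      # reduce likelihood or impact via controls
    TRANSFER = "transfer"      # insurance, indemnification, outsourcing
    AVOID = "avoid"            # redesign or halt to eliminate the risk
    ACCEPT = "accept"          # document residual risk with formal sign-off
    DISENGAGE = "disengage"    # turn the system off; last resort

@dataclass
class TreatmentPlan:
    risk_id: str
    treatments: list[Treatment]  # most risks need a combination
    controls: list[str]          # specific control actions
    owner: str                   # named individual
    deadline: str                # ISO date for implementation
    target_residual_score: int   # where the risk should land after treatment

plan = TreatmentPlan(
    risk_id="RISK-2026-042",
    treatments=[Treatment.MITIGATE],
    controls=["multilingual test suite", "quarterly bias audit"],
    owner="J. Martinez, ML Engineering Lead",
    deadline="2026-09-30",       # hypothetical date for illustration
    target_residual_score=6,     # i.e. back inside the Low band
)
```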
Step 5: Monitor & Review
Risk assessment is not a one-time event. Establish continuous monitoring for all assessed risks, with review cadence proportionate to risk level. Critical risks get real-time monitoring. Low risks get semi-annual reviews. Every risk register entry has a review date. A cadence sketch follows the list.
- Set monitoring frequency by risk tier: Critical (continuous), High (monthly), Medium (quarterly), Low (semi-annually)
- Define key risk indicators (KRIs) and automated alerting thresholds
- Conduct full reassessment on any material change (model update, data source change, regulatory shift)
- Feed incident data and near-miss reports back into the identification step
- Report risk status to the AI governance committee at each meeting
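The tier-to-cadence rule is simple enough to encode directly. In this sketch the intervals mirror the bullets above; the day counts for "quarterly" and "semi-annually" are approximations:

```python
from datetime import date, timedelta

# Review cadence per tier. Critical risks are monitored continuously,
# so their scheduled review is effectively "now".
REVIEW_INTERVAL_DAYS = {
    "Critical": 0,     # continuous
    "High": 30,        # monthly
    "Medium": 91,      # quarterly
    "Low": 182,        # semi-annually
}

def next_review(tier: str, last_review: date) -> date:
    return last_review + timedelta(days=REVIEW_INTERVAL_DAYS[tier])

# Matches the register example later in this guide: a Medium risk last
# reviewed in mid-April comes due again in mid-July.
print(next_review("Medium", date(2026, 4, 15)))   # 2026-07-15
```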
5x5 Risk Matrix Deep Dive
Score = Likelihood (1-5) x Impact (1-5), yielding values from 1 to 25. Each cell maps to a tolerance threshold that determines the required governance response. A short script after the table reproduces every cell.
| Likelihood / Impact | 1 - Negligible | 2 - Minor | 3 - Moderate | 4 - Major | 5 - Critical |
|---|---|---|---|---|---|
| 5 - Almost Certain (>90%) | 5 | 10 | 15 | 20 | 25 |
| 4 - Likely (61-90%) | 4 | 8 | 12 | 16 | 20 |
| 3 - Possible (31-60%) | 3 | 6 | 9 | 12 | 15 |
| 2 - Unlikely (10-30%) | 2 | 4 | 6 | 8 | 10 |
| 1 - Rare (<10%) | 1 | 2 | 3 | 4 | 5 |
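Because every cell is just a product, the whole matrix can be regenerated (and unit-tested) in a few lines. A minimal sketch:

```python
# Reproduce the matrix above: rows run likelihood 5 down to 1,
# columns run impact 1 to 5; each cell is the product.
for likelihood in range(5, 0, -1):
    row = [likelihood * impact for impact in range(1, 6)]
    print(likelihood, row)
# 5 [5, 10, 15, 20, 25]
# 4 [4, 8, 12, 16, 20]
# 3 [3, 6, 9, 12, 15]
# 2 [2, 4, 6, 8, 10]
# 1 [1, 2, 3, 4, 5]
```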
Impact Assessment: 7 Dimensions
Impact is not a single number. Each AI risk must be evaluated across all 7 dimensions (financial, operational, reputational, safety, ethical, legal, and fundamental rights), with the highest score used as the impact rating. This prevents under-counting risks that score low on financial impact but high on ethical or fundamental rights impact.
Risk Tolerance Thresholds
Each score band triggers a different governance response. Higher scores demand faster action, more senior oversight, and stronger controls.
Low
Acceptable risk. Monitor through standard review cycles. Team-level oversight. No additional controls required beyond baseline governance.
Medium
Mitigation plan required. Management review and sign-off. Quarterly reassessment. Document treatment strategy and target residual score.
High
Senior oversight required. Enhanced controls and monthly monitoring. Formal risk treatment plan with named owner. Escalation to governance committee.
Critical
Immediate action required. Executive decision authority. Possible halt of the AI system. Real-time monitoring until risk is reduced below threshold.
Risk Register Structure
The risk register is the single source of truth for every AI risk your organization has identified, scored, and treated. It must be initiated at ideation, updated during development, finalized during validation, and continuously updated post-deployment.
| Field | Description | Example |
|---|---|---|
| Risk ID | Unique identifier for tracking and audit trail | RISK-2026-042 |
| AI System Name | Name of the AI system this risk applies to | Customer Support Chatbot v3 |
| Category | Risk domain (technical, ethical, legal, operational, safety) | Ethical / Bias |
| Description | Plain-language explanation of the risk scenario | Model may produce biased responses for non-English speakers |
| Impact Score (1-5) | Highest score across 7 impact dimensions | 4 (Major) - Ethical dimension |
| Likelihood Score (1-5) | Probability of occurrence based on scale definitions | 3 (Possible, 31-60%) |
| Total Score | Impact x Likelihood | 12 (Medium) |
| Mitigation Plan | Selected treatment and specific control actions | Mitigate: Add multilingual test suite, bias audit quarterly |
| Owner | Named individual accountable for this risk | J. Martinez, ML Engineering Lead |
| Status | Current state of the risk and treatment | In Treatment - Controls 60% implemented |
| Review Date | Next scheduled reassessment | 2026-07-15 (Quarterly - Medium tier) |
The register connects directly to your AI lifecycle framework. Stage gates should verify that all risks for the system are documented, scored, and have active treatment plans before a system advances to the next stage.
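Carried into code, the register row above might look like the following sketch. This is an illustrative structure, not the template's actual implementation; note that the total score and tier are derived from impact and likelihood rather than stored, so they can never drift out of sync:

```python
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    """Mirrors the register fields above; field names are illustrative."""
    risk_id: str
    ai_system_name: str
    category: str
    description: str
    impact: int          # 1-5, highest of the 7 dimensions
    likelihood: int      # 1-5
    mitigation_plan: str
    owner: str
    status: str
    review_date: str     # ISO date

    @property
    def total_score(self) -> int:
        return self.impact * self.likelihood

    @property
    def tier(self) -> str:
        s = self.total_score
        return ("Low" if s <= 6 else "Medium" if s <= 12
                else "High" if s <= 18 else "Critical")

entry = RiskRegisterEntry(
    risk_id="RISK-2026-042",
    ai_system_name="Customer Support Chatbot v3",
    category="Ethical / Bias",
    description="Model may produce biased responses for non-English speakers",
    impact=4, likelihood=3,
    mitigation_plan="Mitigate: multilingual test suite, quarterly bias audit",
    owner="J. Martinez, ML Engineering Lead",
    status="In Treatment - Controls 60% implemented",
    review_date="2026-07-15",
)
assert entry.total_score == 12 and entry.tier == "Medium"
```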
Regulatory & Standards Mapping
Every step in this methodology maps to specific clauses and functions in the major AI governance frameworks. A compact machine-readable version follows the tables.
| Assessment Step | ISO 23894 Clause |
|---|---|
| Risk Identification | Cl. 6.4.2 Risk identification - systematic identification of AI-related risks |
| Risk Analysis | Cl. 6.4.3 Risk analysis - likelihood and consequence estimation |
| Risk Evaluation | Cl. 6.4.4 Risk evaluation - comparison against criteria and prioritization |
| Risk Treatment | Cl. 6.5 Risk treatment - selection and implementation of options |
| Monitoring & Review | Cl. 6.6-6.7 Monitoring, review, recording, and reporting |
| Context Establishment | Cl. 6.1 Scope, context and criteria for AI risk management |
| Assessment Step | ISO 42001 Clause |
|---|---|
| Risk Planning | Cl. 6.1.2 AI risk assessment - planning actions to address risks |
| Risk Assessment Execution | Cl. 8.2 AI risk assessment - operational execution |
| Risk Treatment | Cl. 8.3 AI risk treatment - implementing selected options |
| Control Selection | Annex A / Annex B - control objectives and implementation guidance |
| Monitoring | Cl. 9.1 Monitoring, measurement, analysis and evaluation |
| Improvement | Cl. 10.1 Nonconformity and corrective action |
| Assessment Step | NIST AI RMF Function |
|---|---|
| Risk Identification | MAP 5.1 - Likelihood and severity of potential impacts identified |
| Risk Analysis | MAP 5.2 - Practices and personnel for regular engagement with AI actors and integrating feedback about impacts |
| Risk Measurement | MEASURE 1.1 - Approaches for measurement of AI risks documented |
| Testing & Validation | MEASURE 2.1 through 2.13 - Testing AI systems against performance criteria |
| Risk Treatment | MANAGE 2.1 - Strategies to maximize benefits and minimize negative impacts |
| Monitoring | MEASURE 2.6 - AI system evaluated for safety risks, demonstrated safe, can fail safely |
| Assessment Step | EU AI Act Article |
|---|---|
| Risk Management System | Art. 9(1) - Establishment of risk management system for high-risk AI |
| Risk Identification | Art. 9(2)(a) - Identification and analysis of known and foreseeable risks |
| Risk Estimation | Art. 9(2)(b) - Estimation and evaluation of risks from intended use and misuse |
| Residual Risk Evaluation | Art. 9(4) - Residual risks judged acceptable after mitigation |
| Testing | Art. 9(5-7) - Testing to ensure appropriate measures and performance consistency |
| Prohibited Practices | Art. 5 - Risk screening against prohibited AI practices (mandatory gate) |
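For automation, the same mappings can live in a small lookup structure that register tooling uses to tag each risk with its framework references. This sketch carries a slice of the tables above; the keys are illustrative and the clause strings are abbreviations, not complete citations:

```python
# Assessment step -> framework clause references, drawn from the tables above.
FRAMEWORK_MAP = {
    "identify": {"ISO 23894": "Cl. 6.4.2", "NIST AI RMF": "MAP 5.1",
                 "EU AI Act": "Art. 9(2)(a)"},
    "analyze":  {"ISO 23894": "Cl. 6.4.3", "NIST AI RMF": "MAP 5.2",
                 "EU AI Act": "Art. 9(2)(b)"},
    "evaluate": {"ISO 23894": "Cl. 6.4.4", "EU AI Act": "Art. 9(4)"},
    "treat":    {"ISO 23894": "Cl. 6.5", "ISO 42001": "Cl. 8.3",
                 "NIST AI RMF": "MANAGE 2.1"},
    "monitor":  {"ISO 23894": "Cl. 6.6-6.7", "ISO 42001": "Cl. 9.1",
                 "NIST AI RMF": "MEASURE 2.6"},
}

print(FRAMEWORK_MAP["treat"]["ISO 42001"])   # Cl. 8.3
```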
Risk Treatment Options
Five treatment strategies, each appropriate for different risk profiles and organizational contexts. Most risks require a combination.
Mitigate
Reduce likelihood or impact through controls, testing, monitoring, or design changes. The most common treatment for medium and high risks.
Transfer
Shift risk via insurance, contractual indemnification, or outsourcing to a party better positioned to manage it.
Avoid
Halt or redesign the AI system to eliminate the risk entirely. Appropriate when risk exceeds organizational appetite.
Accept
Document the residual risk and proceed. Requires formal sign-off from the appropriate authority level based on risk score.
Disengage
Turn off systems whose performance is inconsistent or whose risks cannot be reduced to acceptable levels. The last resort.
Risk Assessment Tools
Apply this methodology with the right tools. Start with the Decision Tree to classify risk, then build your register.
Risk Tier Decision Tree
7-question interactive flow to classify any AI system from Critical to Low risk. Outputs the EU AI Act risk tier and required governance actions per tier.
Download Decision Tree →
AI Risk Register Template
Pre-built register with all 11 fields, auto-calculated risk scores, conditional formatting by tier, and lifecycle stage tracking columns.
View Risk Register Guide →
Regulatory Mapping Cheat Sheet
40 fields mapped across 4 frameworks (ISO 42001, NIST AI RMF, EU AI Act, ISO 23894). Know exactly which clause applies to each governance activity.
Download Cheat Sheet →
Built from primary sources, not opinions. Every methodology in this guide traces directly to ISO, NIST, and EU regulatory text.
AI risk assessment is not optional. Whether you are building or buying AI, a structured risk process is the foundation that every other governance control depends on. Start with the matrix. Build the register. Treat what matters.