
AI Risk Assessment Guide: 5x5 Matrix Methodology for AI Systems

A structured approach to identifying, analyzing, evaluating, and treating AI risks using a 5x5 likelihood-impact matrix aligned to ISO 23894, NIST AI RMF, and EU AI Act Art. 9

Derrick D. Jackson | CISSP, CRISC, CCSP · April 2026 · ~18 min read

5x5 Risk Matrix · 5 Process Steps · 7 Impact Dimensions · 4 Frameworks

What Is AI Risk Assessment?

AI risk assessment is the structured process of identifying what can go wrong with an AI system, determining how likely each failure is and how severe the harm would be, and then deciding what to do about it. Unlike traditional IT risk assessments, AI-specific assessments must account for emergent behavior, training data bias, opacity of model decisions, and the potential for harm to individuals, groups, and society at large.

Every AI system your organization builds or buys needs a risk assessment. Not a checkbox exercise, but a genuine evaluation of who uses the system, what data it accesses, what it integrates into, what permissions it holds, and what happens when it fails. The AI governance frameworks from ISO, NIST, and the EU all mandate this process, and for good reason: you cannot govern what you have not assessed.

This guide walks through the complete methodology, from stakeholder roles through the 5x5 matrix to building a living risk register. Every step maps back to ISO 42001, NIST AI RMF, ISO 23894, and EU AI Act Art. 9.

Who Should Be Involved

AI risk assessment is not a solo exercise. It requires cross-functional input mapped to clear RACI roles.

🎯 C-Suite (A - Accountable)

Accountable for risk appetite and strategic direction. Sets the threshold for acceptable risk, allocates resources, and ensures AI risk management aligns with enterprise risk strategy. Signs off on high and critical risk decisions.

📋 Compliance Officers (R - Responsible)

Responsible for regulatory mapping and policy enforcement. Maps each AI system to applicable regulations (EU AI Act, GDPR, sector-specific rules), validates that risk assessments meet compliance requirements, and flags gaps.

🔧 IT & Data Leaders (C - Consulted)

Consulted on technical implementation and monitoring. Provide insight into data pipelines, model architecture, integration points, and operational constraints. Own the technical controls that mitigate identified risks.

The 5-Step Risk Assessment Process

A repeatable process that runs at ideation, before deployment, and on a recurring cycle post-launch.

1. Identify → 2. Analyze → 3. Evaluate → 4. Treat → 5. Monitor

Step 1: Identify Risks

Catalog every risk associated with the AI system. Go beyond technical failures to include ethical, legal, operational, and reputational risks. Interview stakeholders, review incident databases, and map the system's data flows and decision points.

  • Document the AI system's purpose, scope, and intended users
  • Map all data inputs, processing logic, and output destinations
  • Identify affected populations and potential for disparate impact
  • Review similar system failures and near-misses from industry databases
  • Catalog integration points, permissions, and access controls
ISO 23894 Cl. 6.3-6.4 NIST MAP 5.1
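The identification checklist above can be captured as a structured intake record so nothing is skipped. A minimal sketch (the keys and sample values are illustrative assumptions, not a prescribed schema):

```python
# Illustrative Step 1 intake record mirroring the identification checklist.
# Keys and example values are assumptions; adapt them to your own template.
system_profile = {
    "purpose": "Automated first-line customer support",
    "intended_users": ["support agents", "end customers"],
    "data_inputs": ["chat transcripts", "CRM customer records"],
    "output_destinations": ["agent console", "ticketing system"],
    "affected_populations": ["customers, incl. non-English speakers"],
    "integrations": ["CRM API (read)", "ticketing API (write)"],
    "permissions": ["read: customer PII", "write: tickets"],
    "known_failure_modes": ["hallucinated policy answers", "biased tone"],
}

# Seed the risk catalog directly from the profile, e.g. one disparate-impact
# risk candidate per affected population.
identified_risks = [
    f"Disparate impact risk for: {pop}"
    for pop in system_profile["affected_populations"]
]
print(identified_risks)
```

Writing the profile down in this form also makes the later register fields (scope, integrations, permissions) traceable back to Step 1.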

Step 2: Analyze Risks

For each identified risk, determine the likelihood of occurrence and the potential impact. Use the 5x5 matrix scales defined in Section 4. Consider both the inherent risk (before controls) and the residual risk (after controls).

  • Score likelihood on 1-5 scale using historical data and expert judgment
  • Score impact across all 7 dimensions (financial, operational, reputational, safety, ethical, legal, fundamental rights)
  • Document the rationale for each score to ensure auditability
  • Distinguish between inherent risk and residual risk after existing controls
  • Consider cascading effects where one failure triggers others
ISO 23894 Cl. 6.4.3 NIST MEASURE 1.1
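The scoring rule in Step 2 can be sketched in a few lines. This is a minimal illustration, not a reference implementation: it assumes likelihood is a 1-5 integer and takes the impact rating as the highest score across the 7 dimensions, as described later in the impact assessment section.

```python
# Sketch of Step 2 scoring: total score = likelihood x impact, where impact
# is the maximum across all 7 dimensions (prevents under-counting risks that
# are cheap financially but severe ethically).
IMPACT_DIMENSIONS = (
    "financial", "operational", "reputational",
    "safety", "ethical", "legal", "fundamental_rights",
)

def score_risk(likelihood: int, dimension_scores: dict[str, int]) -> int:
    """Return likelihood (1-5) x the highest impact score across dimensions."""
    if not 1 <= likelihood <= 5:
        raise ValueError("likelihood must be 1-5")
    missing = set(IMPACT_DIMENSIONS) - dimension_scores.keys()
    if missing:
        raise ValueError(f"score every dimension; missing: {sorted(missing)}")
    impact = max(dimension_scores.values())
    if not 1 <= impact <= 5:
        raise ValueError("impact scores must be 1-5")
    return likelihood * impact

# Example: a chatbot bias risk with low financial impact but major ethical impact.
scores = dict.fromkeys(IMPACT_DIMENSIONS, 1)
scores.update(ethical=4, reputational=3)
print(score_risk(3, scores))  # 3 x 4 = 12
```

Requiring every dimension to be scored (rather than defaulting absent ones to zero) keeps the audit trail honest: a missing score is an unfinished analysis, not a low risk.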

Step 3: Evaluate Risks

Compare the calculated risk scores against your organization's tolerance thresholds. This step determines which risks need treatment, which require monitoring, and which can be accepted. Prioritize by score and by the criticality of the affected system.

  • Apply the tolerance thresholds: Low (1-6), Medium (7-12), High (13-18), Critical (19-25)
  • Rank risks to prioritize treatment efforts and resource allocation
  • Flag any risk touching prohibited AI practices under EU AI Act Art. 5
  • Present evaluation results to the appropriate decision authority
  • Document the risk appetite statement and escalation triggers
EU AI Act Art. 9 ISO 42001 Cl. 6.1.2

Step 4: Treat Risks

Select and implement the appropriate treatment for each risk based on its evaluation. Not every risk needs the same response. The 5 treatment options (mitigate, transfer, avoid, accept, disengage) give you a proportionate toolkit.

  • Select treatment option(s) per risk from the 5 categories
  • Define specific controls, owners, timelines, and success criteria
  • Ensure proportionality: treatment cost should not exceed risk exposure
  • Map selected treatments to framework requirements (NIST MANAGE, ISO 42001 Cl. 8)
  • Update the risk register with treatment plans and target residual risk scores
NIST MANAGE 1.3 ISO 23894 Cl. 6.5

Step 5: Monitor & Review

Risk assessment is not a one-time event. Establish continuous monitoring for all assessed risks, with a review cadence proportionate to risk level: critical risks get real-time monitoring, while low risks are reviewed semi-annually. Every risk register entry has a review date.

  • Set monitoring frequency by risk tier: Critical (continuous), High (monthly), Medium (quarterly), Low (semi-annually)
  • Define key risk indicators (KRIs) and automated alerting thresholds
  • Conduct full reassessment on any material change (model update, data source change, regulatory shift)
  • Feed incident data and near-miss reports back into the identification step
  • Report risk status to the AI governance committee at each meeting
NIST MEASURE 2.6 ISO 23894 Cl. 6.6-6.7
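The cadence rule in Step 5 can be turned into a simple next-review calculation. A hedged sketch: the tier names and frequencies come from the bullet list above, while the exact day counts and the zero-interval convention for continuous monitoring are illustrative assumptions.

```python
# Sketch: derive the next review date from the risk tier, per the Step 5
# cadence (Critical: continuous, High: monthly, Medium: quarterly,
# Low: semi-annually). Day counts are an assumed approximation.
from datetime import date, timedelta

REVIEW_INTERVAL_DAYS = {
    "Critical": 0,    # continuous monitoring; review on every alert
    "High": 30,       # monthly
    "Medium": 90,     # quarterly
    "Low": 182,       # semi-annually
}

def next_review(tier: str, last_review: date) -> date:
    """Return the next scheduled reassessment date for a risk tier."""
    days = REVIEW_INTERVAL_DAYS[tier]
    return last_review if days == 0 else last_review + timedelta(days=days)

print(next_review("High", date(2026, 6, 15)))  # 2026-07-15
```

In practice the same function would also be triggered early by any material change (model update, data source change, regulatory shift), per the reassessment bullet above.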

5x5 Risk Matrix Deep Dive

Score = Likelihood (1-5) x Impact (1-5), yielding values from 1 to 25. Each cell maps to a tolerance threshold that determines the required governance response.

Likelihood ↓ \ Impact →   | 1 - Negligible | 2 - Minor | 3 - Moderate | 4 - Major | 5 - Critical
5 - Almost Certain (>90%) |       5        |    10     |     15       |    20     |     25
4 - Likely (61-90%)       |       4        |     8     |     12       |    16     |     20
3 - Possible (31-60%)     |       3        |     6     |      9       |    12     |     15
2 - Unlikely (10-30%)     |       2        |     4     |      6       |     8     |     10
1 - Rare (<10%)           |       1        |     2     |      3       |     4     |      5

Bands: Low (1-6) · Medium (7-12) · High (13-18) · Critical (19-25)
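The matrix and its tolerance bands reduce to a small function. A minimal sketch of the scoring logic exactly as stated above (nothing here is framework-mandated beyond the 5x5 scales and band boundaries):

```python
# Sketch of the 5x5 matrix: score = likelihood x impact, then map the
# score to its tolerance band using the thresholds above.
def tier(likelihood: int, impact: int) -> tuple[int, str]:
    """Return (score, band) for a likelihood/impact pair, each on a 1-5 scale."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    score = likelihood * impact
    if score <= 6:
        band = "Low"
    elif score <= 12:
        band = "Medium"
    elif score <= 18:
        band = "High"
    else:
        band = "Critical"
    return score, band

print(tier(4, 4))  # (16, 'High')
print(tier(5, 5))  # (25, 'Critical')
```

Note that some values in the 13-18 band (13, 14, 17) can never arise as products of two 1-5 integers; the bands are defined over the full 1-25 range so the same thresholds work if you ever refine the scales.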

Impact Assessment: 7 Dimensions

Impact is not a single number. Each AI risk must be evaluated across all 7 dimensions, with the highest score used as the impact rating. This prevents under-counting risks that score low on financial impact but high on ethical or fundamental rights impact.

💰 Financial · ⚙️ Operational · 📣 Reputational · 🛡️ Safety · ⚖️ Ethical · 📜 Legal · 🌐 Fundamental Rights

Risk Tolerance Thresholds

Each score band triggers a different governance response. Higher scores demand faster action, more senior oversight, and stronger controls.

Low (1-6)

Acceptable risk. Monitor through standard review cycles. Team-level oversight. No additional controls required beyond baseline governance.

Medium (7-12)

Mitigation plan required. Management review and sign-off. Quarterly reassessment. Document treatment strategy and target residual score.

High (13-18)

Senior oversight required. Enhanced controls and monthly monitoring. Formal risk treatment plan with named owner. Escalation to governance committee.

Critical (19-25)

Immediate action required. Executive decision authority. Possible halt of the AI system. Real-time monitoring until risk is reduced below threshold.


Risk Register Structure

The risk register is the single source of truth for every AI risk your organization has identified, scored, and treated. It must be initiated at ideation, updated during development, finalized during validation, and continuously updated post-deployment.

Field | Description | Example
Risk ID | Unique identifier for tracking and audit trail | RISK-2026-042
AI System Name | Name of the AI system this risk applies to | Customer Support Chatbot v3
Category | Risk domain (technical, ethical, legal, operational, safety) | Ethical / Bias
Description | Plain-language explanation of the risk scenario | Model may produce biased responses for non-English speakers
Impact Score (1-5) | Highest score across 7 impact dimensions | 4 (Major) - Ethical dimension
Likelihood Score (1-5) | Probability of occurrence based on scale definitions | 3 (Possible, 31-60%)
Total Score | Impact x Likelihood | 12 (Medium)
Mitigation Plan | Selected treatment and specific control actions | Mitigate: Add multilingual test suite, bias audit quarterly
Owner | Named individual accountable for this risk | J. Martinez, ML Engineering Lead
Status | Current state of the risk and treatment | In Treatment - Controls 60% implemented
Review Date | Next scheduled reassessment | 2026-07-15 (Quarterly - Medium tier)
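A register entry maps naturally onto a typed record, with the total score derived rather than hand-entered so it can never drift from the component scores. A sketch (the field names are assumptions mirroring the table, not a mandated schema):

```python
# Illustrative risk register entry; field names mirror the table above but
# are assumptions, not a standard schema. Total score is computed, not stored.
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    risk_id: str
    ai_system: str
    category: str
    description: str
    impact: int          # 1-5, highest score across the 7 impact dimensions
    likelihood: int      # 1-5, per the scale definitions
    mitigation_plan: str
    owner: str
    status: str
    review_date: str     # ISO date of next scheduled reassessment

    @property
    def total_score(self) -> int:
        return self.impact * self.likelihood

entry = RiskRegisterEntry(
    risk_id="RISK-2026-042",
    ai_system="Customer Support Chatbot v3",
    category="Ethical / Bias",
    description="Model may produce biased responses for non-English speakers",
    impact=4,
    likelihood=3,
    mitigation_plan="Mitigate: add multilingual test suite, quarterly bias audit",
    owner="J. Martinez, ML Engineering Lead",
    status="In Treatment - Controls 60% implemented",
    review_date="2026-07-15",
)
print(entry.total_score)  # 12
```

Deriving the score also makes stage-gate checks trivial: a gate can simply refuse any entry whose computed score falls in a band that lacks an active treatment plan.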
💡 Lifecycle Integration

The register connects directly to your AI lifecycle framework. Stage gates should verify that all risks for the system are documented, scored, and have active treatment plans before a system advances to the next stage.

Regulatory & Standards Mapping

Every step in this methodology maps to specific clauses and functions in the major AI governance frameworks.

Assessment Step | ISO 23894 Clause
Risk Identification | Cl. 6.4.2 Risk identification - systematic identification of AI-related risks
Risk Analysis | Cl. 6.4.3 Risk analysis - likelihood and consequence estimation
Risk Evaluation | Cl. 6.4.4 Risk evaluation - comparison against criteria and prioritization
Risk Treatment | Cl. 6.5 Risk treatment - selection and implementation of options
Monitoring & Review | Cl. 6.6-6.7 Monitoring, review, recording, and reporting
Context Establishment | Cl. 6.1 Scope, context and criteria for AI risk management

Assessment Step | ISO 42001 Clause
Risk Planning | Cl. 6.1.2 AI risk assessment - planning actions to address risks
Risk Assessment Execution | Cl. 8.2 AI risk assessment - operational execution
Risk Treatment | Cl. 8.3 AI risk treatment - implementing selected options
Control Selection | Annex A / Annex B - control objectives and implementation guidance
Monitoring | Cl. 9.1 Monitoring, measurement, analysis and evaluation
Improvement | Cl. 10.1 Nonconformity and corrective action

Assessment Step | NIST AI RMF Function
Risk Identification | MAP 5.1 - Likelihood and severity of potential impacts identified
Risk Analysis | MAP 5.2 - Practices and personnel for regular engagement with AI actors and integrating feedback about impacts
Risk Measurement | MEASURE 1.1 - Approaches for measurement of AI risks documented
Testing & Validation | MEASURE 2.1 through 2.13 - Testing AI systems against performance criteria
Risk Treatment | MANAGE 2.1 - Strategies to maximize benefits and minimize negative impacts
Monitoring | MEASURE 2.6 - AI system evaluated for safety risks, demonstrated safe, can fail safely

Assessment Step | EU AI Act Article
Risk Management System | Art. 9(1) - Establishment of risk management system for high-risk AI
Risk Identification | Art. 9(2)(a) - Identification and analysis of known and foreseeable risks
Risk Estimation | Art. 9(2)(b) - Estimation and evaluation of risks from intended use and misuse
Residual Risk Evaluation | Art. 9(4) - Residual risks judged acceptable after mitigation
Testing | Art. 9(5-7) - Testing to ensure appropriate measures and performance consistency
Prohibited Practices | Art. 5 - Risk screening against prohibited AI practices (mandatory gate)

Risk Treatment Options

Five treatment strategies, each appropriate for different risk profiles and organizational contexts. Most risks require a combination.

🛡️ Mitigate

Reduce likelihood or impact through controls, testing, monitoring, or design changes. The most common treatment for medium and high risks.

📤 Transfer

Shift risk via insurance, contractual indemnification, or outsourcing to a party better positioned to manage it.

🚫 Avoid

Halt or redesign the AI system to eliminate the risk entirely. Appropriate when risk exceeds organizational appetite.

Accept

Document the residual risk and proceed. Requires formal sign-off from the appropriate authority level based on risk score.

⚠️ Disengage

Turn off systems with inconsistent performance or risks that cannot be reduced to acceptable levels. The last resort.

Treatment Selection by Risk Tier
  • Low (1-6): Accept or monitor. No active treatment required.
  • Medium (7-12): Mitigate or transfer. Documented plan with quarterly review.
  • High (13-18): Mitigate with enhanced controls. Consider avoidance or transfer. Monthly review.
  • Critical (19-25): Avoid or disengage unless mitigation can reduce to High or below. Executive authority required.
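These defaults can be encoded as a lookup table so every register entry starts from a consistent baseline. A sketch of a starting point, not a rule engine; the final treatment choice still belongs to the accountable owner:

```python
# Sketch: default treatment options per tier, transcribed from the
# "Treatment Selection by Risk Tier" guidance above.
TREATMENTS_BY_TIER = {
    "Low": ("accept",),                         # monitor via standard cycles
    "Medium": ("mitigate", "transfer"),         # documented plan, quarterly review
    "High": ("mitigate", "transfer", "avoid"),  # enhanced controls, monthly review
    "Critical": ("avoid", "disengage"),         # unless mitigation lowers the tier
}

print(TREATMENTS_BY_TIER["Medium"])  # ('mitigate', 'transfer')
```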

Risk Assessment Tools

Apply this methodology with the right tools. Start with the Decision Tree to classify risk, then build your register.

Free Download

Risk Tier Decision Tree

7-question interactive flow to classify any AI system from Critical to Low risk. Outputs the EU AI Act risk tier and required governance actions per tier.

Download Decision Tree →
Coming Soon

AI Risk Register Template

Pre-built register with all 11 fields, auto-calculated risk scores, conditional formatting by tier, and lifecycle stage tracking columns.

View Risk Register Guide →
Free Download

Regulatory Mapping Cheat Sheet

40 fields mapped across 4 frameworks (ISO 42001, NIST AI RMF, EU AI Act, ISO 23894). Know exactly which clause applies to each governance activity.

Download Cheat Sheet →
All-in-One Bundle
Download All Governance Tools Free
Every community template and checklist in one download. One email, everything you need.
Get the Bundle →