

AI Risk Management Hub

Identify, score, and treat AI risk. Mapped to ISO 23894, NIST AI RMF, and EU AI Act. Built from 130+ authoritative sources.

Derrick D. Jackson | CISSP, CRISC, CCSP | Updated April 2026

4 EU AI Act Risk Tiers
72 NIST RMF Subcategories
5x5 Risk Matrix
130+ Source Documents

What Is AI Risk Management?

AI risk management is the continuous process of identifying, scoring, treating, and monitoring risks that AI systems create for your organization, your customers, and society. It answers: how dangerous is each AI system, what controls match that danger level, and how do you prove it to auditors, regulators, and your board.

Most organizations rate everything "medium risk" and call it governance.

Only 35% of organizations have a formal AI governance framework (Source: Consilien). The rest rely on ad-hoc reviews, inherited IT risk frameworks that don't account for AI-specific failures like hallucinations, bias, or drift, and vendor promises that no one validates. The gap between "we assessed the risk" and "we can prove it to a regulator" is where fines, liability, and trust failures live.

📋 Checkbox Compliance -- what most organizations actually do:
  • "Rate everything medium risk" -- no scoring methodology or evidence
  • "Annual risk review" -- no continuous monitoring or drift detection triggers
  • "IT owns risk" -- no cross-functional accountability or RACI
  • "One-size-fits-all controls" -- same oversight for a chatbot and a credit model
The result: checkbox compliance that fails the first audit.

TJS Risk Management -- built from ISO 23894, NIST AI RMF, and the EU AI Act.
The result: a defensible risk posture from day one.

Your AI Risk Assessment Roadmap

Five steps from "we don't know what AI we're running" to "defensible risk posture."

1. Inventory AI Systems
2. Classify by Risk Tier
3. Score with 5x5 Matrix
4. Apply Controls
5. Monitor & Reassess

Step 1: Inventory Your AI Systems

Before you can score risk, you need to know what's running. Unsanctioned AI adoption -- Shadow AI -- is your biggest blind spot: 84% of internal audit departments lack an AI audit framework (ECIIA 2024), so most of it goes undetected. Start with your highest-risk departments -- HR, finance, customer-facing operations -- and document what's running, who owns it, and what data it touches.

Our 40-field tracking template covers everything from data access permissions to EU AI Act risk tier classification. Most organizations discover 2-3x more AI usage than they expected during their first inventory.
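An inventory entry can be sketched as a minimal record type. The field names below are illustrative stand-ins, not the actual 40-field tracker schema:

```python
from dataclasses import dataclass

# Illustrative subset of an AI inventory record -- a sketch, not the
# real 40-field tracker. Field names here are hypothetical.
@dataclass
class AISystemRecord:
    name: str
    owner: str                  # accountable business owner, not just IT
    department: str             # e.g. HR, finance, customer operations
    data_categories: list[str]  # what data the system touches
    sanctioned: bool = False    # False until formally reviewed (Shadow AI)

inventory = [
    AISystemRecord("Resume screener", "Head of Talent", "HR",
                   ["applicant PII", "employment history"]),
    AISystemRecord("Support chatbot", "CX Lead", "Customer Ops",
                   ["chat transcripts"], sanctioned=True),
]

# Anything not yet reviewed is Shadow AI by definition.
shadow_ai = [r.name for r in inventory if not r.sanctioned]
```

Even this skeleton surfaces the two questions auditors ask first: who owns the system, and what data does it touch.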

Step 2: Classify by EU AI Act Risk Tier

For each inventoried system, determine its EU AI Act risk classification: Unacceptable (banned), High (Art. 9-15 compliance), Limited (transparency only), or Minimal (no obligations). This classification drives everything downstream -- it determines the intensity of your risk assessment, the documentation required, and whether a conformity assessment is mandatory.

Our Risk Tier Decision Tree walks through 7 questions to classify any AI system. Annex II and III of the EU AI Act list the sectors and use cases that are automatically classified as high-risk.
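The strictest-first ordering of the tiers can be sketched as follows. This is not the actual 7-question decision tree; the three boolean inputs are hypothetical simplifications of the Art. 5 and Annex III checks:

```python
# Simplified EU AI Act tier screening -- a sketch, not the 7-question
# decision tree. Checks run strictest-first so a prohibited practice
# can never be downgraded by a later test.
def classify_tier(prohibited_practice: bool,
                  annex_iii_use_case: bool,
                  interacts_with_humans: bool) -> str:
    if prohibited_practice:     # Art. 5 practice: do not deploy
        return "Unacceptable"
    if annex_iii_use_case:      # high-risk sector: Art. 9-15 apply
        return "High"
    if interacts_with_humans:   # chatbots, deepfakes: transparency only
        return "Limited"
    return "Minimal"            # no specific obligations

classify_tier(False, True, True)  # "High" -- Annex III outranks transparency
```

Note the ordering is the point: a hiring chatbot is High, not Limited, because the Annex III check fires first.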

Step 3: Score with the 5x5 Risk Matrix

Calculate a quantitative Risk Score by multiplying Likelihood (1-5: Rare to Almost Certain) by Impact (1-5: Negligible to Critical/Catastrophic). The resulting score (1-25) maps to four tolerance thresholds: Low (1-6, monitor), Medium (7-12, mitigation plan), High (13-18, senior oversight), Critical (19-25, immediate action or halt).

Impact should be assessed across seven dimensions: financial, operational, reputational, safety, ethical, legal, and fundamental rights. A system processing millions of SSNs scores differently than an internal chatbot answering HR questions.
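The scoring rule and tolerance thresholds above can be expressed directly:

```python
def risk_score(likelihood: int, impact: int) -> tuple[int, str]:
    """Score = Likelihood (1-5) x Impact (1-5), mapped to tolerance bands."""
    score = likelihood * impact
    if score <= 6:
        band = "Low"        # acceptable, monitor
    elif score <= 12:
        band = "Medium"     # mitigation plan required
    elif score <= 18:
        band = "High"       # senior management oversight
    else:
        band = "Critical"   # immediate action or halt
    return score, band

risk_score(4, 5)  # (20, "Critical")
```

Score the impact as the worst case across the seven dimensions, then multiply; the band, not the raw number, drives the required response.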

Step 4: Apply Proportionate Controls

Risk treatment follows four options from the NIST AI RMF: Mitigate (reduce likelihood or impact), Transfer (shift via insurance or indemnification), Avoid (halt or redesign), or Accept (document residual risk). The key insight: a low-risk internal chatbot doesn't need the same tollgates as a high-risk credit decisioning model.

For high-risk systems under the EU AI Act, Art. 9 requires a continuous, iterative risk management system covering the full lifecycle. For minimal-risk systems, monitoring may be sufficient.
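One way to sketch this proportionality is a policy table keyed by risk band. The specific band-to-treatment mapping below is an illustrative assumption, not something mandated by NIST or the EU AI Act:

```python
from enum import Enum

# The four NIST AI RMF treatment options.
class Treatment(Enum):
    MITIGATE = "reduce likelihood or impact"
    TRANSFER = "shift via insurance or indemnification"
    AVOID = "halt or redesign"
    ACCEPT = "document residual risk"

# Hypothetical policy: which treatments are on the table per band.
# Accepting a Critical risk is deliberately not an option here.
ALLOWED = {
    "Low":      {Treatment.ACCEPT, Treatment.MITIGATE},
    "Medium":   {Treatment.MITIGATE, Treatment.TRANSFER, Treatment.ACCEPT},
    "High":     {Treatment.MITIGATE, Treatment.TRANSFER, Treatment.AVOID},
    "Critical": {Treatment.AVOID, Treatment.MITIGATE},
}
```

Encoding the policy as data makes the proportionality auditable: a reviewer can see at a glance that "Accept" disappears above the Medium band.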

Step 5: Monitor and Reassess

Risk assessment is not a one-time exercise. Models drift, regulations evolve, business contexts change. Continuous monitoring includes performance degradation alerts, bias drift detection, incident-triggered reassessment, and scheduled periodic reviews. EU AI Act Art. 72 requires post-market monitoring for high-risk systems. Art. 73 mandates serious incident reporting: 2 days for widespread infringements or serious operational disruption, 10 days for death, 15 days for other serious incidents.

The risk register must be updated iteratively: new risks added post-deployment, retired risks marked when systems are decommissioned, and treatment effectiveness tracked over time.
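The monitoring triggers above can be sketched as a single reassessment predicate. The thresholds (5% accuracy drop, annual review) are illustrative assumptions, not values from any of the cited frameworks:

```python
# Sketch of a reassessment trigger; thresholds are illustrative
# assumptions, not prescribed by NIST, ISO, or the EU AI Act.
def needs_reassessment(accuracy_drop: float,
                       bias_drift_detected: bool,
                       incident_logged: bool,
                       days_since_review: int,
                       review_interval_days: int = 365) -> bool:
    return (accuracy_drop > 0.05          # performance degradation alert
            or bias_drift_detected        # bias drift detection
            or incident_logged            # incident-triggered reassessment
            or days_since_review > review_interval_days)  # periodic review
```

Any one trigger firing is enough -- the point of the `or` chain is that a clean annual review does not excuse ignoring a drift alert in month two.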

EU AI Act Risk Classification

Four tiers determine your compliance obligations.

Unacceptable Risk -- BANNED (Art. 5)
AI systems that pose a clear threat to fundamental rights or safety. Prohibited outright in the EU.
Prohibited Practices
  • Subliminal manipulation techniques that cause harm
  • Exploitation of vulnerabilities of specific groups (age, disability, socio-economic)
  • Social scoring that leads to detrimental treatment
  • Predictive policing based solely on profiling
  • Untargeted scraping of facial images from internet or CCTV
  • Emotion recognition in workplace and educational institutions
  • Biometric categorization using sensitive attributes (race, political opinions, religion, sexual orientation)
  • Real-time remote biometric identification for law enforcement (with narrow exceptions)
Your Obligation
  • Do not deploy, procure, or develop these systems
  • Screen all AI proposals against Art. 5 before any development begins
High Risk -- Art. 9-15 + Conformity Assessment
Systems in critical infrastructure, medical devices, employment, education, finance, law enforcement, and migration. Strictest compliance burden.
Mandatory Requirements
  • Art. 9: Continuous, iterative risk management system covering full lifecycle
  • Art. 10: Data governance -- training data quality, representativeness, bias management
  • Art. 11: Technical documentation to demonstrate compliance
  • Art. 12: Automatic logging for traceability
  • Art. 13: Transparency -- clear instructions on capabilities and limitations
  • Art. 14: Human oversight -- intervention and override capability
  • Art. 15: Accuracy, robustness, and cybersecurity throughout lifecycle
  • Art. 17: Quality Management System (QMS)
  • Art. 43: Mandatory conformity assessment before market placement
  • Art. 72: Post-market monitoring system
  • Art. 73: Serious incident reporting (2 days widespread infringement / 10 days death / 15 days other)
Limited Risk -- Transparency Only
Chatbots, deepfake generators, and systems interacting with humans. Transparency obligations apply.
Requirements
  • Disclose to users they are interacting with an AI system
  • Label AI-generated content (deepfakes, synthetic media)
  • No conformity assessment required
Minimal Risk -- No Obligations
Most AI systems -- spam filters, video game AI, recommendation engines. No specific legal obligations under the EU AI Act.
Guidance
  • No mandatory compliance requirements
  • Voluntary codes of conduct encouraged
  • Still recommended: basic risk documentation and monitoring per ISO 42001 best practice

The 5x5 AI Risk Matrix

Score = Likelihood x Impact. Each cell shows the score and its risk level.

Likelihood \ Impact  | Negligible (1) | Minor (2) | Moderate (3) | Major (4)   | Critical (5)
Almost Certain (5)   | 5 Low          | 10 Medium | 15 High      | 20 Critical | 25 Critical
Likely (4)           | 4 Low          | 8 Medium  | 12 Medium    | 16 High     | 20 Critical
Possible (3)         | 3 Low          | 6 Low     | 9 Medium     | 12 Medium   | 15 High
Unlikely (2)         | 2 Low          | 4 Low     | 6 Low        | 8 Medium    | 10 Medium
Rare (1)             | 1 Low          | 2 Low     | 3 Low        | 4 Low       | 5 Low

1-6: Low Risk -- Acceptable. Monitor ongoing.
7-12: Medium Risk -- Mitigation plan. Management review.
13-18: High Risk -- Senior oversight. Enhanced controls.
19-25: Critical Risk -- Immediate action. Executive decision.

Source: NIST AI RMF MAP 5.1-5.2, ISO/IEC 23894:2023 Cl. 6.5-6.6
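The full matrix can be regenerated from the tolerance thresholds, which keeps the table and the bands from silently disagreeing:

```python
LIKELIHOOD = {5: "Almost Certain", 4: "Likely", 3: "Possible",
              2: "Unlikely", 1: "Rare"}
IMPACT = {1: "Negligible", 2: "Minor", 3: "Moderate",
          4: "Major", 5: "Critical"}

def band(score: int) -> str:
    """Map a 1-25 score to its tolerance band."""
    return ("Low" if score <= 6 else
            "Medium" if score <= 12 else
            "High" if score <= 18 else "Critical")

# All 25 cells: (likelihood, impact) -> (score, band).
matrix = {(lk, im): (lk * im, band(lk * im))
          for lk in LIKELIHOOD for im in IMPACT}

matrix[(3, 4)]  # (12, "Medium") -- Possible x Major
```

Deriving the cells from the thresholds, rather than hand-maintaining 25 entries, is the same discipline the risk register needs: one source of truth.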

AI Harm Taxonomy

Five categories of potential harm. Every risk assessment should evaluate impact across all five.

👤

Harm to Individuals

  • Civil liberties and rights violations
  • Physical or psychological safety threats
  • Economic opportunity loss (hiring, credit)
  • Privacy violations and data exposure
NIST AI 100-1 Fig. 1
👥

Harm to Groups

  • Discrimination against population sub-groups
  • Disparate impact across protected classes
  • Community trust erosion
NIST AI 100-1 Fig. 1, EU AI Act Recital 47
🏛

Harm to Society

  • Democratic participation undermined
  • Educational access affected
  • Information ecosystem corrupted
  • Public trust in institutions eroded
NIST AI 100-1 Fig. 1
🏢

Harm to Organization

  • Security breaches and data loss
  • Monetary loss and regulatory fines
  • Reputational damage
  • Business operations disrupted
NIST AI 100-1 Fig. 1, ISO 42001 Cl. 6
🌍

Harm to Ecosystem

  • Global financial system instability
  • Supply chain cascade failures
  • Environmental resource depletion (compute)
  • Cross-border regulatory spillover
NIST AI 100-1 Fig. 1, ISO 42001 Cl. 8.4

NIST AI RMF Risk Functions

72 subcategories across 4 functions. The risk-specific requirements for each function are summarized below.

GOVERN: Risk Oversight & Accountability

GOV 1.3: Processes determine the needed level of risk management activities based on the organization's risk tolerance
GOV 1.5: Ongoing monitoring and periodic review of the risk management process are planned with defined frequency
GOV 2.3: Executive leadership takes responsibility for decisions about risks associated with AI system development and deployment
GOV 4.1: Organizational culture fosters critical thinking and a safety-first mindset across AI development and deployment
GOV 5.1: Organizational policies and practices collect, consider, prioritize, and integrate feedback from external stakeholders regarding AI risks
GOV 6.1: Policies and procedures address AI risks from third-party entities including vendors, partners, and open-source dependencies

MAP: Risk Identification & Context

MAP 1.1: Intended purpose, context of use, and potential impact of the AI system are documented
MAP 1.5: Organizational risk tolerances are determined and mapped to specific AI system contexts
MAP 2.1: Specific tasks, and the methods used to implement the tasks that the AI system will support, are defined
MAP 2.3: Scientific integrity and TEVV considerations are identified and documented, including data quality and representativeness
MAP 3.1: Potential benefits and costs of AI systems are compared, including opportunity costs of non-deployment
MAP 5.1: Likelihood and magnitude of each identified impact, based on expected use, past incidents, and external feedback, are identified and documented
MAP 5.2: Practices and personnel for regular engagement with relevant AI actors and integrating feedback about positive, negative, and unanticipated impacts are documented

MEASURE: Risk Analysis & Quantification

MEAS 1.1: Appropriate methods for measuring AI risk are identified and applied (quantitative and qualitative)
MEAS 2.1-2.13: Trustworthiness characteristics are measured across 13 subcategories: validity, reliability, safety, security, resilience, accountability, transparency, explainability, interpretability, privacy, fairness (bias managed), and robustness
MEAS 2.6: Adversarial testing is conducted: bias audit, robustness testing, security assessment, and fail-safe validation
MEAS 3.1: Approaches for tracking identified risks over time are established with defined metrics and triggers
MEAS 4.1: Measurement results are communicated to relevant stakeholders in understandable formats

MANAGE: Risk Treatment & Response

MNG 1.3: Responses to identified AI risks include mitigating, transferring, avoiding, or accepting, with plans documented and implemented
MNG 2.1: Response options for identified risks are documented with cost-benefit analysis
MNG 2.2: Mechanisms to sustain the value and safety of deployed AI systems are maintained throughout the lifecycle
MNG 3.1: AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented
MNG 4.1: Post-deployment monitoring plans are implemented, including user input capture, appeal and override mechanisms, incident response, and change management

Framework Risk Crosswalk

How ISO 23894, ISO 42001, NIST AI RMF, and EU AI Act map to each risk management activity.

Risk Activity            | ISO 23894 | ISO 42001  | NIST AI RMF   | EU AI Act
Scope, Context, Criteria | Cl. 6.3   | Cl. 6.1.1  | GOVERN 1.3    | Art. 9(2)(a)
Risk Identification      | Cl. 6.4.2 | Cl. 6.1.2  | MAP 2.1-2.3   | Art. 9(2)(b)
Risk Analysis            | Cl. 6.4.3 | Cl. 6.1.2  | MEAS 1.1-2.13 | Art. 9(2)(c)
Risk Evaluation          | Cl. 6.4.4 | Cl. 6.1.2  | MEAS 3.1      | Art. 9(2)(d)
Risk Treatment           | Cl. 6.5   | Cl. 8.4    | MNG 1.3-2.2   | Art. 9(4)
Risk Monitoring          | Cl. 6.6   | Cl. 9.1    | MNG 3.1-4.1   | Art. 9(3), 72
Recording & Reporting    | Cl. 6.7   | Cl. 7.4    | MEAS 4.1      | Art. 13
Third-Party Risk         | --        | Annex A.10 | GOV 6.1-6.2   | Art. 25-27
Incident Reporting       | --        | Cl. 10.2   | MNG 3.1       | Art. 73

Source: NIST AI RMF to ISO/IEC 42001 Crosswalk + ISO/IEC 23894:2023 + EU AI Act Official Journal

AI Risk Management Toolkit

Practical tools derived from 130+ primary sources. Score, document, and report AI risk.

  • Risk Tier Decision Tree (free) -- 7-question flow to classify any AI system by EU AI Act risk tier
  • 40-Field AI Use Case Tracker (free) -- fillable template covering all governance and risk fields per AI system
  • Regulatory Mapping Cheat Sheet (free) -- 40 fields mapped to NIST, ISO 42001, EU AI Act, and GDPR
  • Board AI Governance Summary (free) -- 9-section quarterly report with risk KPIs, charts, and action items
  • Quick-Start Governance Checklist (free) -- 3-tier checklist: 15/27/40 fields scaled by risk level
  • Charter Implementation Checklist (free) -- 55 items across 5 phases + 90-day operationalization
  • All-in-One Bundle (free) -- every template and checklist in one download