
EU AI Act Risk Classification: 4-Tier System, Obligations & Compliance Guide

The EU AI Act sorts every AI system into one of four risk tiers. Each tier carries specific obligations, timelines, and penalties. This guide breaks down exactly where your systems fall and what you need to do about it.

Derrick D. Jackson, CISSP, CRISC, CCSP · April 2026 · ~18 min read
4 Risk Tiers · 8 Banned Practices · 9 High-Risk Articles · €35M Max Fine

The EU AI Act (Regulation 2024/1689) is the first binding horizontal regulation on artificial intelligence. Its core mechanism is a risk-based classification system that assigns obligations proportionate to the potential harm an AI system can cause.

This is not optional. If your AI system is placed on the EU market, used in the EU, or produces outputs used within the EU, you fall within scope. The regulation entered into force on August 1, 2024, with phased enforcement: prohibited practices from February 2, 2025, GPAI obligations from August 2, 2025, most remaining provisions (including Annex III high-risk systems) from August 2, 2026, and high-risk AI embedded in Annex I products from August 2, 2027.

Understanding where each of your AI systems sits within this classification is the first step toward compliance. Get it wrong, and you face fines up to 35 million EUR or 7% of global annual turnover for prohibited practice violations.

The 4 Risk Tiers

Every AI system falls into one of these categories. Each tier is detailed below, with examples and what the regulation demands.

Unacceptable Risk

BANNED

AI practices that pose a clear threat to fundamental rights. Prohibited outright under Article 5, with enforcement starting February 2, 2025.

8 Prohibited Practices (Art. 5)

  • Social scoring that evaluates or classifies people based on social behavior or personal characteristics, leading to detrimental treatment unrelated to the context in which the data was collected or disproportionate to the behavior
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions for serious crime)
  • Subliminal manipulation techniques operating beyond a person's consciousness to materially distort behavior and cause significant harm
  • Exploitation of vulnerabilities of specific groups due to age, disability, or social/economic situation to distort behavior causing significant harm
  • Untargeted scraping of facial images from the internet or CCTV to build or expand facial recognition databases
  • Emotion recognition in the workplace and educational institutions (except for medical or safety reasons)
  • Biometric categorization using sensitive attributes (race, political opinions, trade union membership, religious beliefs, sexual orientation) to infer those characteristics
  • Predictive policing based solely on profiling or personality traits to assess the risk of a person committing a criminal offense
Penalty: Up to 35M EUR or 7% of global annual turnover

High Risk

REGULATED

AI systems with significant potential impact on health, safety, or fundamental rights. Classified under Article 6 and Annexes I and III, with full compliance requirements under Articles 8-15.

High-Risk Domains (Annex III)

  • Biometric identification and categorization of natural persons
  • Critical infrastructure management and operation (energy, transport, water, digital)
  • Education and training for determining access or assessing students
  • Employment and workers management including recruitment, screening, promotion, termination
  • Access to essential services including creditworthiness, insurance pricing, emergency dispatch
  • Law enforcement including risk assessment of individuals, polygraphs, evidence evaluation
  • Migration and border control including visa application assessment, security risk screening
  • Administration of justice including sentencing recommendations, case outcome prediction

Also High-Risk (Annex I)

  • Product safety legislation where the AI is a safety component of a product already covered by EU harmonization legislation (medical devices, machinery, vehicles, aviation, railway, marine equipment, toys, lifts)
Penalty: Up to 15M EUR or 3% of global annual turnover

Limited Risk

TRANSPARENCY

AI systems that interact with people or generate content. Transparency obligations only: users must know they are interacting with AI or viewing AI-generated content.

Examples & Requirements

  • Chatbots and virtual assistants that interact with natural persons must disclose they are AI
  • Deepfakes (AI-generated or manipulated images, audio, video) must be labeled as artificially generated
  • Emotion recognition systems (where not banned) must inform the person being analyzed
  • Biometric categorization systems (where not banned) must provide adequate transparency
  • AI-generated text published to inform the public on matters of public interest must be labeled as AI-generated
Obligation: Disclosure and labeling only

Minimal Risk

NO OBLIGATIONS

The vast majority of AI systems fall here. No specific regulatory requirements, though voluntary codes of conduct are encouraged.

Examples

  • Spam filters for email services
  • AI-enabled video games (NPC behavior, difficulty scaling)
  • Inventory management systems using AI for demand forecasting
  • AI-assisted spell checkers and grammar tools
  • Content recommendation for entertainment platforms (non-manipulative)
  • Manufacturing process optimization without safety-critical functions
Obligation: None (voluntary codes of conduct encouraged)

How to Determine Your Risk Tier

Answer the following questions in order to narrow down which tier applies to your AI system. This is a simplified guide; final classification should involve your legal and compliance teams. A code sketch of the same flow follows the four outcomes.

1. Does your AI system perform any of the prohibited practices listed in Article 5?
Social scoring, real-time biometric ID in public spaces, subliminal manipulation, exploitation of vulnerable groups, untargeted facial scraping, workplace emotion recognition, biometric categorization using sensitive attributes, or predictive policing based solely on profiling.

If yes: Unacceptable Risk - PROHIBITED

This AI practice is banned under Article 5. It cannot be placed on the EU market or used within the EU. Violations carry fines up to 35M EUR or 7% of global annual turnover. Consult legal counsel immediately.

2. Does your AI system operate in one of the Annex III high-risk domains, or is it a safety component of a product covered by Annex I?

If yes: High Risk - Full Compliance Required

Your system must meet all requirements under Articles 9-15 and 17: risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, robustness, and security. Conformity assessment required before market placement. Compliance deadline: August 2, 2026 for Annex III systems, and August 2, 2027 for high-risk AI embedded in Annex I products.

3. Does your AI system interact directly with people, generate or manipulate content, or perform permitted emotion recognition or biometric categorization?

If yes: Limited Risk - Transparency Obligations

Your system must clearly disclose its AI nature to users. For generated or manipulated content, label it as AI-generated. The specific disclosure format depends on the interaction type. Lighter than high-risk, but still mandatory.

If none of the above applies:

Minimal Risk - No Specific Obligations

No mandatory regulatory requirements apply under the EU AI Act. The Commission encourages voluntary codes of conduct. Consider adopting good practices from the NIST AI RMF or ISO 42001 anyway. Governance is smart business even when not legally required.
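For teams that want to encode this triage in an intake tool, the four-question flow above reduces to a short function. A minimal sketch, assuming a hypothetical IntakeProfile whose boolean flags are set during an internal review; the attribute names are ours, not the regulation's:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # Art. 5 prohibited practice
    HIGH = "high"                   # Annex I / Annex III
    LIMITED = "limited"             # transparency obligations only
    MINIMAL = "minimal"             # no mandatory obligations

@dataclass
class IntakeProfile:
    """Answers gathered during an internal intake review (illustrative)."""
    performs_prohibited_practice: bool   # Q1: any Art. 5 practice?
    in_annex_iii_domain: bool            # Q2a: Annex III high-risk domain?
    annex_i_safety_component: bool       # Q2b: safety component of an Annex I product?
    interacts_or_generates: bool         # Q3: user interaction / generated content?

def classify(p: IntakeProfile) -> RiskTier:
    # Mirrors the four-question flow above, in order of severity.
    if p.performs_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if p.in_annex_iii_domain or p.annex_i_safety_component:
        return RiskTier.HIGH
    if p.interacts_or_generates:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a customer-support chatbot with no Annex III role.
assert classify(IntakeProfile(False, False, False, True)) is RiskTier.LIMITED
```

The point of the sketch is the ordering: prohibition is checked before high-risk, and high-risk before transparency, exactly as in the questionnaire.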

Free Download
Risk Tier Decision Tree
A more detailed 7-question interactive tool that maps your AI system to the correct EU AI Act risk tier, with per-tier obligation summaries and next steps.
Download the Full Decision Tree →

High-Risk Obligations Breakdown

If your AI system is classified as high-risk, these are the specific requirements you must meet. Each maps to a numbered Article in the regulation.

Risk Management System (Art. 9)

Establish, implement, document, and maintain a continuous risk management system throughout the AI system's entire lifecycle. (A minimal risk-register sketch follows the list below.)

  • Identify and analyze known and reasonably foreseeable risks
  • Estimate and evaluate risks that may emerge when the system is used as intended and under conditions of reasonably foreseeable misuse
  • Adopt suitable risk management measures, including design choices and testing
  • Ensure residual risk is acceptable, with appropriate mitigation and communication
  • Test the system to identify the most appropriate risk management measures
Maps to: NIST AI RMF MAP 1-5 · ISO/IEC 42001 Cl. 6.1
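Article 9 describes a process, but in practice it starts with a living risk register. A minimal sketch of such a register; the fields and the 1-5 scoring scale are illustrative internal conventions, not anything the Act prescribes:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str                   # known or reasonably foreseeable risk
    severity: int                      # 1 (negligible) .. 5 (critical), internal scale
    likelihood: int                    # 1 (rare) .. 5 (frequent), internal scale
    mitigations: list[str] = field(default_factory=list)
    residual_acceptable: bool = False  # sign-off after mitigation and testing

@dataclass
class RiskRegister:
    """Art. 9 asks for a continuous process, so the register is reviewed
    and re-dated throughout the lifecycle, not written once at release."""
    system_name: str
    risks: list[Risk] = field(default_factory=list)
    last_review: date | None = None

    def open_items(self) -> list[Risk]:
        # Residual risk must be judged acceptable before (and while)
        # the system is on the market.
        return [r for r in self.risks if not r.residual_acceptable]
```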

Data and Data Governance (Art. 10)

Training, validation, and testing data sets must meet quality criteria appropriate to the intended purpose of the system. (A small representation-check sketch follows the list.)

  • Implement data governance and management practices covering design choices, data collection, preparation, and labeling
  • Ensure training data is relevant, sufficiently representative, and free of errors to the extent possible
  • Account for the specific geographical, contextual, behavioral, or functional setting of the system
  • Take measures to detect, prevent, and mitigate possible biases in data sets
  • Where special categories of personal data are processed, ensure appropriate safeguards
Maps to: NIST AI RMF MEASURE 2.1-2.6 · ISO/IEC 42001 Cl. 8.4
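One concrete way to start on the bias-detection point is a representation audit of the training data. A hedged sketch: the 10-point flagging threshold and the dict-based record format are our illustrative choices, since Article 10 asks for measures to detect possible biases rather than any specific statistic:

```python
from collections import Counter

def representation_report(records, group_key, reference_shares):
    """Compare group shares in training data against reference shares.

    records: iterable of dicts (one per training example)
    group_key: attribute to audit, e.g. "region"
    reference_shares: group -> expected share (0..1)
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed": round(observed, 3),
            "expected": expected,
            # Illustrative threshold: flag a 10-point absolute drift.
            "flag": abs(observed - expected) > 0.10,
        }
    return report
```

For example, representation_report(rows, "region", {"EU": 0.6, "non-EU": 0.4}) flags any group whose observed share drifts more than ten points from expectation.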

Technical Documentation (Art. 11)

Maintain technical documentation that demonstrates compliance and provides national authorities with the information needed for assessment. (An illustrative manifest skeleton follows the list.)

  • General description of the AI system (intended purpose, developer, version)
  • Detailed description of elements, development process, and system architecture
  • Monitoring, functioning, and control of the system, including human oversight measures
  • Description of the risk management system applied
  • Documentation must be kept up-to-date throughout the system's lifecycle
Maps to: ISO/IEC 42001 Cl. 7.5
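Keeping documentation up-to-date throughout the lifecycle is easier when it lives in version control next to the model artifacts. An illustrative skeleton of such a manifest; the keys paraphrase the bullets above and are not an official Annex IV schema:

```python
# Illustrative manifest, versioned alongside the model so that the
# "kept up-to-date" requirement becomes a diff, not a scramble.
TECHNICAL_DOCUMENTATION = {
    "general_description": {
        "intended_purpose": "...",   # fill per system
        "provider": "...",
        "version": "...",
    },
    "development": {
        "architecture": "...",
        "development_process": "...",
    },
    "operation": {
        "human_oversight_measures": "...",
        "monitoring_and_control": "...",
    },
    "risk_management": {
        "register_ref": "risk_register.json",  # hypothetical file
        "last_review": "...",
    },
}
```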

Record-Keeping and Logging (Art. 12)

High-risk AI systems must include automatic logging capabilities to ensure traceability throughout the system's lifecycle. (A structured-logging sketch follows the list.)

  • Record events relevant to identifying situations that may result in risk, including events relating to the functioning period
  • Logging must enable monitoring of the system's operation and post-market monitoring
  • Logs must be accessible to deployers for compliance with their own obligations
  • Retention period must be appropriate to the intended purpose and applicable law
Maps to: NIST AI RMF MANAGE 4.1
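Article 12 fixes the goal (traceability) rather than a log schema. A minimal structured-logging sketch, with illustrative field names, emitting one JSON object per relevant event so the same stream can serve operations, deployers, and post-market monitoring:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("ai_system.audit")

def log_event(event_type: str, system_id: str, details: dict) -> None:
    """One JSON object per relevant event: machine-parseable for
    post-market monitoring and shareable with deployers, who have
    obligations of their own. Field names are illustrative."""
    logger.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. "inference", "override", "fault"
        "system_id": system_id,
        "details": details,
    }))

# Example: record a human override so the functioning period stays traceable.
log_event("override", "credit-scorer-v3",
          {"user": "analyst-17", "reason": "suspected data drift"})
```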

Transparency and Provision of Information to Deployers (Art. 13)

Design and develop the system to ensure its operation is sufficiently transparent for deployers to interpret and use the output appropriately.

  • Instructions for use in an appropriate digital or non-digital format
  • Identity and contact details of the provider
  • Characteristics, capabilities, and limitations of performance (accuracy, robustness, cybersecurity)
  • Intended purpose and any known or foreseeable circumstances of misuse
  • Human oversight measures, including technical measures to facilitate interpretation
Maps to: NIST AI RMF MAP 1.1 · ISO/IEC 42001 Annex A.6

Human Oversight (Art. 14)

High-risk AI systems must be designed to allow effective human oversight during the period the system is in use. (An oversight-gate sketch follows the list.)

  • Humans must be able to fully understand the capacities and limitations of the system
  • Humans must be able to correctly interpret the system's output
  • Humans must be able to decide not to use, override, or reverse the output
  • Humans must be able to intervene or interrupt the system through a "stop" button or similar
  • For biometric identification: at least two qualified humans must verify results before action
Maps to: NIST AI RMF GOVERN 3.2 · EU AI Act Art. 14
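These requirements translate naturally into a gate in front of any consequential action. A sketch under our own simplifying assumptions (a list of reviewer name/approval pairs); the two-reviewer rule for biometric identification mirrors the last bullet above, everything else is illustrative:

```python
def release_output(output, reviews, *, biometric_id: bool = False):
    """Gate a consequential output behind human confirmation.

    reviews: list of (reviewer_name, approved) pairs from qualified humans.
    For biometric identification, at least two reviewers must confirm
    before action is taken; one suffices otherwise.
    """
    required = 2 if biometric_id else 1
    approvers = [name for name, approved in reviews if approved]
    if len(approvers) < required:
        return None  # humans may decide not to use, or to withhold, the output
    return {"output": output, "approved_by": approvers}

# Example: a biometric match needs two confirmations before any action.
assert release_output("match:subject-42", [("officer-a", True)],
                      biometric_id=True) is None
```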

Accuracy, Robustness, and Cybersecurity (Art. 15)

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity, and perform consistently throughout their lifecycle. (A simple perturbation probe follows the list.)

  • Accuracy levels must be declared in instructions for use and measurable
  • Resilience against errors, faults, or inconsistencies within the system or its operating environment
  • Technical redundancy including backup or fail-safe plans
  • Protection against unauthorized third-party attempts to exploit vulnerabilities
  • Resistance to adversarial attacks, data poisoning, model manipulation, and input perturbation
Maps to: NIST AI RMF MEASURE 2.6 · ISO/IEC 42001 Cl. 8.4
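Robustness testing ultimately needs a full adversarial test plan, but even a crude input-perturbation probe yields a first stability number. A sketch, assuming a predict callable over numeric feature vectors; the noise level and trial count are arbitrary illustrative defaults:

```python
import random

def perturbation_stability(predict, inputs, noise=0.01, trials=20):
    """Fraction of inputs whose predicted label survives small random
    perturbations. predict: list[float] -> label. A real Art. 15 plan
    adds adversarial attacks, data-poisoning checks, and fail-safe tests;
    this only sketches the input-perturbation item."""
    unstable = 0
    for x in inputs:
        base = predict(x)
        for _ in range(trials):
            noisy = [v + random.uniform(-noise, noise) for v in x]
            if predict(noisy) != base:
                unstable += 1
                break
    return 1 - unstable / len(inputs)

# Example with a toy threshold classifier.
toy = lambda x: int(sum(x) > 1.0)
print(perturbation_stability(toy, [[0.2, 0.3], [0.9, 0.2]]))
```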

Quality Management System (Art. 17)

Providers of high-risk AI systems must establish a quality management system that ensures compliance with the regulation in a systematic and documented manner.

  • Strategy for regulatory compliance, including conformity assessment procedures
  • Techniques, procedures, and systematic actions for design, control, and verification
  • Techniques for development, quality control, and quality assurance
  • Examination, test, and validation procedures before, during, and after development
  • Systems and procedures for data management, including collection, analysis, labeling, storage, filtering, and aggregation
  • Risk management procedures, including post-market monitoring
Maps to: ISO/IEC 42001 Cl. 4-10

Conformity Assessment and CE Marking (Arts. 43-49)

Before placing a high-risk AI system on the market or putting it into service, the provider must ensure it undergoes the relevant conformity assessment procedure.

  • For Annex III systems (standalone high-risk): internal control procedure (Annex VI) is sufficient in most cases
  • For biometric identification systems: third-party conformity assessment by a Notified Body is required
  • For Annex I systems (product safety): follows the conformity assessment procedure of the relevant product legislation
  • CE marking must be affixed after successful conformity assessment
  • EU declaration of conformity must be drawn up and kept for 10 years
Maps to: EU AI Act Arts. 43-49

Conformity Assessment (Art. 43)

Before any high-risk AI system can be placed on the EU market or put into service, it must pass a conformity assessment. The type of assessment depends on how your system is classified; a small routing sketch follows the three paths.

Path A: Internal Control (Annex VI)

Self-assessment by the provider. Applies to most Annex III high-risk systems. The provider verifies and documents compliance internally.

  • No external audit required
  • Provider draws up EU Declaration of Conformity
  • CE marking affixed by provider

Path B: Notified Body (Annex VII)

Third-party assessment required for remote biometric identification systems. An independent Notified Body evaluates the system and QMS.

  • Assessment of QMS and technical documentation
  • Notified Body issues certificate of conformity
  • Periodic surveillance audits

Path C: Product Legislation (Annex I)

For AI embedded in products already regulated by EU safety legislation. Follows the existing product conformity assessment, with AI Act requirements added.

  • Integrated into existing CE marking process
  • AI Act requirements treated as additional criteria
  • Single conformity assessment covers both product and AI
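For triage purposes, the three paths reduce to a small routing function. A sketch: the precedence below (Annex I product rules first, then biometric identification, then internal control) follows the descriptions above, but real routing has edge cases, so treat the output as input to counsel, not a determination:

```python
def conformity_path(annex_i_product: bool, annex_iii: bool,
                    remote_biometric_id: bool) -> str:
    """Map classification flags to the assessment paths above (triage aid)."""
    if annex_i_product:
        # Path C: folded into the existing product conformity assessment.
        return "Path C: existing product legislation (Annex I)"
    if remote_biometric_id:
        # Path B: third-party assessment by a Notified Body.
        return "Path B: Notified Body assessment (Annex VII)"
    if annex_iii:
        # Path A: internal control / self-assessment.
        return "Path A: internal control (Annex VI)"
    return "Not high-risk: no conformity assessment required"
```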

Documentation retention: All conformity assessment documentation, EU Declarations of Conformity, and technical documentation must be kept for at least 10 years after the AI system has been placed on the market or put into service.

Fundamental Rights Impact Assessment (FRIA, Art. 27)

Beyond conformity assessment, certain deployers of high-risk AI systems must conduct a Fundamental Rights Impact Assessment before putting the system into use.

Who Must Conduct a FRIA?

  • Bodies governed by public law (government agencies, public authorities)
  • Private entities providing public services (utilities, healthcare, education)
  • Deployers using high-risk AI for creditworthiness assessment or risk pricing
  • Deployers using high-risk AI for life and health insurance risk assessment

What Must the FRIA Cover?

  • Description of the deployer's processes using the AI system
  • Time period and frequency of intended use
  • Categories of natural persons and groups likely affected
  • Specific risks of harm to identified groups
  • Human oversight measures in place
  • Measures to act on risks identified, including governance arrangements

The FRIA must be performed before first use and updated when circumstances change materially. Results must be notified to the relevant market surveillance authority. Where a Data Protection Impact Assessment (DPIA) is already required under GDPR Article 35, the FRIA can be conducted alongside it.
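The coverage list maps cleanly onto a structured record that can be versioned and re-issued when circumstances change. A minimal sketch; the field names paraphrase the bullets above rather than any official template:

```python
from dataclasses import dataclass, field

@dataclass
class FRIARecord:
    """Illustrative Art. 27 record; not an official template."""
    deployer_processes: str                 # how the AI system is used
    period_and_frequency: str               # intended time period / frequency of use
    affected_groups: list[str]              # categories of persons likely affected
    risks_of_harm: list[str]                # specific risks to those groups
    oversight_measures: list[str]           # human oversight in place
    mitigation_and_governance: list[str]    # actions if risks materialise
    authority_notified: str = ""            # market surveillance authority
    revisions: list[str] = field(default_factory=list)  # updates on material change
```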

Incident Reporting (Art. 73)

Providers of high-risk AI systems must report serious incidents to market surveillance authorities. The timelines are strict and non-negotiable.

2 Calendar Days: Critical Infrastructure Disruption

Report within 2 days of becoming aware. Covers widespread infringements or serious and irreversible disruption to the management or operation of critical infrastructure.

10 Calendar Days: Death or Serious Harm to Health

Report within 10 days of becoming aware. Covers incidents causing death or serious damage to a person's health.

15 Calendar Days: Other Serious Incidents

Report within 15 days of becoming aware. Covers serious breaches of fundamental rights obligations, serious damage to property or the environment, and other serious incidents not resulting in death. A small deadline-calculator sketch follows.
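Because the windows are calendar days from awareness, the deadline arithmetic is trivial to automate inside an incident-response workflow. A sketch; the category keys are our own labels for the three Art. 73 classes above:

```python
from datetime import date, timedelta

# Art. 73 reporting windows in calendar days, keyed by incident category.
REPORTING_WINDOWS = {
    "critical_infrastructure_disruption": 2,
    "death_or_serious_harm_to_health": 10,
    "other_serious_incident": 15,
}

def report_deadline(category: str, became_aware: date) -> date:
    """Latest date by which the provider must notify the market
    surveillance authority of the Member State concerned."""
    return became_aware + timedelta(days=REPORTING_WINDOWS[category])

# Example: a provider learning of a fatality on 2027-03-01 must report
# by 2027-03-11.
assert report_deadline("death_or_serious_harm_to_health",
                       date(2027, 3, 1)) == date(2027, 3, 11)
```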

Who Must Report?

Providers of high-risk AI systems placed on the EU market. If the deployer identifies the incident first, they must notify the provider who then reports to authorities.

Report to: Market surveillance authority of the Member State where the incident occurred

Required Documentation

  • Identification of the AI system and provider
  • Description of the incident, severity, and circumstances
  • Corrective actions taken or planned
  • Initial assessment of causal relationship

CSA alignment: The Cloud Security Alliance (CSA) 5-step AI incident response framework (Detect, Analyze, Contain, Remediate, Review) maps well to these EU AI Act obligations. Organizations that adopt the CSA model can use it as their operational IR process, with the Art. 73 reporting integrated as a mandatory step during the Contain/Remediate phases.

Proportionate Response per Tier

The EU AI Act is designed so governance effort matches risk. Here is how governance intensity scales across the four tiers.

  • Unacceptable: Full prohibition. No compliance path. Remove from service or do not deploy.
  • High Risk: Risk management + data governance + documentation + logging + transparency + human oversight + accuracy/robustness + QMS + conformity assessment.
  • Limited: Transparency and disclosure obligations only. Inform users of AI interaction or label generated content.
  • Minimal: No mandatory obligations. Voluntary codes of conduct encouraged.
Governance Activity | Unacceptable | High Risk | Limited | Minimal
Risk Management System | N/A (Banned) | Mandatory | - | -
Data Governance | N/A | Mandatory | - | -
Technical Documentation | N/A | Mandatory | - | -
Logging | N/A | Mandatory | - | -
Transparency | N/A | Mandatory | Mandatory | Voluntary
Human Oversight | N/A | Mandatory | - | -
Accuracy/Robustness | N/A | Mandatory | - | -
Conformity Assessment | N/A | Mandatory | - | -
Incident Reporting | N/A | Mandatory | - | -
FRIA | N/A | Conditional | - | -