This EU AI Act guide explains Regulation (EU) 2024/1689—the EU Artificial Intelligence Act—its purpose, scope, and who must comply.
Verify in Official Journal (EU AI Act)
Navigating the EU AI Act doesn’t have to be overwhelming. Whether you’re a CTO planning your AI strategy (see our complete AI governance framework), a compliance officer ensuring regulatory alignment, or simply curious about how Europe’s groundbreaking AI regulation affects your organization, this comprehensive guide breaks down everything you need to know. From understanding the Act’s four-tier risk framework to identifying your specific obligations and deadlines, we’ll walk you through the practical steps to achieve compliance while continuing to innovate. You’ll discover how the regulation impacts different roles within your organization, learn which AI systems are prohibited versus those requiring special safeguards, and gain actionable insights to build a compliant AI governance framework that protects both your business and the fundamental rights of EU citizens.
The Act lays down harmonized rules for placing AI systems on the EU market (Art. 1).
Verify Subject Matter: Article 1
The Act establishes different risk categories through Articles 5 (prohibited practices), 6 (high-risk classification), and 50 (transparency obligations), which are commonly interpreted as a four-tier framework.
The EU AI Act prohibits the following categories of AI systems:
Behavioral Manipulation: AI systems that manipulate human behavior to circumvent users’ free will, such as voice-assisted toys that encourage dangerous behavior in minors.
Social Scoring: systems that allow social scoring by governments or companies.
Predictive Policing: certain applications of predictive policing.
Biometric Systems: real-time remote biometric identification in publicly accessible spaces for law enforcement purposes, subject to narrow exceptions (Article 5(1)(h)).
Discrimination and Exploitation: AI systems that discriminate against individuals or exploit their vulnerabilities.
Penalties and Timeline
Article 99: Penalties
Violations in this category carry the highest fines under the Act: up to €35 million or 7% of global annual turnover, whichever is higher.
These prohibitions apply six months after the AI Act’s entry into force, that is, from February 2, 2025.
Article 6: Classification Rules for High-Risk AI Systems
High-risk AI systems are subject to stringent requirements because they pose significant risks to health and safety or fundamental rights.
Classification Criteria
The AI Act identifies high-risk systems through two main categories:
1. AI as Safety Components (Annex I): AI systems intended to be used as a safety component of a product, or that are themselves a product, covered by the Union harmonisation legislation listed in Annex I (for example, machinery, toys, lifts, medical devices, and in vitro diagnostic devices).
This classification applies when the product incorporating the AI system requires third-party conformity assessment under that legislation.
2. Specific Use Cases (Annex III): AI systems used in sensitive areas such as biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration, asylum and border control, and the administration of justice and democratic processes.
AI systems under Annex III are always considered high-risk if they perform profiling of individuals.
Requirements for Providers
Providers of high-risk AI systems must, among other obligations, establish a risk management system, apply data governance, prepare technical documentation, enable automatic logging, provide instructions for use and human oversight, achieve appropriate accuracy, robustness and cybersecurity, operate a quality management system, and complete conformity assessment and EU database registration (Articles 9–17, 43 and 49).
The Act also sets out obligations for deployers of high-risk AI systems (Art. 26).
Article 26: Obligations of Deployers of High-Risk AI Systems
For help with risk management, see our AI Governance & Risk Management solutions.
Penalties and Timeline
Fines for non-compliance: up to €15 million or 3% of global annual turnover, whichever is higher.
Implementation timeline: obligations for Annex III high-risk systems apply from August 2, 2026; high-risk systems covered by Annex I product legislation have until August 2, 2027.
Limited Risk AI Systems Under the EU AI Act
Limited risk AI systems are subject to lighter transparency obligations, primarily requiring developers and deployers to ensure that end-users are aware they are interacting with AI.
Examples of Limited Risk Systems: chatbots, AI-generated or manipulated content such as deepfakes, and emotion recognition or biometric categorisation systems.
Key Requirement
The primary obligation for limited risk systems is transparency: users must know when they’re interacting with AI rather than human-generated content or human operators.
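To make this concrete, here is a minimal, hedged sketch of one way a deployer might surface that disclosure in a chatbot and tag generated content with a machine-readable flag. The Act requires that users are informed and that synthetic content is marked in a machine-readable way, but it does not prescribe this format; the function and metadata keys below are hypothetical.

```python
# Illustrative only: the EU AI Act requires that people are told they are
# interacting with AI and that synthetic content carries machine-readable
# marking, but it does not mandate this particular format or schema.
from dataclasses import dataclass, field

AI_DISCLOSURE = "You are chatting with an AI assistant, not a human operator."

@dataclass
class ChatbotReply:
    text: str
    metadata: dict = field(default_factory=dict)

def with_ai_disclosure(reply_text: str) -> ChatbotReply:
    """Wrap a generated reply with a user-facing notice and a machine-readable flag."""
    return ChatbotReply(
        text=f"{AI_DISCLOSURE}\n\n{reply_text}",
        metadata={"ai_generated": True, "generator": "example-chatbot"},  # hypothetical keys
    )

if __name__ == "__main__":
    print(with_ai_disclosure("Your order has shipped.").text)
```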
Minimal Risk AI Systems Under the EU AI Act
Most AI systems fall into this category and are unregulated by the Act, presenting minimal or no risk to citizens’ rights or safety.
Examples of Minimal Risk Systems: spam filters, AI-enabled video games, and routine tools such as inventory management or recommendation systems.
Companies may voluntarily commit to additional codes of conduct for these AI systems.
General-Purpose AI (GPAI) Models Under the EU AI Act
The AI Act introduces dedicated rules for GPAI models, which are capable of performing a wide range of distinct tasks and are typically trained on large amounts of data.
Dedicated rules for GPAI models are set out in Chapter V (Articles 51–56).
Article 54: Authorized Representatives of Providers of General-Purpose AI Models
Requirements for All GPAI Model Providers
Free and Open-Source GPAI Models: subject to lighter obligations – providers need only comply with Union copyright law and publish a summary of training content, unless the model presents a systemic risk.
GPAI Models with Systemic Risk
These models face additional binding obligations due to their high-impact capabilities (cumulative training compute exceeding 10^25 FLOPs or determined by Commission decision based on Annex XIII criteria).
Additional requirements: model evaluations including adversarial testing, assessment and mitigation of systemic risks at Union level, tracking and reporting of serious incidents, and adequate cybersecurity protection for the model and its infrastructure.
Timeline: obligations for GPAI models apply 12 months after entry into force (August 2, 2025).
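To illustrate the 10^25 FLOP presumption mentioned above, the sketch below estimates cumulative training compute with the common 6 × parameters × training-tokens heuristic. That heuristic is not part of the Act (the regulatory trigger is simply cumulative training compute above 10^25 FLOPs, or a Commission designation); treat it as a rough screening aid.

```python
# Rough, illustrative estimate of cumulative training compute for a dense
# transformer using the common 6 * parameters * training_tokens approximation.
# The approximation is an assumption, not part of the regulation.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    return 6.0 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    return estimate_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

if __name__ == "__main__":
    # Hypothetical model: 70e9 parameters, 15e12 tokens -> ~6.3e24 FLOPs (below threshold)
    print(estimate_training_flops(70e9, 15e12))
    print(presumed_systemic_risk(70e9, 15e12))
```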
Expand each role below to learn more about how the EU AI Act shapes its responsibilities.
Core Concerns Under the EU AI Act: As the strategic lead for AI implementation, you're accountable for ensuring your organization's entire AI portfolio complies with the Act's risk-based framework. Your primary concerns include establishing governance structures that align AI development with organizational risk tolerance and ethical principles. You must oversee compliance across all risk categories, from prohibited systems to GPAI models with systemic risk.
Key responsibilities include implementing continuous risk management systems for high-risk AI, ensuring technical documentation and conformity assessments are completed, and maintaining the central EU database registrations. You're also responsible for AI education strategies and preventing shadow AI usage. For GPAI models, you must ensure compliance with copyright law, training data transparency, and additional obligations for systemic risk models (including adversarial testing and cybersecurity measures). Your role requires balancing innovation with regulatory compliance while fostering an appropriate organizational culture around responsible AI use.
Get guidance via our article: Operationalizing our 8-stage AI Governance Framework
Core Concerns Under the EU AI Act: Your role centers on ensuring AI systems meet their intended purpose while complying with the Act's requirements. You must validate that AI use cases align with business objectives without crossing into prohibited categories. For high-risk systems, you're accountable for approval decisions, ensuring models meet performance and fairness criteria before deployment. Transparency is crucial—you must ensure users know when they're interacting with AI (chatbots, deepfakes, emotion recognition systems). You participate in decision tollgates throughout the AI lifecycle, documenting system purposes, uses, and risks.
Key concerns include defining clear technical specifications, ensuring systems don't manipulate behavior or discriminate, and verifying compliance with sector-specific regulations. For systems using GPAI models, you must understand their capabilities and limitations through provider documentation. Your decisions directly impact whether AI systems require conformity assessments, database registration, or specific transparency measures based on their risk classification.
Core Concerns Under the EU AI Act: You're at the technical forefront of ensuring AI systems meet the Act's stringent requirements for accuracy, robustness, and cybersecurity. For high-risk systems, you must implement comprehensive data governance for training, validation, and testing datasets, ensuring data quality and representativeness while preventing errors. Your responsibilities include identifying and mitigating biases, implementing secure coding practices, and maintaining detailed technical documentation. For GPAI models with systemic risk, you must conduct adversarial testing and model evaluations.
Key concerns include version control, dependency tracking, and automated integrity checks. You're responsible for implementing human oversight capabilities and ensuring systems achieve appropriate performance levels. The Act requires you to document the entire development process, including hyperparameter tuning and model training procedures. You must also verify user instructions through testing and apply explainable AI techniques where needed, particularly for high-risk applications in employment, law enforcement, or essential services.
Core Concerns Under the EU AI Act: Your role is fundamental in navigating the Act's complex legal landscape and ensuring organizational compliance across all AI systems. You must advise on risk classifications, determining whether systems fall into prohibited, high-risk, limited-risk, or minimal-risk categories. Key concerns include protecting fundamental rights, ensuring GDPR alignment, and managing contractual obligations with AI vendors. For high-risk systems, you oversee conformity assessments, CE marking requirements, and database registration obligations. You must ensure GPAI models comply with Union copyright law and training data transparency requirements.
Critical responsibilities include developing AI compliance frameworks, managing liability issues, and ensuring proper documentation for potential audits. You handle serious incident reporting obligations and coordinate with national authorities. For international deployments, you navigate cross-border compliance. Your expertise is vital in interpreting the Act's requirements, especially regarding biometric systems, profiling applications, and sector-specific regulations that trigger additional obligations.
Core Concerns Under the EU AI Act: You're responsible for implementing AI-specific risk management frameworks that align with the Act's risk-based approach. For high-risk systems, you must establish continuous risk management processes throughout the AI lifecycle, systematically identifying, analyzing, and mitigating risks. Your role includes conducting regular audits to verify policy compliance and adherence to the Act's requirements.
Key concerns include assessing whether AI systems pose risks to health, safety, or fundamental rights, and ensuring appropriate mitigation measures. You develop audit frameworks to evaluate AI usage, monitor compliance with internal policies, and assess vendor risks. For GPAI models with systemic risk, you oversee the assessment and mitigation of Union-level risks. You're responsible for establishing mechanisms to track serious incidents and ensure proper reporting. Your work includes evaluating the effectiveness of human oversight measures, accuracy benchmarks, and cybersecurity protections, while ensuring trustworthy AI characteristics are integrated into organizational policies.
Core Concerns Under the EU AI Act: Your role is critical in managing supply chain risks and ensuring third-party AI products meet the Act's requirements. You must conduct thorough due diligence on vendors, verifying their compliance with relevant risk categories and obligations. For high-risk AI systems, you ensure contracts specify necessary information, capabilities, and technical access to enable compliance.
Key concerns include establishing vendor risk management programs covering security, ethical, and compliance factors. You must ensure external GPAI models provide required technical documentation, copyright compliance policies, and training data summaries. Critical responsibilities include developing standardized data usage agreements and ensuring content provenance standards. You verify that vendors of high-risk systems have completed conformity assessments and maintain proper CE marking. For AI components integrated into your products, you must ensure the supply chain maintains compliance throughout. Your work directly impacts whether your organization can demonstrate compliance when using third-party AI technologies.
Core Concerns Under the EU AI Act: You're responsible for the technical infrastructure ensuring AI systems maintain compliance throughout their operational lifecycle. For high-risk systems, you must implement automatic recording of events (logs) and ensure traceability of system functioning. Your role includes hardening MLOps pipelines, implementing automated integrity checks, and integrating security measures into CI/CD processes.
Key concerns include maintaining model accuracy, robustness, and cybersecurity protection as required by the Act. You establish continuous monitoring mechanisms to detect performance degradation or emerging risks. For GPAI models with systemic risk, you ensure adequate cybersecurity for both the model and physical infrastructure. Critical responsibilities include managing change control processes, maintaining Software Bill of Materials (SBOMs), and implementing secure deployment practices. You handle the technical aspects of serious incident detection and reporting. Your work ensures AI systems remain within their approved operational parameters and maintain required performance levels post-deployment.
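As a minimal sketch of the automatic event recording mentioned above (Article 12 requires high-risk systems to log events automatically, but does not prescribe a schema), the example below records one inference event with enough context for traceability. The field names, logger setup, and storage reference are illustrative assumptions.

```python
# Minimal sketch of automatic event recording for a deployed high-risk system.
# The exact fields and log sink are assumptions, not a format required by the Act.
import json, logging, time, uuid

logger = logging.getLogger("ai_audit_log")
logging.basicConfig(level=logging.INFO)

def log_inference_event(model_name: str, model_version: str,
                        input_reference: str, output_summary: str) -> dict:
    """Record one inference event with enough context for later traceability."""
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model": model_name,
        "model_version": model_version,
        "input_reference": input_reference,   # pointer to stored input, not raw personal data
        "output_summary": output_summary,
    }
    logger.info(json.dumps(event))
    return event

log_inference_event("cv-screening-model", "1.4.2", "storage://inputs/123", "score=0.82")
```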
Core Concerns Under the EU AI Act: Your role bridges GDPR and AI Act compliance, ensuring data protection throughout AI system lifecycles. You oversee how personal data is collected, processed, and stored for AI training and operations, ensuring compliance with both regulations. Key concerns include conducting Data Protection Impact Assessments (DPIAs) that complement the Act's Fundamental Rights Impact Assessments. For high-risk AI systems, you ensure data governance meets Article 10 requirements for quality, representativeness, and bias prevention. You must verify that biometric and categorization systems comply with both GDPR and the Act's specific restrictions.
Critical responsibilities include ensuring transparency in data use, implementing data minimization principles, and safeguarding individual rights. For GPAI models, you address copyright compliance and training data transparency from a privacy perspective. You manage consent requirements for AI processing and ensure appropriate legal bases. Your expertise is vital when AI systems process special categories of personal data or involve automated decision-making with legal effects.
Quality Management System (QMS) Requirements Under the EU AI Act
The EU AI Act mandates comprehensive Quality Management System requirements for providers of high-risk AI systems to ensure responsible development, deployment, and management that upholds safety, transparency, accountability, and fundamental rights protection.
The QMS must ensure compliance with the EU AI Act and establish sound quality management practices to mitigate risks and ensure trustworthiness. Documentation must be systematic and orderly, presented as written policies, procedures, and instructions.
All data operations performed before market placement or service deployment must be documented, including data acquisition, collection, analysis, labelling, storage, filtration, aggregation and retention.
Special emphasis is placed on ensuring that training, validation, and testing datasets are relevant, sufficiently representative and, to the best extent possible, free of errors and complete in view of the intended purpose.
During conformity assessment procedures:
Notified bodies conducting QMS assessments must satisfy:
The EU AI Act’s QMS requirements align with broader AI governance frameworks such as ISO/IEC 42001, emphasizing:
QMS requirements apply when the high-risk AI system obligations take effect: August 2, 2026 for Annex III systems, and August 2, 2027 for high-risk systems covered by Annex I product legislation.
Providers should begin QMS implementation well before these deadlines to ensure compliance readiness and allow time for refinement based on operational experience.
EU AI Act – Article Reference Guide (based on Regulation (EU) 2024/1689 of 13 June 2024)
EU Artificial Intelligence Act (Regulation 2024/1689).
Understanding these definitions is crucial for compliance with the AI Act’s requirements for providers, deployers, and other stakeholders.
Last updated: [Date] | Version: Official EU AI Act terminology
Skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.
The Commission’s function of contributing to the implementation, monitoring and supervision of AI systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January 2024; references in this Regulation to the AI Office shall be construed as references to the Commission.
A controlled framework set up by a competent authority which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision.
A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Related: Recital 12
A natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation.
An AI system for the purpose of assigning natural persons to specific categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly necessary for objective technical reasons. Related: Recital 16
Personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, such as facial images or dactyloscopic data. Related: Recital 14
The automated recognition of physical, physiological, behavioural, or psychological human features for the purpose of establishing the identity of a natural person by comparing biometric data of that individual to biometric data of individuals stored in a database. Related: Recital 15
The automated, one-to-one verification, including authentication, of the identity of natural persons by comparing their biometric data to previously provided biometric data. Related: Recital 15
A marking by which a provider indicates that an AI system is in conformity with the requirements set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing.
A set of technical specifications as defined in Article 2, point (4) of Regulation (EU) No 1025/2012, providing means to comply with certain requirements established under this Regulation.
The process of demonstrating whether the requirements set out in Chapter III, Section 2 relating to a high-risk AI system have been fulfilled.
A body that performs third-party conformity assessment activities, including testing, certification and inspection.
Critical infrastructure as defined in Article 2, point (4), of Directive (EU) 2022/2557.
AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful.
A natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity. Related: Recital 13
A natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.
A provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
An AI system for the purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. Related: Recital 18
Any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base. Related: Recital 110
An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market. Related: Recitals 97, 98, and 99
An AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems. Related: Recital 100
A harmonised standard as defined in Article 2(1), point (c), of Regulation (EU) No 1025/2012.
Capabilities that match or exceed the capabilities recorded in the most advanced general-purpose AI models. Related: Recital 110
A natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.
A subject’s freely given, specific, unambiguous and voluntary expression of his or her willingness to participate in a particular testing in real-world conditions, after having been informed of all aspects of the testing that are relevant to the subject’s decision to participate.
Data provided to or directly acquired by an AI system on the basis of which the system produces an output.
The information provided by the provider to inform the deployer of, in particular, an AI system’s intended purpose and proper use.
The use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.
Activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security.
Either:
The supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.
The national authority carrying out the activities and taking the measures pursuant to Regulation (EU) 2019/1020.
A notifying authority or a market surveillance authority; as regards AI systems put into service or used by Union institutions, agencies, offices and bodies, references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor.
Data other than personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679.
A conformity assessment body notified in accordance with this Regulation and other relevant Union harmonisation legislation.
The national authority responsible for setting up and carrying out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their monitoring.
A provider, product manufacturer, deployer, authorised representative, importer or distributor.
The ability of an AI system to achieve its intended purpose.
Personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679.
The first making available of an AI system or a general-purpose AI model on the Union market.
A remote biometric identification system other than a real-time remote biometric identification system. Related: Recital 17
All activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions.
Profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679.
A natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.
Any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential capacity restrictions. Related: Recital 19
The supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.
A remote biometric identification system, whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification, but also limited short delays in order to avoid circumvention. Related: Recital 17
A document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real-world conditions.
The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems.
Any measure aiming to achieve the return to the provider or taking out of service or disabling the use of an AI system made available to deployers.
An AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database. Related: Recital 17
The combination of the probability of an occurrence of harm and the severity of that harm.
A component of a product or of an AI system which fulfils a safety function for that product or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property.
A document agreed between the participating provider and the competent authority describing the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox.
Operational data related to activities of prevention, detection, investigation or prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings.
An incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:
The categories of personal data referred to in Article 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725.
For the purpose of real-world testing, means a natural person who participates in testing in real-world conditions.
A change to an AI system after its placing on the market or putting into service which is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in a modification to the intended purpose for which the AI system has been assessed. Related: Recital 128
A risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain. Related: Recital 110
Data used for providing an independent evaluation of the AI system in order to confirm the expected performance of that system before its placing on the market or putting into service.
The temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable and robust data and to assessing and verifying the conformity of the AI system with the requirements of this Regulation and it does not qualify as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all the conditions laid down in Article 57 or 60 are fulfilled.
Data used for training an AI system through fitting its learnable parameters.
Data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process in order, inter alia, to prevent underfitting or overfitting.
A separate data set or part of the training data set, either as a fixed or variable split.
Any act or omission contrary to Union law protecting the interest of individuals, which:
Any measure aiming to prevent an AI system in the supply chain being made available on the market.
Violation: Using prohibited AI practices (Article 5)
Violations: Breaching obligations applying to providers, importers, distributors, deployers, or transparency requirements
Violation: Supplying incorrect, incomplete, or misleading information to notified bodies or national competent authorities
Authorities consider these factors:
Severity Factors:
Company Factors:
Cooperation Factors:
| Violation | Company Fine | EU Institution Fine |
|---|---|---|
| Banned AI | €35M or 7% | €1.5M |
| Other violations | €15M or 3% | €750K |
| False info | €7.5M or 1% | N/A |
| GPAI violations | €15M or 3% | N/A |

For companies, the applicable maximum is the HIGHER of the fixed amount and the percentage of worldwide annual turnover; for SMEs and start-ups, the lower of the two applies (Article 99(6)).
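The short worked example below applies the "whichever is higher" rule from the table, plus the SME exception noted above. The turnover figures are hypothetical.

```python
# Worked example of the "whichever is higher" rule for administrative fines.
# Caps are the Act's maxima; turnover figures below are hypothetical.
def max_fine(fixed_cap_eur: float, pct_of_turnover: float,
             annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Applicable maximum: higher of the two caps, or the lower for SMEs/start-ups."""
    pct_cap = pct_of_turnover * annual_turnover_eur
    return min(fixed_cap_eur, pct_cap) if is_sme else max(fixed_cap_eur, pct_cap)

# Prohibited-practice violation: EUR 35M or 7% of worldwide annual turnover.
print(max_fine(35_000_000, 0.07, 2_000_000_000))               # 140,000,000 (large company)
print(max_fine(35_000_000, 0.07, 100_000_000, is_sme=True))    # 7,000,000 (SME)
```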
Document Version: 1.0
Based on: EU AI Act (Regulation (EU) 2024/1689)
Last Updated: [Date]
Implementation Timeline & Stage Gates
| Stage | When to Perform | Key Deliverables |
|---|---|---|
| Stage 1: Conception | During initial AI system design | Initial Classification Report |
| Stage 2: Development | Throughout development phase | Risk Management Documentation, Data Governance Plan |
| Stage 3: Pre-Market | Before market placement | Technical Documentation, Conformity Assessment, CE Marking |
| Stage 4: Pre-Deployment | Before operational use | FRIA (if applicable), User Instructions |
| Stage 5: Post-Market | Continuous monitoring | Incident Reports, Monitoring Logs |
Section 1: Initial Classification Assessment
Stage: Conception
Deliverable: Classification Determination Report
1.1 System Identification
| Field | Information Required | Response |
|---|---|---|
| AI System Name | Official designation | _________ |
| Version | Current version number | _________ |
| Provider Name | Legal entity name | _________ |
| Provider Registration | Company registration number | _________ |
| Intended Purpose | Clear description of primary use | _________ |
| Target Users | Who will deploy/use the system | _________ |
1.2 High-Risk Classification Screening
[Reference: Article 6]
Safety Component Check
Does your AI system fall under any of these categories?
Requires third-party conformity assessment? [ ] Yes [ ] No
Annex III Category Check
[Reference: Annex III]
Check all that apply:
1. Biometric Systems
2. Critical Infrastructure
3. Education and Vocational Training
4. Employment and Worker Management
5. Essential Services and Benefits
6. Law Enforcement
7. Migration, Asylum and Border Control
8. Administration of Justice
1.3 High-Risk Derogation Assessment
[Reference: Article 6(3)]
Answer these questions to determine if derogation applies:
Derogation Documentation Required: If claiming derogation, prepare detailed justification document
Classification Result:
Section 2: Risk Management System
Stage: Development
Deliverable: Risk Management Plan & Risk Register
[Reference: Article 9]
2.1 Risk Identification Matrix
| Risk Category | Identified Risks | Probability (1-5) | Impact (1-5) | Risk Score |
|---|---|---|---|---|
| Health Risks | | | | |
| Description: | _____________ | ___ | ___ | ___ |
| Safety Risks | | | | |
| Description: | _____________ | ___ | ___ | ___ |
| Fundamental Rights | | | | |
| Description: | _____________ | ___ | ___ | ___ |
| Reasonably Foreseeable Misuse | | | | |
| Description: | _____________ | ___ | ___ | ___ |
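A minimal scoring helper for the matrix above is sketched below: score = probability × impact on the 1–5 scales used in the table. The acceptance bands are internal assumptions for illustration, not thresholds defined by the EU AI Act.

```python
# Illustrative scoring helper for the risk matrix above: score = probability x impact.
# The 1-5 scales mirror the table; the bands are assumed internal policy, not the Act.
def risk_score(probability: int, impact: int) -> int:
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("probability and impact must be on a 1-5 scale")
    return probability * impact

def risk_band(score: int) -> str:
    if score >= 15:
        return "high - mitigation required before deployment"
    if score >= 8:
        return "medium - mitigation plan and monitoring"
    return "low - document and accept"

score = risk_score(probability=4, impact=5)   # e.g. a fundamental-rights risk
print(score, risk_band(score))                # 20 high - mitigation required before deployment
```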
2.2 Vulnerable Groups Impact Assessment
Persons under 18:
Other vulnerable groups (elderly, persons with disabilities, etc.):
2.3 Risk Mitigation Measures
For each identified risk, complete:
| Risk ID | Mitigation Strategy | Implementation Method | Residual Risk Level | Acceptable? |
|---|---|---|---|---|
| R001 | | | | [ ] Yes [ ] No |
| R002 | | | | [ ] Yes [ ] No |
2.4 Testing Plan
[Reference: Article 9(6-8)]
Pre-defined Metrics:
Testing Schedule:
Real-World Testing Requirements (if applicable):
Section 3: Data Governance Assessment
Stage: Development
Deliverable: Data Governance Documentation
[Reference: Article 10]
3.1 Data Quality Framework
| Data Aspect | Documentation Required | Status |
|---|---|---|
| Collection Process | Source, method, date | [ ] Complete |
| Data Origin | Original purpose, consent basis | [ ] Complete |
| Preparation Operations | Annotation, labeling, cleaning steps | [ ] Complete |
| Key Assumptions | What data measures/represents | [ ] Complete |
| Volume Assessment | Quantity and suitability analysis | [ ] Complete |
3.2 Bias Detection and Mitigation
Bias Assessment Checklist:
Identified Biases:
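As one illustrative way to support this bias checklist, the sketch below computes group-level selection rates and their ratio on a labelled dataset. The column names and data are hypothetical, and the 0.8 ("four-fifths") ratio is a common screening heuristic, not a threshold set by the EU AI Act.

```python
# Minimal sketch of a group-level selection-rate check on a labelled dataset.
# Column names, data, and the 0.8 heuristic are illustrative assumptions.
from collections import defaultdict

def selection_rates(records, group_key="group", outcome_key="selected"):
    counts, positives = defaultdict(int), defaultdict(int)
    for r in records:
        counts[r[group_key]] += 1
        positives[r[group_key]] += int(r[outcome_key])
    return {g: positives[g] / counts[g] for g in counts}

def disparate_impact_ratio(rates: dict) -> float:
    return min(rates.values()) / max(rates.values())

data = [
    {"group": "A", "selected": 1}, {"group": "A", "selected": 1},
    {"group": "A", "selected": 0}, {"group": "B", "selected": 1},
    {"group": "B", "selected": 0}, {"group": "B", "selected": 0},
]
rates = selection_rates(data)
print(rates, disparate_impact_ratio(rates))  # flag for review if the ratio is below ~0.8
```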
3.3 Data Gaps and Shortcomings
| Gap Identified | Impact on Compliance | Remediation Action | Timeline |
|---|---|---|---|
| | | | |
Special Category Data Processing (if applicable):
Section 4: Human Oversight Design
Stage: Development
Deliverable: Human Oversight Specification
[Reference: Article 14]
4.1 Oversight Capabilities Checklist
System Understanding:
Human Control Measures:
4.2 Enhanced Requirements for Biometric Systems
Two-Person Verification (if applicable):
Section 5: Fundamental Rights Impact Assessment (FRIA)
Stage: Pre-Deployment
Deliverable: FRIA Report
[Reference: Article 27]
5.1 FRIA Applicability
Required if you are:
5.2 FRIA Components
| Component | Description | Documented? |
|---|---|---|
| Deployment Process | Where/how AI will be used | [ ] Yes |
| Usage Period | Duration and frequency | [ ] Yes |
| Affected Persons | Categories and estimated numbers | [ ] Yes |
| Specific Risks | To fundamental rights | [ ] Yes |
| Oversight Measures | Human supervision details | [ ] Yes |
| Mitigation Plan | If risks materialize | [ ] Yes |
| Complaints Mechanism | Internal governance | [ ] Yes |
Market Surveillance Authority Notified: [ ] Yes [ ] No [ ] Pending
Section 6: Technical Compliance Requirements
Stage: Pre-Market
Deliverable: Technical Documentation Package
6.1 Performance Standards
[Reference: Article 15]
| Requirement | Target Metric | Achieved | Evidence |
|---|---|---|---|
| Accuracy | ___% | [ ] Yes [ ] No | _______ |
| Robustness | [Define] | [ ] Yes [ ] No | _______ |
| Cybersecurity | [Standards] | [ ] Yes [ ] No | _______ |
6.2 Transparency Obligations
[Reference: Article 13 & Article 50]
High-Risk System Requirements:
Additional Transparency Requirements (if applicable):
6.3 Logging and Traceability
[Reference: Article 12]
Automatic Logging Implemented:
Section 7: Documentation Package
Stage: Pre-Market
Deliverable: Complete Documentation Set
7.1 Technical Documentation Checklist
[Reference: Article 11 & Annex IV]
7.2 Quality Management System
[Reference: Article 17]
QMS Components Documented:
7.3 Conformity Documentation
[Reference: Articles 47-49]
Section 8: Post-Market Monitoring
Stage: Post-Market (Ongoing)
Deliverable: Monitoring Reports & Incident Logs
8.1 Monitoring System Setup
[Reference: Article 72]
Monitoring Plan Elements:
8.2 Serious Incident Reporting
[Reference: Article 73]
Incident Categories:
Reporting Timeline:
Incident Log:
| Date | Incident Type | Severity | Reported | Resolution |
|---|---|---|---|---|
| | | | [ ] Yes [ ] No | |
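To support the reporting timeline above, here is a small, hedged deadline helper. The windows encoded below reflect Article 73 as commonly summarised (15 days generally, 10 days for a death, 2 days for widespread infringement or serious disruption of critical infrastructure); verify them against the official text, and note the incident-type labels are hypothetical.

```python
# Sketch of a deadline helper for serious-incident reporting (Article 73).
# Windows reflect the Act as commonly summarised; confirm against the official text.
from datetime import date, timedelta

REPORTING_WINDOW_DAYS = {
    "death": 10,
    "critical_infrastructure_disruption": 2,
    "widespread_infringement": 2,
    "other_serious_incident": 15,
}

def report_due_date(awareness_date: date, incident_type: str) -> date:
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS[incident_type])

print(report_due_date(date(2026, 9, 1), "other_serious_incident"))  # 2026-09-16
```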
Section 9: General-Purpose AI Models (if applicable)
Stage: Development/Pre-Market
Deliverable: GPAI Documentation Package
[Reference: Articles 51-55]
9.1 Systemic Risk Assessment
Computational Power Check:
Commission Designation Check:
9.2 Additional Requirements for Systemic Risk
If systemic risk identified:
Section 10: Final Compliance Verification
Stage: Pre-Market/Pre-Deployment
Deliverable: Compliance Certificate
10.1 Pre-Market Checklist
| Requirement | Status | Evidence Location |
|---|---|---|
| Risk classification completed | [ ] Done | _______ |
| Risk management system operational | [ ] Done | _______ |
| Technical documentation complete | [ ] Done | _______ |
| Testing completed | [ ] Done | _______ |
| QMS established | [ ] Done | _______ |
| Conformity assessment done | [ ] Done | _______ |
| CE marking affixed | [ ] Done | _______ |
| EU database registration | [ ] Done | _______ |
10.2 Deployment Readiness
10.3 Compliance Declaration
Declaration: I hereby confirm that all applicable requirements of the EU AI Act have been assessed and addressed for this AI system.
Responsible Person:
Appendix: Key Deliverables Summary
| Stage | Key Deliverables | Storage Location | Retention Period |
|---|---|---|---|
| Conception | Classification Report | _______ | 10 years |
| Development | Risk Management Plan | _______ | 10 years |
| Development | Data Governance Docs | _______ | 10 years |
| Pre-Market | Technical Documentation | _______ | 10 years |
| Pre-Market | QMS Documentation | _______ | 10 years |
| Pre-Market | Conformity Declaration | _______ | 10 years |
| Pre-Deployment | FRIA Report | _______ | 10 years |
| Post-Market | Monitoring Reports | _______ | 10 years |
| Post-Market | Incident Reports | _______ | 10 years |
Important Notes:
End of Checklist
For the most current version of the EU AI Act and related guidance, visit: EU AI Act Official Text
Select the appropriate guide below to see EU AI Act details tailored to your experience level.
Reading time: 15 minutes
The EU AI Act is a new law that controls how artificial intelligence (AI) can be used in Europe. Think of it as a rulebook that makes sure AI is safe and fair for everyone. It’s the world’s first major law about AI, and it entered into force on August 1, 2024.
The EU created this law for five main reasons:
Think of it like traffic laws – we need rules so everyone stays safe and knows what to do.
The law applies to different groups:
Providers
Deployers
Importers
Distributors
You and Me
The EU AI Act sorts AI into four groups, like a danger scale from safe to banned:
These are completely forbidden
What’s banned:
When banned: February 2, 2025
These need lots of checks and rules
Examples:
What they must do:
These just need to be honest
Examples:
Main rule: Tell people when they’re dealing with AI
Most AI falls here – no special rules
Examples:
Some AI systems are really powerful and can do many things (like ChatGPT). These are called “General-Purpose AI” or GPAI.
All GPAI must:
Extra-powerful GPAI must also:
The law gives you important protections when organizations use AI systems that affect you:
AI systems must identify themselves and their artificial nature:
You can challenge unfair AI treatment:
Real people must oversee important AI decisions:
Your data stays protected when used by AI:
Breaking the rules costs money:
Worst violations (banned AI):
Other violations:
Lying to authorities:
Smaller companies pay less
EU Level:
Your Country:
The law isn’t just about stopping bad AI. It also helps good AI grow:
AI Sandboxes
Real-World Testing
Even if you don’t work with AI:
The EU AI Act is like a safety manual for AI. It:
Most importantly: AI should help humans, not replace or harm them.
So important, I’ll say it again: AI should help humans, not replace or harm them.
For professionals implementing and managing AI compliance within organizations
The EU AI Act requires a strategic, risk-based approach to compliance. This guide provides actionable strategies for practitioners responsible for AI governance within their organizations.
The Act’s foundation is a risk-based approach that scales requirements based on potential harm. Your implementation should focus on:
1. Continuous Risk Management
Implement iterative risk assessment throughout AI system lifecycles
2. Integration with Existing Systems
3. Standards-Based Compliance
Technical Resources
Human Resources
Training Programs
For SMEs and Startups:
For All Organizations:
Timeline: Immediate
Timeline: 3-6 months
Technical Documentation Requirements:
Operational Documentation:
Timeline: 6-12 months
Data Governance
Technical Safeguards
Conformity Assessment
Timeline: Continuous
Post-Market Monitoring
Incident Management
See our article on building your governance committee.
Executive Level
Management Level
Operational Level
Internal Coordination
External Engagement
| Date | Requirement | Action Needed |
|---|---|---|
| February 2, 2025 | Prohibited practices ban | Cease all banned AI uses |
| May 2, 2025 | GPAI codes of practice | Align with industry codes |
| August 2, 2025 | GPAI model obligations | Implement model governance |
| August 2, 2025 | Authorized representatives | Appoint EU representative |
| August 2, 2026 | High-risk system compliance | Full compliance required |
| August 2, 2027 | Product safety AI | Comply with Annex I requirements |
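A simple way to keep these milestones visible in your compliance program is a small tracker like the sketch below. The dates come from the table above; the helper itself is illustrative, not part of any required tooling.

```python
# Simple tracker for the milestones in the table above: prints days remaining
# (negative values mean the date has already passed).
from datetime import date
from typing import Optional

MILESTONES = {
    date(2025, 2, 2): "Prohibited practices ban",
    date(2025, 5, 2): "GPAI codes of practice",
    date(2025, 8, 2): "GPAI model obligations / authorised representatives",
    date(2026, 8, 2): "High-risk system compliance",
    date(2027, 8, 2): "Product safety AI (Annex I)",
}

def days_remaining(today: Optional[date] = None) -> list:
    today = today or date.today()
    return [(label, (d - today).days) for d, label in sorted(MILESTONES.items())]

for label, days in days_remaining():
    print(f"{label}: {days:+d} days")
```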
Regulatory Sandboxes
Funding Opportunities
Industry Resources
Remember: Compliance is not a one-time project but an ongoing commitment to responsible AI governance. Start now, iterate often, and maintain continuous improvement.
Founder: CISSP, CRISC, CSSP - Senior Director of Cloud Security Architecture & Risk
This is a living page and will continuously be updated & enhanced.