EU AI Act – Article Reference Guide

Based on Regulation (EU) 2024/1689 of 13 June 2024


Chapter I: General Provisions

  • [Article 1: Subject Matter]: Defines the Regulation's purpose: to improve internal market functioning, promote human-centric and trustworthy AI, ensure a high level of protection of health, safety, and fundamental rights against harmful AI effects, and support innovation.
  • [Article 2: Scope]: Specifies application to providers, deployers, importers, and distributors of AI systems and GPAI models within or outside the Union if output is used in the Union; excludes AI for military, defense, or national security objectives.
  • [Article 3: Definitions]: Provides definitions for key terms used throughout the Regulation, such as ‘AI system’, ‘provider’, ‘deployer’, ‘high-risk’, and ‘general-purpose AI model’.
  • [Article 4: AI Literacy]: Mandates that providers and deployers ensure sufficient AI literacy among their staff and persons dealing with AI system operation and use on their behalf.

Chapter II: Prohibited AI Practices

  • [Article 5: Prohibited AI Practices]: Lists strictly prohibited AI practices including systems that manipulate human behavior, exploit vulnerabilities, or use social scoring; prohibitions apply from February 2, 2025.

Chapter III: High-Risk AI Systems

Section 1: Classification

  • [Article 6: Classification Rules for High-Risk AI Systems]: Establishes conditions for high-risk classification including AI as safety components of products under EU harmonization legislation (Annex I) or systems used for purposes in Annex III; Commission can amend these lists.
  • [Article 7: Amendments to Annex III]: Grants Commission power to amend Annex III by adding or removing high-risk use cases based on criteria like risk of harm to health, safety, or fundamental rights.

Section 2: Requirements

  • [Article 8: Compliance with the Requirements]: States that high-risk AI systems must comply with requirements set out in this section.
  • [Article 9: Risk Management System]: Requires providers to establish, implement, document, and maintain continuous risk management system throughout AI system lifecycle to identify, assess, and mitigate risks.
  • [Article 10: Data and Data Governance]: Mandates that training, validation, and testing datasets meet quality criteria emphasizing relevance, representativeness, freedom from errors, and completeness to mitigate biases.
  • [Article 11: Technical Documentation]: Requires comprehensive technical documentation before market placement detailing design, development, data, and risk management; SMEs can provide simplified documentation.
  • [Article 12: Record-Keeping]: Requires automatic recording of events (logs) throughout system lifetime for traceability and post-market monitoring (a minimal logging sketch follows this list).
  • [Article 13: Transparency and Provision of Information to Deployers]: High-risk systems must include clear, comprehensive instructions enabling deployers to understand operation, functionality, strengths, and limitations.
  • [Article 14: Human Oversight]: Requires design and development allowing effective human oversight during use.
  • [Article 15: Accuracy, Robustness and Cybersecurity]: Stipulates appropriate levels of accuracy, robustness, and cybersecurity based on intended purpose and context.
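
Of the requirements above, Article 12's logging duty maps most directly onto engineering practice. Below is a minimal sketch of automatic event recording; the JSON Lines format, the field names, and the log_event helper are illustrative assumptions, not a schema the Act prescribes.

```python
import json
import uuid
from datetime import datetime, timezone

def log_event(logfile: str, system_id: str, input_ref: str, outcome: str) -> None:
    """Append one traceability record in the spirit of Article 12.

    The field set is an assumption for illustration; the Act requires
    logging appropriate to the system's intended purpose, not this schema.
    """
    event = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,    # which high-risk AI system produced the event
        "input_ref": input_ref,    # reference to the input data, not the data itself
        "outcome": outcome,        # the system's decision or score
    }
    # Append-only JSON Lines gives a simple, audit-friendly event trail.
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

# Hypothetical usage for a credit-scoring deployment:
log_event("ai_events.jsonl", "credit-scoring-v2", "application:48112", "refer_to_human")
```

An append-only store also pairs naturally with Article 19's retention duty (at least six months), since records can be aged out by timestamp without rewriting the file.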

Section 3: Obligations

  • [Article 16: Obligations of Providers of High-Risk AI Systems]: Outlines general provider obligations including ensuring compliance, quality management system, documentation, conformity assessment, EU declaration, and corrective actions.
  • [Article 17: Quality Management System]: Requires providers to implement documented quality management system ensuring regulatory compliance.
  • [Article 18: Documentation Keeping]: Providers must keep technical documentation, QMS documentation, and records available to authorities for 10 years.
  • [Article 19: Automatically Generated Logs]: Providers must keep logs from high-risk AI systems for at least six months when under their control.
  • [Article 20: Corrective Actions and Duty of Information]: Addresses corrective actions and duty to inform authorities if AI system not in conformity.
  • [Article 21: Cooperation with Competent Authorities]: Obliges providers to cooperate by providing necessary information and documentation upon request.
  • [Article 22: Authorised Representatives of Providers]: Non-EU providers must appoint EU authorized representative for compliance matters.
  • [Article 23: Obligations of Importers]: Importers must verify CE marking, documentation, and provider compliance before making systems available.
  • [Article 24: Obligations of Distributors]: Distributors must verify CE marking, documentation, and provider/importer compliance.
  • [Article 25: Responsibilities Along the AI Value Chain]: Clarifies when distributors, importers, deployers, or third parties become providers with provider obligations.
  • [Article 26: Obligations of Deployers of High-Risk AI Systems]: Deployers must use systems per instructions, monitor operation, inform affected persons, and comply with registration requirements.
  • [Article 27: Fundamental Rights Impact Assessment]: Public authorities and certain private entities must perform FRIA before using high-risk AI systems, identifying and mitigating fundamental rights risks.

Section 4: Notifying Authorities and Notified Bodies

  • [Articles 28-39]: Establish requirements for notifying authorities, conformity assessment bodies, notification procedures, operational obligations, coordination, and third-country recognition.

Section 5: Standards, Conformity Assessment, Certificates, Registration

  • [Article 40: Harmonised Standards and Standardisation Deliverables]: Compliance with harmonized standards published in Official Journal presumes conformity with requirements.
  • [Article 41: Common Specifications]: Allows common specifications to presume conformity where harmonized standards unavailable.
  • [Article 42: Presumption of Conformity with Certain Requirements]: Establishes specific presumptions of conformity for systems trained on representative data or certified under cybersecurity schemes.
  • [Article 43: Conformity Assessment]: High-risk systems require conformity assessment before market placement via internal control or notified body assessment.
  • [Article 44: Certificates]: Details validity periods and renewal processes for notified body certificates.
  • [Article 45: Information Obligations of Notified Bodies]: Notified bodies must inform authorities about issued, refused, or withdrawn certificates.
  • [Article 46: Derogation from Conformity Assessment Procedure]: Outlines conditions for conformity assessment derogations.
  • [Article 47: EU Declaration of Conformity]: Providers must draw up written declaration for each high-risk system declaring conformity with AI Act and applicable EU legislation.
  • [Article 48: CE Marking]: Providers must affix CE marking to systems, packaging, or documentation indicating conformity.
  • [Article 49: Registration]: Providers/representatives must register themselves and high-risk systems in EU database; public authorities acting as deployers also have registration obligations.

Chapter IV: Transparency Obligations

  • [Article 50: Transparency Obligations for Certain AI Systems]: Requires informing users they’re interacting with AI unless obvious; mandates labeling of deepfakes and AI-generated content.

Chapter V: General-Purpose AI Models

Section 1: Classification Rules

  • [Article 51: Classification of GPAI Models with Systemic Risk]: Defines criteria for classifying GPAI models as having systemic risk based on high-impact capabilities or training compute exceeding 10^25 FLOPs; providers must notify the AI Office if criteria are met (a quick threshold check is sketched after this list).
  • [Article 52: Procedure]: Outlines procedure for classifying GPAI models with systemic risk and maintaining list of such models.
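
Because the Article 51 trigger is a concrete number, a rough self-check is simple arithmetic. The sketch below uses the common 6·N·D heuristic for training compute (about 6 FLOPs per parameter per training token); that approximation and the model figures are assumptions for illustration, not part of the Regulation, which only fixes the 10^25 threshold.

```python
# Article 51(2): systemic risk is presumed when cumulative training
# compute exceeds 10^25 floating point operations.
SYSTEMIC_RISK_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    # Common heuristic (not from the Act): ~6 FLOPs per parameter per token.
    return 6.0 * n_params * n_tokens

# Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> presumption met: {flops > SYSTEMIC_RISK_FLOPS}")
# 6 * 70e9 * 15e12 = 6.3e24, just under the 1e25 presumption threshold.
```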

Section 2: Obligations for Providers of General-Purpose AI Models

  • [Article 53: Obligations for Providers of GPAI Models]: Requires technical documentation, information for downstream providers, copyright compliance policy, and training content summary; exemptions for non-systemic risk open-source models.
  • [Article 54: Authorised Representatives of GPAI Providers]: Non-EU GPAI providers must appoint Union authorized representative for compliance tasks.

Section 3: Obligations for GPAI Models with Systemic Risk

  • [Article 55: Obligations for GPAI Models with Systemic Risk]: Additional obligations including model evaluation, adversarial testing, risk assessment/mitigation, incident reporting, and cybersecurity protection.

Section 4: Codes of Practice

  • [Article 56: Codes of Practice]: AI Office encourages Union-level codes covering GPAI obligations including risk management; codes should be ready by May 2, 2025.

Chapter VI: Measures in Support of Innovation

  • [Article 57: AI Regulatory Sandboxes]: Member States must establish at least one sandbox by August 2, 2026 to facilitate AI development and testing under regulatory oversight.
  • [Article 58: Detailed Arrangements for AI Regulatory Sandboxes]: Common principles for sandbox establishment and operation to avoid Union fragmentation.
  • [Article 59: Testing in Real World Conditions]: Allows real-world testing under specific safeguards.
  • [Article 60: Further Provisions for Testing in Real World Conditions]: Additional rules including required plan and EU database registration.
  • [Article 61: Informed Consent for Testing]: Requires documented informed consent from participants in real-world testing.
  • [Article 62: Measures for SMEs and Start-ups]: Member States should provide priority sandbox access and information platforms for SMEs.
  • [Article 63: Derogations for Specific Operators]: Allows microenterprises simplified compliance with quality management system elements.

Chapter VII: Governance

Section 1: Governance at Union Level

  • [Article 64: AI Office]: Establishes European AI Office within Commission to develop Union AI expertise and implement Regulation.
  • [Article 65: European Artificial Intelligence Board]: Establishes Board of Member State representatives for uniform application and coordination.
  • [Article 66: Tasks of the Board]: Board provides opinions, recommendations, and implementation guidance.
  • [Article 67: Advisory Forum]: Establishes stakeholder forum for input.
  • [Article 68: Scientific Panel of Independent Experts]: Panel provides alerts and advice on GPAI systemic risks.
  • [Article 69: Access to Pool of Experts]: Facilitates Member State access to expert pool.

Section 2: National Competent Authorities

  • [Article 70: Designation of National Competent Authorities]: Each Member State designates notifying and market surveillance authorities for supervising Regulation implementation.

Chapter VIII: EU Database

  • [Article 71: EU Database for High-Risk AI Systems]: Establishes central public database for registered high-risk systems with exceptions for sensitive data.

Chapter IX: Post-Market Monitoring, Information Sharing and Market Surveillance

Section 1: Post-Market Monitoring

  • [Article 72: Post-Market Monitoring]: Providers must establish monitoring system and plan to collect usage experience and identify corrective action needs.

Section 2: Sharing of Information

  • [Article 73: Reporting of Serious Incidents]: Providers must report serious incidents and corrective measures to AI Office and authorities without undue delay.

Section 3: Enforcement

  • [Articles 74-84]: Cover market surveillance, mutual assistance, supervision of testing, authority powers, confidentiality, procedures for risk evaluation, safeguards, compliance issues, and testing support structures.

Section 4: Remedies

  • [Article 85: Right to Lodge a Complaint]: Confirms right to lodge complaints with market surveillance authority.
  • [Article 86: Right to Explanation of Individual Decision-Making]: Grants affected persons right to clear explanation of high-risk AI decisions with legal or significant effects.
  • [Article 87: Reporting of Infringements]: Applies whistleblower protection directive to AI Act infringement reporting.

Section 5: Supervision of General-Purpose AI Models

  • [Articles 88-94]: Commission/AI Office has exclusive GPAI enforcement powers including monitoring, documentation requests, evaluations, and procedural rights.

Chapter X: Codes of Conduct and Guidelines

  • [Article 95: Codes of Conduct]: Encourages voluntary application of high-risk requirements to other AI systems.
  • [Article 96: Guidelines from the Commission]: Commission issues implementation guidelines with attention to SMEs and local authorities.

Chapter XI: Delegation of Power and Committee Procedure

  • [Articles 97-98]: Specify conditions for delegated acts and committee procedures.

Chapter XII: Penalties

  • [Article 99: Penalties]: Member States set effective, proportionate, and dissuasive penalties; fines up to €35 million or 7% of worldwide annual turnover, whichever is higher, for prohibited practices violations.
  • [Article 100: Administrative Fines on Union Institutions]: Fines for non-compliant EU bodies.
  • [Article 101: Fines for GPAI Providers]: Commission can fine GPAI providers up to 3% of worldwide annual turnover or €15 million, whichever is higher (worked ceiling arithmetic follows this list).
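
Since both fining provisions take whichever of the two caps is higher, the applicable ceiling reduces to a one-line max(). A worked sketch with a hypothetical turnover figure:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct_cap: float) -> float:
    # Articles 99(3) and 101(1) both take whichever amount is higher.
    return max(fixed_cap_eur, pct_cap * turnover_eur)

turnover = 2_000_000_000  # hypothetical: €2 billion worldwide annual turnover

# Prohibited-practice violation (Article 99): up to €35 million or 7% of turnover.
print(fine_ceiling(turnover, 35_000_000, 0.07))  # 140000000.0

# GPAI provider infringement (Article 101): up to €15 million or 3% of turnover.
print(fine_ceiling(turnover, 15_000_000, 0.03))  # 60000000.0
```

Note that Article 99(6) flips the rule for SMEs and start-ups: their fines are capped at whichever of the two amounts is lower.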

Chapter XIII: Final Provisions

  • [Articles 102-110]: Amend existing EU regulations/directives for AI Act consistency.
  • [Article 111: AI Systems Already Placed on Market]: Transitional provisions with different compliance deadlines based on system nature.
  • [Article 112: Evaluation and Review]: Commission regularly evaluates need for Act amendments.
  • [Article 113: Entry into Force]: Regulation entered into force on August 1, 2024, with phased application of its provisions.