ISO 42001 Documentation Requirements
Every document the standard requires, mapped to Annex A controls with implementation priorities, dependency chains, and cross-framework alignment. Built from the official ISO/IEC 42001:2023 standard.
Documentation Is Where AI Management System Implementations Stall
ISO 42001 requires approximately 20 specific documented outputs for your AI Management System (AIMS). Most organizations either create them all at once and burn out, or skip the dependency chain and build documents that reference things that do not exist yet.
The standard itself does not tell you what order to create these documents. It lists requirements across Clauses 4 through 10, adds documentation mandates through 38 Annex A controls, and leaves you to figure out what depends on what.
That is the gap this guide fills. Not what ISO 42001 says (you can read the standard for that), but how to build the documentation set in a sequence that works, with each document enabling the ones that follow.
How Documentation Flows Through the AIMS
ISO 42001 documentation splits into two categories. Operational documents define how the system works. Evidence records prove it is working. Per Clause 7.5, both require controlled creation, updating, distribution, and retention.
Operational documents are prescriptive: the AI policy tells people what matters, processes tell them how to act, plans tell them when and who. These must exist before you generate any evidence.
Evidence records are descriptive: risk assessment results, audit findings, management review minutes, corrective action logs. They prove the operational documents are being followed, and they feed back into the next improvement cycle.
Both types are governed by the same controls: Cl. 7.5.2 covers how documented information is created, identified, formatted, reviewed, and approved, while Cl. 7.5.3 covers how it is controlled for distribution, access, storage, version control, retention, and disposition. Both clauses apply equally to operational documents and evidence records.
Clause 7.5.3 requires that documented information be controlled for availability, suitability, and protection from loss of confidentiality, improper use, and loss of integrity. This means version control, access restrictions, and retention schedules must be defined before you start generating documents at scale.
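To make that concrete, here is a minimal sketch of the metadata a document register could track to satisfy Cl. 7.5.2 and 7.5.3. The field names and example values are assumptions for illustration, not terms from the standard.

```python
from dataclasses import dataclass, field

# Illustrative only: field names and defaults are assumptions, not from ISO 42001.
@dataclass
class ControlledDocument:
    doc_id: str                  # unique identification (Cl. 7.5.2)
    title: str
    version: str                 # version control (Cl. 7.5.3)
    owner: str                   # review and approval responsibility (Cl. 7.5.2)
    access: list[str] = field(default_factory=list)  # distribution and access (Cl. 7.5.3)
    retention_years: int = 3     # retention schedule (Cl. 7.5.3) -- example value only
    next_review: str = ""        # e.g. "2026-01-15"

aims_scope = ControlledDocument(
    doc_id="AIMS-001",
    title="AIMS Scope Statement",
    version="1.0",
    owner="AI Governance Lead",
    access=["governance", "engineering", "internal-audit"],
    next_review="2026-01-15",
)
print(aims_scope.doc_id, aims_scope.version)  # AIMS-001 1.0
```

Whether this lives in a GRC platform or a spreadsheet matters less than defining the fields before documents start multiplying.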
Implementation Priority Guide
Five tiers structure the rollout. Each tier unlocks the next. Do not skip ahead: a risk treatment plan built before risk criteria exist will need to be rebuilt.
If your organization has ISO 27001, you may already have 40-60% of the management system infrastructure in place: document control (Cl. 7.5), internal audit program (Cl. 9.2), management review process (Cl. 9.3), and corrective action procedures (Cl. 10.2). Engineering teams typically have design documents, test plans, and deployment runbooks that map directly to A.6.2.2 through A.6.2.5. Start by mapping what exists before creating anything new.
For the documents you still need to create, TJS offers pre-built ISO 42001 documentation templates mapped to the clause requirements below.
Foundation Documents
Weeks 1-3 | Must complete before all others
These four documents define the boundaries and direction of your entire AI management system. Every other document references at least one of them.
Risk Framework
Weeks 3-6 | Requires: Foundation Documents
The risk framework translates your policy into actionable criteria and repeatable processes. Without these, risk assessments produce inconsistent results.
Operational Controls
Weeks 6-10 | Requires: Risk Framework
These documents operationalize the controls selected in your Statement of Applicability. They define how your organization handles AI resources, data, lifecycle stages, and third-party relationships.
Evidence Records
Weeks 10-14 | Generated from running the system
These are not documents you write from scratch. They are outputs captured from executing your operational controls. If you have no operational controls yet, you have nothing to capture.
Improvement Cycle
Weeks 14-18 | Requires: Evidence Records
The final tier closes the Plan-Do-Check-Act loop. These documents evaluate what is working, identify gaps, and drive corrective actions back into the system.
Ready to start building?
The Quick-Start Checklist covers all five tiers in priority order, or download the full bundle with all six governance tools.
The Complete Documentation Map
Every mandatory documented output from ISO 42001, mapped to the clause that requires it, the Annex A controls it enables, and what it depends on.
Cl. 4.3 AIMS Scope
Define boundaries and applicability of your AI management system, considering external and internal issues and interested party requirements.
Depends on: Nothing (this is the starting point).
Enables: AI Policy, Risk Criteria, all downstream documents.
Cl. 5.2 AI Policy
Top management establishes management direction for AI activities, providing a framework for setting AI objectives with commitment to applicable requirements and continual improvement.
Annex B.2 guidance: Content should address business strategy, risk level, legal requirements. Must align with quality, security, safety, and privacy policies (A.2.3).
Depends on: AIMS Scope.
Enables: AI Objectives, Risk Criteria, all operational controls.
Cl. 5.3 Roles & Responsibilities
Top management assigns responsibility and authority for ensuring AIMS conformance and reporting on AIMS performance.
Annex B.3 guidance: Define roles for risk management, impact assessment, security, safety, privacy, development, human oversight, supplier relationships, and data quality. Establish confidential reporting mechanisms with whistleblower protection (A.3.3).
Depends on: AIMS Scope.
Enables: All operational and evidence documents (people need to be assigned before work begins).
Cl. 6.2 AI Objectives
Measurable targets consistent with the AI policy, established at relevant functions and levels with plans for what will be done, resources, responsibilities, timelines, and evaluation methods.
Example: “Reduce false positive rate in hiring AI to below 5% by Q3” is measurable. “Improve AI fairness” is not.
Depends on: AI Policy.
Enables: Risk Assessment Process, Monitoring and Measurement.
Cl. 6.1.1 Risk Criteria
Thresholds distinguishing acceptable from non-acceptable AI risks, defined per domain, intended use, and context.
Depends on: AIMS Scope, AI Policy, AI Objectives.
Enables: Risk Assessment Process, Risk Treatment Process, Impact Assessment Process.
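The standard does not prescribe a scoring scheme, so the scale and threshold below are pure assumptions; the point is that documented criteria let two assessors reach the same acceptability decision.

```python
# Illustrative sketch only: the 1-5 scales and the acceptance threshold are
# assumptions. Your documented risk criteria (Cl. 6.1.1) define the real values.
ACCEPTANCE_THRESHOLD = 6  # example: scores above this require treatment

def risk_level(likelihood: int, severity: int) -> int:
    """Combine likelihood and severity (each rated 1-5) into a single score."""
    return likelihood * severity

def is_acceptable(likelihood: int, severity: int) -> bool:
    """Compare the combined score against the documented acceptance threshold."""
    return risk_level(likelihood, severity) <= ACCEPTANCE_THRESHOLD

# A hiring-model bias risk rated likely (4) with significant individual impact (4)
print(is_acceptable(4, 4))  # False -> requires treatment under Cl. 6.1.3
```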
Cl. 6.1.2 Risk Assessment Process
A repeatable methodology for identifying AI risks, analyzing potential consequences to organizations, individuals, and societies, assessing likelihood, determining risk levels, and prioritizing for treatment.
Cross-reference: ISO 23894 Cl. 6.4.2 (risk identification), 6.4.3 (risk analysis), 6.4.4 (risk evaluation) provide detailed AI-specific guidance.
Depends on: Risk Criteria, AIMS Scope.
Enables: Risk Assessment Results (Cl. 8.2), Risk Treatment Process.
Cl. 6.1.3 Risk Treatment Process
Select treatment options, determine the necessary controls and compare them with Annex A, produce a Statement of Applicability, formulate the risk treatment plan, and obtain management approval for residual risks.
Depends on: Risk Assessment Process, Risk Criteria.
Enables: Statement of Applicability, Risk Treatment Results (Cl. 8.3), all Annex A operational controls.
Cl. 6.1.4 AI System Impact Assessment
Process for assessing potential consequences to individuals, groups, and societies from AI system development, provision, or use, considering technical and societal context and applicable jurisdictions.
Cross-reference: ISO 42005 provides detailed guidance covering timing (Cl. 5.4), scope (Cl. 5.5), responsibilities (Cl. 5.6), thresholds (Cl. 5.7), and analysis (Cl. 5.9).
Depends on: Risk Criteria, AIMS Scope.
Enables: Impact Assessment Results (Cl. 8.4).
Cl. 7.2 Competence Evidence
Proof that persons affecting AI performance have the necessary competence through education, training, or experience.
Annex B.4 guidance (A.4.6): Document human resources including data scientists, oversight roles, domain experts, and their competences for AI development, deployment, operation, maintenance, and decommissioning.
Depends on: Roles and Responsibilities.
Enables: Evidence of qualified personnel for audit purposes.
A.4.2-A.4.6 Resource Documentation
Catalog of all resources required for AI activities: data resources, tooling, system and computing resources, and human resources.
• Data resources (A.4.3): Provenance, categories, bias assessment, quality measures
• Tooling resources (A.4.4): Algorithms, models, optimization methods
• System/computing (A.4.5): On-premises vs. cloud, processing capabilities
• Human resources (A.4.6): Data scientists, oversight roles, domain experts
Depends on: AIMS Scope, Roles and Responsibilities.
Enables: Risk assessments, lifecycle controls.
A.7.2-A.7.6 Data Governance
Processes for managing data throughout AI system development: acquisition, quality, provenance, and preparation.
• Data management (A.7.2): Privacy, security, transparency, representativeness
• Data acquisition (A.7.3): Categories, sources, demographics, bias, rights, provenance
• Data quality (A.7.4): Impact on outputs, bias on fairness, per ISO/IEC 25024
• Data provenance (A.7.5): Creation, update, transcription, validation, transfer history
• Data preparation (A.7.6): Cleaning, imputation, normalization, labeling, encoding
Depends on: Resource Documentation, Risk Treatment.
Enables: Lifecycle processes, monitoring results.
A.6.1-A.6.2 AI System Lifecycle
Processes and documentation for each stage of the AI system lifecycle: requirements, design, verification, deployment, operation, technical documentation, and event logging.
• Requirements (A.6.2.2): Documented requirements for new or materially enhanced AI systems
• Design (A.6.2.3): Based on organizational objectives and specifications
• Verification (A.6.2.4): Testing methodologies, evaluation criteria, error rates
• Deployment (A.6.2.5): Plan with requirements met prior to deployment
• Operation (A.6.2.6): Performance monitoring, data drift, repairs, updates, support
• Technical docs (A.6.2.7): For each category of interested parties
• Event logging (A.6.2.8): At minimum when the AI system is in use
Depends on: Risk Treatment, Resource Documentation, Data Management.
Enables: Monitoring results, audit evidence.
Annex A Control Explorer
ISO 42001 Annex A contains 9 control groups with 38 controls, and each control generates its own documentation requirements.
The Statement of Applicability (Cl. 6.1.3) lets you exclude controls that do not apply to your AI activities with documented justification. If your organization only consumes AI (does not develop it), several A.6 lifecycle controls and A.7 data controls may not apply. Controls marked mandatory use “shall” language and are required whenever they are included in your SoA.
Document Dependency Chains
Each dependency chain shows which documents must exist before others can be created. Building out of order creates circular references and rework.
The Statement of Applicability is frequently created too early. It lists which Annex A controls apply and which do not, with justification. But those justifications come from risk treatment results. If you write the SoA before completing risk assessment and treatment, the justifications are guesswork, not evidence.
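One way to see why sequence matters is to treat the cards above as a dependency graph and let a topological sort produce the build order. The edge list below is abridged and the document names are shorthand; a real scope would include every card.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Abridged dependency map: each document lists the documents it requires.
deps = {
    "AIMS Scope": set(),
    "AI Policy": {"AIMS Scope"},
    "Roles & Responsibilities": {"AIMS Scope"},
    "AI Objectives": {"AI Policy"},
    "Risk Criteria": {"AIMS Scope", "AI Policy", "AI Objectives"},
    "Risk Assessment Process": {"Risk Criteria", "AIMS Scope"},
    "Risk Treatment Process": {"Risk Assessment Process", "Risk Criteria"},
    "Statement of Applicability": {"Risk Treatment Process"},
}

# static_order() yields each document only after everything it depends on.
print(list(TopologicalSorter(deps).static_order()))
# Starts with "AIMS Scope"; "Statement of Applicability" comes last.
```

Writing the SoA first is the equivalent of asking the sorter to emit the last node first: everything that should justify it does not exist yet.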
Cross-Framework Alignment
Organizations pursuing multiple frameworks can use a single documentation set with cross-references. Here is how the key ISO 42001 documents map to NIST AI RMF, EU AI Act, and ISO/IEC 23894 requirements.
| ISO 42001 Document | NIST AI RMF Alignment | Shared Output |
|---|---|---|
| AIMS Scope (Cl. 4.3) | MAP 1.1: Intended purposes, context, settings documented | System boundaries and context definition |
| AI Policy (Cl. 5.2) | GOVERN 1.2: Trustworthy AI characteristics in policies | Organizational AI principles |
| Roles (Cl. 5.3) | GOVERN 2.1: Roles, responsibilities, lines of communication | Accountability matrix |
| AI Objectives (Cl. 6.2) | MAP 1.3: Organization’s AI technology goals | Measurable AI targets |
| Risk Assessment (Cl. 6.1.2) | MAP 5.1: Likelihood and magnitude of impacts | Risk identification and analysis |
| Risk Treatment (Cl. 6.1.3) | MANAGE 1.2: Treatment prioritized by impact | Risk response plans |
| Impact Assessment (Cl. 6.1.4) | MAP 5.1: Impacts to individuals characterized | Consequence assessment |
| Monitoring (Cl. 9.1) | MEASURE 2.4: Functionality monitored in production | Performance tracking |
| Competence (Cl. 7.2) | GOVERN 2.2: Personnel receive AI risk training | Training records |
| Corrective Action (Cl. 10.2) | MANAGE 4.3: Incidents communicated, recovery documented | Nonconformity tracking |
| ISO 42001 Document | EU AI Act Alignment | Applicability |
|---|---|---|
| Risk Assessment (Cl. 6.1.2) | Art. 9: Risk management system for high-risk AI | High-risk AI systems |
| Impact Assessment (Cl. 6.1.4) | Art. 9: Identification and analysis of known risks | High-risk AI systems |
| Data Governance (A.7) | Art. 10: Data and data governance requirements | High-risk AI systems |
| Technical Docs (A.6.2.7) | Art. 11 + Annex IV: Technical documentation | High-risk AI systems |
| Event Logging (A.6.2.8) | Art. 12: Record-keeping and automatic logging | High-risk AI systems |
| User Information (A.8.2) | Art. 13: Transparency and provision of information | High-risk AI systems |
| Responsible Use Processes (A.9.2) | Art. 14: Human oversight measures | High-risk AI systems |
| Monitoring (Cl. 9.1) | Art. 9(2): Continuous iterative process | High-risk AI systems |
| Incident Comms (A.8.4) | Art. 73: Reporting of serious incidents | High-risk AI providers |
| ISO 42001 Document | ISO 23894 Alignment | Guidance Provided |
|---|---|---|
| Risk Criteria (Cl. 6.1.1) | Cl. 6.3: Scope, context, and criteria (incl. 6.3.4) | AI-specific risk criteria definition |
| Risk Assessment (Cl. 6.1.2) | Cl. 6.4.2: Risk identification | AI risk identification methods |
| Risk Assessment (Cl. 6.1.2) | Cl. 6.4.3: Risk analysis | AI risk analysis techniques |
| Risk Assessment (Cl. 6.1.2) | Cl. 6.4.4: Risk evaluation | Comparing AI risks against criteria |
| Risk Treatment (Cl. 6.1.3) | Cl. 6.5: Risk treatment | AI-specific treatment options |
| Monitoring (Cl. 9.1) | Cl. 6.6: Monitoring and review | Ongoing AI risk monitoring guidance |
| Evidence Records | Cl. 6.7: Recording and reporting | AI risk documentation and reporting |
Only clauses with direct documentation mapping are shown. ISO 23894 Cl. 6.1 (General) and Cl. 6.2 (Communication and consultation) also exist but do not map to specific ISO 42001 document requirements.
ISO 23894 has clauses 6.1 through 6.7 only. Clauses 6.8 and 6.9 do not exist. Risk treatment is Cl. 6.5, not Cl. 6.6. Risk analysis is Cl. 6.4.3, not Cl. 6.5. These are commonly misattributed in secondary sources.
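For teams that keep the crosswalk machine-readable rather than in a static table, a small sketch follows. The reference strings are copied from the tables above; the structure itself is an assumption, not part of any of the frameworks.

```python
# Illustrative only: one cross-reference record per ISO 42001 document,
# carrying its NIST AI RMF and EU AI Act pointers from the tables above.
CROSSWALK = {
    "AI Policy (Cl. 5.2)": {"nist_ai_rmf": "GOVERN 1.2", "eu_ai_act": None},
    "Risk Assessment (Cl. 6.1.2)": {"nist_ai_rmf": "MAP 5.1", "eu_ai_act": "Art. 9"},
    "Impact Assessment (Cl. 6.1.4)": {"nist_ai_rmf": "MAP 5.1", "eu_ai_act": "Art. 9"},
    "Monitoring (Cl. 9.1)": {"nist_ai_rmf": "MEASURE 2.4", "eu_ai_act": "Art. 9(2)"},
}

# Example query: which ISO 42001 documents support an EU AI Act Article 9 obligation?
art_9_docs = [doc for doc, refs in CROSSWALK.items()
              if refs["eu_ai_act"] in ("Art. 9", "Art. 9(2)")]
print(art_9_docs)
```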
Foundation Readiness Check
Answer 8 questions covering the core ISO 42001 documentation requirements. This checks your foundation and risk framework readiness, not full certification scope. Additional documentation (data governance, lifecycle, third-party) is covered in the Annex A explorer above.
Do you have a documented AIMS scope that defines which AI systems are covered?
Per Cl. 4.3: boundaries, applicability, AI system roles.
Is there a formal AI policy signed by top management?
Per Cl. 5.2: documented, communicated, available to interested parties.
Have you defined documented risk criteria for AI systems?
Per Cl. 6.1.1: thresholds distinguishing acceptable from non-acceptable risks.
Do you have a documented AI risk assessment process?
Per Cl. 6.1.2: repeatable methodology for identifying, analyzing, and prioritizing AI risks.
Is there a Statement of Applicability listing all Annex A controls with justifications?
Per Cl. 6.1.3: links risk treatment results to specific controls.
Have you documented AI system impact assessment results?
Per Cl. 8.4 and A.5.3: consequences to individuals, groups, and societies assessed and retained.
Have you completed at least one internal audit of the AIMS?
Per Cl. 9.2: audit program covering conformance and effectiveness.
Do you have documented nonconformity and corrective action records?
Per Cl. 10.2: tracking, resolution, effectiveness review, and AIMS changes if necessary.
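For a quick self-check, the eight questions above can be scored as a simple yes/no list; the shorthand labels below are ours, not the standard's.

```python
# Illustrative self-assessment only; answer each question above yes (True) or no (False).
answers = {
    "AIMS scope (Cl. 4.3)": True,
    "AI policy (Cl. 5.2)": True,
    "Risk criteria (Cl. 6.1.1)": False,
    "Risk assessment process (Cl. 6.1.2)": False,
    "Statement of Applicability (Cl. 6.1.3)": False,
    "Impact assessment results (Cl. 8.4 / A.5.3)": False,
    "Internal audit completed (Cl. 9.2)": False,
    "Corrective action records (Cl. 10.2)": False,
}

gaps = [item for item, done in answers.items() if not done]
print(f"{len(answers) - len(gaps)}/{len(answers)} complete")
for gap in gaps:
    print("Priority gap:", gap)
```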
Common Documentation Mistakes
These are the errors auditors flag most often. Each one traces back to a specific clause requirement.
The Statement of Applicability (Cl. 6.1.3) must justify why each Annex A control is included or excluded. Those justifications come from risk treatment decisions. If you build the SoA early as a “gap analysis,” you end up with generic rationale like “not applicable” instead of specific, risk-based reasoning. Auditors will ask for the connection between your risk register and your SoA, and template-based SoAs cannot provide it.
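One way to keep that connection auditable, offered purely as an illustration, is to record every SoA entry with a pointer back to the risk treatment decision that justified it and flag any entry without one.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative structure only: field names are assumptions, not from the standard.
@dataclass
class SoAEntry:
    control: str                       # Annex A control identifier
    included: bool
    justification: str                 # risk-based rationale, not generic boilerplate
    risk_treatment_ref: Optional[str]  # link to the risk register / treatment plan

entries = [
    SoAEntry("A.7.4", True,
             "Treatment of hiring-model bias risk requires data quality management",
             risk_treatment_ref="RT-2025-014"),
    SoAEntry("A.6.2.3", False,
             "Organization consumes third-party AI only; no in-house design activity",
             risk_treatment_ref=None),
]

# The check an auditor effectively runs: every included control should trace
# back to a treatment decision, not to a template.
untraced = [e.control for e in entries if e.included and not e.risk_treatment_ref]
print("Included controls with no risk-treatment reference:", untraced)
```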
While A.2.3 requires alignment with other organizational policies, the AI policy (Cl. 5.2) must address AI-specific concerns: fairness, transparency, explainability, human oversight, and societal impact. A.2.2 specifically requires a policy “for the development or use of AI systems.” An ISO 27001 policy with “AI” appended will fail the appropriateness test at Cl. 5.2 because it does not address AI-specific management direction.
Clause 6.1.4 is not optional and it is not a duplicate of risk assessment. Risk assessment (Cl. 6.1.2) focuses on organizational risks. Impact assessment (Cl. 6.1.4) focuses on consequences to individuals, groups, and societies. Per Annex B.5, this includes individual impacts (fairness, accountability, transparency, security, privacy, safety, accessibility, human rights) and societal impacts (environment, economic, government, health/safety, culture/values). Organizations that skip this miss the standard’s core purpose of responsible AI.
Clause 7.5.3 requires that documented information be controlled for availability, suitability, and protection from loss of confidentiality, improper use, and loss of integrity. This means distribution controls, access restrictions, storage, version control, retention, and disposition. Documents in shared folders without version history, access logs, or defined retention periods violate this clause directly. Set this up before generating documents at scale.
ISO 42001 uses precise language. “The organization shall document” means create and maintain a prescriptive document (policy, process, plan). “The organization shall retain documented information” means keep evidence records that prove something happened. Risk assessment results (Cl. 8.2) must be “retained” because they are evidence. The risk assessment process (Cl. 6.1.2) must be “documented” because it is a procedure. Mixing these up leads to creating procedures when you should be capturing outputs, or vice versa.
Common Questions
Answers sourced from ISO/IEC 42001:2023 clause requirements, Annex A controls, and Annex B implementation guidance.
How many documents does ISO 42001 actually require?
Approximately 20 specific documented outputs across Clauses 4 through 10, plus documentation generated by Annex A controls. The exact count depends on your scope: Cl. 7.5.1 states the AIMS shall include documented information required by the standard and whatever else your organization determines is needed for AIMS effectiveness. A small company consuming one AI tool will have fewer documents than a large enterprise developing multiple AI systems.
What is the difference between operational documents and evidence records?
Operational documents define how the system works: policies, procedures, process definitions, and plans. They are prescriptive and tell people what to do. Evidence records prove the system is working: audit results, risk assessment outputs, management review minutes, corrective action logs. They are descriptive and demonstrate conformance. Cl. 7.5 covers both, with Cl. 7.5.2 addressing creation and updating, and Cl. 7.5.3 addressing control, distribution, and retention.
What is the Statement of Applicability?
Required by Cl. 6.1.3, the SoA lists all Annex A controls, states whether each is included or excluded from your AIMS, and justifies each decision. It connects your risk treatment results to specific controls: if risk treatment identifies the need for data quality management, the SoA maps that to control A.7.4. Excluded controls must have documented rationale explaining why they do not apply to your AI activities. The SoA is one of the first documents auditors request.
Can ISO 42001 documentation be combined with an existing ISO 27001 management system?
Partially. Annex D specifically addresses integration with ISO/IEC 27001, ISO/IEC 27701, ISO 9001, ISO 22000, and ISO 13485. Shared management system elements (document control, internal audit programs, management review) can be combined. But ISO 42001 adds AI-specific requirements that ISO 27001 does not cover: AI system impact assessment (Cl. 6.1.4), AI-specific risk assessment (Cl. 6.1.2), data governance for AI systems (A.7), and AI system lifecycle controls (A.6). These must be documented separately.
What does Annex C cover?
Annex C is informative (not normative), listing 11 potential AI-related organizational objectives and 7 risk sources. The objectives are: Accountability (C.2.1), AI Expertise (C.2.2), Data Availability and Quality (C.2.3), Environmental Impact (C.2.4), Fairness (C.2.5), Maintainability (C.2.6), Privacy (C.2.7), Robustness (C.2.8), Safety (C.2.9), Security (C.2.10), and Transparency and Explainability (C.2.11). Use these to inform your AI Objectives (Cl. 6.2) and risk criteria (Cl. 6.1.1).
How does ISO 42001 documentation relate to the NIST AI RMF?
Significant overlap exists. The AI Policy (Cl. 5.2) aligns with NIST GOVERN 1.2 (trustworthy AI in policies). Risk assessment (Cl. 6.1.2) maps to NIST MAP functions. Impact assessments (Cl. 6.1.4) correspond to NIST MAP 5.1. Monitoring (Cl. 9.1) aligns with NIST MEASURE functions. Organizations pursuing both can use a single documentation set with cross-references rather than parallel systems. See the Cross-Framework Alignment section above for the full mapping.
Templates & Tools
Downloadable resources to accelerate your ISO 42001 documentation build. Each tool maps to specific clause requirements covered in this guide.
Charter Implementation Checklist
Covers governance structure, policy elements, and role assignments aligned to ISO 42001 Cl. 4-5.
Download Free →
Quick Start | AI Governance Quick-Start Checklist
Three-tier checklist covering foundation, risk framework, and operational controls in priority order.
Download Free →
Reference | Regulatory Mapping Cheat Sheet
40-field cross-reference mapping ISO 42001 to NIST AI RMF, EU AI Act, and ISO 23894.
Download Free →
Interactive | Risk Tier Decision Tree
Seven-question decision tree to classify AI systems by risk level, aligned to Cl. 6.1.1 criteria.
Download Free →
Template | 40-Field AI Use Case Tracker
Fillable tracker covering all resource documentation fields from A.4.2 through A.4.6.
Download Free →
Template | Board AI Governance Summary
Quarterly board report template covering management review requirements from Cl. 9.3.
Download Free →
Need the actual documents, not just the checklist?
Pre-built templates for AIMS Scope, AI Policy, Statement of Applicability, Risk Assessment, Impact Assessment, and more. Each template maps to the clause requirements covered in this guide, with fill-in sections and auditor-ready formatting.