
Author: Derrick D. Jackson
Title: Founder & Senior Director of Cloud Security Architecture & Risk
Credentials: CISSP, CRISC, CCSP
Last updated November 10th, 2025

Hello everyone! Help us grow our community by sharing our work or supporting us on other platforms. This allows us to show that what we're doing is valued, and it helps us plan and allocate resources to improve, since we then know others are interested and supportive.

What Is ISO 42001 Clause 6: The Strategic Core of AI Governance

Your organization mapped its AI landscape in Clause 4 and secured leadership commitment in Clause 5. Clause 6 is where ISO/IEC 42001:2023 gets operational: systematic planning for AI risk management.

This is where you establish formal methodologies for identifying AI system risks and determining how to address them before they materialize.

Disclaimer: This article provides educational guidance on ISO 42001 planning requirements. Organizations should obtain the official standard from ISO for complete compliance requirements.

Executive Summary

What It Covers: Clause 6 establishes systematic processes for AI risk management: risk criteria, assessment methodologies, treatment plans, impact assessments, and measurable objectives.

Time Investment: Think 3-5 months for initial implementation with cross-functional teams (data science, legal, ethics, business, risk).

Key Deliverables: Statement of Applicability (SoA) listing all controls with justifications, AI Risk Treatment Plan, documented assessment processes, and quantitative objectives.

Why It Matters: Without formal planning, you can’t demonstrate compliance to auditors or regulators. The EU AI Act and other regulations require documented risk management for high-risk AI, and the same discipline pays off whether or not you fall within the EU AI Act’s scope.

Integration Point: Planning in Clause 6 feeds directly into operational execution in Clause 8.

Why Planning Matters Now

Regulatory Requirements – The EU AI Act mandates risk management for high-risk AI, with fines up to €35 million or 7% of global revenue.

AI-Specific Risks – Model bias, data poisoning, and concept drift need specialized methodologies that ISO/IEC 23894:2023 helps develop.

Audit Evidence – Certification requires systematic, repeatable processes. Without documented methodologies and treatment plans, you can’t demonstrate compliance.


Core Planning Requirements

Risk Criteria (6.1.1)

You need rules for deciding which risks are acceptable and which need action. These criteria drive your entire assessment, treatment, and impact evaluation process throughout the AI system lifecycle.

Your criteria should reflect your AI’s domain, intended use, risk appetite, and regulatory requirements. Healthcare diagnostic AI needs stricter thresholds than an internal document classifier.

Example: Healthcare AI might set “any risk affecting diagnostic accuracy by >2% requires immediate treatment” while lower-consequence systems use different thresholds.
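To make the idea concrete, here is a minimal sketch of risk criteria captured as structured data rather than prose alone, so they can drive assessment and treatment decisions consistently. The categories, thresholds, and field names are hypothetical illustrations, not text from the standard:

```python
from dataclasses import dataclass

@dataclass
class RiskCriterion:
    """One hypothetical risk acceptance rule for an AI system."""
    category: str     # e.g., "diagnostic_accuracy"
    metric: str       # what gets measured
    threshold: float  # boundary between acceptable and actionable
    action: str       # required response when the threshold is crossed

# Illustrative criteria for a healthcare diagnostic AI (example values only)
CRITERIA = [
    RiskCriterion("diagnostic_accuracy", "accuracy degradation (%)",
                  2.0, "immediate treatment"),
    RiskCriterion("fairness", "demographic parity difference (%)",
                  5.0, "treatment before next release"),
]

def requires_treatment(criterion: RiskCriterion, observed: float) -> bool:
    """A risk breaching its threshold is unacceptable under these criteria."""
    return observed > criterion.threshold
```

Encoding criteria this way also gives auditors a single artifact showing exactly which thresholds applied when a risk was accepted or treated.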

AI Risk Assessment (6.1.2)

You need a systematic process that produces consistent results every time you use it. ISO 42001 requires you to identify risks that could prevent achieving your AI objectives, analyze potential consequences to your organization and society, then evaluate those risks against your criteria to prioritize what needs fixing first.

Document your methodology formally. You can adapt general risk management principles from ISO 31000:2018 to your AI-specific needs.
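As one illustration of a repeatable methodology, here is a minimal scoring step built on a conventional likelihood-by-impact matrix, a common way to adapt ISO 31000-style analysis. The 1-5 scales and priority bands are illustrative assumptions, not requirements of ISO 42001:

```python
def score_risk(likelihood: int, impact: int) -> int:
    """Combine 1-5 likelihood and 1-5 impact ratings into one score."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be rated 1-5")
    return likelihood * impact

def prioritize(score: int) -> str:
    """Map a score onto hypothetical priority bands for treatment ordering."""
    if score >= 15:
        return "high: treat before deployment"
    if score >= 8:
        return "medium: treatment plan required"
    return "low: monitor and reassess"

# Example: model bias judged likely (4) with severe consequences (5)
print(prioritize(score_risk(4, 5)))  # -> high: treat before deployment
```

Whatever scales you choose, the point is that two assessors rating the same risk should reach the same priority, which is what auditors mean by a consistent, repeatable process.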

AI Risk Treatment (6.1.3)

Once you’ve assessed your risks, you have four options: modify them (mitigate), share them (transfer), avoid them (eliminate), or retain them (accept). This process creates two critical documents:

Statement of Applicability (SoA) lists every Annex A control and states whether you’re implementing it, with specific justifications. You must check your chosen controls against Annex A to verify you haven’t missed anything necessary. You can add controls beyond Annex A when needed.

AI Risk Treatment Plan documents how you’ll implement controls, including timelines, responsibilities, and resources.

Critical: Generic justifications like “not applicable” fail audits. You need specific reasoning based on your risk assessment and context.
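Here is a hypothetical sketch of what an auditable SoA record might look like, with a guard against exactly those generic justifications. The control IDs and field names are placeholders; Annex A text is not reproduced here:

```python
from dataclasses import dataclass

GENERIC_PHRASES = {"not applicable", "n/a", "not needed"}

@dataclass
class SoAEntry:
    control_id: str     # placeholder IDs, e.g., "A.X.1"
    implemented: bool
    justification: str  # must reference risk assessment and context

    def is_audit_ready(self) -> bool:
        """Reject justifications that are empty or purely generic."""
        text = self.justification.strip().lower()
        return bool(text) and text not in GENERIC_PHRASES

entries = [
    SoAEntry("A.X.1", False,
             "Excluded: model is version-locked and updated via formal "
             "change management, so continuous learning controls do not "
             "apply (per hypothetical risk assessment RA-2025-04)."),
    SoAEntry("A.X.2", False, "not applicable"),  # would fail an audit
]

for entry in entries:
    status = "ready" if entry.is_audit_ready() else "needs rework"
    print(entry.control_id, status)
```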

AI System Impact Assessment (6.1.4)

This is different from risk assessment. Here you’re evaluating potential consequences for individuals, groups, and societies from your AI system, not just organizational risks.

ISO/IEC 42005:2025 provides guidance for these assessments. The results feed into your risk assessment process, making sure potential harms to people influence your organizational risk decisions.

AI Objectives and Planning (6.2)

You need measurable objectives consistent with your AI policy from Clause 5. They must be monitored, communicated, documented, and updated as your systems evolve.

Make them quantitative: “reduce demographic parity difference below 5%,” “maintain model accuracy above 95%,” or “detect 90% of critical failures 48 hours before occurrence.”
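The fairness objective above can be measured directly from model outputs. Here is a minimal sketch assuming binary approve/deny predictions and two groups, using synthetic data in place of a real evaluation set:

```python
def demographic_parity_difference(preds: list, groups: list) -> float:
    """Absolute gap in positive-outcome rates between two groups."""
    rates = {}
    for g in set(groups):
        outcomes = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]                  # 1 = approved
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]  # protected attribute

dpd = demographic_parity_difference(preds, groups)
print(f"demographic parity difference: {dpd:.1%}")  # objective: below 5%
```

Wiring a metric like this into your evaluation pipeline is what turns a written objective into something Clause 9 can actually monitor.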

Your planning includes figuring out what actions you’ll take, who’s responsible, what resources you need, when things will happen, and how you’ll measure success. These objectives connect directly to performance monitoring in Clause 9.


Implementation Approach

You need cross-functional teams: data scientists, legal, ethics, business units. Start by documenting risk criteria before assessments.

Timelines will vary based on resources, priority, and execution.

Months 1-2: Establish risk criteria
Months 2-3: Build assessment methodology (adapt ISO 27001:2022 or ISO 31000)
Months 3-4: Create Statement of Applicability
Months 4-5: Define impact assessment and objectives


Hypothetical Illustrative Examples

Example Financial Services: Credit Decisioning AI

A European bank building credit approval AI established risk criteria distinguishing between individual harm and systemic discrimination. Assessment identified potential disparate impact on protected groups as high-priority. They selected modification through bias testing controls plus quarterly third-party fairness audits. Their SoA justified excluding continuous learning controls because their credit model uses a locked version updated through formal change management.

Clause 6 Takeaway: Planning forced them to define “acceptable fairness” quantitatively (demographic parity difference below 5%) before deployment, not after discovering problems in production.


Example Healthcare: Diagnostic Imaging AI

A medical device company developing diagnostic AI established risk criteria where any risk affecting diagnostic accuracy by more than 2% required highest treatment. Assessment revealed their AI performs differently across patient subgroups and equipment types. Impact assessment showed rural hospitals often use older equipment, creating equity issues. They added enhanced validation testing across equipment types. Their SoA excluded continuous learning controls because they chose locked models for FDA compliance.

Clause 6 Takeaway: Planning revealed that technical decisions (locked vs. continuous learning models) have regulatory implications and control applicability consequences that must be documented in the SoA.


Example Manufacturing: Predictive Maintenance AI

A manufacturer implementing predictive maintenance AI focused risk criteria on worker safety and production continuity. Assessment identified model drift as high-priority because equipment conditions change as machines age. They realized treating this risk required ongoing monitoring rather than one-time validation. Impact assessment revealed maintenance technicians rely on AI predictions for safety decisions, leading them to add an objective around prediction reliability.

Clause 6 Takeaway: Planning connected technical monitoring (model drift detection) to operational safety outcomes, ensuring objectives addressed both efficiency and worker safety.


Frequently Asked Questions

Q: How does AI risk assessment differ from information security risk assessment?

AI risk assessment extends beyond confidentiality, integrity, and availability to consider fairness, bias, transparency, explainability, and societal impacts. You can adapt ISO 27001 risk processes, but expand the scope. Traditional IT risks ask “could this system be compromised?” AI risks also ask “could this system discriminate unfairly?”

Q: Do we need all planning components before operations?

Yes. The components build on each other. You can’t create a meaningful SoA without risk assessment results. You can’t assess risks without established criteria. Skipping steps leads to audit findings requiring backfilled documentation.

Q: Can we exclude controls from our SoA?

You can exclude controls with risk-based justifications auditors will scrutinize. “Not applicable” isn’t sufficient. Explain why based on your risk assessment, context, and system characteristics. Excluding continuous learning controls for locked models is valid. Excluding bias testing without evidence will fail audit.

Q: How often should we reassess risks?

Minimum annually, but trigger reassessments for new systems, significant changes, incidents, new regulations, or context changes. The SoA should be a living document reflecting current implementation reality.
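One way to keep that discipline operational is to encode the triggers alongside the annual minimum, so reassessment is event-driven rather than purely calendar-driven. A small sketch, with illustrative trigger names and interval:

```python
from datetime import date, timedelta

# Illustrative trigger events; the 365-day floor matches the annual minimum
REASSESSMENT_TRIGGERS = {
    "new_system", "significant_change", "incident",
    "new_regulation", "context_change",
}

def reassessment_due(last_assessed: date, events: set,
                     today: date = None) -> bool:
    """Due when any trigger fires or the annual minimum has elapsed."""
    today = today or date.today()
    annual_elapsed = today - last_assessed >= timedelta(days=365)
    return annual_elapsed or bool(events & REASSESSMENT_TRIGGERS)

print(reassessment_due(date(2025, 1, 15), {"incident"}))  # -> True
```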

Q: What connects AI objectives to risk treatment?

Objectives should address your highest-priority risks. If risk assessment identifies fairness as critical, objectives should include measurable fairness targets. Objectives translate risk treatment intentions into measurable outcomes you’ll track in Clause 9.

Q: Why do we need AI-specific planning?

Traditional risk management doesn’t address algorithmic bias, model explainability, autonomous decision impacts, or systems that learn and change over time. You’re extending your existing capability. Organizations with mature risk programs adapt faster but still need AI-specific criteria and controls.

Integration Considerations

If you’re already ISO 27001 certified, extend your existing processes for AI-specific concerns. ISO/IEC 27005:2022 provides methodological foundations. The NIST AI Risk Management Framework aligns well with ISO 42001’s planning structure.

AI systems change through concept drift and data shifts. Your assessment processes need to account for this. Specify reassessment intervals in your planning documentation.
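As one example of accounting for drift in practice, the population stability index (PSI) is a common statistic for flagging data shift between a reference sample and production data. The bin count and the 0.2 alert threshold below are conventional rules of thumb, not ISO 42001 requirements:

```python
import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    """Population stability index between reference and production samples."""
    lo, hi = min(expected), max(expected)

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = int((x - lo) / (hi - lo) * bins)  # bin by reference range
            counts[max(0, min(idx, bins - 1))] += 1
        return [max(c / len(sample), 1e-6) for c in counts]  # avoid log(0)

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference  = [0.1 * i for i in range(100)]        # training-time feature
production = [0.1 * i + 2.0 for i in range(100)]  # shifted in production

print("reassess" if psi(reference, production) > 0.2 else "stable")
```

A check like this gives your planned reassessment intervals a quantitative backstop: drift past the threshold triggers a review even if the calendar says you're not due.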

Certification Readiness

Common gaps: under-documented risk criteria, generic SoA justifications, unmeasurable objectives, missing connections between assessments and treatment plans.

Requirements:

  • Documented risk criteria with appropriate approval
  • Formal assessment methodology with repeatable processes
  • Completed SoA with specific control justifications
  • Risk treatment plan connecting risks to controls
  • Documented impact assessment process
  • Measurable AI objectives
  • Clear responsibility assignments

Key Takeaways

Planning in Clause 6 drives operational execution in Clause 8. Plan for 3-5 months of initial work with ongoing reassessment as systems evolve.

The Statement of Applicability is your central control document. Auditors scrutinize it closely. Invest in specific, risk-based justifications for every control decision.

Next Steps: Review current practices, identify your cross-functional team, allocate 3-5 months for implementation.



This article references ISO/IEC 42001:2023 for educational purposes and does not reproduce copyrighted content. Organizations should obtain the official standard for complete requirements.

Author

Derrick Jackson

I’m the Founder of Tech Jacks Solutions and a Senior Director of Cloud Security Architecture & Risk (CISSP, CRISC, CCSP), with 20+ years helping organizations (from SMBs to Fortune 500) secure their IT, navigate compliance frameworks, and build responsible AI programs.
