
AI Acceptable Use Policy

Your compliance blueprint for responsible AI deployment, with practical framework alignment and a 90-day rollout plan.

Derrick D. Jackson | CISSP, CRISC, CCSP | ~15 min read
90-Day Rollout | 3 Frameworks | 4 Risk Tiers | Shadow AI
AI Acceptable Use Policy

An AI Acceptable Use Policy is not just another compliance document. It is the primary instrument for translating high-level ethical principles and complex regulatory requirements into concrete, auditable corporate practice. This guide shows you exactly how to build one using proven frameworks.

The regulatory landscape shifted dramatically in 2024. The EU AI Act entered into force on August 1. NIST released voluntary guidance that is becoming the de facto US standard. AWS became the first major cloud provider to achieve ISO/IEC 42001 certification. If organizations like these are formalizing AI governance and yours has not started, you are already behind.

What Is an AI Acceptable Use Policy?

An AI Acceptable Use Policy is a formal organizational framework that defines the boundaries, requirements, and expectations for how AI technologies may be used across your organization. It tells every employee, contractor, and partner what they can do with AI, what they cannot do, and what safeguards must be in place before any AI system touches production data or customer-facing processes.

Traditional IT acceptable use policies do not cover AI. Machine learning systems create risks that standard IT governance was never designed to handle: algorithmic bias that discriminates against protected groups, privacy implications from training data exposure, automated decisions that affect real people’s employment, credit, and healthcare, and the potential for harm at a scale no individual human could cause. Your policy needs to address these specifically, not as footnotes to an existing IT policy, but as a standalone governance instrument with teeth.

Think of the relationship this way: your AI Governance Charter is the constitution. Your AI Acceptable Use Policy is the first law passed under that constitution. The charter establishes principles and authority. The AUP translates those principles into specific, enforceable rules that every person in the organization can follow starting on day one.

Technologies Covered

Machine learning, NLP, computer vision, generative AI, predictive analytics, and any third-party AI service. Internal builds and SaaS subscriptions alike.

ISO 42001 Cl. 5.2

People in Scope

All employees, contractors, temporary staff, and anyone with access to organizational systems. No exceptions, no carve-outs.

NIST GOVERN 2.1

Data Rules

Clear guidelines per data classification level. Prohibit sensitive data in unapproved tools. Define what can and cannot be entered into public AI services.

EU AI Act Art. 10

Risk Categories

A structured framework that separates low-risk tools from high-risk systems, with proportionate controls for each tier.

NIST MAP 1.1

Key Terms to Include in Your AUP

Your policy should define these terms explicitly. Ambiguous language creates enforcement gaps. Include a definitions section early in your AUP so every reader starts from the same baseline.

Term | Definition
AI System | A machine-based system that processes inputs to generate outputs such as predictions, decisions, content, or recommendations that can influence physical or virtual environments (EU AI Act Art. 3(1)).
Shadow AI | Any AI tool or service used within the organization without formal registration, approval, or oversight by the AI governance function. Includes personal subscriptions, browser extensions, and embedded AI features in existing SaaS tools.
High-Risk AI System | An AI system that poses significant risk to the health, safety, or fundamental rights of natural persons. Defined by the EU AI Act Annex III across 8 domains including biometrics, critical infrastructure, employment, and law enforcement.
Human-in-the-Loop (HITL) | A governance control requiring a qualified human to review, approve, override, or reverse AI-generated outputs before they take effect. Required for high-risk systems under EU AI Act Art. 14.
Risk Tier | A classification level (Unacceptable, High, Medium, Low) assigned to an AI system based on its potential impact on individuals, the organization, and society. Determines which governance controls apply.
Deployer | Any natural or legal person that uses an AI system under its authority, except where the system is used in the course of a personal non-professional activity (EU AI Act Art. 3(4)). Most organizations are deployers, not providers.
Conformity Assessment | The process of verifying whether an AI system meets applicable requirements before it is placed on the market or put into service. Required for high-risk AI systems under EU AI Act Art. 43.
📥 Free Download
Quick-Start Governance Checklist
Not sure where to start? This 3-tier checklist walks you through the foundational steps for any AI governance initiative, including your AUP.
Free Download →

Why You Need This, and Why Now

Regulatory Imperative

The EU AI Act mandates documented governance for high-risk AI. ISO 42001 certification requires a formal AI policy. Regulators are not waiting for you to catch up.


Legal Liability

Penalties for prohibited AI practices reach up to 35 million EUR or 7% of annual global turnover, whichever is higher. Art. 5 violations carry the steepest fines.


Data Exposure

Employees are entering PII, trade secrets, and proprietary data into public AI tools right now. Without an AUP, there is no policy violation to enforce.

Algorithmic Bias

Discriminatory outputs from AI systems create legal risk and reputational damage. Without mandatory fairness testing, you will not find bias until it finds you.

Operational Failures

Unvetted AI systems can create cascading business disruption. Shadow AI, where employees adopt tools without IT oversight, is the fastest-growing governance gap.

Mapping Your AUP to Frameworks

Every section of your AUP should trace back to established frameworks. Here is how the three major standards align with your policy structure.

AUP Section | EU AI Act Requirement
Risk Management | Art. 9: Risk identification, analysis, and mitigation throughout the AI system lifecycle
Quality Management | Art. 17: Documented roles, responsibilities, training requirements, and quality management procedures
Post-Market Monitoring | Art. 72: Monitoring plans, review schedules, and ongoing performance metrics
Incident Reporting | Art. 73: Serious incident reporting (2 days for critical infrastructure disruption, 10 days for death, 15 days for other serious incidents)
Prohibited Practices | Art. 5: Banned AI applications including social scoring, manipulative techniques, and penalty exposure
Human Oversight | Art. 14: Human review requirements for high-risk AI systems, including kill-switch capabilities
AUP Section | NIST AI RMF Function
Policy Document | GOVERN: Your AUP is the primary GOVERN artifact, establishing organizational AI risk management
Intake & Assessment | MAP: Context establishment and risk characterization for each AI system
Testing & Validation | MEASURE: Fairness, bias, reliability, and performance evaluation
Risk Treatment | MANAGE: Risk treatments, incident response procedures, and decommissioning plans
Stakeholder Engagement | GOVERN 1.4 / MAP 5.2: Internal and external stakeholder engagement and feedback requirements
AUP Section | ISO 42001 Clause
Policy Creation | Cl. 5.2: AI policy establishment and communication across the organization
Implementation | Cl. 8.1: Operational planning and lifecycle execution
Monitoring | Cl. 9.1: Performance monitoring and measurement against objectives
Improvement | Cl. 10.1 / 10.2: Nonconformity handling and continual improvement
Risk Assessment | Cl. 6.1 / 8.2: Risk actions and AI risk assessment processes
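Keeping these crosswalks in machine-readable form makes audit prep easier: you can query which framework clauses back each AUP section, or which sections a given standard touches. A minimal sketch, using a few illustrative entries drawn from the tables above (the dictionary and function names are hypothetical, not part of any standard):

```python
# Hypothetical crosswalk keyed by AUP section; values are the framework
# citations listed in the mapping tables above (abbreviated here).
CROSSWALK = {
    "Risk Management": ["EU AI Act Art. 9", "NIST MANAGE", "ISO 42001 Cl. 6.1"],
    "Human Oversight": ["EU AI Act Art. 14"],
    "Incident Reporting": ["EU AI Act Art. 73"],
    "Policy Document": ["NIST GOVERN", "ISO 42001 Cl. 5.2"],
}

def citations_for(section: str) -> list[str]:
    """Return the framework citations that back a given AUP section."""
    return CROSSWALK.get(section, [])

def sections_citing(framework: str) -> list[str]:
    """Return every AUP section that cites the given framework (substring match)."""
    return [s for s, refs in CROSSWALK.items()
            if any(framework in r for r in refs)]
```

A structure like this also feeds the framework crosswalk table auditors expect to see alongside the policy itself.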
📥 Free Download
Regulatory Mapping Cheat Sheet
Map your AUP sections to specific NIST subcategories, ISO 42001 clauses, and EU AI Act articles in one reference sheet.
Free Download →

Building Your Policy: Foundation First

A policy without organizational buy-in is just a PDF on a shared drive. Follow this sequence to build something that actually gets enforced.

1

Executive Sponsorship

Secure C-suite backing with resource allocation authority. Without executive sponsorship, your AUP has no enforcement mechanism. The sponsor signs the policy, owns the budget, and breaks organizational logjams.

Read: AI Governance Charter Guide →
Executive Sponsor Named
2

Cross-Functional Team

Assemble representatives from Legal, Compliance, IT, InfoSec, HR, Data Science, and key business units. Each function brings a perspective that keeps the policy practical rather than theoretical.

Read: AI Governance Committee Hub →
Working Group Chartered
3

AI System Inventory

You cannot govern what you cannot see. Build a central registry of every AI system in use: vendor tools, internal models, embedded AI features, and experimental projects. This is the foundation your risk classification will sit on.

Read: AI Use Case Inventories → Read: AI Use Case Tracker →
System Registry Complete
4

Risk Classification

Apply a tiered framework (Low, Medium, High, Unacceptable) to every system in your inventory. This drives proportionate controls, so you do not over-govern a grammar checker or under-govern a hiring algorithm.

Read: AI Risk Management Hub →
Risk Tiers Assigned
5

Policy Drafting

Write the policy covering purpose, scope, governance structure, permitted and prohibited uses, data handling rules, risk management processes, and enforcement mechanisms. Use the essential sections checklist below.

Draft Policy Complete
6

Review & Approval

Circulate for stakeholder feedback, incorporate revisions, obtain legal sign-off, and secure formal executive approval. Then launch through the 90-day rollout plan.

Policy Approved
📥 Free Download
Charter Implementation Checklist
Track every phase of your governance foundation build, from executive sponsorship through formal approval.
Free Download →
📥 Free Download
40-Field AI Use Case Tracker Template
Document every AI system with our fillable template. Covers all the fields your inventory needs for risk classification.
Free Download →

Essential Policy Sections

Every effective AUP needs these eight sections. Skip one and you will have gaps auditors and regulators will find before you do.

1. Purpose & Scope

Define the policy objectives, which AI technologies are covered, which personnel are in scope, and the organizational boundaries. Be explicit about what counts as an “AI system.”

ISO 42001 Cl. 5.2

2. Definitions

AI systems, high-risk AI, sensitive data classifications, prohibited practices, shadow AI, and any domain-specific terminology. No ambiguity in enforcement.

EU AI Act Art. 3

3. Guiding Principles

Lawfulness, fairness, transparency, accountability, and human oversight. These principles anchor every rule in the policy and provide the reasoning behind restrictions.

NIST GOVERN 1.1

4. Governance Structure

Oversight council composition, operational governance office, system owner responsibilities, and decision authority levels. Who approves what, and who can say no.

ISO 42001 Cl. 5.3

5. Usage Rules

Permitted uses by risk tier, explicitly prohibited activities, data governance requirements, and restrictions on public AI tools. This is where shadow AI prevention lives.

EU AI Act Art. 5

6. Processes

Intake and registration workflow, risk assessment methodology, approval gates, ongoing monitoring requirements, and incident management procedures.

NIST MAP / MEASURE

7. Documentation

System inventories, risk assessment records, decision logs, model documentation standards, and audit trail requirements. If it is not documented, it did not happen.

EU AI Act Art. 11

8. Enforcement

Violation categories, escalation procedures, consequences by severity, reporting mechanisms, and whistleblower protections. A policy without enforcement is a suggestion.

ISO 42001 Cl. 10.1
Don’t Build Your AUP from Scratch

We offer both a free and a professional AUP template because we believe every organization deserves a starting point for responsible AI, regardless of budget. AI governance done well creates positive outcomes for everyone. AI governance skipped creates headlines.

The free community template gives you a solid foundation. The professional edition gives you the depth, framework alignment, and audit-ready documentation that saves weeks of legal and compliance review.

Free Community Edition
$0
AI Acceptable Use Policy Starter

A clear starting point for organizations beginning their AI governance journey. Covers foundational policy structure, key definitions, and basic usage rules you can customize for your context.

  • Core policy structure and sections
  • Key definitions and scope language
  • Basic permitted/prohibited use rules
  • Instant download, no strings attached
Download Free Template
Recommended
Professional
$15
AI Acceptable Use Policy Template

35-page .docx with 24 sections, 7 operational appendices, and 139 verified citations. Covers acceptable use, prohibited activities, enforcement, risk matrix, ethics charter, incident response workflow, and approval processes.

  • 6 frameworks: NIST, EU AI Act, ISO 42001, ISO 27001, OECD, IEEE
  • Agentic AI controls + GPAI compliance obligations
  • 139 verified source citations for audit defense
  • Framework crosswalk table + 7 operational appendices
Get the Professional Template →

Both templates are built from the same research that powers this article. The professional edition adds depth, framework crosswalks, and the operational appendices that compliance teams and auditors expect to see.

Sample Policy Language

These are starting points for your own policy language, not legal advice. Customize for your organizational context and have legal counsel review before adoption.

Data Protection EU AI Act Art. 10

“Personnel are strictly prohibited from inputting any data classified as Confidential, Restricted, or containing personally identifiable information (PII) into publicly available AI tools, including but not limited to ChatGPT, Google Gemini, Claude, and Copilot consumer editions. All AI-processed data must remain within approved organizational systems that meet our data classification and handling standards.”
Adapt classifications to match your organization’s data governance framework.

Decision-Making Restrictions EU AI Act Art. 14

“AI systems may not be used to make final decisions regarding employment, credit, housing, healthcare, or legal matters without meaningful human review by a qualified individual. All Medium and High Risk AI systems involved in consequential decisions must include documented human-in-the-loop controls with override capabilities.”
Align “meaningful human review” with EU AI Act Art. 14 human oversight requirements.

Bias Prevention ISO 42001 Cl. 8.4

“All AI systems classified as Medium or High Risk must undergo fairness evaluation testing across relevant demographic categories before deployment and on a recurring quarterly basis. Testing results, including any identified disparate impact, must be documented and reported to the AI Governance Office within 10 business days of completion.”
Define “relevant demographic categories” based on applicable anti-discrimination law.

Shadow AI Prevention NIST GOVERN 1.6

“All AI tools, services, and applications must be registered in the organization’s AI System Inventory prior to use. Use of unregistered AI tools, including personal accounts on public AI services for work-related tasks, constitutes a policy violation. The IT Security team will conduct quarterly scans to identify unauthorized AI tool usage across organizational networks.”
See our Shadow AI Risk Detection guide for identification and mitigation strategies.
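The quarterly scan described above often starts as a simple comparison of domains observed in proxy or DNS logs against a watch list of public AI services. A minimal sketch, assuming you already export observed domains from your logging stack (the domain lists here are illustrative examples, not a complete inventory of AI services):

```python
# Illustrative watch list of public AI-service domains.
KNOWN_AI_DOMAINS = {
    "chat.openai.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com",
}
# Tools formally registered in the AI System Inventory (example).
APPROVED = {"copilot.microsoft.com"}

def flag_shadow_ai(observed_domains: set[str]) -> set[str]:
    """Return AI-service domains seen on the network but not approved."""
    return (observed_domains & KNOWN_AI_DOMAINS) - APPROVED
```

Anything this returns is a candidate shadow AI finding: a tool in active use that never went through intake.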

Incident Reporting EU AI Act Art. 73

“Any employee who discovers or suspects an AI system malfunction, bias incident, data breach, or safety concern must report it to the AI Governance Office within 24 hours using the designated incident reporting channel. For high-risk AI systems, the organization must notify the relevant market surveillance authority within the timelines specified by the EU AI Act: 2 days for critical infrastructure disruption, 10 days for incidents involving death, and 15 days for all other serious incidents.”
Incident reporting timelines per EU AI Act Art. 73(3). Adapt notification channels to your org structure.
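Because the Art. 73 windows differ by incident type, it helps to compute the notification deadline mechanically at intake time rather than leaving it to memory. A small sketch using the timelines quoted above (the category keys and function name are illustrative):

```python
from datetime import date, timedelta

# Reporting windows per EU AI Act Art. 73(3), as summarized above.
REPORTING_DAYS = {
    "critical_infrastructure": 2,   # serious disruption of critical infrastructure
    "death": 10,                    # incidents involving death
    "other_serious": 15,            # all other serious incidents
}

def notification_deadline(detected: date, incident_type: str) -> date:
    """Latest date to notify the market surveillance authority."""
    return detected + timedelta(days=REPORTING_DAYS[incident_type])
```

The internal 24-hour reporting rule in the sample language above is what gives the governance office enough runway to meet these external deadlines.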

These five examples cover the most common policy provisions, but a production AUP typically includes 15 to 20 specific policy statements across all 8 essential sections. The key principle: be specific enough to be enforceable, but flexible enough to accommodate different risk levels and use cases. Vague policies create loopholes. Overly rigid policies get ignored.

📥 AUP Templates
All sample language above (and more) pre-written and ready to customize
Start with the free community edition for foundational policy structure, or go deeper with the professional template ($15) for 35 pages, 7 appendices, and 139 verified citations across 6 frameworks.

Risk Classification Framework

Not every AI system needs the same level of governance. This four-tier framework ensures proportionate controls: minimal friction for low-risk tools, maximum scrutiny for high-risk systems, and an outright ban on unacceptable applications.

⚠ High Risk Hiring and termination decisions, health data processing, credit determinations, content moderation affecting access to services, and safety-critical systems. Requires full risk assessment, human oversight, ongoing monitoring, and incident reporting.
⚠ Medium Risk Customer service chatbots handling personal data, content generation with company data, business analytics informing strategy, and automated scheduling affecting employees. Requires risk assessment, periodic review, and documented safeguards.
✓ Low Risk Grammar and spell checking, internal translation, template content generation with human review, calendar assistants, and code completion tools on non-sensitive repositories. Standard registration and monitoring.
✗ Unacceptable Social scoring of natural persons, real-time biometric identification in publicly accessible spaces, subliminal techniques beyond a person's consciousness and other manipulative or deceptive techniques, exploitation of vulnerabilities of specific groups, emotion recognition in workplaces and education, untargeted facial image scraping, biometric categorization inferring sensitive attributes, and predictive policing based solely on profiling or assessment of traits (EU AI Act Art. 5).
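The tier descriptions above imply a controls matrix: a system's tier determines which safeguards are mandatory before it runs. A minimal sketch of that mapping, with illustrative control names condensed from the tier descriptions:

```python
# Controls implied by each tier, per the descriptions above (illustrative).
TIER_CONTROLS = {
    "Low":    {"registration", "monitoring"},
    "Medium": {"registration", "monitoring",
               "risk_assessment", "periodic_review"},
    "High":   {"registration", "monitoring",
               "risk_assessment", "periodic_review",
               "human_oversight", "incident_reporting"},
}

def required_controls(tier: str) -> set[str]:
    """Controls mandatory for a tier; Unacceptable systems are banned outright."""
    if tier == "Unacceptable":
        raise ValueError("Unacceptable-tier systems may not be deployed at all")
    return TIER_CONTROLS[tier]
```

Note the deliberate subset structure: each higher tier inherits every control from the tier below it, so escalating a system's tier never relaxes a safeguard.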

AI Tool Classification: Common Examples

Use this table as a starting reference when classifying AI tools in your organization. Your specific context, data sensitivity, and intended use cases may shift a tool into a different tier.

AI Tool / Category | Typical Tier | Key Considerations
Grammar and writing assistants | Low | Minimal data exposure; ensure sensitive documents are not processed through cloud APIs
Code completion tools (Copilot, Codeium) | Low | Review IP exposure; restrict on proprietary codebases; check vendor data retention policies
Customer service chatbots | Medium | Handles personal data; requires data classification review and human escalation paths
Generative AI (ChatGPT, Claude, Gemini) | Medium | Data leakage risk; define what data can be input; require human review of all outputs
Predictive analytics (demand, pricing) | Medium | Business-critical decisions; monitor for model drift; validate against known baselines
HR screening and recruitment AI | High | Directly affects individuals' opportunities; mandatory bias testing and human override required
Credit and insurance decisioning | High | Legal obligations; explainability required; ongoing monitoring and audit trail mandated
Real-time biometric ID in public spaces | Prohibited | Banned under EU AI Act Art. 5(1)(h); no exceptions for commercial deployment
📥 Free Download
Risk Tier Decision Tree
7-question interactive flow to classify any AI system from Unacceptable to Low Risk, with EU AI Act obligations mapped per tier.
Free Download →

Decision Framework: Questions Every AI System Must Answer

Before any AI system enters your environment, it should answer these questions. Use them as intake criteria for new AI projects and procurement requests.

Data Sensitivity

Does this AI process confidential, personal, or proprietary data? What classification levels does the data fall under? Can data be anonymized before processing?

Decision Impact

Can this AI’s outputs directly affect individuals’ rights, opportunities, or access to services? Are the decisions reversible? What is the blast radius of an incorrect output?

Human Oversight

Is there meaningful human review in the decision loop? Can a qualified person override, correct, or reverse AI-generated outcomes? How quickly can a kill switch be activated?

Regulatory Exposure

Does this AI system fall under EU AI Act high-risk categories? Does it process data subject to GDPR, HIPAA, or sector-specific regulations? What compliance evidence must be maintained?

Bias & Fairness

Could this system produce different outcomes for different demographic groups? What testing has been done? What ongoing monitoring is in place to detect drift or emerging bias?

Vendor & Supply Chain

If third-party AI, what are the vendor’s data handling practices? Where is data processed geographically? What contractual protections exist for data security and model changes?

These questions feed directly into two formal processes: intake forms (structured registration capturing business problem, data sources, sensitivity levels, users, and risk score) and impact assessments (deep analysis for medium and high-risk systems covering stakeholder impacts, fairness evaluation, privacy implications, and specific mitigation strategies). Both processes should be mandatory before any AI system moves to production.
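The six question areas above can drive a first-pass tier suggestion on the intake form, before a human reviewer confirms the final assignment. A deliberately simple sketch, assuming yes/no answers (the answer keys and precedence rules are illustrative, not a substitute for the full assessment):

```python
# Hypothetical first-pass triage: map intake answers onto a suggested
# tier. Final tier assignment remains a human governance decision.
def suggest_tier(answers: dict[str, bool]) -> str:
    """answers keys (all optional): prohibited_use, affects_rights,
    regulated_domain, sensitive_data, meaningful_oversight."""
    if answers.get("prohibited_use"):
        return "Unacceptable"           # banned outright under Art. 5
    if answers.get("affects_rights") or answers.get("regulated_domain"):
        return "High"                   # consequential decisions or regulated data
    if answers.get("sensitive_data") or not answers.get("meaningful_oversight", True):
        return "Medium"                 # data exposure or weak human review
    return "Low"
```

The ordering matters: prohibited uses short-circuit everything else, and rights impact outranks data sensitivity, mirroring the precedence in the four-tier framework above.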

Governance Roles: Who Owns What

Clear accountability prevents governance gaps. Every organization needs these four layers of AI oversight, scaled to fit your size and complexity.

AI Oversight Council

Senior leadership body responsible for strategic direction, policy approval, and high-risk system decisions. Typically includes CTO, CISO, General Counsel, Chief Ethics Officer, and business unit leaders.

ISO 42001 Cl. 5.1

AI Governance Office

Operational team handling daily governance: intake reviews, risk assessments, inventory management, compliance monitoring, training coordination, and incident triage. The engine that makes the policy work.

NIST GOVERN 2.1

AI System Owners

Named individuals (not teams) accountable for specific AI systems throughout their lifecycle. Responsible for documentation, risk management, performance monitoring, and compliance with policy requirements for their assigned systems.

EU AI Act Art. 26

All Personnel

Every employee and contractor bears responsibility for following the AUP, completing required training, reporting policy violations or incidents, and using only approved AI tools. The policy applies equally at every level.

NIST GOVERN 2.2

For detailed guidance on building and staffing your AI governance committee, including RACI matrices, meeting cadences, and authority cascades, see our AI Governance Committee Hub and the 8-Stage Implementation Guide.

Implementation: 90-Day Rollout

Your policy is approved. Now you need people to actually follow it. A structured 90-day rollout turns a document into an operational reality.

Phase 1: Days 1-30
  • Executive announcement from the sponsor to all staff, with mandatory acknowledgment
  • Mandatory training sessions covering policy scope, risk tiers, and what changes for each role
  • Q&A sessions per department to address concerns and edge cases
  • AI intake forms become mandatory for all new AI project initiation
  • Policy portal launched with templates, decision trees, and FAQs
Key Deliverable: Communication Complete + Process Gates Active
Phase 2: Days 31-60
  • Impact assessments required for all Medium and High Risk projects in the pipeline
  • Weekly office hours from the governance team for implementation support
  • Department-specific guidance documents published for Engineering, Marketing, HR, and Legal
  • Shadow AI discovery sweep to identify unapproved tools already in use
  • Remediation plans created for legacy AI systems that predate the policy
Key Deliverable: Assessments Embedded in Dev Lifecycle
Phase 3: Days 61-90
  • First submission analysis reviewing all intake requests for patterns and bottlenecks
  • Early adopter survey collecting feedback on process friction and policy clarity
  • Template and training updates based on real-world implementation experience
  • Governance dashboard published with compliance metrics, submission counts, and risk distribution
  • First governance report delivered to executive sponsor and steering committee
Key Deliverable: First Governance Report Published
📥 Free Download
Board AI Governance Summary Template
Present your 90-day governance results to the board with this 9-section quarterly report. Includes live compliance bars, KPI cards, and risk distribution charts.
Free Download →

Measuring Success: Governance KPIs

A policy without metrics is just a suggestion. Track these indicators quarterly to prove your AUP is working and to identify areas that need attention.

Policy Coverage Rate

Percentage of AI systems in your inventory covered by the AUP. Target: 100% within 90 days of rollout. Track registered vs. discovered shadow AI instances.

NIST GOVERN 1.6

Training Completion

Percentage of in-scope employees who have completed mandatory AI AUP training. Target: 95%+ by end of Phase 2. Break down by department and role level.

NIST GOVERN 2.2

Incident Response Time

Average time from AI incident detection to documented response. Track against EU AI Act Art. 73 reporting timelines: 2 days for critical infrastructure, 10 days for death, 15 days for other serious incidents.

EU AI Act Art. 73

Risk Assessment Completion

Percentage of medium and high-risk AI systems with completed impact assessments. Target: 100% for high-risk within 30 days of registration, 60 days for medium-risk.

ISO 42001 Cl. 6.1

Policy Violation Rate

Number of documented AUP violations per quarter. Track by severity (critical, major, minor), department, and violation type. Downward trend indicates policy adoption.

ISO 42001 Cl. 10.1

Stakeholder Satisfaction

Quarterly survey score from AI system owners and business unit leaders on governance process friction, clarity, and support. Governance should enable innovation, not block it.

NIST GOVERN 1.4
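Two of these KPIs fall straight out of the inventory and training records and can be computed on the dashboard each quarter. A minimal sketch (function and parameter names are illustrative):

```python
# Hypothetical KPI calculations from inventory and training data.
def policy_coverage_rate(registered: int, discovered_shadow: int) -> float:
    """Share of known AI systems registered under the AUP.
    Shadow AI found in discovery sweeps counts against coverage."""
    total = registered + discovered_shadow
    return registered / total if total else 1.0

def training_completion(completed: int, in_scope: int) -> float:
    """Share of in-scope personnel who finished mandatory AUP training."""
    return completed / in_scope if in_scope else 1.0
```

For example, 90 registered systems plus 10 shadow AI discoveries yields 90% coverage against the 100% target, which immediately tells the steering committee where Phase 2 remediation should focus.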

Leaders Setting the Standard

These organizations are not waiting for regulation to force their hand. They are publishing policies, achieving certifications, and setting the bar your competitors will be measured against.

Cloud / Enterprise

Amazon Web Services

First major cloud provider to achieve ISO/IEC 42001 certification for Amazon Bedrock. Published responsible AI policy covering model evaluation, data governance, and customer-facing transparency obligations.

Platform / Enterprise

Microsoft

Responsible AI Standard with mandatory impact assessments for all AI products. Embedded governance gates into the product development lifecycle. Published guidelines covering fairness, reliability, safety, and inclusiveness.

Platform / Research

Google

AI Principles with explicit prohibited applications, including weapons, surveillance violating norms, and technologies that cause overall harm. Internal review boards evaluate projects against these principles before launch.

SaaS / CRM

Salesforce

Specific restrictions on automated decisions with legal or similarly significant effects. Published “Trusted AI Principles” covering accountability, transparency, empowerment, and inclusivity for Einstein AI products.

Government / Federal

US Department of Homeland Security

Published a federal framework for safe and responsible AI adoption across DHS components. Includes mandatory AI use case inventories, risk assessments, and human oversight requirements for rights-impacting AI.

Templates & Tools for Your AUP Journey

Everything you need to build, operationalize, and maintain your AI Acceptable Use Policy.

Save Time
Free AI Governance Bundle

All six free tools in one download. Checklist, decision tree, regulatory mapping, tracker, board summary, and more.

Download the Free Bundle →

AI Acceptable Use Policy at a Glance

Save or share this infographic as a quick reference for your team.

AI Acceptable Use Policy Infographic

Author

Tech Jacks

I’m the Founder of Tech Jacks Solutions and a Senior Director of Cloud Security Architecture & Risk (CISSP, CRISC, CCSP), with 20+ years helping organizations (from SMBs to Fortune 500) secure their IT, navigate compliance frameworks, and build responsible AI programs.
