
AI Vendor Risk Assessment: Third-Party Due Diligence for AI Systems

How to govern what you did not build. A step-by-step guide to evaluating, contracting, and monitoring third-party AI systems.

Derrick D. Jackson | CISSP, CRISC, CCSP · April 2026 · ~18 min read

6 Assessment Steps · 6 Checklist Categories · 4 Framework Mappings · 7 Contract Clauses

Most organizations do not build their own AI. They buy it. ChatGPT, Microsoft Copilot, Salesforce Einstein, automated HR screening tools, AI-driven analytics platforms. Every one of these represents a third-party AI system your organization is accountable for, even though you never wrote a single line of the underlying model code.

Traditional vendor risk management was built for SaaS applications and cloud infrastructure. AI systems introduce entirely new risk categories: model opacity, training data provenance, algorithmic bias, drift over time, and regulatory obligations that fall on the deployer, not just the provider. If your vendor risk program has not been updated for AI, you have a gap.

Developing vs. Consuming AI: Why It Matters for Vendor Risk

TJS is one of the few governance providers that separates governance requirements for organizations that build AI systems from those for organizations that procure and deploy vendor AI. The risk profiles, compliance obligations, and control frameworks are fundamentally different.

Developing AI

  • Full control over training data
  • Model architecture decisions
  • Internal bias testing
  • Direct access to model weights
  • Provider obligations under EU AI Act

Consuming Vendor AI

  • No visibility into training data
  • Black-box model behavior
  • Dependent on vendor transparency
  • Contract-based risk controls
  • Deployer obligations under EU AI Act

The 6-Step Vendor Risk Assessment Process

From initial identification through continuous monitoring, each step builds on the last.

1. Identify Vendor AI: Catalog all third-party AI systems in use or under evaluation.

2. Request Documentation: Collect model cards, SOC reports, compliance certs, and data handling policies.

3. Score Risk: Apply risk tier classification using likelihood and impact scoring.

4. Evaluate Controls: Assess security, bias, transparency, and compliance controls.

5. Negotiate Terms: Embed AI-specific clauses into vendor contracts and SLAs.

6. Monitor Ongoing: Run quarterly reviews, drift alerts, and re-assessment triggers.
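Step 3's likelihood-and-impact scoring can be sketched as a small helper. The 1-5 scales, the multiplicative score, and the tier cutoffs below are illustrative assumptions, not a prescribed standard; calibrate them to your own risk appetite.

```python
# Sketch of Step 3 risk scoring: likelihood x impact mapped to a tier.
# The 1-5 scales and tier cutoffs are illustrative assumptions.

def score_vendor_ai(likelihood: int, impact: int) -> dict:
    """Return a risk score and tier for a vendor AI system.

    likelihood, impact: 1 (very low) to 5 (very high).
    """
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be between 1 and 5")
    score = likelihood * impact  # ranges 1..25
    if score >= 15:
        tier = "high"       # e.g. automated HR screening, credit decisions
    elif score >= 8:
        tier = "medium"     # e.g. customer-facing chatbots
    else:
        tier = "low"        # e.g. internal drafting assistants
    return {"score": score, "tier": tier}

# Example: an HR screening tool with moderate likelihood, high impact.
print(score_vendor_ai(likelihood=3, impact=5))  # {'score': 15, 'tier': 'high'}
```

The multiplicative score is a common simplification; some programs use a matrix lookup instead so that high-impact systems can never land in the low tier regardless of likelihood.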

Free Download
Risk Tier Decision Tree
Classify each vendor AI system into the right risk tier with our interactive 7-question decision tree.
Download the Decision Tree →

Vendor Due Diligence Checklist

Six categories, 30+ questions. Use this as your standard intake questionnaire for every AI vendor evaluation.

1. Data Handling & Privacy

  • Data Processing Location: Where is customer data processed? Which jurisdictions? Are there sub-processors?
  • Data Retention: How long is input/output data stored? Can you enforce deletion schedules?
  • Cross-Border Transfer: Does data leave your region? What transfer mechanisms are in place (SCCs, adequacy decisions)?
  • Encryption: Is data encrypted at rest and in transit? What algorithms? Who holds the keys?
  • Training Data Usage: Does the vendor use your data to train or fine-tune models? Can you opt out?

Framework mapping: GDPR Art. 28 · ISO 42001 A.10
2. Model Transparency & Documentation

  • Model Card: Does the vendor publish a model card (intended use, limitations, performance benchmarks)?
  • Explainability: Can the vendor explain how the model reaches decisions? What level of output reasoning is available?
  • Bias Testing: Has the vendor conducted bias and fairness testing? Are results published or available on request?
  • Performance Benchmarks: What accuracy, precision, and recall metrics does the vendor report? Against which datasets?
  • Known Limitations: Does the vendor document failure modes, edge cases, and known limitations?

Framework mapping: EU AI Act Art. 13 · NIST MAP 2.3
3. Security Posture

  • SOC 2 Type II: Does the vendor hold current SOC 2 Type II certification? When was the last audit?
  • ISO 27001: Is the vendor ISO 27001 certified? Does the certificate cover AI operations specifically?
  • Penetration Testing: How often does the vendor conduct pen tests? Are results or summaries available?
  • Incident History: Has the vendor had data breaches or security incidents? What was the response?
  • AI-Specific Threats: Does the vendor test for prompt injection, data poisoning, model extraction, and adversarial inputs?

Framework mapping: CSA GRC · NIST MANAGE 2.4
4. Regulatory Compliance

  • EU AI Act Conformity: Has the vendor conducted a conformity assessment for high-risk AI systems?
  • GDPR Compliance: Is the vendor GDPR-compliant? Do they have a Data Protection Officer? DPIA available?
  • Sector Regulations: Does the vendor meet industry-specific requirements (HIPAA, PCI-DSS, FedRAMP, SOX)?
  • Regulatory Roadmap: How is the vendor preparing for upcoming AI regulation? Is there a published compliance timeline?

Framework mapping: EU AI Act Art. 26 · GDPR · ISO 42001 Cl. 4.2
5. Contract & SLA Terms

  • SLA Uptime: What availability guarantees does the contract include? What are the remedies for downtime?
  • Liability Clauses: Who is liable for AI-generated harm? Are there caps on liability? Indemnification provisions?
  • Right to Audit: Can your organization audit the vendor AI system, or request third-party audits?
  • Exit Strategy: What are the data portability and transition provisions? Is there vendor lock-in risk?
  • Change Notification: Is the vendor obligated to notify you before model updates, retraining, or architecture changes?

Framework mapping: NIST GOVERN 6.1 · ISO 42001 A.10
6. Operational Oversight

  • Training Data Provenance: Can the vendor describe the origin, composition, and licensing of training data?
  • Model Versioning: Does the vendor maintain version history? Can you pin to a specific model version?
  • Drift Monitoring: Does the vendor monitor for model drift? How are performance degradations detected and communicated?
  • Human-in-the-Loop Options: Can you configure HITL review for high-stakes decisions? What override mechanisms exist?
  • Output Logging: Does the vendor provide access to input/output logs for auditability?

Framework mapping: EU AI Act Art. 14 · NIST MEASURE 2.6 · ISO 42001 Cl. 8.4
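One way to operationalize the questionnaire is to represent it as structured data so vendor responses can be stored, compared, and gap-checked consistently. A minimal sketch follows; the category keys and the two abbreviated questions per category are illustrative stand-ins for the full checklist, not an official schema.

```python
# Sketch: the due-diligence checklist as structured intake data, so
# vendor responses can be tracked and open items flagged automatically.
# Category keys and questions are abbreviated illustrations of the
# checklist above, not an official schema.

CHECKLIST = {
    "data_privacy": [
        "Where is customer data processed, and which sub-processors are involved?",
        "Is customer data used to train or fine-tune models, and is opt-out available?",
    ],
    "transparency": [
        "Is a model card published (intended use, limitations, benchmarks)?",
        "Are bias and fairness testing results available?",
    ],
    "security": [
        "Is a current SOC 2 Type II report available?",
        "Are prompt injection, data poisoning, and model extraction tested?",
    ],
}

def open_items(responses: dict) -> list:
    """Return every checklist question the vendor has not yet answered."""
    gaps = []
    for category, questions in CHECKLIST.items():
        answered = responses.get(category, {})
        gaps.extend(q for q in questions if not answered.get(q))
    return gaps

# Example: a vendor has supplied only its SOC 2 report so far.
responses = {"security": {CHECKLIST["security"][0]: "Report dated 2025-06"}}
print(len(open_items(responses)))  # 5 questions still open
```

Keeping the questionnaire in a machine-readable form also makes it straightforward to feed answers into the risk scoring step and to diff responses across annual re-assessments.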
Free Download
40-Field AI Use Case Tracker Template
Track every vendor AI system with our 40-field template. Includes risk scoring, compliance mapping, and owner assignment fields.
Download the Tracker →

Third-Party Risk Under Each Framework

Every major AI governance framework addresses vendor and third-party risk. Here is what each one requires.

  • ISO 42001 (Annex A.10): Third-party and customer relationships controls. Requires documented policies for AI system suppliers, assessment of third-party AI risks, and contractual provisions for transparency and accountability.
  • NIST AI RMF (GOVERN 6.1-6.2): Policies and procedures for third-party AI risks. Organizations must address risks from AI systems developed or deployed by external entities, including supply chain provenance and data integrity.
  • EU AI Act (Art. 25-27): Deployer obligations for high-risk AI systems. Deployers must use vendor AI in accordance with instructions, monitor performance, keep logs, conduct DPIAs, and report serious incidents. Art. 26 places direct compliance duties on deployers.
  • CSA GRC (Vendor Risk Management): AI vendor risk management within GRC responsibilities. Covers vendor evaluation criteria, ongoing monitoring, contractual security requirements, and incident coordination with vendors.
Free Download
Regulatory Mapping Cheat Sheet
40 controls mapped across ISO 42001, NIST AI RMF, EU AI Act, and CSA. See exactly where vendor risk fits in each framework.
Download the Mapping →

Shadow AI as a Vendor Risk Category

You cannot assess what you do not know about. Shadow AI is the fastest-growing vendor risk category in most organizations.

84% of internal audit departments lack an AI audit framework, leaving vendor AI unmonitored (ECIIA 2024).

What Is Shadow AI?

Shadow AI refers to vendor AI tools adopted by employees or teams without IT approval, security review, or governance oversight. Every shadow AI tool is an unassessed vendor relationship. It bypasses your intake process, your risk scoring, your contract protections, and your compliance controls.

Detection Methods
🔌 Network Monitoring: DNS logs, firewall rules, and proxy analysis to identify traffic to known AI API endpoints.

💻 Endpoint Analysis: Browser extensions, installed applications, and OAuth token grants to AI services.

💰 Procurement Audit: Expense reports, credit card statements, and P-card usage for AI tool subscriptions.

📋 Employee Survey: Anonymous self-reporting surveys asking what AI tools teams use in daily workflows.
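The network-monitoring approach can be sketched as a scan of DNS query logs against a watchlist of known AI service domains. The log format (timestamp, client IP, queried domain) and the small watchlist below are illustrative assumptions; a real deployment would consume your firewall or proxy export and a maintained domain feed.

```python
# Sketch: flag DNS queries to known AI API endpoints in a log export.
# The watchlist and the "timestamp client-ip domain" log format are
# illustrative assumptions; adapt both to your proxy/firewall output.

AI_DOMAIN_WATCHLIST = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def find_shadow_ai(dns_log_lines):
    """Return (client, domain) pairs for queries that hit the watchlist."""
    hits = []
    for line in dns_log_lines:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip malformed lines rather than failing the scan
        _, client, domain = parts
        if domain.lower() in AI_DOMAIN_WATCHLIST:
            hits.append((client, domain))
    return hits

log = [
    "2026-04-01T09:14Z 10.0.3.17 api.openai.com",
    "2026-04-01T09:15Z 10.0.3.22 example.com",
]
print(find_shadow_ai(log))  # [('10.0.3.17', 'api.openai.com')]
```

Each hit is a lead, not a verdict: the client may be running a sanctioned integration, so route findings into the intake process rather than straight to blocking.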

Contract Clauses for AI Vendor Agreements

Standard vendor contracts were not designed for AI. These seven clause categories close the gap between traditional SaaS procurement and AI-specific risk.

Clause 1: Data Processing Terms
Define exactly what data the vendor can process, where it is stored, retention periods, deletion schedules, and whether customer data is used for model training. Require opt-out rights for training data usage.

Clause 2: Model Transparency Requirements
Require the vendor to provide model cards, performance benchmarks, known limitations documentation, and bias testing results. Include provisions for updates when model behavior changes.

Clause 3: Incident Notification SLA
Specify maximum notification timeframes for security incidents, data breaches, model failures, and bias discoveries. Align with EU AI Act Art. 73 serious incident timelines (10 days for death, 15 days for other serious incidents).

Clause 4: Right to Audit
Secure the contractual right to audit AI system behavior, request third-party assessments, and access performance monitoring data. Essential for deployer obligations under EU AI Act Art. 26.

Clause 5: Liability Caps & Indemnification
Address liability for AI-generated harm, including algorithmic discrimination, incorrect automated decisions, and IP infringement from AI outputs. Define indemnification scope and caps.

Clause 6: Termination Triggers
Define specific conditions that trigger contract termination: repeated compliance failures, unresolved bias findings, material model changes without notice, or failure to maintain security certifications.

Clause 7: IP Ownership & Output Rights
Clarify ownership of AI-generated outputs, derivative works, and fine-tuned models. Address whether the vendor retains rights to aggregated insights derived from your usage data.

Ongoing Vendor Monitoring

Assessment is not a one-time event. Vendor AI systems change constantly through retraining, updates, and data shifts. Your monitoring program needs to keep pace.

Quarterly: Performance Review

  • Accuracy and reliability metrics against SLA baselines
  • Incident count and mean time to resolution
  • User satisfaction and complaint trends
  • Cost per transaction or API call trending
  • Comparison against initial risk assessment scoring
Continuous: Drift & Alert Monitoring

  • Statistical drift detection on model outputs
  • Anomaly alerts for unexpected behavior patterns
  • Vendor security advisory monitoring
  • Regulatory change alerts affecting the vendor
  • Sub-processor or infrastructure change notifications
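Statistical drift detection on model outputs can be sketched with a two-sample Kolmogorov-Smirnov comparison between a baseline window and a recent window of output scores. The 0.2 alert threshold below is an illustrative assumption; calibrate it against your own false-alarm tolerance and window sizes.

```python
# Sketch: drift check comparing recent model output scores against a
# baseline window via the two-sample Kolmogorov-Smirnov statistic.
# The 0.2 threshold is an illustrative assumption, not a standard.

def ks_statistic(baseline, recent):
    """Max gap between the two empirical CDFs (0 = identical, 1 = disjoint)."""
    xs = sorted(set(baseline) | set(recent))
    cdf = lambda sample, x: sum(v <= x for v in sample) / len(sample)
    return max(abs(cdf(baseline, x) - cdf(recent, x)) for x in xs)

def drift_alert(baseline, recent, threshold=0.2):
    """True when the output distribution has moved past the threshold."""
    return ks_statistic(baseline, recent) > threshold

baseline = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
shifted  = [0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.0, 1.0]
print(drift_alert(baseline, shifted))  # True: scores shifted upward
```

For production volumes you would use a library implementation (for example `scipy.stats.ks_2samp`) and compare rolling windows, but the mechanism is the same: a drift alert is a signal to pull the vendor's change log and, if unexplained, fire a re-assessment trigger.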
Trigger-Based: Re-Assessment Triggers

  • Vendor announces a major model version change
  • New regulation affecting the AI system category
  • Security incident or data breach at the vendor
  • Significant performance degradation detected
  • Contract renewal approaching (90-day advance)
Free Download
Board Summary Template
Report vendor AI risk posture to your board with our quarterly summary template. Pre-formatted with KPI sections, risk tier summaries, and trend visualizations.
Download the Template →

Get Started With Vendor Risk Assessment

Everything you need to assess, contract, and monitor third-party AI vendors.

All-in-One Bundle
Download All Governance Tools - Free
Every community template and checklist in one download. One email, everything you need.
Get the Bundle →
Consulting
Need Help With Vendor Risk?

Our vendor risk assessment service includes AI-specific questionnaires, contract clause templates, risk scoring models, and ongoing monitoring frameworks. Built on ISO 42001, NIST AI RMF, and EU AI Act deployer obligations.