AI Vendor Risk Assessment: Third-Party Due Diligence for AI Systems
How to govern what you did not build. A step-by-step guide to evaluating, contracting, and monitoring third-party AI systems.
Most organizations do not build their own AI. They buy it. ChatGPT, Microsoft Copilot, Salesforce Einstein, automated HR screening tools, AI-driven analytics platforms. Every one of these represents a third-party AI system your organization is accountable for, even though you never wrote a single line of the underlying model code.
Traditional vendor risk management was built for SaaS applications and cloud infrastructure. AI systems introduce entirely new risk categories: model opacity, training data provenance, algorithmic bias, drift over time, and regulatory obligations that fall on the deployer, not just the provider. If your vendor risk program has not been updated for AI, you have a gap.
Developing vs. Consuming AI: Why It Matters for Vendor Risk
TJS is one of the few governance providers that separates the governance requirements for organizations that build AI systems from those for organizations that procure and deploy vendor AI. The risk profiles, compliance obligations, and control frameworks are fundamentally different.
Developing AI
- Full control over training data
- Model architecture decisions
- Internal bias testing
- Direct access to model weights
- Provider obligations under EU AI Act
Consuming Vendor AI
- No visibility into training data
- Black-box model behavior
- Dependent on vendor transparency
- Contract-based risk controls
- Deployer obligations under EU AI Act
The 6-Step Vendor Risk Assessment Process
From initial identification through continuous monitoring, each step builds on the last.
1. Identify Vendor AI: Catalog all third-party AI systems in use or under evaluation.
2. Request Documentation: Collect model cards, SOC reports, compliance certifications, and data handling policies.
3. Score Risk: Apply risk tier classification using likelihood and impact scoring.
4. Evaluate Controls: Assess security, bias, transparency, and compliance controls.
5. Negotiate Terms: Embed AI-specific clauses into vendor contracts and SLAs.
6. Monitor Ongoing: Run quarterly reviews, watch for drift alerts, and act on re-assessment triggers.
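The risk-scoring step above can be sketched as a simple likelihood × impact matrix. This is a minimal illustration: the `VendorAISystem` class and the tier thresholds are assumptions for the example, not taken from any standard, and should be replaced with your own scoring rubric.

```python
from dataclasses import dataclass

# Illustrative tier thresholds (score ranges), not from any standard.
TIERS = [(1, 6, "low"), (7, 14, "medium"), (15, 25, "high")]

@dataclass
class VendorAISystem:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    def score(self) -> int:
        # Classic likelihood x impact risk score, range 1..25.
        return self.likelihood * self.impact

    def tier(self) -> str:
        s = self.score()
        for low, high, label in TIERS:
            if low <= s <= high:
                return label
        raise ValueError(f"score out of range: {s}")

copilot = VendorAISystem("Example Copilot", likelihood=3, impact=4)
print(copilot.score(), copilot.tier())  # prints: 12 medium
```

The tier label can then drive how deep the rest of the assessment goes, e.g. requiring the full questionnaire and contract review only for medium and high tiers.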
Vendor Due Diligence Checklist
Six categories, nearly 30 questions. Use this as your standard intake questionnaire for every AI vendor evaluation.
Data Privacy & Handling
- Data Processing Location: Where is customer data processed? Which jurisdictions? Are there sub-processors?
- Data Retention: How long is input/output data stored? Can you enforce deletion schedules?
- Cross-Border Transfer: Does data leave your region? What transfer mechanisms are in place (SCCs, adequacy decisions)?
- Encryption: Is data encrypted at rest and in transit? What algorithms? Who holds the keys?
- Training Data Usage: Does the vendor use your data to train or fine-tune models? Can you opt out?
Model Transparency & Performance
- Model Card: Does the vendor publish a model card (intended use, limitations, performance benchmarks)?
- Explainability: Can the vendor explain how the model reaches decisions? What level of output reasoning is available?
- Bias Testing: Has the vendor conducted bias and fairness testing? Are results published or available on request?
- Performance Benchmarks: What accuracy, precision, and recall metrics does the vendor report? Against which datasets?
- Known Limitations: Does the vendor document failure modes, edge cases, and known limitations?
Security Posture
- SOC 2 Type II: Does the vendor hold current SOC 2 Type II certification? When was the last audit?
- ISO 27001: Is the vendor ISO 27001 certified? Does the certificate cover AI operations specifically?
- Penetration Testing: How often does the vendor conduct pen tests? Are results or summaries available?
- Incident History: Has the vendor had data breaches or security incidents? What was the response?
- AI-Specific Threats: Does the vendor test for prompt injection, data poisoning, model extraction, and adversarial inputs?
Regulatory Compliance
- EU AI Act Conformity: Has the vendor conducted a conformity assessment for high-risk AI systems?
- GDPR Compliance: Is the vendor GDPR-compliant? Do they have a Data Protection Officer? DPIA available?
- Sector Regulations: Does the vendor meet industry-specific requirements (HIPAA, PCI-DSS, FedRAMP, SOX)?
- Regulatory Roadmap: How is the vendor preparing for upcoming AI regulation? Is there a published compliance timeline?
Contract & Legal Terms
- SLA Uptime: What availability guarantees does the contract include? What are the remedies for downtime?
- Liability Clauses: Who is liable for AI-generated harm? Are there caps on liability? Indemnification provisions?
- Right to Audit: Can your organization audit the vendor AI system, or request third-party audits?
- Exit Strategy: What are the data portability and transition provisions? Is there vendor lock-in risk?
- Change Notification: Is the vendor obligated to notify you before model updates, retraining, or architecture changes?
Operational Governance
- Training Data Provenance: Can the vendor describe the origin, composition, and licensing of training data?
- Model Versioning: Does the vendor maintain version history? Can you pin to a specific model version?
- Drift Monitoring: Does the vendor monitor for model drift? How are performance degradations detected and communicated?
- Human-in-the-Loop Options: Can you configure HITL review for high-stakes decisions? What override mechanisms exist?
- Output Logging: Does the vendor provide access to input/output logs for auditability?
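As a rough sketch, the checklist can be encoded as a structured intake object so that answer completeness is tracked per category. The category keys and question identifiers below are hypothetical shorthand for the questions above, not a fixed schema:

```python
# Hypothetical machine-readable mirror of the intake checklist above.
CHECKLIST = {
    "data_privacy": ["processing_location", "retention", "cross_border",
                     "encryption", "training_data_usage"],
    "transparency": ["model_card", "explainability", "bias_testing",
                     "benchmarks", "limitations"],
    "security": ["soc2", "iso27001", "pen_testing",
                 "incident_history", "ai_threats"],
    "compliance": ["eu_ai_act", "gdpr", "sector_regs", "roadmap"],
    "contract": ["sla_uptime", "liability", "audit_right",
                 "exit", "change_notice"],
    "operations": ["provenance", "versioning", "drift", "hitl", "logging"],
}

def completeness(answers: dict) -> dict:
    """Fraction of questions answered satisfactorily per category.

    `answers` maps category -> {question_id: True if the vendor's
    answer was reviewed and accepted}.
    """
    out = {}
    for category, questions in CHECKLIST.items():
        got = answers.get(category, {})
        out[category] = sum(1 for q in questions if got.get(q)) / len(questions)
    return out
```

A structured record like this makes it easy to block contract signature until, say, every category reaches 100% and to diff answers at re-assessment time.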
Third-Party Risk Under Each Framework
Every major AI governance framework addresses vendor and third-party risk. Here is what each one requires.
| Framework | Key Reference | Third-Party Vendor Requirements |
|---|---|---|
| ISO 42001 | Annex A.10 | Third-party and customer relationships controls. Requires documented policies for AI system suppliers, assessment of third-party AI risks, and contractual provisions for transparency and accountability. |
| NIST AI RMF | GOVERN 6.1-6.2 | Policies and procedures for third-party AI risks. Organizations must address risks from AI systems developed or deployed by external entities, including supply chain provenance and data integrity. |
| EU AI Act | Art. 25-27 | Deployer obligations for high-risk AI systems. Deployers must use vendor AI in accordance with instructions, monitor performance, keep logs, conduct DPIAs, and report serious incidents. Art. 26 places direct compliance duties on deployers. |
| CSA GRC | Vendor Risk Mgmt | AI vendor risk management within GRC responsibilities. Covers vendor evaluation criteria, ongoing monitoring, contractual security requirements, and incident coordination with vendors. |
Shadow AI as a Vendor Risk Category
You cannot assess what you do not know about. Shadow AI is the fastest-growing vendor risk category in most organizations.
What Is Shadow AI?
Shadow AI refers to vendor AI tools adopted by employees or teams without IT approval, security review, or governance oversight. Every shadow AI tool is an unassessed vendor relationship. It bypasses your intake process, your risk scoring, your contract protections, and your compliance controls.
Common detection methods include:
- Network Monitoring: DNS logs, firewall rules, and proxy analysis to identify traffic to known AI API endpoints.
- Endpoint Analysis: Browser extensions, installed applications, and OAuth token grants to AI services.
- Procurement Audit: Expense reports, credit card statements, and P-card usage for AI tool subscriptions.
- Employee Survey: Anonymous self-reporting surveys asking what AI tools teams use in daily workflows.
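A minimal sketch of the network-monitoring approach, assuming you export DNS query logs as text and maintain your own endpoint inventory. The domains below are examples only, not a complete or current list of AI API endpoints:

```python
# Illustrative endpoint inventory; maintain your own from vendor docs.
AI_ENDPOINTS = {
    "api.openai.com": "OpenAI API",
    "api.anthropic.com": "Anthropic API",
    "generativelanguage.googleapis.com": "Google Gemini API",
}

def scan_dns_log(lines):
    """Return (domain, tool) pairs for log lines matching known AI endpoints."""
    hits = []
    for line in lines:
        for domain, tool in AI_ENDPOINTS.items():
            if domain in line:
                hits.append((domain, tool))
    return hits

log = [
    "2025-01-01 10:02:11 query api.openai.com A",
    "2025-01-01 10:02:14 query example.com A",
]
print(scan_dns_log(log))  # prints: [('api.openai.com', 'OpenAI API')]
```

In practice you would aggregate hits by source host or user to find which teams are using unapproved tools, then route those tools into the intake process above.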
Contract Clauses for AI Vendor Agreements
Standard vendor contracts were not designed for AI. These seven clause categories close the gap between traditional SaaS procurement and AI-specific risk.
Data Processing Terms
Define exactly what data the vendor can process, where it is stored, retention periods, deletion schedules, and whether customer data is used for model training. Require opt-out rights for training data usage.
Model Transparency Requirements
Require the vendor to provide model cards, performance benchmarks, known limitations documentation, and bias testing results. Include provisions for updates when model behavior changes.
Incident Notification SLA
Specify maximum notification timeframes for security incidents, data breaches, model failures, and bias discoveries. Align with EU AI Act Art. 73 serious incident timelines (no later than 10 days for a death, 15 days for other serious incidents).
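A small helper can turn the Art. 73 timeframes quoted above into concrete contractual deadlines. The incident-type labels are illustrative, and the day counts should be verified against the current regulation text before use:

```python
from datetime import date, timedelta

# Timeframes as quoted above (EU AI Act Art. 73); verify against the
# current regulation text. Keys are illustrative labels.
NOTIFICATION_DAYS = {
    "death": 10,
    "other_serious_incident": 15,
}

def notification_deadline(awareness: date, incident_type: str) -> date:
    """Latest notification date, counted from the day of awareness."""
    return awareness + timedelta(days=NOTIFICATION_DAYS[incident_type])

print(notification_deadline(date(2025, 3, 1), "death"))  # prints: 2025-03-11
```

Contractually, the vendor's notification SLA to you should be shorter than these regulatory deadlines, since as deployer you need time to file your own report.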
Right to Audit
Secure the contractual right to audit AI system behavior, request third-party assessments, and access performance monitoring data. Essential for deployer obligations under EU AI Act Art. 26.
Liability Caps & Indemnification
Address liability for AI-generated harm, including algorithmic discrimination, incorrect automated decisions, and IP infringement from AI outputs. Define indemnification scope and caps.
Termination Triggers
Define specific conditions that trigger contract termination: repeated compliance failures, unresolved bias findings, material model changes without notice, or failure to maintain security certifications.
IP Ownership & Output Rights
Clarify ownership of AI-generated outputs, derivative works, and fine-tuned models. Address whether vendor retains rights to aggregated insights derived from your usage data.
Ongoing Vendor Monitoring
Assessment is not a one-time event. Vendor AI systems change constantly through retraining, updates, and data shifts. Your monitoring program needs to keep pace.
Performance Review
- Accuracy and reliability metrics against SLA baselines
- Incident count and mean time to resolution
- User satisfaction and complaint trends
- Cost per transaction or API call trending
- Comparison against initial risk assessment scoring
Drift & Alert Monitoring
- Statistical drift detection on model outputs
- Anomaly alerts for unexpected behavior patterns
- Vendor security advisory monitoring
- Regulatory change alerts affecting the vendor
- Sub-processor or infrastructure change notifications
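Statistical drift detection on vendor outputs can be sketched with the Population Stability Index, one common drift metric. The bin count and the thresholds in the docstring are conventional rules of thumb, not vendor-specific guidance:

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two samples of model outputs.

    Rule-of-thumb thresholds (illustrative): < 0.1 stable,
    0.1-0.25 moderate shift, > 0.25 significant drift.
    """
    low, high = min(baseline), max(baseline)
    width = (high - low) / bins or 1.0  # guard against constant baseline

    def dist(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - low) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Add-one smoothing so no bin probability is zero.
        return [(c + 1) / (len(sample) + bins) for c in counts]

    b, c = dist(baseline), dist(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))
```

Run this periodically against a frozen baseline sample captured at assessment time; a sustained PSI above your chosen threshold is exactly the kind of "significant performance degradation" that should trigger a vendor re-assessment.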
Re-Assessment Triggers
- Vendor announces major model version change
- New regulation affecting the AI system category
- Security incident or data breach at vendor
- Significant performance degradation detected
- Contract renewal approaching (90-day advance)
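The re-assessment triggers above can be encoded as a simple check. The 90-day renewal window comes from the list; the boolean flags are hypothetical inputs that would be fed from your monitoring pipeline:

```python
from datetime import date, timedelta

def due_for_reassessment(renewal_date: date, today: date,
                         major_model_change: bool = False,
                         security_incident: bool = False) -> list:
    """Return the list of triggers that currently apply to a vendor."""
    reasons = []
    if renewal_date - today <= timedelta(days=90):
        reasons.append("contract renewal within 90 days")
    if major_model_change:
        reasons.append("major model version change")
    if security_incident:
        reasons.append("security incident at vendor")
    return reasons

print(due_for_reassessment(date(2025, 6, 1), date(2025, 4, 1)))
# prints: ['contract renewal within 90 days']
```

A scheduled job running this check across the vendor inventory turns the trigger list from a policy statement into something that actually opens re-assessment tickets.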
Get Started With Vendor Risk Assessment
Everything you need to assess, contract, and monitor third-party AI vendors.
Our vendor risk assessment service includes AI-specific questionnaires, contract clause templates, risk scoring models, and ongoing monitoring frameworks. Built on ISO 42001, NIST AI RMF, and EU AI Act deployer obligations.
Every recommendation in this guide traces back to primary authoritative sources, not opinions.
Built from 130+ primary source documents including international standards, government frameworks, and industry research.