
Why Your Organization Needs a Comprehensive AI Use Case Tracker

And What to Track: The 40-Field Guide for Complete AI Visibility

Derrick D. Jackson | CISSP, CRISC, CCSP | Updated May 2025 | ~12 min read
40 Fields · 5 Categories · 8 Frameworks

You know what’s funny about AI governance? Everyone talks about it, but most organizations are flying blind. They’ve got AI systems scattered across departments, no one knows who owns what, and when regulators come knocking, it’s a scramble to find documentation.

An AI Use Case Tracker fixes this mess. Think of it as a master spreadsheet that answers the question: “What AI are we actually using, and should we be worried about any of it?”

Regulations
EU AI Act, NIST, industry rules — documentation required when auditors show up
Risk
AI can discriminate, leak data, or make decisions nobody can explain
Operations
Know who to call at 2 AM when something breaks
ROI
Track whether your AI investments actually work

What You Should Track (And Why Each Matters)

Not every field applies to every AI system. Start with Identification & Ownership for all systems, then add fields based on risk level. Low-risk internal tools may only need 10–15 fields. High-risk customer-facing systems need all 40.

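If you are building the tracker in code rather than a spreadsheet, the starting record can be this small. Here is a minimal sketch in Python: the field names follow this guide, while the class itself, the extra_fields bag, and the example values are illustrative, not a canonical schema.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    """Baseline record: the Identification & Ownership fields every
    system needs. Higher-risk systems add fields via extra_fields."""
    use_case_id: str            # e.g., "UC-2024-001"
    name: str                   # plain language, not "ML Model v2"
    business_function: str      # which department uses it
    owner: str                  # a named person, not a committee
    owner_email: str
    objective: str              # measurable, e.g., "Reduce churn by 10%"
    risk_level: str = "Medium"  # Low / Medium / High / Critical
    extra_fields: dict = field(default_factory=dict)

# Illustrative entry using this guide's examples
churn = AIUseCase(
    use_case_id="UC-2024-001",
    name="Customer Churn Prediction",
    business_function="Marketing",
    owner="Jane Smith",
    owner_email="j.smith@example.com",
    objective="Reduce churn by 10%",
)
```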
The 40 fields fall into five categories: Identification & Ownership, Technical Metadata, Data & Risk Attributes, Business & Compliance, and Oversight & Audit. Here they are, field by field.
Use Case ID
Unique identifier like UC-2024-001 for every AI system
NIST AI RMF · All AI Systems
Why it matters
Without this, you can’t track anything. Try explaining to an auditor which version of “the marketing model” had the bias issue six months ago. Good luck with that. NIST AI RMF’s Govern function says system identification is foundational, and they’re right.
Use Case Name
“Customer Churn Prediction” beats “ML Model v2” every time
OECD AI Principles
Why it matters
People need to understand what they’re talking about without a decoder ring. The OECD AI Principles emphasize that transparency starts with clear naming.
Business Function
Which department uses this? Context determines risk classification
EU AI Act Title III
Why it matters
HR’s hiring algorithms face different regulations than Marketing’s recommendation engines. The EU AI Act (Title III, Chapter 2) cares deeply about context. An AI screening resumes is high-risk. An AI suggesting products isn’t.
Owner(s) and Email
A person with a name and email — not a team, not a committee
ISO 42001 §6
Why it matters
When your AI starts making weird decisions, you need to know exactly who to call. ISO/IEC 42001 Section 6 hammers this point home: no accountability means no governance.
Objective
“Reduce churn by 10%” is measurable. “Improve experience” is fluff.
ISO 42001 §5
Why it matters
Clear objectives prevent scope creep and let you kill underperforming projects. ISO/IEC 42001 Section 5 requires documented objectives for AI systems.
AI Type
NLP, Computer Vision, Recommender, Time Series — different types carry different risks
OECD Classification
Why it matters
NLP systems reading customer emails create privacy concerns. Computer vision in warehouses might track employee movements. Recommender systems can create filter bubbles. Document the type to anticipate category-specific risks per OECD AI classifications.
Model Type
Decision Tree, CNN, Transformer, XGBoost — impacts explainability
NIST SP 1270
Why it matters
A Random Forest can show feature importance. A deep neural network is a black box. A rules-based system shows exact logic. Regulators increasingly care about model transparency, especially in high-risk areas like hiring or credit decisions.
Model Version
Semantic versioning (major.minor.patch) with changelog
ISO 42001
Why it matters
Version 1.2.3 had a bug that rejected qualified female candidates. Version 1.2.4 fixed it. Without version tracking, you can’t identify affected users or demonstrate remediation to regulators.
Data Source
Every source explicitly — “Customer database” isn’t enough
NIST SP 1271 · High-Risk Required
Why it matters
List every source explicitly. Try: “Salesforce CRM (2020-2024), AWS transaction logs, demographic data from InfoGroup (under MSA dated 3/15/23).” Using scraped LinkedIn data for hiring? That’s a lawsuit. Using customer data beyond your privacy policy scope? Another lawsuit.
Data Sensitivity
Public / Internal / Confidential / Restricted — includes PII, PHI, financial, biometric
GDPR · NIST 800-53 · ISO 27001 · High-Risk Required
Why it matters
Get this wrong and GDPR fines start at 4% of global revenue. Does it include PII (names, SSNs)? PHI (health records under HIPAA)? Financial data? Biometric data? NIST 800-53 Rev5 and ISO/IEC 27001 provide classification frameworks.
Data Governance / Lineage
Document the full data journey — approvals, flows, transformations
NIST AI RMF Map
Why it matters
Example: “Raw data → ETL pipeline strips PII → Feature engineering adds derived fields → Model training on anonymized set → Audit log captures all transformations.” Regulators love these diagrams.
Retention Period
Exact timeframes with justification — not “just in case”
GDPR Art. 5(1)(e) · ISO 27701
Why it matters
“Financial predictions: 7 years (IRS). Training data: 3 years (model refresh). User behavior: 90 days (privacy policy).” Keeping data “just in case” violates GDPR’s data minimization principle.
Bias & Fairness Concerns
Specific, measurable disparities — not vague statements
NIST SP 1270 · High-Risk Required
Why it matters
“Loan model approves 72% of white applicants vs 58% of Black applicants.” “Resume screener favors male-associated names (78% callback vs 61%).” Include your mitigation strategies.
AI Inherent Risk
Rate by autonomy and impact: Low / Medium / High / Critical
EU AI Act Tiers
Why it matters
Low: suggestions humans review (spell checker). Medium: automated decisions with human oversight (fraud alerts). High: autonomous decisions affecting people (loan approvals). Critical: safety-critical or legally-binding (medical diagnosis). EU AI Act defines four risk tiers with different requirements for each.
Data Sensitivity Risk
Combine classification with volume and retention
NIST 800-53
Why it matters
A system processing millions of SSNs for years is riskier than one processing hundreds of email addresses temporarily.
Combined Risk
Simple matrix: autonomy × data sensitivity = risk level
ISO 23894:2023
Why it matters
Don’t overcomplicate this. High autonomy + sensitive data = High risk. Low autonomy + public data = Low risk. Everything else = Medium risk. Document your reasoning.
Training Data Provenance
Where the model was trained — distinct from operational data
EU AI Act Art. 10 · NIST MAP
Why it matters
For ML models, where did the training data come from? Vendor-trained on proprietary corpus? Fine-tuned on internal data? This is distinct from “Data Source” which covers operational data. EU AI Act Article 10 requires data governance including training data provenance.
Data Residency
Physical location of data + transfer mechanisms
GDPR Ch. V · EU AI Act
Why it matters
Where does the data physically reside? Where do API calls go? Critical for GDPR cross-border transfers and data sovereignty requirements. Document regions, transfer mechanisms (SCCs, adequacy decisions, DPF).
Regulatory Impact
Every applicable regulation — EU AI Act, GDPR, HIPAA, industry-specific
EU AI Act · GDPR
Why it matters
Miss one and enforcement actions get expensive. Your legal team will thank you for maintaining this list.
Security Requirements
TLS, access controls, audit logging, adversarial defenses
NIST 800-53
Why it matters
AI systems are attack targets. Model theft is real. Adversarial attacks happen. Document your defenses.
Explainability Level
Can you explain why the model made a specific decision? High/Medium/Low.
EU AI Act · High-Risk Required
Why it matters
Black box models in lending or hiring will get you in trouble. The EU AI Act requires transparency for high-risk applications.
Impact Assessment Completed?
DPIA, AI Impact Assessment — many regulations require these
GDPR · EU AI Act
Why it matters
“We forgot” doesn’t impress regulators.
Ethical Considerations
Beyond legal — does this system respect user autonomy?
OECD · ISO 42001
Why it matters
Legal compliance is the floor, not the ceiling. Could it discriminate against vulnerable groups? Document ethical reviews and decisions.
User / Stakeholder Impact
Who does this affect? Employees? Customers? Job applicants?
EU AI Act
Why it matters
The same AI technology has different implications depending on who it affects.
KPIs & Metrics
Precision, recall, customer satisfaction, processing time
ISO 42001
Why it matters
Precision: 0.87. Recall: 0.76. Customer satisfaction: +12%. Processing time: -34%. Without metrics, you’re guessing whether your AI works.
Expected Benefits / ROI
“Reduce manual review by 50%” or “Save $2M annually”
ISO 42001 §5
Why it matters
If you can’t articulate expected value, why are you building it?
Human-in-the-Loop
Required oversight level per use case
EU AI Act Art. 14 · NIST GOVERN · High-Risk Required
Why it matters
What human oversight is required for each use case? Customer support: all responses reviewed before sending. Engineering: at developer discretion. Legal: blocked pending approval. EU AI Act Article 14 mandates human oversight for high-risk systems.
Deployment Environment
Cloud, on-prem, hybrid, edge — affects security posture
NIST 800-53
Why it matters
Where does this system run? Cloud SaaS, self-hosted cloud, on-premises, hybrid, or edge/IoT? The deployment environment directly affects your security posture, data residency, and available controls.
Approval Status
Approved / In Review / Rejected — deploying unapproved AI makes headlines
ISO 42001
Why it matters
Deploying unapproved AI is how companies end up in headlines for all the wrong reasons.
Development Status
Planning / Testing / Production / Decommissioned
NIST AI RMF
Why it matters
Different stages need different oversight. Don’t apply production controls to a proof of concept.
Tool or Vendor Name
OpenAI API? AWS SageMaker? Vendor dependencies create risks.
NIST AI RMF
Why it matters
What happens if they change their terms? Raise prices? Get hacked?
Audit History
When last reviewed, by whom, what they found
ISO 42001 · NIST
Why it matters
“Internal audit March 2024: Identified need for additional bias testing. Completed April 2024.” Shows you’re actively managing governance.
License
MIT? Apache? Proprietary? GPL in proprietary = problem.
ISO 42001
Why it matters
Using GPL code in your proprietary system? That’s a problem. Using unlicensed code? Bigger problem.
Contracting
MSA with cloud provider? SOW with consultants?
ISO 42001
Why it matters
Contracts define who’s liable when things go wrong. You want to know this before things go wrong.
Last Updated
When was this record last verified as accurate?
NIST AI RMF
Why it matters
Six-month-old information might be completely wrong. Set calendar reminders for regular updates.
Governance Contacts
Primary, backup, and escalation — with names, not committees
ISO 42001
Why it matters
“Jane Smith (primary), John Doe (backup), Lisa Park (escalation)” beats “AI Ethics Committee” when you need decisions fast.
Incident History
Past incidents with root cause and remediation
EU AI Act Art. 73 · NIST MANAGE
Why it matters
Separate from Audit History. Document actual incidents: “2026-02-14: Generated incorrect contract clause, caught in legal review. Root cause: insufficient prompt guardrails. Remediation: input validation layer deployed.” EU AI Act Article 73 requires serious incident reporting.
Insurance / Liability
AI-specific coverage and vendor indemnification
ISO 42001
Why it matters
Some organizations are buying AI-specific insurance. Document: Does your cyber liability policy cover AI outputs? Does the vendor provide indemnification for IP infringement? What are the liability caps?
Decommissioning Plan
What happens when you sunset this system?
NIST GOVERN 1.7 · ISO 42001
Why it matters
Data deletion within 30 days of sunset. Model weights archived (encrypted). Users notified 60 days prior. Fallback process documented. NIST AI RMF GOVERN 1.7 explicitly addresses decommissioning.
Cross-border Transfers
International data flow mechanisms
GDPR Ch. V · EU AI Act
Why it matters
SCCs, adequacy decisions, DPF status for any data crossing borders. Increasingly critical as AI systems call APIs in different jurisdictions.
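One way to operationalize the "High-Risk Required" tags above is a completeness check that runs before approval. Here is a minimal sketch, assuming each record is a plain dict with snake_case keys; both required-field sets are illustrative starting points, not a canonical mapping.

```python
# Baseline fields for every system (illustrative subset)
BASELINE_REQUIRED = {
    "use_case_id", "name", "business_function", "owner_email", "objective",
}
# Fields this guide tags "High-Risk Required"
HIGH_RISK_REQUIRED = {
    "data_source", "data_sensitivity", "bias_fairness_concerns",
    "explainability_level", "human_in_the_loop",
}

def missing_fields(record: dict) -> set:
    """Return required fields that are absent or empty in a record."""
    required = set(BASELINE_REQUIRED)
    if record.get("risk_level") in ("High", "Critical"):
        required |= HIGH_RISK_REQUIRED
    return {f for f in required if not record.get(f)}

# A high-risk record with no bias documentation fails the check
gaps = missing_fields({"use_case_id": "UC-2024-001", "risk_level": "High"})
```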

Ready to start tracking?

Download the pre-built template with all 40 fields, dropdowns, and risk scoring.

Get the 40-Field Tracker Template → Download the Checklist →

AI Inherent Risk Tiers

Rate each AI system based on its autonomy level and potential impact on people. The EU AI Act defines four risk tiers with different requirements for each.

Low
Makes suggestions humans review
e.g., spell checker, content recommendations
Minimal governance required. Basic documentation and periodic review. No mandatory EU AI Act obligations beyond transparency for certain systems.
Medium
Automates decisions with human oversight
e.g., fraud alerts, customer segmentation
Moderate governance. Documented risk assessment, defined oversight procedures, regular review cycles. May trigger specific transparency requirements.
High
Autonomous decisions affecting people
e.g., loan approvals, hiring, insurance
Full governance required. Mandatory risk management system, data governance, technical documentation, human oversight, accuracy/robustness testing, conformity assessment. EU AI Act Title III, Chapter 2 obligations apply.
Critical
Safety-critical or legally-binding decisions
e.g., medical diagnosis, autonomous vehicles
Maximum governance + external oversight. Third-party conformity assessment, continuous post-market monitoring, serious incident reporting. Some uses prohibited outright under EU AI Act Article 5.

AI Autonomy × Data Sensitivity

Don’t overcomplicate this. High autonomy + sensitive data = High risk. Low autonomy + public data = Low risk.

AI Autonomy \ Data Sensitivity   Public   Internal   Sensitive
High                             Medium   High       Critical
Medium                           Low      Medium     High
Low                              Low      Low        Medium
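In code, the matrix is a nine-entry lookup. Here is a minimal sketch using the levels from the table above; the function name and the fail-closed KeyError on unknown levels are my choices, not from any standard.

```python
def combined_risk(autonomy: str, data_sensitivity: str) -> str:
    """Combined risk per the autonomy × data-sensitivity matrix above.
    Unknown level names raise KeyError, which fails closed."""
    matrix = {
        ("High",   "Public"):    "Medium",
        ("High",   "Internal"):  "High",
        ("High",   "Sensitive"): "Critical",
        ("Medium", "Public"):    "Low",
        ("Medium", "Internal"):  "Medium",
        ("Medium", "Sensitive"): "High",
        ("Low",    "Public"):    "Low",
        ("Low",    "Internal"):  "Low",
        ("Low",    "Sensitive"): "Medium",
    }
    return matrix[(autonomy, data_sensitivity)]

assert combined_risk("High", "Sensitive") == "Critical"
assert combined_risk("Low", "Public") == "Low"
```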

Need the full regulatory mapping?

See exactly which NIST, ISO, EU AI Act, and GDPR clauses apply to each of the 40 fields.

Grab the Cheat Sheet → Free Download →

What a Completed Tracker Entry Looks Like

Here’s what a fully documented AI Use Case Tracker entry looks like for a real system: Anthropic’s Claude, used as an internal productivity assistant. The risk ratings show where risk concentrates.

UC-2026-017: Claude — Internal AI Assistant
Combined Risk: HIGH
Risk scale: Low Risk · Medium Risk · High Risk · Critical

Identification & Ownership
Use Case Name: Claude AI — Internal Productivity Assistant
Business Function: Engineering, Legal Review, Content, Customer Support
Owner: Marcus Webb, VP of Engineering — m.webb@acme.com
Objective: Reduce internal documentation time by 40%, accelerate code review by 30%

Technical Metadata
AI Type: Large Language Model (LLM) — Generative AI
Model Type: Transformer (proprietary) — Black box, limited explainability
Model Version: Claude Opus 4 (via API, version pinned to 2026-03)

Data & Risk Attributes
Data Source: Internal codebase, Confluence docs, Slack history, customer tickets (Zendesk), legal contracts
Data Sensitivity: CONFIDENTIAL — Contains PII (customer names, emails), proprietary source code, legal documents, financial data in contracts
Retention: API: zero retention (Anthropic policy). Internal logs: 90 days. Prompt cache: session-only.
Bias Concerns: Code review may favor patterns from dominant language in training data. Legal summaries may reflect US-centric legal reasoning. Customer response tone may vary across cultural contexts.
AI Inherent Risk: HIGH — Autonomous text generation used in customer-facing responses and legal review drafts without mandatory human review gate
Combined Risk: HIGH — High autonomy × Confidential data = High risk. Legal and customer support use cases elevate to near-critical.

Business & Compliance
Regulatory: GDPR (customer PII in prompts), SOC 2 (code access), CCPA, potential EU AI Act Art. 52 transparency obligations
Security: TLS 1.3, API key rotation (90-day), SSO via Okta, DLP scanning on prompts, no data used for training (contractual)
Explainability: LOW — Transformer architecture, no feature attribution. Outputs are plausible but not traceable to specific training data.
Impact Assessment: DPIA completed 2026-01-15. AI Impact Assessment: PENDING (due before legal use case goes live)
KPIs: Doc time: -38% (target -40%). Code review: -27% (target -30%). Hallucination rate: 4.2% (target <3%). CSAT: +8%.

Oversight & Audit
Approval: CONDITIONAL — Approved for engineering & content. Legal use case: pending AI Impact Assessment completion.
Status: Production (engineering, content). Pilot (customer support). Blocked (legal review).
Vendor: Anthropic PBC — API access via Enterprise agreement (signed 2025-11, annual renewal)
Last Audit: 2026-02-20 — Internal security review. Finding: prompt injection risk in customer support workflow. Remediation: input sanitization layer deployed 2026-03-01.
Governance: Primary: Marcus Webb (VP Eng). Backup: Mike Torres (CISO). Escalation: AI Governance Committee (quarterly review).
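Stored in a tool like Airtable or even a JSON file, the same entry is just a structured record. Here is a short excerpt as a sketch: the keys are illustrative, the values come from the example above.

```python
import json

entry = {
    "use_case_id": "UC-2026-017",
    "name": "Claude AI — Internal Productivity Assistant",
    "owner": "Marcus Webb, VP of Engineering",
    "owner_email": "m.webb@acme.com",
    "ai_type": "Large Language Model (LLM) — Generative AI",
    "data_sensitivity": "Confidential",
    "inherent_risk": "High",
    "combined_risk": "High",
    "approval_status": "Conditional",
    "last_audit": "2026-02-20",
}
print(json.dumps(entry, indent=2, ensure_ascii=False))
```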

Want a blank version of this?

The fillable tracker template includes all 40 fields with the same structure as this example.

Download Fillable Tracker →

Making This Actually Work

An AI Use Case Tracker is only useful if it’s accurate and current.

01
Start Small
Don’t try to document 200 systems on day one. Pick your highest-risk systems and work down.
02
Use What You Have
Got ServiceNow? SharePoint? Even Excel works. Fancy tools can come later.
03
Make It Mandatory
No production deployment without tracker documentation. Period. (A minimal gate check is sketched after this list.)
04
Review Regularly
Quarterly reviews catch drift before it becomes a problem and keep the tracker accurate and current.
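Step 03 is what gives the tracker teeth, and it is easy to automate. Here is a minimal sketch of a CI gate, assuming the tracker can be exported as a CSV with use_case_id and approval_status columns; the file name and column names are illustrative.

```python
import csv
import sys

def check_tracker(tracker_csv: str, use_case_id: str) -> None:
    """Fail the pipeline unless the system is documented and approved."""
    with open(tracker_csv, newline="") as f:
        rows = {row["use_case_id"]: row for row in csv.DictReader(f)}
    entry = rows.get(use_case_id)
    if entry is None:
        sys.exit(f"BLOCKED: {use_case_id} has no tracker entry.")
    if entry.get("approval_status") != "Approved":
        status = entry.get("approval_status")
        sys.exit(f"BLOCKED: {use_case_id} is {status!r}, not Approved.")
    print(f"OK: {use_case_id} is documented and approved.")

if __name__ == "__main__":
    check_tracker("ai_use_case_tracker.csv", sys.argv[1])
```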

Interactive Tracker Template

Explore our pre-built tracker template below. Copy it to your own Airtable workspace or use it as a reference for building your own.

Airtable — AI Use Case Tracker Template

Author

Tech Jacks Solutions
