Frameworks & Practices
Deep Dive
Six frameworks define how organizations defend AI systems. This page breaks each one into actionable knowledge — what it covers, where it applies, which roles use it daily, and how they fit together. Not theory. Practice.
The Framework Landscape
AI security frameworks exist because traditional cybersecurity controls weren’t designed for systems that learn, adapt, and generate. A firewall can’t stop model poisoning. An IDS won’t detect prompt injection. These six frameworks fill the gap — each from a different angle.
OWASP LLM Top 10 (2025)
The definitive list of application-layer risks for large language model deployments. Version 2.0, released November 18, 2024 (45 pages). Officially titled “OWASP Top 10 for LLM Applications.” If you build, deploy, or defend LLM-powered applications, this is your starting framework. Each risk maps to specific defense strategies, the roles responsible for mitigation, and corresponding MITRE ATLAS technique references.
The OWASP LLM Top 10 is the most immediately actionable framework on this list. If you’re a developer or security engineer, start here. You can map every item to a specific control you can implement this week. The other frameworks provide context and strategy; OWASP gives you a checklist.
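As a toy illustration of the kind of control an OWASP item maps to (not an official OWASP artifact), here is a minimal sketch of an input guard for LLM01, Prompt Injection. The function name and the deny-list patterns are our own assumptions; real deployments layer such checks with output filtering, privilege separation, and human review.

```python
import re

# Hypothetical deny-list guard for LLM01 (Prompt Injection).
# The pattern list is illustrative only, not exhaustive.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"you are now", re.I),
    re.compile(r"reveal (the|your) system prompt", re.I),
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input looks safe, False if it matches
    a known injection pattern and should be blocked or escalated."""
    return not any(p.search(user_input) for p in INJECTION_PATTERNS)

print(screen_prompt("Summarize this article for me"))  # True
print(screen_prompt("Ignore previous instructions"))   # False
```

A deny-list alone is easy to evade; the point is that each OWASP item decomposes into concrete, testable controls like this one.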
MITRE ATLAS — Adversarial Threat Landscape
ATLAS (Adversarial Threat Landscape for AI Systems) extends MITRE ATT&CK into the AI domain. It catalogs how adversaries actually attack AI systems — 14 tactic categories in a matrix structure, backed by documented real-world case studies. The Spring 2025 update added 19 new techniques and 6 new case studies, with 150+ contributing organizations. This is the offensive playbook that red teamers and threat intelligence analysts live in.
Tactic IDs use the AML.TA#### format. Technique IDs use AML.T####.
Source: MITRE SAFE-AI Report (MP250397), Table A-1
MITRE ATLAS is to AI security what ATT&CK is to traditional infosec. If you’re interviewing for red team or threat intel roles, you need to speak fluently about these 14 tactic categories and their techniques. Study the documented case studies — interviewers reference them directly.
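The ATLAS ID scheme above lends itself to simple tooling. The following sketch classifies an ID string by that scheme; the sub-technique suffix (three trailing digits) is our assumption about the convention, not stated in this page.

```python
import re

# ATLAS ID conventions as described above: tactics use AML.TA####,
# techniques use AML.T####. The optional .### sub-technique suffix
# is an assumption about the scheme.
TACTIC_ID = re.compile(r"^AML\.TA\d{4}$")
TECHNIQUE_ID = re.compile(r"^AML\.T\d{4}(\.\d{3})?$")

def classify_atlas_id(atlas_id: str) -> str:
    """Label an ATLAS ID as 'tactic', 'technique', or 'unknown'."""
    if TACTIC_ID.match(atlas_id):
        return "tactic"
    if TECHNIQUE_ID.match(atlas_id):
        return "technique"
    return "unknown"

print(classify_atlas_id("AML.TA0000"))  # tactic
print(classify_atlas_id("AML.T0051"))   # technique
```

A validator like this is handy when normalizing red-team findings or threat-intel notes that mix ATT&CK and ATLAS references.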
Governance Frameworks: NIST AI RMF, ISO 42001, EU AI Act
These three frameworks define the governance, compliance, and regulatory landscape for AI security. They don’t tell you how to stop prompt injection — they tell you how to build organizational programs that ensure someone is responsible for stopping it, how to prove compliance, and what happens legally when you fail.
NIST AI RMF organizes AI risk management into four core functions:
- GOVERN: Organizational risk culture, policies, accountability structures
- MAP: Context establishment, AI system categorization, impact assessment
- MEASURE: TEVV (test, evaluation, verification, and validation) processes, trustworthiness metrics, risk tracking
- MANAGE: Risk treatment, incident response, continuous improvement
- 7 trustworthiness characteristics: valid & reliable, safe, secure & resilient, accountable & transparent, explainable & interpretable, privacy-enhanced, fair with bias managed
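The four functions above can anchor a simple risk register. This dataclass sketch shows one hypothetical way to organize an entry around GOVERN / MAP / MEASURE / MANAGE; the field names and example values are our own, since the RMF prescribes outcomes, not a schema.

```python
from dataclasses import dataclass

# Hypothetical risk-register entry keyed to the four NIST AI RMF
# functions. Field names are illustrative, not from the RMF itself.
@dataclass
class AIRiskEntry:
    system_name: str
    govern_owner: str        # GOVERN: accountable role or body
    map_context: str         # MAP: use context and impact category
    measure_metric: str      # MEASURE: TEVV metric being tracked
    manage_treatment: str    # MANAGE: how the risk is treated

entry = AIRiskEntry(
    system_name="support-chatbot",
    govern_owner="AI Risk Committee",
    map_context="customer-facing, moderate impact",
    measure_metric="prompt-injection red-team pass rate",
    manage_treatment="mitigated: input/output filtering",
)
print(entry.manage_treatment)
```

Even a flat record like this makes the accountability question ("who owns GOVERN for this system?") concrete and auditable.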
ISO/IEC 42001 key elements:
- Lead Implementer certification path
- Lead Auditor certification path
- Annual audit cycles & certification maintenance
- Cross-functional control implementation
- Referenced by 12+ AI governance roles
EU AI Act key elements:
- High-risk AI system security controls
- Model validation & testing requirements
- Training data security & provenance
- Breach notification & incident reporting
- Compliance timeline creating talent demand surge
If you’re targeting financial services AI security roles, SR 11-7 is mandatory reading. This Federal Reserve guidance governs model risk management in banking and is directly driving the AI Model Risk Analyst hiring surge. Bank of America, Citi, and major institutions actively hire for SR 11-7 + AI expertise. AI Model Risk Analyst salary range: $100K–$160K.
Source: Federal Reserve SR 11-7 guidance • Salary: Glassdoor, ZipRecruiter 2025–2026 • Hiring: Bank of America, Citi active postings
How These Frameworks Connect
These three governance frameworks aren't alternatives — they layer. NIST AI RMF provides the risk management methodology. ISO 42001 provides the certifiable management system. The EU AI Act provides the legal mandate. Organizations building AI governance programs typically start with NIST AI RMF (free, flexible), then implement ISO 42001 for certification, and map EU AI Act obligations on top.
Google SAIF & Emerging Frameworks
Google’s Secure AI Framework (SAIF) represents the industry-led approach to AI security. Unlike standards bodies, SAIF comes from a company actively defending production AI at scale. Its partnership with HackTheBox makes it the most hands-on framework available.
- HackTheBox AI Red Teamer Path ($490/yr Silver Annual)
- 40–80+ hours study with practical engagement
- 7-day practical exam expected Q1 2026
- Complementary to MITRE ATLAS tactical framework
- Summit: April 22, 2026 • $5,250
- Offensive AI attack techniques & methodology
- Builds on MITRE ATLAS tactical framework
- Target audience: experienced pentesters moving into AI
Which Framework First?
You don’t need all six frameworks on day one. Your starting point depends on your background and target role. Here’s the recommended study sequence by career path.
Regardless of background, OWASP LLM Top 10 is the highest-ROI starting framework for AI security. It’s free, immediately actionable, and referenced in interviews across every AI security role. ISC2’s 2025 workforce study identifies AI/ML as the #1 skill need for 2026 — and OWASP is where most hiring managers expect you to start demonstrating that skill.
Source: ISC2 2025 Workforce Study (AI/ML #1 skill need for 2026)
How Frameworks Work Together
No single framework covers the full AI security landscape. They layer: OWASP identifies the vulnerabilities, ATLAS maps how adversaries exploit them, NIST AI RMF and ISO 42001 define the organizational response, EU AI Act makes it mandatory, and SAIF provides the red team methodology. Here’s how they connect.
OWASP LLM Top 10: Identifies the 10 most critical application-layer risks. Your defensive checklist.
MITRE ATLAS: Maps the adversarial playbook — 14 tactic categories with techniques. Spring 2025 update added 19 new techniques and 6 case studies.
Governance (NIST AI RMF, ISO 42001, EU AI Act): Defines organizational accountability, audit requirements, and legal obligations. Makes security non-optional.
In practice, you won’t use all six frameworks simultaneously. Most organizations adopt OWASP + one governance framework (NIST or ISO) + whatever regulatory framework applies to their jurisdiction. The frameworks that matter most for your career depend on your target role and industry. Financial services demands SR 11-7 + NIST. EU-facing companies need the AI Act. Red teams live in ATLAS + SAIF.
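The role-to-framework pairings above can be captured as a simple lookup table. This sketch encodes only the pairings named in the text; the role keys and function name are our own.

```python
# Role-to-framework pairings as described above. Keys and the
# fallback behavior are illustrative assumptions.
STUDY_PATHS: dict[str, list[str]] = {
    "financial services": ["SR 11-7", "NIST AI RMF", "OWASP LLM Top 10"],
    "eu-facing company":  ["EU AI Act", "OWASP LLM Top 10"],
    "red team":           ["MITRE ATLAS", "Google SAIF", "OWASP LLM Top 10"],
}

def frameworks_for(role: str) -> list[str]:
    """Return the suggested study sequence for a role.
    OWASP is the recommended default starting point for everyone."""
    return STUDY_PATHS.get(role.lower(), ["OWASP LLM Top 10"])

print(frameworks_for("Red Team"))
```

Note that OWASP appears in every path, matching the page’s advice that it is the universal starting framework.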