Over 10 years we help companies reach their financial and branding goals. Engitech is a values-driven technology agency dedicated.

Gallery

Contacts

411 University St, Seattle, USA

engitech@oceanthemes.net

+1 -800-456-478-23

Sub-Page 4 of 5

Frameworks & Practices
Deep Dive

Six frameworks define how organizations defend AI systems. This page breaks each one into actionable knowledge — what it covers, where it applies, which roles use it daily, and how they fit together. Not theory. Practice.

6 Frameworks Covered
14 ATLAS Tactic Categories Source: MITRE SAFE-AI Report (MP250397), Appendix A
10 OWASP LLM Risks Source: genai.owasp.org (2025)
14% Orgs with Adequate AI Security Talent Source: World Economic Forum 2025
Overview

The Framework Landscape

AI security frameworks exist because traditional cybersecurity controls weren’t designed for systems that learn, adapt, and generate. A firewall can’t stop model poisoning. An IDS won’t detect prompt injection. These six frameworks fill the gap — each from a different angle.

OWASP LLM Top 10
OWASP Foundation • 2025
The 10 most critical vulnerabilities in LLM applications. Defensive playbook for builders and defenders.
Focus: Application-layer risks • Audience: Developers, security engineers
Source: genai.owasp.org
MITRE ATLAS
MITRE Corporation
Adversarial threat landscape for AI systems. 14 tactic categories in an ATT&CK-style matrix, with documented real-world case studies. Spring 2025 update added 19 new techniques.
Focus: Offensive techniques & threat modeling • Audience: Red teamers, threat intel
Source: atlas.mitre.org
NIST AI RMF
NIST • Free Public Standard
Risk management framework bridging traditional risk practices to AI-specific requirements. The governance backbone.
Focus: Risk governance & organizational controls • Audience: Risk managers, policy
Source: nist.gov
ISO/IEC 42001
International Organization for Standardization
AI management system standard. Certifiable. Defines organizational requirements for responsible AI operation.
Focus: Management systems & audit • Audience: Auditors, governance leads
Source: ISO/IEC 42001:2023
EU AI Act
European Union • Active 2026
The world’s first comprehensive AI regulation. Security obligations for high-risk AI systems. Compliance deadlines are live.
Focus: Regulatory compliance • Audience: Compliance, legal, policy
Source: EU AI Act regulatory text
Google SAIF
Google Security Research
Secure AI Framework. Practical red teaming methodology. Co-developed with HackTheBox for hands-on AI adversarial testing.
Focus: Practical red teaming • Audience: Offensive security, red teams
Source: Google SAIF documentation; HackTheBox partnership (Q1 2026)
Application Security

OWASP LLM Top 10 (2025)

The definitive list of application-layer risks for large language model deployments. Version 2.0, released November 18, 2024 (45 pages). Officially titled “OWASP Top 10 for LLM Applications.” If you build, deploy, or defend LLM-powered applications, this is your starting framework. Each risk maps to specific defense strategies, the roles responsible for mitigation, and corresponding MITRE ATLAS technique references.

LLM-01 Prompt Injection Critical
Attacker manipulates LLM via crafted prompts — direct or indirect — to bypass controls or exfiltrate data. Includes jailbreaking, multimodal injection, and adversarial suffixes. The signature AI-native attack vector.
Defense: Constrain model behavior in system prompts, define expected output formats, implement input/output filtering, enforce least privilege, require human approval for high-risk actions, segregate external content
Roles: All AI Security roles
Prompt Injection in Agents → Source: genai.owasp.org
LLM-02 Sensitive Information Disclosure Critical
LLM reveals PII, financial details, proprietary algorithms, or confidential business data through generated outputs. Includes model inversion attacks extracting training data.
Defense: Data sanitization, robust input validation, strict access controls, federated learning, differential privacy, clear data usage policies. ATLAS refs: AML.T0024.000 (Infer Training Data Membership), AML.T0024.001 (Invert ML Model)
Roles: AI Security Engineer, AI Privacy Engineer
Source: genai.owasp.org
LLM-03 Supply Chain Vulnerabilities Critical
Compromised models, training data, LoRA adapters, or plugins introduce backdoors. Includes vulnerable pre-trained models, weak model provenance, and on-device supply chain risks.
Defense: Vet suppliers and T&Cs, maintain ML-BOM (OWASP CycloneDX), strict sandboxing, model provenance verification, data version control
Roles: MLSecOps Engineer, AI Security Architect
Agent Supply Chain Security → Source: genai.owasp.org
LLM-04 Data and Model Poisoning Critical
Adversaries corrupt pre-training, fine-tuning, or embedding data to embed biases, backdoors, or sleeper agents. An integrity attack that impacts model predictions.
Defense: Track data origins (CycloneDX/ML-BOM), vet data vendors, strict sandboxing, anomaly detection, data version control, model provenance verification
Roles: AI Model Risk Analyst, Adversarial ML Researcher
Source: genai.owasp.org
LLM-05 Improper Output Handling High
Unvalidated LLM outputs enable XSS, SSRF, or command injection in downstream systems.
Defense: Treat LLM output as untrusted user input. Rigorous output validation and encoding.
Roles: AI Security Engineer, Secure AI/ML Developer
Source: genai.owasp.org
LLM-06 Excessive Agency High
LLM extensions/agents with excessive functionality, permissions, or autonomy execute unintended actions. Agentic AI dramatically amplifies this risk surface.
Defense: Minimize extension permissions (least privilege), avoid open-ended extensions, require human approval for high-impact actions, complete mediation in downstream systems, rate limiting
Roles: AI Red Teamer, AI Product Security Manager
Tool Misuse & Excessive Agency → Source: genai.owasp.org
LLM-07 System Prompt Leakage High
System prompts containing sensitive instructions or internal configuration are extracted. The fundamental risk is not disclosure itself but that applications embed secrets in prompts instead of using proper security controls.
Defense: Never store credentials/connection strings in system prompts, treat system prompts as non-secret, implement proper session management and authorization checks outside the LLM
Roles: AI Penetration Tester, AI Red Teamer
Source: genai.owasp.org
LLM-08 Vector and Embedding Weaknesses Medium
Manipulated embeddings in RAG systems retrieve malicious or poisoned content instead of legitimate data.
Defense: Secure vector databases, validate embedding data integrity, access controls on RAG pipelines
Roles: AI Security Engineer, MLSecOps Engineer
Source: genai.owasp.org
LLM-09 Misinformation Medium
LLM generates plausible but factually incorrect content. Includes unsafe code generation and misrepresentation of complexity in specialized domains (e.g., health). Creates trust and liability risks.
Defense: RAG with verified sources, model fine-tuning, cross-verification with human oversight, automatic validation mechanisms, secure coding practices, clear risk communication to users
Roles: AI Model Risk Analyst, AI Product Security Manager
Source: genai.owasp.org
LLM-10 Unbounded Consumption Medium
Resource exhaustion via complex prompts causes denial of service or runaway cloud costs.
Defense: API rate limiting, input length limits, resource monitoring, cost alerting
Roles: Cloud Security Engineer, AI Infrastructure Security
Source: genai.owasp.org
Practitioner Ground Truth

The OWASP LLM Top 10 is the most immediately actionable framework on this list. If you’re a developer or security engineer, start here. You can map every item to a specific control you can implement this week. The other frameworks provide context and strategy; OWASP gives you a checklist.

Threat Modeling

MITRE ATLAS — Adversarial Threat Landscape

ATLAS (Adversarial Threat Landscape for AI Systems) extends MITRE ATT&CK into the AI domain. It catalogs how adversaries actually attack AI systems — 14 tactic categories in a matrix structure, backed by documented real-world case studies. The Spring 2025 update added 19 new techniques and 6 new case studies, with 150+ contributing organizations. This is the offensive playbook that red teamers and threat intelligence analysts live in.

For GRC professionals: ATLAS is the AI equivalent of ATT&CK — it catalogs what attackers actually do to AI systems. You don't need to understand the technical exploits. Read each tactic as a risk category: Reconnaissance = information leakage risk; ML Supply Chain = third-party/vendor risk; Model Evasion = output integrity risk; Exfiltration = data loss/IP theft risk. Map these to your existing risk register categories.
Note: ATLAS uses tactic columns in a matrix structure similar to ATT&CK. The groupings below are editorial — designed to show how tactics relate in an attack flow. Real tactic IDs use the AML.TA#### format. Technique IDs use AML.T####. Source: MITRE SAFE-AI Report (MP250397), Table A-1
Group 1 Planning & Preparation 3 tactics
AML.TA0002
Reconnaissance
Gather information about target AI system architecture, data sources, model types, and potential vulnerabilities. Includes techniques like Active Scanning (AML.T0006) and Search for Victim’s Publicly Available Research Materials.
Key role: AI Red Teamer
AML.TA0003
Resource Dev
Acquire adversarial ML capabilities, obtain public ML models for transfer attacks, build shadow models, develop adversarial tools. Includes techniques like Acquire Infrastructure (AML.T0008) and Develop Adversarial ML Attacks.
Key role: Adversarial ML Researcher ($157K–$222K) Source: Glassdoor, Capital One AI Red Team postings 2025–2026
AML.TA0004
Initial Access
Gain foothold via ML supply chain compromise, prompt injection, API exploitation, or compromised ML artifacts. Entry into the AI system environment.
Key role: AI Penetration Tester • MCP Security Risks →
Group 2 AI-Specific Access & Execution 2 tactics
AML.TA0000
ML Model Access
Gain inference or training access to target ML models. The AI-specific equivalent of system access — move from API-level to pipeline-level control. Includes techniques like ML Model Inference API Access and Full ML Model Access.
All offensive AI security roles
AML.TA0005
Execution
Run adversarial inputs or malicious code within AI environments to trigger misclassification, data exfiltration, or unintended model behavior.
Key roles: AI Red Teamer, AI Penetration Tester
Group 3 Persistence, Escalation & Evasion 4 tactics
AML.TA0006
Persistence
Maintain access via backdoored models or persistently poisoned training data that survives retraining cycles.
Defense: MLSecOps Engineer
AML.TA0012
Privilege Escalation
Escalate from inference-only access to training pipeline manipulation or model weight extraction. Move from consumer to operator privileges.
Defense: AI Security Engineer
AML.TA0007
Defense Evasion
Avoid detection by model monitoring, anomaly detection, and security telemetry. Includes Evade ML Model (AML.T0015) — crafting inputs that bypass ML-based defenses.
Detection: AI Security Analyst, AI Threat Intel Analyst
AML.TA0013
Credential Access
Steal API keys, model access tokens, or service credentials used to interact with AI systems and training infrastructure.
Defense: AI Infrastructure Security Specialist ($160K–$240K) Source: OpenAI, NVIDIA, CoreWeave postings 2025–2026
Group 4 Intelligence & Staging 3 tactics
AML.TA0008
Discovery
Map AI system architecture, model types, data sources, and interconnections between AI components. Enumerate available models and their capabilities.
Key role: AI Threat Intelligence Analyst
AML.TA0009
Collection
Gather model weights, training data, hyperparameters, and architecture details for offline analysis and attack development.
Key role: AI Digital Forensics Examiner
AML.TA0001
ML Attack Staging
Prepare adversarial examples, craft poisoned datasets, build targeted evasion inputs using collected intelligence. The AI-specific staging ground before impact.
Key role: Adversarial ML Researcher
Group 5 Exfiltration & Impact 2 tactics
AML.TA0010
Exfiltration
Extract model intellectual property, training data, or sensitive outputs from the target environment. Includes model extraction and training data membership inference.
Defense: Cryptographic Engineer ($172K–$257K) Source: Anthropic postings, FHE market data 2025–2026
AML.TA0011
Impact
Degrade model performance, cause systematic misclassification, deny service, or corrupt outputs at scale. The end goal of the adversarial campaign.
Response: AI Security Manager • AI Security Specialist
Practitioner Ground Truth

MITRE ATLAS is to AI security what ATT&CK is to traditional infosec. If you’re interviewing for red team or threat intel roles, you need to speak fluently about these 14 tactic categories and their techniques. Study the documented case studies — interviewers reference them directly. The Spring 2025 update expanded significantly: 19 new techniques and 6 new case studies from 150+ contributing organizations.

Framework source: atlas.mitre.org • 14 tactic categories • Spring 2025 update: 19 new techniques, 6 new case studies • Verified via SAFE-AI Report (MITRE MP250397, April 2025)
Governance & Compliance

Governance Frameworks: NIST AI RMF, ISO 42001, EU AI Act

These three frameworks define the governance, compliance, and regulatory landscape for AI security. They don’t tell you how to stop prompt injection — they tell you how to build organizational programs that ensure someone is responsible for stopping it, how to prove compliance, and what happens legally when you fail.

NIST AI RMF
National Institute of Standards and Technology • Free
AI risk management framework with four core functions: GOVERN, MAP, MEASURE, and MANAGE. Defines 7 trustworthiness characteristics. Voluntary, rights-preserving, non-sector-specific. Published January 2023 as NIST AI 100-1.
  • GOVERN: Organizational risk culture, policies, accountability structures
  • MAP: Context establishment, AI system categorization, impact assessment
  • MEASURE: TEVV processes, trustworthiness metrics, risk tracking
  • MANAGE: Risk treatment, incident response, continuous improvement
  • 7 trustworthiness characteristics: valid & reliable, safe, secure & resilient, accountable & transparent, explainable & interpretable, privacy-enhanced, fair with bias managed
NIST AI RMF Resource Center → Source: nist.gov • Free public standard • Study: nist.gov direct
ISO/IEC 42001
International Organization for Standardization • 2023
The certifiable AI management system standard. Defines what organizations must have in place — policies, processes, controls, and audit cycles — to manage AI responsibly.
  • Lead Implementer certification path
  • Lead Auditor certification path
  • Annual audit cycles & certification maintenance
  • Cross-functional control implementation
  • Referenced by 12+ AI governance roles
ISO 42001 Resource Center → Source: ISO/IEC 42001:2023
EU AI Act
European Union • Active April 2026
The world’s first comprehensive AI regulation. Classifies AI systems by risk level and imposes security, transparency, and accountability obligations on high-risk deployments.
  • High-risk AI system security controls
  • Model validation & testing requirements
  • Training data security & provenance
  • Breach notification & incident reporting
  • Compliance timeline creating talent demand surge
EU AI Act Hub → Source: EU AI Act regulatory text • Compliance deadlines active 2026
Sector-Specific: SR 11-7 (U.S. Banking)

If you’re targeting financial services AI security roles, SR 11-7 is mandatory reading. This Federal Reserve guidance governs model risk management in banking and is directly driving the AI Model Risk Analyst hiring surge. Bank of America, Citi, and major institutions actively hire for SR 11-7 + AI expertise. AI Model Risk Analyst salary range: $100K–$160K.

Source: Federal Reserve SR 11-7 guidance • Salary: Glassdoor, ZipRecruiter 2025–2026 • Hiring: Bank of America, Citi active postings

How These Frameworks Connect

These three governance frameworks aren't alternatives — they layer. NIST AI RMF provides the risk management methodology. ISO 42001 provides the certifiable management system. The EU AI Act provides the legal mandate. Organizations building AI governance programs typically start with NIST AI RMF (free, flexible), then implement ISO 42001 for certification, and map EU AI Act obligations on top.

NIST AI RMF GOVERN ↔ ISO 42001 Clause 5 (Leadership)
Both require organizational accountability, risk culture, and executive oversight for AI systems.
NIST AI RMF MEASURE ↔ ISO 42001 Clause 9 (Evaluation)
Both require systematic testing, monitoring, and performance evaluation of AI systems.
ISO 42001 Annex A ↔ EU AI Act Article 9
ISO 42001 controls map to EU AI Act risk management requirements for high-risk systems.
All Three ↔ SR 11-7
Financial services layer SR 11-7 model risk requirements on top of the governance trifecta.
Source: NIST AI 100-1, ISO/IEC 42001:2023, EU AI Act • Crosswalk: NIST AI RMF to ISO/IEC 42001 Crosswalk (NIST)
Emerging Practices

Google SAIF & Emerging Frameworks

Google’s Secure AI Framework (SAIF) represents the industry-led approach to AI security. Unlike standards bodies, SAIF comes from a company actively defending production AI at scale. Its partnership with HackTheBox makes it the most hands-on framework available.

Google SAIF
Google Security Research
Practical red teaming methodology for AI systems. Co-developed with HackTheBox for hands-on adversarial testing. Aligns with MITRE ATLAS and OWASP LLM Top 10.
  • HackTheBox AI Red Teamer Path ($490/yr Silver Annual)
  • 40–80+ hours study with practical engagement
  • 7-day practical exam expected Q1 2026
  • Complementary to MITRE ATLAS tactical framework
Source: Google SAIF documentation; HackTheBox partnership announcement Q1 2026
SANS SEC535: Offensive AI
SANS Institute • NEW 2026
SANS’s new offensive AI course covers adversarial machine learning, LLM exploitation, and AI red teaming. Part of the emerging training ecosystem for AI security specialists.
  • Summit: April 22, 2026 • $5,250
  • Offensive AI attack techniques & methodology
  • Builds on MITRE ATLAS tactical framework
  • Target audience: experienced pentesters moving into AI
Source: SANS Institute SEC535 course listing (2026)
Decision Guide

Which Framework First?

You don’t need all six frameworks on day one. Your starting point depends on your background and target role. Here’s the recommended study sequence by career path.

If you’re coming from…
Cybersecurity / SOC Analyst
Start with OWASP LLM Top 10 — it maps closest to the vulnerability-focused thinking you already do. Then layer MITRE ATLAS to understand the offensive perspective.
OWASP LLM MITRE ATLAS NIST AI RMF
If you’re coming from…
Software Development / ML Engineering
Start with OWASP LLM Top 10 — you’ll recognize the application-layer attack patterns. Then add MITRE ATLAS for threat modeling and SAIF for practical red teaming.
OWASP LLM MITRE ATLAS Google SAIF
If you’re coming from…
DevOps / Cloud Engineering
Start with OWASP LLM-03 (Supply Chain) and LLM-10 (Unbounded Consumption) — they overlap with your CI/CD and infrastructure expertise. Then study the full OWASP list and NIST AI RMF.
OWASP LLM NIST AI RMF ISO 42001
If you’re coming from…
Risk / Compliance / GRC
Start with NIST AI RMF — it extends the risk language you already speak. Add ISO 42001 for audit methodology, then learn enough OWASP to understand what the technical controls are actually doing.
NIST AI RMF ISO 42001 EU AI Act OWASP LLM
Universal Starting Point

Regardless of background, OWASP LLM Top 10 is the highest-ROI starting framework for AI security. It’s free, immediately actionable, and referenced in interviews across every AI security role. ISC2’s 2026 workforce data identifies AI/ML as the #1 skill need — and OWASP is where most hiring managers expect you to start demonstrating that skill.

Source: ISC2 2025 Workforce Study (AI/ML #1 skill need for 2026)
Integration

How Frameworks Work Together

No single framework covers the full AI security landscape. They layer: OWASP identifies the vulnerabilities, ATLAS maps how adversaries exploit them, NIST AI RMF and ISO 42001 define the organizational response, EU AI Act makes it mandatory, and SAIF provides the red team methodology. Here’s how they connect.

Layer 1: What Can Go Wrong
OWASP LLM Top 10

Identifies the 10 most critical application-layer risks. Your defensive checklist.

Layer 2: How Attackers Exploit It
MITRE ATLAS + Google SAIF

Maps the adversarial playbook — 14 tactic categories with techniques. Spring 2025 update added 19 new techniques and 6 case studies.

Layer 3: Who Is Responsible
NIST AI RMF + ISO 42001 + EU AI Act

Defines organizational accountability, audit requirements, and legal obligations. Makes security non-optional.

The Practitioner View

In practice, you won’t use all six frameworks simultaneously. Most organizations adopt OWASP + one governance framework (NIST or ISO) + whatever regulatory framework applies to their jurisdiction. The frameworks that matter most for your career depend on your target role and industry. Financial services demands SR 11-7 + NIST. EU-facing companies need the AI Act. Red teams live in ATLAS + SAIF.

© 2026 Tech Jacks Solutions • AI Security Frameworks & Practices Deep Dive • Sub-Page 4 of 5

Framework data sourced from: OWASP Top 10 for LLM Applications v2.0 (Nov 2024, genai.owasp.org), MITRE SAFE-AI Report MP250397 (April 2025), MITRE ATLAS Fact Sheet, ATLAS-NIST Presentation (Spring 2025), NIST AI RMF 1.0 (Jan 2023, nist.gov), ISO/IEC 42001:2023, EU AI Act regulatory text, Google SAIF documentation. Market data: WEF 2025, ISC2 2025 Workforce Study. Salary data: Glassdoor, ZipRecruiter, Practical DevSecOps, Anthropic, OpenAI/NVIDIA/CoreWeave postings (2025–2026). All claims GAIO-verified against knowledgebase sources.

© 2026 Tech Jacks Solutions. All rights reserved.