
Agent Governance on Azure, AWS, and GCP

Cloud-Native Controls for AI Agents: Mapping Framework Requirements to Platform Services Across the Major Cloud Providers

Prerequisites: This article assumes familiarity with the NIST AI RMF, ISO 42001, and the EU AI Act. For background, see the Agent Governance Stack article.
01 // Foundation: Why Cloud-Native Governance Matters

Most enterprises deploy AI agents on one of four major cloud platforms: Microsoft Azure, Amazon Web Services, Google Cloud Platform, or Oracle Cloud Infrastructure. Each platform provides a distinct set of governance services, from content safety filters to audit logging to identity management. The challenge is not the absence of tools. It is the absence of a systematic mapping between governance framework requirements and the specific cloud-native services that satisfy them.

The NIST AI Risk Management Framework (AI 100-1) defines four core functions: Govern, Map, Measure, and Manage. ISO/IEC 42001:2023 specifies certifiable controls across 39 Annex A requirements. The EU AI Act (Regulation 2024/1689) mandates specific technical capabilities for high-risk AI systems under Articles 9 through 15. Each cloud platform implements these requirements differently, using different service names, different configuration models, and different levels of maturity.

"High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness and cybersecurity, and that they perform consistently in those respects throughout their lifecycle."

-- EU AI Act, Article 15 (Regulation 2024/1689)

This article is the implementation bridge. It maps nine governance domains to their framework requirements, then shows exactly which services on Azure, AWS, GCP, and Oracle OCI satisfy each requirement. The goal is practical: an enterprise governance team should be able to read this article and configure their cloud platform to meet NIST, ISO 42001, and EU AI Act requirements for their AI agent deployments. For the broader governance architecture, see the Enterprise Governance Playbook.

02 // Requirements: The Governance Requirements Matrix

Before evaluating cloud services, you need a clear picture of what each governance framework actually requires. The following matrix maps nine governance domains to their specific requirements across NIST AI RMF, ISO 42001, and the EU AI Act. This is the requirements specification that every cloud implementation must satisfy.

Each domain represents a distinct governance capability. Content safety prevents harmful outputs. PII protection ensures data governance compliance. Policy enforcement translates governance rules into automated controls. Audit logging creates the evidence trail for compliance verification. Model evaluation validates performance against declared specifications. Access control manages non-human identities and least-privilege permissions. Human oversight provides the intervention and halt mechanisms required by EU AI Act Article 14. Guardrails enforce behavioral boundaries at runtime. Incident response handles failures and serious incidents as required by EU AI Act Article 62.

| Governance Domain | NIST Function | ISO 42001 | EU AI Act |
|---|---|---|---|
| Accuracy, Robustness & Cybersecurity | Manage (MG-2.2) | A.8.4 | Art. 15 |
| PII / Data Protection | Map (MP-3.4), Manage | A.9.2 - A.9.4 | Art. 10 |
| Policy Enforcement | Govern (GV-1.1) | A.6.1 | Art. 9 |
| Audit Logging | Measure (MS-2.4) | A.10.2 | Art. 12 |
| Model Evaluation | Measure (MS-2.6) | A.8.5 | Art. 15 |
| Access Control (NHI) | Govern (GV-2.1) | A.9.3 | Art. 9 |
| Human Oversight | Govern (GV-1.3) | A.10.4 | Art. 14 |
| Guardrails / Safety | Manage (MG-2.2) | A.8.4 | Art. 15 |
| Incident Response | Manage (MG-4.1) | A.10.3 | Art. 62 |
Key Insight

These nine domains are framework-agnostic. Whether you are pursuing NIST compliance, ISO 42001 certification, or EU AI Act conformity, you need the same nine capabilities. The cloud services that implement them differ; the requirements do not. Human oversight (Art. 14) is the most frequently underestimated: it requires kill switches, intervention mechanisms, and escalation protocols that allow human operators to halt or override agent behavior in real time.

03 // Azure: Azure AI Agent Governance Platform

Microsoft Azure offers the most comprehensive native governance toolset for AI agent deployments among the major cloud providers. The platform's advantage stems from tight integration between Azure AI Content Safety, the Responsible AI Dashboard, Azure Policy, and Microsoft Entra ID, creating a governance surface that spans content moderation, fairness assessment, infrastructure policy, and identity management in a single ecosystem.

The following service categories cover Azure's governance capabilities and configuration patterns.

🛡
Azure AI Content Safety
Content Safety + Prompt Shields

Azure AI Content Safety provides multi-modal content moderation for text and image outputs. The service evaluates content across four severity levels (safe, low, medium, high) for categories including hate, violence, sexual content, and self-harm. For agent deployments, the critical feature is Prompt Shields, which detects and blocks both direct prompt injection attacks and indirect injection via grounded data sources. Groundedness detection validates that agent outputs are supported by their source documents, preventing hallucinated citations.

Text Moderation · Image Moderation · Prompt Shields · Groundedness Detection

Practical configuration: Enable Prompt Shields at the Azure OpenAI resource level. Set content filter severity thresholds per category based on your agent's risk classification. For high-risk agents under EU AI Act Article 15, set all categories to block at medium severity or above. Configure groundedness detection for any agent that generates citations or references external data.
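The severity thresholds above can be sketched as a small gating function. This is a minimal illustration, not the Content Safety API itself: the category names and the four-level severity scale (safe=0, low=2, medium=4, high=6) are assumptions to verify against current Azure documentation, and in production the scores would come from an `analyze_text` call in the `azure-ai-contentsafety` SDK rather than a hard-coded dict.

```python
# Sketch: gating agent output on Azure AI Content Safety severity scores.
# Severity values follow the assumed four-level model (safe=0, low=2,
# medium=4, high=6); category names are illustrative.

HIGH_RISK_BLOCK_AT = 4   # high-risk agents block at "medium" or above (EU AI Act Art. 15)
DEFAULT_BLOCK_AT = 6     # lower-risk agents block only "high" severity

def should_block(category_severities: dict[str, int], high_risk: bool) -> bool:
    """Return True if any category's severity meets the agent's block threshold."""
    threshold = HIGH_RISK_BLOCK_AT if high_risk else DEFAULT_BLOCK_AT
    return any(sev >= threshold for sev in category_severities.values())

# A medium-severity "Violence" score blocks a high-risk agent
# but passes for a lower-risk one.
scores = {"Hate": 0, "Violence": 4, "Sexual": 0, "SelfHarm": 0}
```

The same function can back a pre-response hook in any agent runtime; only the severity source changes per platform.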

📈
Responsible AI Dashboard
Fairness + Explainability + Error Analysis

The Responsible AI Dashboard in Azure Machine Learning provides four integrated assessment components: fairness analysis (demographic parity, equalized odds), explainability (SHAP values, feature importance), error analysis (cohort-level failure patterns), and causal inference. For agent governance, this maps directly to ISO 42001 A.8.5 (system performance) and NIST MEASURE function requirements for bias and fairness testing.

Fairness Assessment · Explainability · Error Analysis · Causal Inference

Practical configuration: Run fairness assessments during the staging phase (stage gate 4) before production deployment. Define sensitive attributes relevant to your agent's domain. For HR screening agents classified as high-risk, fairness assessment is mandatory under EU AI Act Article 10 data governance requirements.

📝
Azure Policy + Entra ID
Policy Enforcement + NHI Management

Azure Policy enforces governance rules at the subscription and resource group level. For AI agent governance, define policies that require content safety filters on all Azure OpenAI deployments, mandate diagnostic logging, restrict model access to approved endpoints, and enforce network isolation. Microsoft Entra ID manages agent identities as managed identities or service principals, implementing the NHI governance requirements from NIST GV-2.1.

Azure Policy · Entra Managed Identity · RBAC · Azure Purview

Practical configuration: Assign each agent a system-assigned managed identity rather than shared credentials. Apply least-privilege RBAC roles scoped to the specific resources each agent requires. Use Azure Purview for data governance, tracking what data each agent can access and how that data is classified. This satisfies ISO 42001 A.9.2-A.9.4 data management controls.
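The least-privilege guidance above can be expressed as a simple audit check. The role names and agent identifier here are hypothetical; a real audit would pull assignments from Azure RBAC rather than a hard-coded allowlist.

```python
# Sketch: auditing an agent's managed-identity role assignments against a
# least-privilege allowlist. Role and agent names are illustrative.

ALLOWED_ROLES = {
    "support-agent": {"Cognitive Services OpenAI User", "Storage Blob Data Reader"},
}

def excessive_assignments(agent: str, assigned_roles: set[str]) -> set[str]:
    """Return any roles the agent holds beyond its approved allowlist."""
    return assigned_roles - ALLOWED_ROLES.get(agent, set())

# "Contributor" is over-privileged for this agent and should be flagged.
violations = excessive_assignments(
    "support-agent",
    {"Cognitive Services OpenAI User", "Contributor"},
)
```

Running a check like this on a schedule turns the ISO 42001 A.9.2-A.9.4 controls into a continuously verified invariant rather than a point-in-time review.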

📊
Azure Monitor + Application Insights
Observability + Audit Logging

Azure Monitor with Application Insights provides end-to-end agent observability. Configure diagnostic settings to capture all API calls, token consumption, content filter triggers, and latency metrics. Log Analytics workspace enables KQL queries for behavioral pattern analysis. For EU AI Act Article 12 compliance, configure log retention periods that match your high-risk system classification requirements.

Azure Monitor · App Insights · Log Analytics · Activity Log

Practical configuration: Enable diagnostic settings on all Azure OpenAI resources. Route logs to a dedicated Log Analytics workspace with immutable storage for audit compliance. Set up alert rules for anomalous patterns: sudden increases in token consumption, content filter trigger rates above baseline, or unexpected API call patterns that may indicate prompt injection attempts.
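The alert rule for content-filter trigger rates can be sketched as a baseline comparison. In Azure Monitor this logic would live in a KQL-backed alert rule; the tolerance factor here is an illustrative choice, not a recommended value.

```python
# Sketch: flag when the content-filter trigger rate exceeds a multiple of
# the agent's historical baseline. Thresholds are illustrative.

def filter_rate_anomalous(triggers: int, requests: int,
                          baseline_rate: float, tolerance: float = 2.0) -> bool:
    """True when the observed trigger rate exceeds tolerance x baseline."""
    if requests == 0:
        return False  # no traffic, nothing to alert on
    return (triggers / requests) > baseline_rate * tolerance
```

A sustained spike in this rate is one of the cheapest early indicators of a prompt injection campaign against a deployed agent.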

🚨
Incident Response + Human Oversight
Agent Incident Detection + Kill Switches

Microsoft Sentinel provides SIEM capabilities for agent incident detection and response, satisfying EU AI Act Article 62 incident reporting requirements. Configure Sentinel analytics rules to detect agent behavioral anomalies: unexpected tool invocations, privilege escalation attempts, or content filter bypass patterns. Azure Service Health monitors platform-level incidents affecting agent infrastructure. For Article 14 human oversight compliance, use Azure Logic Apps or Function Apps to implement kill switch mechanisms that halt agent execution when anomaly thresholds are breached.

Microsoft Sentinel · Service Health · Logic Apps · Action Groups

Practical configuration: Create a Sentinel workspace with custom analytics rules for agent-specific threat detection. Configure action groups that trigger automated containment (disable agent managed identity, scale to zero) when critical incidents are detected. Maintain a documented escalation protocol from automated detection to human review, satisfying Article 14 intervention requirements.

Azure's governance maturity for AI agents is Mature. The tight integration between Content Safety, Responsible AI, Azure Policy, Entra ID, and Azure Monitor creates a governance surface where policy violations are automatically detected, logged, and can trigger automated remediation. The primary gap: Azure does not yet provide a native agent registry that maps to the BBOM concept, so inventory management requires a custom solution or third-party tooling.

04 // AWS: AWS Agent Governance Platform

Amazon Web Services approaches AI agent governance primarily through Amazon Bedrock Guardrails, which provides the most granular content control configuration among the major cloud providers. Bedrock Guardrails operates as a standalone service that can be applied to any Bedrock agent or model invocation, creating a consistent governance layer across different model providers and agent architectures.

🛡
Bedrock Guardrails
Content Filters + PII + Grounding

Bedrock Guardrails provides five governance controls in a single configuration: content filters (hate, insults, sexual, violence, misconduct, prompt attacks), denied topics (custom topic definitions that block specific conversation areas), word filters (profanity and custom blocked terms), PII redaction (automatic detection and masking of 30+ PII types), and contextual grounding (validates output relevance and groundedness against source documents).

Content Filters · Denied Topics · Word Filters · PII Redaction · Contextual Grounding

Practical configuration: Create a guardrail version for each agent risk tier. High-risk agents should have content filters set to HIGH strength for all categories, PII redaction enabled for all entity types, and contextual grounding thresholds set at 0.7 or higher. Attach the guardrail to the Bedrock agent association. The denied topics feature is particularly valuable for preventing agents from operating outside their declared scope, directly mapping to excessive agency prevention.
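The high-risk tier described above can be sketched as a guardrail configuration builder. The field names mirror the boto3 `create_guardrail` request shape as best understood; treat them as assumptions and verify against current AWS documentation before use.

```python
# Sketch: the content-policy portion of a high-risk Bedrock guardrail.
# Field names follow the assumed boto3 create_guardrail request shape.

FILTER_TYPES = ["HATE", "INSULTS", "SEXUAL", "VIOLENCE", "MISCONDUCT", "PROMPT_ATTACK"]

def high_risk_guardrail_config(grounding_threshold: float = 0.7) -> dict:
    filters = []
    for t in FILTER_TYPES:
        filters.append({
            "type": t,
            "inputStrength": "HIGH",
            # The prompt-attack filter applies to user input only, so its
            # output strength is NONE; other categories block both directions.
            "outputStrength": "NONE" if t == "PROMPT_ATTACK" else "HIGH",
        })
    return {
        "contentPolicyConfig": {"filtersConfig": filters},
        "contextualGroundingPolicyConfig": {
            "filtersConfig": [{"type": "GROUNDING", "threshold": grounding_threshold}]
        },
    }

cfg = high_risk_guardrail_config()
# In production: boto3.client("bedrock").create_guardrail(name=..., **cfg, ...)
```

Versioning this builder per risk tier keeps the guardrail definitions in code review, which is itself a Govern-function control.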

📝
CloudTrail + IAM
Audit Logging + Access Control

AWS CloudTrail captures every Bedrock API call, providing the audit trail required by EU AI Act Article 12 and ISO 42001 A.10.2. Configure CloudTrail to log both management events and data events for Bedrock resources. For agent governance, create dedicated IAM roles for each Bedrock agent with least-privilege permissions. Use IAM policy conditions to restrict which models, knowledge bases, and action groups each agent can access.

CloudTrail · CloudWatch · IAM Roles · AWS Config

Practical configuration: Create an organizational CloudTrail trail with S3 bucket logging and CloudWatch Logs integration. Enable AWS Config rules to verify that all Bedrock agents have guardrails attached, that IAM roles follow least-privilege patterns, and that logging is enabled. Use Amazon Macie for data classification of any S3 buckets that agents access, satisfying ISO 42001 A.9.2 data management requirements.

🚨
Incident Response + Human Oversight
EventBridge + Security Hub

Amazon EventBridge and AWS Security Hub provide agent incident detection and event correlation for EU AI Act Article 62 compliance. EventBridge captures Bedrock agent events (guardrail violations, invocation failures, throttling) and routes them to automated response workflows via Step Functions or Lambda. Security Hub aggregates findings from Config, GuardDuty, and custom agent-specific detectors into a single compliance dashboard. For Article 14 human oversight, implement Step Functions workflows with human approval tasks that gate high-risk agent actions, providing the intervention and halt mechanisms the regulation requires.

EventBridge · Security Hub · Step Functions · Lambda

Practical configuration: Create EventBridge rules that match Bedrock guardrail violation events and route them to an SNS topic for immediate team notification, plus a Step Functions state machine for automated containment. Configure Security Hub custom actions that allow security analysts to disable agent IAM roles with a single click, satisfying Article 14 kill switch requirements.
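The EventBridge rule above can be sketched as an event pattern plus a simplified matcher. The `detail-type` string is an assumption for illustration; the actual Bedrock event schema should be confirmed in the AWS documentation.

```python
# Sketch: an EventBridge-style pattern for guardrail violations and a
# minimal matcher. Event field values are illustrative, not real schemas.

GUARDRAIL_EVENT_PATTERN = {
    "source": ["aws.bedrock"],
    "detail-type": ["Bedrock Guardrail Violation"],  # assumed detail-type
}

def matches(pattern: dict, event: dict) -> bool:
    """Simplified EventBridge matching: each pattern key lists allowed values."""
    return all(event.get(key) in allowed for key, allowed in pattern.items())
```

Matched events would fan out to an SNS topic for notification and a Step Functions state machine for containment, as described above.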

AWS governance maturity is Mature for content safety and audit logging, with Bedrock Guardrails offering the most granular content control configuration available. The platform's strength is the composability of IAM, CloudTrail, Config, and Guardrails into a layered governance architecture. The primary gap: AWS does not yet provide a native Responsible AI dashboard equivalent to Azure's fairness and explainability tools, requiring third-party solutions for bias assessment as required by NIST MEASURE and EU AI Act Article 10.

05 // GCP: GCP Agent Governance Platform

Google Cloud Platform governs AI agent behavior through Vertex AI safety settings, Model Garden evaluation tools, Cloud Audit Logs, and a strong identity management layer. GCP's governance approach is deeply integrated with its Agent Development Kit (ADK), which provides governance hooks at the framework level rather than only at the infrastructure level.

🛡
Vertex AI Safety + Model Garden
Safety Filters + Model Evaluation

Vertex AI safety settings configure content filtering thresholds for Gemini model calls across categories including harassment, hate speech, sexually explicit content, and dangerous content. Each category supports four threshold levels: block none, block few, block some, and block most. Model Garden provides model cards with pre-computed evaluation metrics, benchmarks, and known limitations for each available model, satisfying ISO 42001 A.8.5 system performance documentation.

Safety Settings · Model Cards · Evaluation Pipelines · Responsible AI Toolkit

Practical configuration: Set safety settings at the model endpoint level. For high-risk agents, configure all harm categories to BLOCK_MEDIUM_AND_ABOVE. Use Vertex AI evaluation pipelines to run automated assessments during the staging stage gate. Model evaluation results provide the evidence artifacts required by NIST MEASURE MS-2.6 and EU AI Act Article 15 accuracy requirements.
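The per-category threshold policy can be sketched as a settings builder. The category and threshold names follow Vertex AI's published enums as an assumption; in the SDK these map to `SafetySetting` objects rather than plain strings.

```python
# Sketch: building per-category safety thresholds for a Gemini endpoint.
# Enum-style names are assumptions mirroring Vertex AI's documented values.

HARM_CATEGORIES = [
    "HARM_CATEGORY_HARASSMENT",
    "HARM_CATEGORY_HATE_SPEECH",
    "HARM_CATEGORY_SEXUALLY_EXPLICIT",
    "HARM_CATEGORY_DANGEROUS_CONTENT",
]

def safety_settings(high_risk: bool) -> dict:
    """Map every harm category to the threshold for the agent's risk tier."""
    threshold = "BLOCK_MEDIUM_AND_ABOVE" if high_risk else "BLOCK_ONLY_HIGH"
    return {category: threshold for category in HARM_CATEGORIES}

settings = safety_settings(high_risk=True)
```

Deriving the settings from the risk classification, rather than configuring each endpoint by hand, keeps the Article 15 posture consistent across deployments.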

🔑
IAM + DLP + Cloud Audit Logs
Identity + Data Protection + Audit

GCP's Workload Identity Federation provides the most flexible agent identity model, allowing agents to authenticate without long-lived service account keys. Cloud Audit Logs capture admin activity, data access, and system events across all GCP services, satisfying EU AI Act Article 12 logging requirements. The DLP API provides 150+ built-in infoType detectors for sensitive data classification and redaction.

Workload Identity · Cloud Audit Logs · DLP API · VPC Service Controls

Practical configuration: Use Workload Identity Federation for all agent service accounts. Enable Data Access audit logs for Vertex AI endpoints. Apply VPC Service Controls to create a security perimeter around agent resources, preventing data exfiltration. Run DLP inspections on any agent input/output pipelines that process PII, satisfying ISO 42001 A.9.2-A.9.4 data governance requirements.
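The DLP inspection step can be sketched as an inspect configuration. The infoType names are DLP built-ins; the surrounding request shape follows google-cloud-dlp client conventions as an assumption and should be checked against the current API.

```python
# Sketch: an inspect configuration for scanning agent I/O pipelines for PII.
# infoType names are DLP built-ins; request shape is an assumption.

def pii_inspect_config(min_likelihood: str = "POSSIBLE") -> dict:
    return {
        "info_types": [
            {"name": "EMAIL_ADDRESS"},
            {"name": "PHONE_NUMBER"},
            {"name": "CREDIT_CARD_NUMBER"},
        ],
        "min_likelihood": min_likelihood,  # raise to "LIKELY" to cut false positives
        "include_quote": True,             # capture the matched text for audit evidence
    }
```

The same configuration doubles as redaction input, so one definition serves both detection (audit) and masking (enforcement).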

🚨
Incident Response + Human Oversight
Security Command Center + Cloud Functions

Security Command Center (SCC) provides centralized threat detection and incident management for agent deployments, addressing EU AI Act Article 62 incident reporting requirements. SCC aggregates findings from Event Threat Detection, Cloud Audit Logs, and custom vulnerability sources into a unified security posture view. Cloud Functions triggered by SCC findings enable automated incident response: revoking agent service account permissions, disabling Vertex AI endpoints, or escalating to human operators. For Article 14 human oversight, use Cloud Tasks with human approval queues to gate high-risk agent actions.

Security Command Center · Cloud Functions · Event Threat Detection · Cloud Tasks

Practical configuration: Enable SCC Premium tier for Event Threat Detection on all projects running Vertex AI agents. Create Cloud Functions that respond to critical SCC findings by automatically disabling the affected agent's service account, satisfying Article 14 kill switch requirements. Configure Pub/Sub notifications to alert the governance team within the incident response SLA.

GCP governance maturity is Developing for agent-specific controls. The safety settings are functional but less granular than Bedrock Guardrails, and the platform lacks a dedicated prompt injection detection layer comparable to Azure Prompt Shields. GCP's strength is the DLP API (the most comprehensive data classification service across all platforms) and Workload Identity Federation for NHI management. The Responsible AI Toolkit provides fairness and explainability capabilities, but they require more manual configuration than Azure's integrated dashboard.

06 // Oracle OCI: Oracle OCI Agent Governance Platform

Oracle Cloud Infrastructure provides AI governance capabilities through its OCI AI Services, but its agent-specific governance tooling is less mature than that of Azure, AWS, or GCP. OCI's strengths lie in its traditional enterprise governance infrastructure: robust IAM with compartment-based isolation, comprehensive audit logging, and Data Safe for data governance.

🛡
OCI AI Services + Content Safety
Generative AI Guardrails + Model Hosting

OCI AI Services provides guardrail capabilities for generative AI models, but the content safety filters are less configurable than Bedrock Guardrails or Azure Content Safety. The service supports custom model hosting on dedicated GPU clusters with network isolation, giving enterprises control over model deployment boundaries. Content filtering covers basic categories but lacks the granular severity-level configuration available on other platforms.

OCI AI Services · Dedicated AI Clusters · Content Filters · Model Hosting

Practical configuration: Deploy agents on dedicated AI clusters with network isolation via private endpoints. Configure available content safety filters at the model endpoint level. Supplement with third-party content safety (OpenAI Moderation API or custom classifiers) for categories not covered natively. Use OCI Functions to implement pre/post-processing guardrail logic around model invocations.

🔑
Identity Domains + Audit + Data Safe
IAM + Audit Logging + Data Governance

OCI Identity Domains with dynamic groups and policy-based access control provides agent identity management, using compartment-based isolation to enforce least-privilege access. The OCI Audit service captures all control plane and data plane API events, satisfying EU AI Act Article 12 logging requirements. Data Safe provides data masking, activity auditing, and security assessment for Oracle databases that agents access, offering governance depth unmatched by other platforms for database-centric agent architectures.

Identity Domains · Dynamic Groups · OCI Audit · Data Safe

Practical configuration: Create a dedicated compartment for each agent deployment with IAM policies that restrict resource access to that compartment. Define dynamic groups that match agent compute instances by tag, then attach policies granting only the specific OCI resources each agent requires. Enable Data Safe on all Oracle databases that agents access and configure activity auditing with alert policies for anomalous query patterns.
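The compartment and dynamic-group setup above reduces to generating OCI policy statements. The group, compartment, and resource-family names here are hypothetical; the statement grammar follows OCI's "Allow dynamic-group ... to &lt;verb&gt; &lt;resource&gt; in compartment ..." form.

```python
# Sketch: generating compartment-scoped OCI policy statements for an agent.
# Group, compartment, and resource-family names are hypothetical.

def agent_policy_statements(dynamic_group: str, compartment: str,
                            grants: dict) -> list:
    """grants maps resource family -> verb (inspect/read/use/manage)."""
    return [
        f"Allow dynamic-group {dynamic_group} to {verb} {resource} in compartment {compartment}"
        for resource, verb in grants.items()
    ]

stmts = agent_policy_statements(
    "agent-dg", "agent-comp",
    {"object-family": "read", "generative-ai-family": "use"},
)
```

Generating statements from a declared grants map makes the least-privilege intent explicit and reviewable before anything reaches the tenancy.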

🚨
Incident Response + Human Oversight
Cloud Guard + Events Service

OCI Cloud Guard monitors agent infrastructure for security misconfigurations and threat indicators, providing automated remediation through responder recipes. The Events Service captures resource state changes and routes them to OCI Functions or Notifications for automated incident response, satisfying EU AI Act Article 62 requirements. For Article 14 human oversight, use OCI Functions triggered by Cloud Guard problems to implement kill switch mechanisms that disable agent compute or revoke dynamic group policies.

Cloud Guard · Events Service · OCI Functions · Notifications

Practical configuration: Enable Cloud Guard at the tenancy level with detector recipes targeting agent compartments. Create responder recipes that automatically disable agent instances when critical security problems are detected. Configure the Events Service to trigger OCI Functions for custom incident response logic and Notifications for team alerting.

Honest Assessment

OCI's agent governance tooling is Emerging. Organizations running agents on OCI should plan to supplement native capabilities with third-party governance tools (OpenTelemetry, Terraform policy-as-code) to achieve parity with the governance coverage available on Azure, AWS, or GCP. OCI excels when the agent interacts with Oracle databases, where Data Safe provides governance capabilities that no other platform matches.

The gap is not in infrastructure governance (IAM, audit, networking) where OCI is fully mature. The gap is in AI-specific governance services: content safety granularity, prompt injection detection, fairness assessment, and agent behavioral monitoring. Enterprises with significant Oracle database estates should evaluate whether the Data Safe integration advantage outweighs the AI governance tooling gap for their specific agent use cases.

07 // Cross-Cloud: Cross-Cloud Governance Patterns

Most enterprises operate across multiple cloud platforms. A governance architecture that only works on a single cloud is a governance silo. The cross-cloud patterns described here provide platform-agnostic governance capabilities that work across Azure, AWS, GCP, and OCI simultaneously. The strategic principle: centralized governance policy, distributed enforcement.

📜
Policy-as-Code
Define governance rules in Terraform or Pulumi. Enforce content safety configurations, IAM policies, logging requirements, and network isolation across all clouds from a single codebase. Version governance alongside infrastructure.
Terraform · Pulumi · OPA
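As a minimal sketch of the single-codebase idea, hypothetical governance rules can be evaluated against a platform-agnostic resource description. In practice this logic would live in OPA/Rego policies or Terraform validations rather than application code; the rule names and resource fields here are illustrative.

```python
# Sketch: platform-agnostic governance rules evaluated against a resource
# description. Rule names and resource fields are illustrative.

GOVERNANCE_RULES = {
    "content_safety_enabled": lambda r: r.get("content_safety") is True,
    "logging_enabled": lambda r: r.get("diagnostic_logging") is True,
    "network_isolated": lambda r: r.get("public_network_access") is False,
}

def rule_violations(resource: dict) -> list:
    """Return the names of governance rules the resource fails."""
    return [name for name, check in GOVERNANCE_RULES.items() if not check(resource)]
```

Because the rules inspect a normalized resource description, the same checks apply whether the deployment is an Azure OpenAI resource, a Bedrock agent, or a Vertex AI endpoint.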
📡
Vendor-Neutral Observability
Use OpenTelemetry for standardized telemetry collection across all platforms. Agent traces, metrics, and logs flow to a single observability backend regardless of which cloud the agent runs on. Satisfies EU AI Act Article 12 with consistent evidence.
OpenTelemetry · OTLP · Grafana
🔑
Federated Identity
Manage agent identities through a central identity provider (Entra ID, Okta) with federation to each cloud's IAM. Every agent authenticates through the same identity governance layer, ensuring consistent access controls and audit trails regardless of deployment platform.
OIDC · SAML · SCIM
📋
BBOM as Governance Artifact
The Behavioral Bill of Materials is inherently cloud-agnostic. It documents what an agent can do, not where it runs. Use the BBOM as the canonical governance document that travels with the agent across environments, providing consistent documentation regardless of the underlying platform.
BBOM · JSON Schema · Git
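A minimal sketch of a BBOM document and a completeness check, assuming an illustrative field set rather than a published BBOM schema; the agent name, tools, and field names are hypothetical.

```python
# Sketch: a minimal BBOM document plus a completeness gate. Field names
# are an assumed shape for illustration, not a published BBOM schema.

REQUIRED_FIELDS = {"agent_id", "declared_tools", "data_access", "risk_tier", "kill_switch"}

def bbom_complete(bbom: dict) -> bool:
    """True only when every required governance field is present."""
    return REQUIRED_FIELDS.issubset(bbom)

example_bbom = {
    "agent_id": "support-agent-v3",
    "declared_tools": ["search_kb", "create_ticket"],
    "data_access": ["customer_tickets"],
    "risk_tier": "high",
    "kill_switch": "disable-managed-identity",
}
```

A check like this can run as a CI gate, blocking deployment of any agent whose BBOM is missing required governance fields.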
Governance Control Plane
The most mature pattern: a centralized governance control plane that sits above all cloud platforms. It defines policies centrally, pushes enforcement rules to each cloud's native services, collects compliance evidence from all platforms, and produces unified audit reports for regulators.
Central Policy · Distributed Enforcement · Unified Audit

The control plane's flow:
1. Policy Repo: Terraform/Pulumi governance definitions, BBOM schemas
2. Control Plane: translates policies to cloud-native enforcement rules
3. Cloud Platforms: Azure Policy, Bedrock Guardrails, GCP Org Policy, OCI Governance
4. Telemetry: OpenTelemetry collectors aggregate from all platforms
5. Unified Audit: single compliance dashboard for NIST, ISO, EU AI Act
| Governance Domain | Azure | AWS | GCP | OCI |
|---|---|---|---|---|
| Content Safety | Mature | Mature | Developing | Emerging |
| PII Protection | Presidio, Content Safety | Guardrails PII, Macie | DLP API (150+ types) | Data Safe, Masking |
| Policy Enforcement | Azure Policy | SCP, Config, Guardrails | Org Policy, VPC-SC | Compartments, Policies |
| Audit Logging | Mature | Mature | Mature | Mature |
| Model Evaluation | RAI Dashboard | Bedrock Evaluation | Model Garden Suite | Basic Evaluation |
| NHI Identity | Entra Managed Identity | IAM Roles, STS | Workload Identity | Dynamic Groups |
| Prompt Injection Defense | Prompt Shields | Prompt Attack Filter | Safety Settings | Limited |
| Fairness / Bias | RAI Dashboard | Clarify (ML only) | RAI Toolkit | Manual |
| Human Oversight (Art. 14) | Sentinel, Logic Apps | Step Functions, EventBridge | SCC, Cloud Tasks | Cloud Guard, Functions |
| Incident Response (Art. 62) | Sentinel, Service Health | Security Hub, EventBridge | SCC, Event Threat Detection | Cloud Guard, Events Service |

The comparison reveals a clear pattern: all four platforms have mature audit logging, but agent-specific governance capabilities vary significantly. Azure leads in integrated responsible AI tooling. AWS leads in content safety granularity and prompt injection defense. GCP leads in data classification and identity federation. OCI leads in Oracle database governance. No single platform covers all nine governance domains at production maturity. Human oversight and incident response capabilities exist on every platform but require deliberate architecture to satisfy Article 14 and Article 62 requirements. The governance control plane pattern, centralizing policy while distributing enforcement, is not a luxury for multi-cloud organizations. It is a necessity.

08 // Horizon: What Comes Next

No single cloud platform covers all nine governance domains at production maturity. This is not a criticism of any individual platform. It reflects the reality that AI agent governance is a rapidly evolving discipline where cloud providers are building capabilities faster than governance frameworks can standardize requirements. The practical implication: your governance architecture must sit above the cloud platform, not within it.

The governance control plane concept, where centralized policy definitions translate to distributed cloud-native enforcement, is the architecture pattern that scales. It allows organizations to adopt the strongest capabilities from each platform (Azure Prompt Shields and AWS Bedrock Guardrails for prompt injection defense, GCP DLP API for data classification, OCI Data Safe for database governance) while maintaining a consistent governance posture across all deployments.

Key Insight

The governance layer must be cloud-agnostic even when the enforcement is cloud-native. Policy-as-code (Terraform, Pulumi) plus vendor-neutral observability (OpenTelemetry) plus cloud-agnostic documentation (BBOM) creates a governance architecture that survives cloud migration, multi-cloud expansion, and platform service changes.

The EU AI Act's main application date of 2 August 2026 creates urgency. Organizations deploying high-risk AI agents in EU markets must demonstrate the technical capabilities specified in Articles 9 through 15: risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy. Every one of these requirements maps to a cloud-native service described in this article. The implementation path is clear. The question is not whether your cloud platform supports governance. It is whether you have mapped your specific governance requirements to the specific services that satisfy them.

Start with the requirements matrix in Section 2. Identify which governance domains your agents require based on their risk classification. Map those requirements to your cloud platform's native services using the provider sections above. Fill gaps with cross-cloud patterns from Section 7. Document everything in the BBOM. That is the implementation sequence: requirements first, platform mapping second, gap analysis third, documentation always.

Explore the full Govern pillar for the Enterprise Governance Playbook, Behavioral Bill of Materials, and EU AI Act agent compliance. For agent identity architecture, see Agent Identity and NHI. For platform selection guidance, see Cloud Agent Platforms. For human oversight design, see Human-in-the-Loop vs Human-on-the-Loop. Test your architecture knowledge in the Agent Blueprint Quest.
