How much trust should your agent have, and over what? That’s the question at the center of the joint guidance published May 1 by CISA, NSA, and the Australian Signals Directorate, along with additional coalition partners. The document names four specific risk categories that practitioners designing agentic architectures need to treat as baseline threat vectors, not edge cases.
The four risks worth mapping to your architecture: identity spoofing, privilege abuse, flawed orchestration parameters, and corrupted third-party components. Each is a design-time problem, not something you can patch in at runtime.
Identity spoofing in agentic systems means an agent, or something pretending to be an agent, makes requests it shouldn’t be authorized to make. If your system doesn’t have explicit agent identity verification at each tool call, this is an open attack surface.
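A minimal sketch of what explicit identity verification at the tool boundary can look like. Everything here is hypothetical (the `dispatch_tool_call` gateway, the HMAC tag scheme, the agent names); the point is that the check runs on every call, not once at session start.

```python
import hashlib
import hmac

# Hypothetical shared secret; in a real deployment this would come from a
# secrets manager, not a module-level constant.
SIGNING_KEY = b"per-deployment-secret"

def issue_agent_tag(agent_id: str) -> str:
    """Bind an agent identity to this deployment's signing key."""
    return hmac.new(SIGNING_KEY, agent_id.encode(), hashlib.sha256).hexdigest()

def dispatch_tool_call(agent_id: str, tag: str, tool: str, args: dict) -> dict:
    """Verify the caller's identity tag before routing any tool call."""
    expected = issue_agent_tag(agent_id)
    if not hmac.compare_digest(expected, tag):
        raise PermissionError(f"identity check failed for {agent_id!r}")
    # Identity verified; hand off to the real tool router here.
    return {"tool": tool, "caller": agent_id, "args": args}
```

`hmac.compare_digest` rather than `==` avoids leaking tag bytes through timing differences; a forged or replayed-from-elsewhere tag fails closed with a `PermissionError` before any tool executes.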
Privilege abuse follows from over-permissioned agents. An agent that has write access to your production database because it theoretically might need it is a privilege escalation waiting to happen. The guidance’s framing (limit agentic AI to lower-risk, non-sensitive tasks until standards mature) reflects a principle of minimum viable privilege, not a rejection of the technology.
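Minimum viable privilege can be enforced in a few lines: deny by default, grant per tool, and keep write-capable tools off the list until the task demonstrably needs them. The permission map and agent names below are illustrative assumptions, not part of the guidance.

```python
# Hypothetical permission map: each agent identity gets an explicit set of
# tools; anything not listed is denied. Note: no write tools granted.
AGENT_PERMISSIONS: dict[str, set[str]] = {
    "support-agent": {"read_ticket", "search_docs"},
    "billing-agent": {"read_invoice"},
}

def authorize(agent_id: str, tool: str) -> None:
    """Deny-by-default check run before every tool invocation."""
    allowed = AGENT_PERMISSIONS.get(agent_id, set())
    if tool not in allowed:
        raise PermissionError(f"{agent_id!r} may not call {tool!r}")
```

The design choice that matters is the default: an unknown agent or unlisted tool gets an empty set, so forgetting to register a permission fails safe instead of open.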
Flawed orchestration parameters are harder to see coming. If the instructions that govern your agent’s behavior can be malformed, injected, or misconfigured, the agent executes faithfully against bad inputs. Prompt injection, poisoned context, and misconfigured tool permissions all live in this category.
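One concrete defense in this category is refusing to pass agent-produced parameters to a tool until they match an explicit schema. This is a sketch under assumed names (`TOOL_SCHEMAS` and `validate_params` are illustrative, not from the guidance):

```python
# Hypothetical allowlist of tool parameters: field names and types are
# fixed ahead of time, so injected or malformed arguments fail closed.
TOOL_SCHEMAS: dict[str, dict[str, type]] = {
    "send_email": {"to": str, "subject": str, "body": str},
}

def validate_params(tool: str, params: dict) -> dict:
    """Reject unknown tools, unexpected fields, and wrong types."""
    schema = TOOL_SCHEMAS.get(tool)
    if schema is None:
        raise ValueError(f"unknown tool {tool!r}")
    unexpected = set(params) - set(schema)
    if unexpected:
        raise ValueError(f"unexpected parameters: {sorted(unexpected)}")
    for field, expected_type in schema.items():
        if field not in params:
            raise ValueError(f"missing parameter {field!r}")
        if not isinstance(params[field], expected_type):
            raise ValueError(f"{field!r} must be {expected_type.__name__}")
    return params
```

Schema validation doesn’t stop prompt injection at the model layer, but it caps the blast radius: an injected instruction can’t smuggle an extra field or a different tool past the boundary.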
Corrupted third-party components are the supply chain risk. Your agent’s tool chain almost certainly includes dependencies you didn’t build. Each one is an attack surface. The guidance names this explicitly because agentic architectures compound the traditional software supply chain problem: a compromised component doesn’t just run, it runs with the permissions you granted your agent.
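A small sketch of one mitigation: pin the digest of every third-party tool artifact at review time and refuse to load anything that drifts. The function name and pinning scheme here are assumptions for illustration.

```python
import hashlib
import hmac

def verify_component(artifact: bytes, pinned_sha256: str) -> None:
    """Refuse to load a third-party component whose digest doesn't match
    the value recorded when the component was reviewed."""
    actual = hashlib.sha256(artifact).hexdigest()
    if not hmac.compare_digest(actual, pinned_sha256):
        raise RuntimeError("component digest mismatch; refusing to load")
    # Only past this point does the artifact get imported or executed,
    # and only with the permissions the agent actually needs.
```

Combined with the least-privilege point above, this narrows both halves of the compounded risk: the component must match what was reviewed, and even a reviewed component runs with a scoped grant rather than the agent’s full authority.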
The governance narrative, what this means for policy, liability, and regulatory compliance, is covered in depth in the regulation pillar. See CISA, ASD, and NCSC Release Joint Agentic AI Guidance for that analysis. This brief focuses on what the guidance means for the people writing the code.
One distinction matters: this is guidance, not regulation. Nothing in this document is currently enforceable as a legal standard. That framing will change. Guidance from a coalition of five Western security agencies is generally a reliable leading indicator of enforceable requirements. The teams that treat it as a threat model today will have a shorter compliance adaptation window later.
What to watch:
Whether the CISA guidance generates follow-on technical standards from NIST or equivalent bodies. The EU AI Act’s certification challenges for agentic systems create a parallel pressure point: international convergence on agentic security architecture requirements is accelerating across multiple regulatory tracks simultaneously.
TJS synthesis:
The guidance is most useful not as a compliance document but as a design checklist. If you can answer, specifically, not theoretically, how your system handles each of the four named risks, you’ve done the substantive work. The regulatory documentation required to demonstrate that work will vary by jurisdiction and evolve. The architecture doesn’t.