Technology Deep Dive

Four Governance Proposals in Two Weeks: What the Agentic Trust Race Reveals About the Gaps Still Left Open

5 min read · Sources: arXiv (preprint, source ID pending); CISA; TJS prior coverage
In two weeks, researchers and regulators have produced at least four distinct proposals for governing autonomous AI agents: a reputation framework, an audit architecture, federal guidance, and a set of protocol standards. Each addresses a different slice of the same underlying problem. None covers all of it. For teams building agentic pipelines now, the map of what's solved and what isn't is the only thing that matters.
4 governance proposals, 3 failure modes, 0 unified specs
Key Takeaways
  • Four governance proposals in two weeks address different failure modes: protocol standards cover interoperability, CISA names the threats, the TRUST Framework addresses audit integrity, and AgentReputation adds reputation services and execution isolation
  • No single proposal covers all three CISA-identified failure modes (identity spoofing, context poisoning, and the lack of tamper-proof audit trails); teams need to map their architecture against all three independently
  • AgentReputation's reputation layer has a cold-start limitation: new agent deployments have no interaction history to generate meaningful trust signals, which limits its value at launch
  • The TRUST Framework and AgentReputation address overlapping but non-identical layers and may be complementary; whether they're designed to interoperate is not yet addressed
Agentic Governance Proposal Coverage by Failure Mode
  • Protocol Standards (Symphony/A2A/MCP/ACP): interoperability only
  • CISA Guidance (May 2): threat taxonomy; all three failure modes named, none solved
  • TRUST Framework / HDAG (May 1): audit trail integrity
  • AgentReputation (May 3): reputation + execution isolation + persistence
Warning

None of the four proposals addresses human-in-the-loop design at scale: specifically, how a human meaningfully intervenes in a running multi-agent workflow without destroying workflow state. That problem does not have an architectural proposal yet.

Analysis

Reputation systems require interaction history to generate meaningful trust signals. Teams deploying new agentic pipelines today get no protection from a reputation-based trust framework during the cold-start period, a practical limitation AgentReputation's preprint does not appear to address.

The field doesn’t agree on what the agentic trust problem is. That’s the actual problem.

Ask a protocol engineer and they’ll tell you the issue is interoperability: agents built on different frameworks can’t authenticate each other because there’s no shared communication standard. Ask a security researcher and they’ll say the issue is persistence: agent actions leave no tamper-resistant audit trail, so when something goes wrong, attribution is impossible. Ask a compliance team and they’ll point to identity: there’s no reliable way to verify that the agent a system is communicating with is actually authorized to act. Ask CISA and they’ll name all three.

The past two weeks have produced four proposals that collectively attempt to solve these problems. None of them does it alone.

The Four Proposals: What Each One Actually Addresses

Symphony, A2A, MCP, and ACP (late April). The protocol-layer standards covered in the Five Agentic Standards in Ten Days brief define how agents communicate. A2A (Agent-to-Agent protocol) and MCP (Model Context Protocol) establish message formatting and context-passing conventions. Symphony and ACP address orchestration and capability discovery. Together they answer one question: can agents from different vendors talk to each other? They don’t address what happens when a communicating agent is lying about who it is or what it has done.

CISA Agentic AI Guidance (May 2). The CISA guidance brief names context poisoning, identity spoofing, and prompt injection as the three priority failure modes for agentic systems in critical infrastructure contexts. CISA’s contribution is the threat taxonomy, not the architecture. It tells practitioners what to worry about. It doesn’t specify how to build systems that prevent it.

TRUST Framework, HDAG Architecture (May 1). The TRUST Framework proposal addresses audit integrity through a Hierarchical Directed Acyclic Graph (HDAG) structure that makes agent action logs tamper-resistant. It targets the persistence problem: once an agent acts, that action should be verifiable and unalterable. HDAG focuses on the record layer. It doesn’t address how agents acquire reputation scores or how identity is verified before an action is taken.
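The general technique behind a tamper-resistant action log can be illustrated with a hash-linked DAG, where each entry commits to the hashes of its parent entries. This is a minimal Python sketch of that idea, not the TRUST paper's actual specification; all names and fields are illustrative.

```python
import hashlib
import json

def entry_hash(body: dict) -> str:
    """Deterministic hash of an entry's canonical JSON form."""
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_entry(log: list, agent_id: str, action: str, parent_hashes: list) -> dict:
    """Append an action record that commits to its parent entries.

    Because each entry embeds the hashes of its parents, altering any
    historical entry changes its hash and breaks every descendant's link.
    """
    entry = {"agent": agent_id, "action": action, "parents": parent_hashes}
    entry["hash"] = entry_hash({k: entry[k] for k in ("agent", "action", "parents")})
    log.append(entry)
    return entry

def verify(log: list) -> bool:
    """Recompute every hash and check that parent links resolve within the log."""
    known = set()
    for e in log:
        body = {k: e[k] for k in ("agent", "action", "parents")}
        if entry_hash(body) != e["hash"] or any(p not in known for p in e["parents"]):
            return False
        known.add(e["hash"])
    return True
```

Editing any earlier entry invalidates its stored hash, so `verify` fails for the whole log: the record can be destroyed, but not silently rewritten.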

AgentReputation (May 3). According to the paper’s authors, the AgentReputation framework separates agentic operations into three layers: task execution, what the researchers describe as reputation services, and what they call tamper-proof persistence. The framework targets marketplace deployments: multi-agent systems where agents from different operators interact without a shared authority. The architecture is designed to contain failures by isolating components. A compromised execution layer shouldn’t be able to corrupt reputation records. Corrupted reputation records shouldn’t be able to poison active task execution. The paper will be linked directly once confirmed.
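The containment idea, where a compromised layer cannot reach into its neighbors, can be sketched as narrow one-directional interfaces: execution may only append records, and reputation may only read them. This is a toy illustration of layer isolation under assumed names, not the AgentReputation design.

```python
class PersistenceLayer:
    """Append-only record store: callers can write new records, never rewrite old ones."""
    def __init__(self):
        self._records = []

    def append(self, record: dict) -> int:
        self._records.append(dict(record))  # store a copy, return its index
        return len(self._records) - 1

    def read(self, index: int) -> dict:
        return dict(self._records[index])  # defensive copy: readers can't mutate the store

class ReputationLayer:
    """Derives trust signals only from committed persistence records."""
    def __init__(self, persistence: PersistenceLayer):
        self._persistence = persistence

    def score(self, agent_id: str, indices: list) -> float:
        outcomes = [self._persistence.read(i) for i in indices]
        ok = sum(1 for o in outcomes if o["agent"] == agent_id and o["ok"])
        return ok / len(outcomes) if outcomes else None

class ExecutionLayer:
    """Runs tasks; its only path to the other layers is append()."""
    def __init__(self, persistence: PersistenceLayer):
        self._persistence = persistence

    def run(self, agent_id: str, task: str, ok: bool) -> int:
        return self._persistence.append({"agent": agent_id, "task": task, "ok": ok})
```

The point of the sketch is the shape of the interfaces: execution holds no reference to reputation state, so even arbitrary behavior inside `ExecutionLayer` can only add records, never rewrite the history that reputation scores are computed from.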

Mapping the Failure Modes Against the Proposals

Three failure modes. Four proposals. Here’s how the coverage maps:

| Failure Mode | Protocol Standards | CISA Guidance | TRUST / HDAG | AgentReputation |
|---|---|---|---|---|
| Identity spoofing | Partial (authentication not specified) | Named as priority risk | Not addressed | Addressed (reputation layer) |
| Context poisoning | Not addressed | Named as priority risk | Indirect (audit trail) | Addressed (execution isolation) |
| No tamper-proof audit trail | Not addressed | Named as priority risk | Directly addressed | Addressed (persistence layer) |
| Interoperability across vendors | Directly addressed | Not addressed | Not addressed | Not addressed |

The table reveals something important: TRUST Framework and AgentReputation address overlapping but not identical layers. HDAG focuses on making the audit record tamper-resistant after actions occur. AgentReputation’s persistence layer appears to serve a similar function, but the reputation services layer adds something HDAG doesn’t: a mechanism for agents to build and query track records before deciding whether to trust a counterpart. These proposals could complement each other. Whether they’re designed to interoperate is not addressed in either paper.

The Cold-Start Problem No Proposal Solves

Reputation systems require history. A new agent in a marketplace has no track record, which means a reputation-based trust framework offers no signal at deployment. Teams standing up new agentic pipelines, the most common situation right now, get no protection from AgentReputation in the early stages of operation. The paper’s authors don’t appear to address the bootstrapping problem in the summary materials available. This is a practical limitation worth naming before teams build toward a reputation-dependent architecture.
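The cold-start limitation is visible in any smoothed reputation estimator. A Beta-style score (a generic illustration, not the AgentReputation paper's scoring rule) collapses to an uninformative neutral prior when an agent has no history:

```python
def reputation_score(successes: int, failures: int, prior_weight: int = 10) -> float:
    """Beta-style smoothed success rate.

    With no history (0, 0) the score is exactly the neutral prior 0.5,
    which carries no information about the agent: the cold-start problem
    in one line. Only accumulated interactions move the score away from it.
    """
    return (successes + prior_weight * 0.5) / (successes + failures + prior_weight)
```

A brand-new agent scores 0.5 regardless of how trustworthy it actually is, and a deliberately malicious newcomer scores identically to a careful one, which is why reputation-based trust offers no protection at deployment time.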

The TRUST Framework’s HDAG approach has a different limitation: it protects the audit record but doesn’t prevent the bad action from occurring in the first place. An agent can still execute a harmful task; HDAG just ensures the record of it can’t be erased. That’s valuable for forensics and compliance. It’s not a prevention mechanism.

What This Means for Three Audiences

For developers building agentic pipelines now: None of these proposals is a production specification. They’re preprints and guidance documents. But the failure-mode taxonomy is settled enough to design against. Architect for the three CISA-identified risks (identity verification, context integrity, and tamper-resistant logging) even before a unified framework exists. The proposals’ convergence on these three problems suggests they’re the right threat model.

For compliance teams preparing for agentic AI governance requirements: The EU AI Act’s treatment of agentic systems is still developing, but the documentation requirements for high-risk systems already imply audit trail integrity. TRUST Framework’s HDAG approach maps more directly to current documentation requirements than AgentReputation does. Both are worth tracking.

For enterprise architects evaluating agentic infrastructure stacks: The interoperability layer (protocol standards) and the trust layer (TRUST, AgentReputation) are being developed independently. Before committing to a stack, verify whether your chosen orchestration framework has a roadmap for integrating with emerging trust protocols. Most current frameworks don’t.

What’s Still Missing

The governance proposals address how agents behave within a system. None of them addresses human-in-the-loop design at scale: specifically, at what point in a complex agentic workflow a human review requirement is triggered, and how that requirement survives when the workflow spans multiple autonomous agents with their own sub-agents. That’s the kill-switch and escalation problem. It’s a harder architectural question than reputation or audit integrity, and nothing published in the past two weeks touches it.

The next frontier in agentic governance isn’t trust between agents. It’s the mechanism by which a human can meaningfully intervene in a running agentic system without destroying the workflow state. That problem doesn’t have a proposal yet.
