
AI Agents Need Identity Governance Now, and Most Security Teams Aren't Ready

Darktrace's "State of AI Cybersecurity 2026" report finds that the security industry is confronting a governance gap it didn't anticipate: AI agents are proliferating across enterprise environments, but most organizations are still managing them like software tools rather than privileged machine identities. That gap is the real risk.

AI agents don’t just process data. They act. They call APIs, move files, initiate transactions, and communicate with other systems, often with credentials that were provisioned once and never reviewed. Security teams built their identity governance frameworks for humans and static applications. Agents are neither.

According to Darktrace’s “State of AI Cybersecurity 2026” report, 92% of security professionals surveyed expressed concern about AI agents’ security impact. The same report found that 78% of organizations reported using generative AI in at least one business function. Darktrace is a cybersecurity vendor, and this is a company-commissioned survey; the methodology and sample size aren’t publicly detailed in the available report content. The figures are attributable to Darktrace, not to independent research. What they point to, though, tracks with what enterprise security leaders are reporting anecdotally: the governance frameworks haven’t kept up.

The principle the Darktrace report advocates, governing AI agents as machine identities with least-privilege access, is not a vendor invention. It’s an established framework supported independently by identity security providers including Okta and documented across enterprise security literature. Least-privilege means an agent gets exactly the access it needs for a specific task, nothing more, with every action logged and audited. That’s the same standard applied to service accounts and privileged human users. It’s not new. Applying it consistently to AI agents is.

The practical challenge is scope. A mature enterprise today might run dozens of distinct AI agents: customer service bots, code review tools, data pipeline orchestrators, document processors. Each one has credentials. Each one has a trust boundary. Historically, those boundaries were set at deployment and rarely revisited. That was manageable when agents were narrow and static. It isn’t when agents are general-purpose, capable of tool use, and updated frequently.

What least-privilege governance actually looks like for AI agents involves several concrete practices. Access is scoped to the minimum required for the specific task at deployment time, not estimated broadly at the category level. Credentials are rotated on a defined schedule, the same as any privileged service account. Agent actions are logged in a queryable audit trail, not just whether the agent ran, but what it did, what data it touched, and what downstream calls it made. Access reviews are scheduled, not triggered only by incident. These aren’t new concepts. Most security teams know them. The gap is operationalizing them for AI agents at scale, before something goes wrong.
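The practices above can be sketched in code. The following is a minimal, illustrative Python sketch, not an implementation from Darktrace, Okta, or any real identity platform; the class and function names (`AgentCredential`, `AuditLog`, `authorize`) are hypothetical. It shows the four practices described: task-scoped access, credential expiry as the hook for rotation, a queryable audit trail of what the agent did, and a per-call authorization check.

```python
import datetime as dt
import uuid

class AgentCredential:
    """Hypothetical task-scoped, expiring credential for one AI agent."""
    def __init__(self, agent_id, scopes, ttl_hours=24):
        self.agent_id = agent_id
        # Minimum access for this specific task, not a broad category grant.
        self.scopes = frozenset(scopes)
        self.issued_at = dt.datetime.now(dt.timezone.utc)
        self.expires_at = self.issued_at + dt.timedelta(hours=ttl_hours)
        self.token = uuid.uuid4().hex  # replaced wholesale on each rotation

    def is_expired(self, now=None):
        return (now or dt.datetime.now(dt.timezone.utc)) >= self.expires_at


class AuditLog:
    """Records what the agent did and touched, not just that it ran."""
    def __init__(self):
        self._entries = []

    def record(self, agent_id, action, resource, allowed):
        self._entries.append({
            "agent_id": agent_id,
            "action": action,
            "resource": resource,
            "allowed": allowed,
            "at": dt.datetime.now(dt.timezone.utc).isoformat(),
        })

    def actions_by(self, agent_id):
        # Queryable trail: every call, permitted or denied, is retrievable.
        return [e for e in self._entries if e["agent_id"] == agent_id]


def authorize(cred, action, resource, log):
    """Allow a call only if it is in scope and the credential is still live."""
    allowed = action in cred.scopes and not cred.is_expired()
    log.record(cred.agent_id, action, resource, allowed)
    return allowed
```

Under this sketch, a document-processing agent provisioned with only a `read:docs` scope would be denied a `delete:docs` call, and both the permitted and the denied attempt would appear in its audit trail for later review.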

The reason this matters now is deployment velocity. Agentic AI frameworks have made it faster to deploy agents than to govern them. A developer can stand up a functional multi-step agent in hours. Building the corresponding identity governance infrastructure takes longer. That gap widens every week agents are in production without it.

What to watch: The next enforcement signal here likely comes from regulators, not vendors. The EU AI Act and emerging U.S. federal guidance on AI system accountability both create compliance pressure around exactly this kind of access governance. Organizations deploying agentic systems in regulated industries (finance, healthcare, critical infrastructure) face the highest urgency. The question isn’t whether to implement machine identity governance for AI agents. It’s whether to do it proactively or reactively.

TJS take: The Darktrace report is a vendor document, and its statistics should be read as such. But the governance principle it articulates is correct and independently supported. The real signal in this story isn’t the survey percentage; it’s that a cybersecurity firm is publishing a report on AI agent identity governance at all. That’s a market signal. Enterprises that are still treating AI agents as application deployments rather than privileged machine identities have a governance gap that will only become more expensive to close.
