
The 80-to-1 Problem: Your AI Agents Are Insider Threats

Enterprises now run an average of 12 AI agents, and half of them operate in complete isolation. Machine identities outnumber humans by as much as 80 to 1, with 44% still authenticating via static API keys. Two landmark reports published on February 5, 2026 reveal an identity governance crisis that mirrors the SaaS sprawl disaster of the 2010s, except this time the ungoverned tools can act autonomously.


Dark data center corridor with hundreds of glowing blue and amber lights on server racks, representing autonomous AI agents watching from the shadows

Key Takeaways

  • 12 agents per enterprise, half in silos: Salesforce’s February 5 Connectivity Benchmark found the average company now runs 12 Artificial Intelligence (AI) agents, with 50% operating in complete isolation from each other and from governance frameworks.
  • Machine identities outnumber humans up to 80-to-1: Non-Human Identities (NHIs) from AI agents, service accounts, and API tokens now vastly outnumber human employees in enterprise environments, and legacy Identity and Access Management (IAM) systems cannot track them.
  • Only 23% have a formal governance strategy: The Cloud Security Alliance (CSA) found that just 23% of organizations have an enterprise-wide strategy for managing AI agent identities, and 84% doubt they could pass a compliance audit on agent behavior or access controls.
  • This is the 2010s SaaS sprawl crisis replaying at machine speed: The same pattern of uncontrolled adoption, shadow deployments, and deferred governance that created the SaaS management market is repeating with AI agents, except agents can execute actions, not just store data.

The Week the Masks Came Off

On February 5, 2026, two landmark reports dropped within hours of each other. Neither made the front page of any major tech publication.

Salesforce’s 11th annual Connectivity Benchmark Report, surveying 1,050 IT leaders across nine countries, revealed that the average enterprise now runs 12 AI agents, with that number projected to hit 20 by 2028. Half of those agents operate in complete isolation: no shared context, no coordinated workflows, no centralized oversight. They are autonomous programs, running across cloud environments, accessing sensitive data, and making decisions that affect real systems and real money.

The same day, the Cloud Security Alliance and Strata Identity published “Securing Autonomous AI Agents,” a survey of 285 IT and security professionals that exposed the identity plumbing underneath the hype. The findings were grim: only 23% of organizations have a formal, enterprise-wide strategy for managing AI agent identities. Forty-four percent still use static Application Programming Interface (API) keys to authenticate their agents. Eighty percent lack real-time visibility into which agents are active in their environment at any given moment.

Put those two reports together and the picture is stark: enterprises are deploying autonomous agents faster than they can track, govern, or secure them. The AI hype cycle has delivered a genuine productivity tool. It has also delivered the largest expansion of the corporate attack surface since the invention of the cloud.

The Identity Math That Should Terrify Every CISO

The core problem is not the agents themselves. It is the identities they carry.

Every AI agent needs credentials to operate. It needs API keys to call external services, service accounts to access internal databases, OAuth tokens to authenticate with cloud platforms, and permissions to read, write, and execute across enterprise systems. Each of these credentials constitutes a Non-Human Identity (NHI), a machine credential that behaves like an employee badge but without the human attached to it.

According to security researchers at Gradient Flow, NHIs now outnumber human employees by ratios as high as 80-to-1 in enterprise environments. The CSA report found that these identities are chronically over-permissioned: agents routinely receive broader access than they need because organizations lack the tooling to define granular, role-based permissions for autonomous systems.

The math is straightforward. If a company has 5,000 employees and an estimated 400,000 machine identities, and 44% of those identities authenticate using static API keys that never expire and never rotate, the attack surface is not measured in “endpoints.” It is measured in permanent, unmonitored access tokens.

Attack Surface = NHIs × P(over-permissioned) × P(static credential)

For a mid-size enterprise with conservative estimates:

400,000 × 0.90 × 0.44 = 158,400 persistent access vectors

That is not a vulnerability. That is a standing invitation.
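The back-of-the-envelope estimate above can be expressed in a few lines of code. The ratios are the figures cited in the reports; the helper function itself is illustrative, not from either report:

```python
def persistent_access_vectors(nhi_count: int,
                              p_over_permissioned: float,
                              p_static_credential: float) -> int:
    """Rough attack-surface estimate: the number of NHIs that are both
    over-permissioned and authenticated with static credentials."""
    return round(nhi_count * p_over_permissioned * p_static_credential)

# Mid-size enterprise with the article's conservative estimates:
# 400,000 machine identities, ~90% over-permissioned, 44% on static keys.
print(persistent_access_vectors(400_000, 0.90, 0.44))  # 158400
```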

SaaS Sprawl 2.0: Same Movie, Faster Projector

If this story sounds familiar, it should. The enterprise technology industry has lived through this exact pattern before.

In 2013, the average enterprise used 73 Software-as-a-Service (SaaS) applications. By 2015, “Shadow SaaS,” unauthorized tools adopted by individual teams without IT approval, had become the number one headache for Chief Information Security Officers (CISOs). Employees signed up for Dropbox, Slack, and dozens of project management tools because they solved immediate problems. Nobody coordinated. Nobody governed. Data scattered across platforms like seeds in the wind.

The market responded. Integration Platform as a Service (iPaaS) vendors like MuleSoft (later acquired by Salesforce for $6.5 billion) and SaaS management platforms like BetterCloud and Zylo emerged to bring order to the chaos. The lesson was expensive but clear: uncontrolled adoption always precedes a governance crisis that creates a new market category.

The AI agent cycle is replaying the same script, but with one critical difference. SaaS tools stored data. AI agents execute actions. A rogue Dropbox folder leaks documents. A rogue AI agent can traverse internal APIs, modify production databases, trigger financial transactions, and propagate changes across interconnected systems at machine speed. The blast radius is exponentially larger.

| Era | Avg. Tools/Agents | Governance Gap | Consequence |
| --- | --- | --- | --- |
| SaaS 2013 | 73 apps/company | Shadow SaaS, data silos | Data leaks, compliance failures |
| SaaS 2018 | 900+ apps/company | iPaaS and SaaS management emerge | $6.5B MuleSoft acquisition |
| AI Agents 2026 | 12 agents/company (projected 20 by 2028) | 77% lack formal identity strategy | First major agent breach (pending) |

The SaaS sprawl cycle took roughly five years from crisis to consolidation. The agent sprawl cycle is moving faster because agent deployments scale at software speed, not human onboarding speed. Salesforce’s own data shows the number of agents per enterprise is expected to grow 67% in two years, a compression that suggests the “governance reckoning” may arrive by late 2026 or early 2027.

The Three Failure Modes

The CSA and Salesforce data converge on three specific failure modes that make the current agent sprawl qualitatively different from past technology cycles.

Failure Mode 1: The Authentication Collapse

The most alarming finding in the CSA report is how agents authenticate. Among organizations deploying AI agents:

  • 44% use or plan to use static API keys
  • 43% use username/password combinations
  • 35% use shared service accounts

These are not modern authentication methods. They are the digital equivalent of leaving the office key under the doormat. Static API keys do not expire, do not rotate, and do not generate audit trails that trace actions back to specific agents or human sponsors. When one is compromised, the attacker gains persistent, silent access to every system that key unlocks.

Only 21% of organizations maintain a real-time agent registry, and just 28% can reliably trace agent actions back to specific humans or systems. This means that if an agent is compromised in February 2026, most organizations would not detect it for hours, days, or potentially weeks.
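The core difference between a static key and a short-lived credential can be sketched in a few lines. This is an illustrative toy, not any vendor's API; a real deployment would have an identity provider issue signed, expiring tokens tied to a workload identity:

```python
import secrets
import time

class ShortLivedToken:
    """Toy credential that expires and must be re-issued, unlike a
    static API key that stays valid (and exploitable) indefinitely."""

    def __init__(self, agent_id: str, ttl_seconds: int = 900):
        self.agent_id = agent_id
        self.value = secrets.token_urlsafe(32)   # random bearer secret
        self.expires_at = time.time() + ttl_seconds

    def is_valid(self) -> bool:
        return time.time() < self.expires_at

# A stolen token is only useful until its TTL elapses.
token = ShortLivedToken("invoice-agent-7", ttl_seconds=900)
print(token.is_valid())          # True for the next 15 minutes
token.expires_at = time.time() - 1
print(token.is_valid())          # False once expired
```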

Failure Mode 2: The Permissions Explosion

Traditional IAM was designed for a simple model: a human logs in, gets a role, and that role defines their access. AI agents break this model because they do not “log in” once. They instantiate, execute multi-step workflows across multiple systems, and may disappear within minutes. An ephemeral agent that spins up for a single task and self-terminates may never appear in a traditional security scan.

Security researchers at Gradient Flow describe this as “identity debt,” the accumulated, unresolved vulnerabilities in managing machine access that become unmanageable at agentic scale. The problem compounds because each new agent inherits broad permissions by default, since organizations lack the expertise or tooling to define least-privilege policies for autonomous systems that change behavior dynamically.
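One antidote to agents inheriting broad permissions by default is a default-deny scope check: anything not explicitly granted is refused. A minimal sketch, with a hypothetical policy format and agent names:

```python
# Hypothetical least-privilege policy: each agent gets an explicit
# allowlist of scopes; any scope not listed is denied by default.
POLICIES: dict[str, set[str]] = {
    "invoice-agent": {"billing:read", "billing:write"},
    "support-bot":   {"tickets:read", "tickets:write", "crm:read"},
}

def is_allowed(agent: str, scope: str) -> bool:
    """Default-deny: unknown agents and unlisted scopes are rejected."""
    return scope in POLICIES.get(agent, set())

print(is_allowed("invoice-agent", "billing:read"))      # True
print(is_allowed("invoice-agent", "payments:execute"))  # False
print(is_allowed("unknown-agent", "billing:read"))      # False
```

The point of the design is the empty-set default: an agent nobody registered has no access at all, rather than whatever a shared service account happens to carry.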

| Metric | Human Employees | AI Agents |
| --- | --- | --- |
| Over-permissioned rate | ~70% | ~90% |
| Authentication | SSO, MFA, biometrics | Static API keys (44%), passwords (43%) |
| Visibility | HR systems, Active Directory | 80% lack real-time registry |
| Audit trail | Login logs, session tracking | 72% cannot trace to human sponsor |

Failure Mode 3: The Ownership Vacuum

Who is responsible when an AI agent goes wrong? The CSA survey reveals a fragmented answer:

  • 39% say the security team owns agent governance
  • 32% say IT operations
  • 13% say a dedicated AI security function

That fragmentation is itself the vulnerability. When three different teams claim partial ownership, nobody has full accountability. The result is predictable: 84% of organizations doubt they could pass a compliance audit on agent behavior or access controls. In regulated industries like finance and healthcare, where audit failures carry legal consequences, this doubt translates directly into liability exposure. (For more on the legal dimensions of autonomous agent liability, see the related analysis in The Autonomous Tort: Why AI Agents Are Uninsurable.)

The Steel Man: Why the Optimists Are Not Wrong

To be fair, the agent governance crisis is not hopeless, and the “everything is on fire” narrative requires qualification.

Microsoft’s Entra Agent ID, announced in January 2026, represents the most credible attempt to bring IAM into the agentic era. It assigns unique workload identities to agents, enforces human sponsorship (every agent must be traceable to a responsible person), and applies Zero Trust Conditional Access policies. Commerzbank has already scaled a 30,000-conversation banking avatar using Entra-based governance. EisnerAmper, the accounting firm, has built an AI-powered audit agent on Azure AI Foundry with Entra as the identity control plane.

These are real success stories in regulated industries. They demonstrate that agent governance is solvable when the platform vendor controls the entire stack.

But that is precisely the limitation. Microsoft’s governance works for Microsoft’s ecosystem: Copilot Studio, Azure Foundry, Microsoft 365. The Salesforce data shows that enterprises deploy agents from an average of three different development sources: 36% prebuilt SaaS, 34% embedded platform agents, and 30% custom-built. The governance crisis is not within any single platform. It is at the seams between platforms, where agents from OpenAI, Anthropic, Google, ServiceNow, and custom builds interact with each other and with enterprise data through ungoverned APIs.

The boring hypothesis is worth stating clearly: most organizations are not incompetent. They are using 2020-era identity tooling for a 2026 problem. This is a lag, not a conspiracy. But lags in security have consequences that lags in productivity do not.

The Second-Order Effects

The downstream consequences of the agent identity crisis extend well beyond the first breach.

First order: An over-permissioned agent is compromised and exfiltrates sensitive data. This is the obvious scenario, and it will happen. Gartner has projected that costs from AI agent abuses will be four times higher than those from traditional multi-agent systems through 2027.

Second order: The breach triggers a compliance earthquake. Expect the National Institute of Standards and Technology (NIST) and SOC 2 frameworks to add agent-specific audit requirements by late 2026 or early 2027. Every Information Security (InfoSec) team will suddenly need to inventory, classify, and audit every AI agent identity in their environment, a task that most cannot perform as of early 2026.

Third order: The compliance burden prices mid-market companies out of enterprise AI. Fortune 500 firms have the budget for Entra Agent ID, dedicated AI security teams, and Deloitte governance engagements. A 200-person manufacturing company using three different AI agent platforms does not. The governance tax accelerates the concentration of AI capability in the largest organizations, reinforcing a structural advantage that compounds over time.

This is the pattern that played out with SaaS governance, cloud compliance, and data privacy regulation. Each wave of technology forces a wave of governance, and each governance wave favors incumbents who can absorb the cost. The question is not whether this will happen; it is how fast.

What This Means for You

If you are a CISO or IT leader:

  • Inventory immediately. The fact that 80% of organizations cannot see their active agents is the single most dangerous data point in the CSA report. You cannot secure what you cannot see. Start with an agent registry before buying governance platforms.
  • Kill static API keys. The 44% figure is indefensible. Migrate to short-lived tokens with automatic rotation. This is the lowest-hanging fruit with the highest impact.
  • Designate a human sponsor for every agent. Only 28% can trace agent actions to humans. This is not a technology problem; it is a policy problem with a same-day fix.
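A first-pass agent registry does not require buying a governance platform; even a flat inventory that flags the two policy violations above (static API keys and missing human sponsors) is a start. A minimal sketch with hypothetical field and agent names:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    platform: str          # e.g. "custom", "prebuilt SaaS", "embedded"
    credential_type: str   # e.g. "static_api_key", "short_lived_token"
    human_sponsor: str     # empty string = no accountable owner

def flag_violations(registry: list[AgentRecord]) -> list[str]:
    """Return IDs of agents violating the two same-day-fix policies:
    no static API keys, and every agent has a human sponsor."""
    return [r.agent_id for r in registry
            if r.credential_type == "static_api_key" or not r.human_sponsor]

registry = [
    AgentRecord("invoice-agent", "custom", "short_lived_token", "j.doe"),
    AgentRecord("legacy-etl-bot", "custom", "static_api_key", ""),
]
print(flag_violations(registry))  # ['legacy-etl-bot']
```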

If you are an enterprise buyer evaluating AI agent platforms:

  • Ask the identity question first. Before evaluating an agent’s productivity features, ask: “How does this agent authenticate, what permissions does it require, and can its actions be audited in real time?” If the vendor cannot answer clearly, walk away.
  • Prefer platforms with built-in governance over assembling governance after the fact. The historical lesson from SaaS sprawl is that bolt-on governance costs three to five times more than native governance.

If you are a security vendor or startup founder:

  • The agent identity governance market is about to undergo the same explosive growth that SaaS management saw in 2016-2019. The CSA report is the starting gun. The winners will be those who build cross-platform agent identity solutions, the equivalent of iPaaS for the agentic era.

The Clock Is Ticking

The enterprise AI agent market is in the exact position that SaaS was in 2013: real productivity gains driving adoption faster than governance can keep pace. The shadow AI era that began with unauthorized ChatGPT usage in 2023, which has already cost enterprises an average of $670,000 per breach incident, is evolving into something more dangerous: shadow agents with autonomous execution capability and no identity controls.

Somewhere in a Fortune 500 company right now, an AI agent is authenticating with a static API key that was provisioned six months ago by a developer who has since left the company. That agent has read access to a customer database, write access to a ticketing system, and execution permissions on a payment API. Nobody knows it is running. Nobody is watching what it does.

The 80-to-1 ratio is not a future risk. It is the current reality. The question is whether the first major agent breach arrives before or after the governance frameworks catch up. History suggests “before.” The SaaS governance market was built on the wreckage of data breaches that should never have happened. The agent governance market will likely be built the same way.

The only difference is that this time, the ungoverned tools can think.

