Cybersecurity

What to do about unsecured AI agents – the cyberthreat no one is talking about

Cropped shot of a young computer programmer looking through source code; AI agents

Many companies are quickly moving AI agents from prototype into production across various functions. Image: Getty Images/PeopleImages

Eric Kelleher
President & Chief Operating Officer, Okta, Okta
This article is part of: Centre for Cybersecurity
  • Businesses are deploying artificial intelligence (AI) agents across functions from sales and research to content creation and finance.
  • These systems can make decisions and create plans autonomously but they also open organizations to new cybersecurity threats.
  • Whether starting from scratch or working with pre-built tools, organizations must build security, interoperability and visibility into their AI agents.

The modern workforce is undergoing a rapid transformation. Organizations are deploying artificial intelligence (AI) agents across an increasing number of business functions – from development to sales and customer service, research, content creation and finance.

These autonomous AI systems can make decisions and create plans to achieve complex tasks with minimal supervision by people. And companies are quickly moving these AI agents from prototype into production.

As a result of this accelerated deployment, the volume of non-human and agentic identities is now expected to exceed 45 billion by the end of this year. That’s more than 12 times the approximate number of humans in the global workforce today.

Despite this explosive growth, only 10% of respondents to an Okta survey of 260 executives report having a well-developed strategy for managing their non-human and agentic identities. This poses a significant security concern, considering 80% of breaches involve some form of compromised or stolen identity. And generative AI escalates this threat by enabling threat actors to conduct even more sophisticated phishing and social engineering attacks.

Have you read?

As businesses race to deploy agents, it’s critical they establish identity controls and prioritize security from the start. This will help organizations avoid significant risks of over-permissioned and potentially unsecured AI agents.

To protect against the speed and complexity of AI, businesses need a new approach: An identity security fabric. This new category secures every identity – human, non-human and agentic – across every identity use case, application and resource. This approach is key to protecting businesses in a future driven by AI.

How AI attacks target agents

Threat actors have been quick to leverage AI for malicious activity, using it to make existing threats more dangerous and to manufacture new, more personalized ones. Generative AI is already powering malware, deepfakes, voice cloning and phishing attacks.

The advent of AI agents introduces a new layer of complexity to the enterprise security landscape. Trained on valuable and potentially sensitive company data, these agents can become new attack vectors if they’re not built, deployed, managed and secured properly.

Organizations are incentivized to grant agents access to more data and resources to make them more effective, but with expanded access comes increased business risk. Threat actors could manipulate AI agent behaviour through a prompt injection attack, for example, where they use probing questions to attempt to trick the agent into sharing privileged information.

The more access an AI agent has, the easier it is for threat actors to infiltrate a company. This can potentially lead to data leaks, unauthorized actions or a full system compromise.

The agentic AI identity problem

Because AI agents need to access user-specific data and workflows, each one requires a unique identity. Without sufficient controls, these identities stand to have too much access and autonomy.

As "human" as these agents may sometimes seem, managing their identity is fundamentally different from managing that of a human user. Non-human and agentic identities have several distinctions.

Today, when a new employee onboards at a company, there’s a clear starting point for when that user needs access to company applications and data. They can use passwords, biometrics or multi-factor authentication (MFA) to log in to an account and validate who they are. But AI agents can’t be authenticated like human employees. Instead, they rely on things like application programming interface (API) tokens or cryptographic certificates to validate themselves.

The lifecycle of an AI agent is also uniquely non-human. Agents have dynamic lifespans, requiring extremely specific permissions for limited periods of time and often needing access to sensitive information. Organizations must be prepared to rapidly provision and de-provision access for agents.

Agents can also be more difficult to trace and log than their human counterparts, which complicates post-breach audits and remediation efforts.

These factors collectively make it critical for security teams to govern AI agents and their permissions carefully.

Preparing for the agentic AI future

Most organizations are still early in their agentic AI journeys. This presents an opportunity to establish proper identity and security protocols from the outset. For organizations deploying third-party agents, there’s no better time than during adoption to lay the groundwork for secure identity. When building agents from the ground up, identity should be prioritized during development.

Whether an organization is starting from scratch or working with pre-built tools, there are several key identity considerations for autonomous AI agents:

1. Security

The autonomous nature of AI agents means they can chain together permissions to access resources they shouldn’t. Security teams need granular access policies to ensure agents aren’t sharing any sensitive information. AI agents should only have access and authorization to resources for certain periods of time.

2. Interoperability

Organizations must ensure AI agents align to standards for interoperability. Agents are more powerful when they can connect with other agents and AI systems, but teams can’t sacrifice security along the way. Standards like Model Context Protocol (MCP) provide a framework for agents to securely connect to external tools and data sources.

3. Visibility

Without clear insights into the actions and access patterns of these agents, anomalous behaviours can go unnoticed, potentially leading to security vulnerabilities. To mitigate these risks, organizations need comprehensive monitoring and auditing capabilities to track agent activity and maintain control.

Loading...

Securing the road ahead for AI agents

Organizations are still only scratching the surface of the agentic AI future. And it’s important to remember that building and deploying an AI agent is only the first step in the security journey.

As the number of use cases continues to increase, so will the responsibilities of organizations’ security teams. It takes an ongoing commitment to visibility, governance and control to ensure AI agents are working securely and as intended.

With a strong foundation of secure identity, organizations can begin safely scaling their agentic deployments and empower more users to reap the benefits and unlock the business potential of AI tools.

Don't miss any update on this topic

Create a free account and access your personalized content collection with our latest publications and analyses.

Sign up for free

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

Stay up to date:

Cybersecurity

Related topics:
Cybersecurity
Business
Emerging Technologies
Share:
The Big Picture
Explore and monitor how Cybersecurity is affecting economies, industries and global issues
World Economic Forum logo

Forum Stories newsletter

Bringing you weekly curated insights and analysis on the global issues that matter.

Subscribe today

More on Cybersecurity
See all

Fighting Cyber-Enabled Fraud: A Systemic Defence Approach

Singapore releases quantum readiness tools, and other cybersecurity news

About us

Engage with us

Quick links

Language editions

Privacy Policy & Terms of Service

Sitemap

© 2025 World Economic Forum