#Infosec2025: Concern Grows Over Agentic AI Security Risks

Agentic AI and AI tools that connect to each other without human oversight pose increasing security risks, according to experts at Infosecurity Europe.
Agentic AI systems, or AI agents, operate with a high degree of autonomy. An agentic system might choose the AI model it uses, pass data or results to another AI tool, or even take a decision without human approval.
Agentic systems also operate at a quicker pace than earlier-generation tools based on large language models (LLMs), because they work without waiting for a human to give them instructions or prompts. They can learn as they go, adapting the models and prompts they use.
Difficulties can arise, though, when organizations chain together AI components, such as generative AI tools or chatbots, without additional checks, or allow AI agents to make autonomous decisions. This is already taking place within IT, in areas such as writing code and configuring systems.
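In practice, the missing check can be as simple as inspecting whatever one agent hands to the next before it is acted on. The sketch below is a minimal Python illustration, assuming a two-agent pipeline whose agents expose a hypothetical run method; the markers and size limit are placeholders, not any specific product's controls.

```python
# Minimal sketch of an inter-agent check, assuming a simple two-agent pipeline.
# The agent objects (with a hypothetical .run method) and the validation rules
# are illustrative placeholders, not a specific vendor's API.

SUSPICIOUS_MARKERS = ("ignore previous instructions", "system prompt", "BEGIN PRIVATE KEY")

def validate_handover(payload: str) -> str:
    """Check one agent's output before it is passed to the next agent."""
    lowered = payload.lower()
    for marker in SUSPICIOUS_MARKERS:
        if marker.lower() in lowered:
            raise ValueError(f"Handover blocked: suspicious content ({marker!r})")
    if len(payload) > 10_000:
        raise ValueError("Handover blocked: payload exceeds expected size")
    return payload

def run_pipeline(research_agent, coding_agent, task: str) -> str:
    """Chain two agents, with an explicit check at the handover point."""
    findings = research_agent.run(task)     # first agent gathers context
    checked = validate_handover(findings)   # intermediate control, not a silent pass-through
    return coding_agent.run(checked)        # second agent acts on validated input only
```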
This raises the risk that organizations’ AI deployments are moving faster than security controls.
According to research by consulting firm EY, just 31% of organizations say their AI implementation is fully mature. Further, EY found AI governance in companies lags behind AI innovation.
This is coming to the fore with agentic AI, which can magnify the risks organizations have already identified with LLMs.
Agentic AI systems are subject to the same risks as LLMs, including prompt injection, poisoning, bias and inaccuracies.
However, the problems can worsen when one agent passes inaccurate, biased or manipulated data to another. Even a fairly low error rate, of a few percentage points or less, can compound into a significant one as it propagates across subsystems.
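A back-of-the-envelope calculation shows why. Assuming errors at each step are roughly independent, the chance that at least one handover goes wrong grows quickly with the length of the chain:

```python
# Illustration: how a small per-step error rate compounds across a chain of
# agents, assuming errors at each step are roughly independent.

def chain_error_rate(per_step_error: float, steps: int) -> float:
    """Probability that at least one step in the chain produces an error."""
    return 1 - (1 - per_step_error) ** steps

for steps in (1, 3, 5, 10):
    print(f"{steps:>2} steps at 2% per-step error -> "
          f"{chain_error_rate(0.02, steps):.1%} chance of at least one error")
# 1 step  ->  2.0%
# 3 steps ->  5.9%
# 5 steps ->  9.6%
# 10 steps -> 18.3%
```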
Security is worse still if AI tools are connected to data sources outside the enterprise’s control.
“Instead of AI talking directly to humans, it’s talking to other AI systems,” explained Dr Andrea Isoni, chief AI officer at AI Technologies.
“We need an intermediate AI security layer, especially if you are collecting or ingesting information from outside.”
“The more we use the technology, the more it is a weak spot that can be exploited,” Isoni added.
The rapid development of agentic AI means that security teams must work quickly to identify and report potential security risks.
EY’s research found that 76% of companies are already using agentic AI or plan to do so within the year. Meanwhile, just 56% said they were moderately or fully familiar with the risks.
“AI implementation is unlike the deployment of prior technologies,” said Cathy Cobey, EY Global Responsible AI Leader, Assurance, commenting on the research.
“It’s not a ‘one-and-done’ exercise but a journey, where your AI governance and controls need to keep pace with investments in AI functionality.”
Boards will want to see measures to secure AI use, including agentic AI, EY says.
Breaches and risks
According to Rudy Lai, director of security for AI at Snyk, the rapid uptake of agentic AI is pushing organizations to tighten up their controls and policies, as well as to examine whether agentic systems increase their attack surface.
“Agentic AI is not just in the lab anymore,” Lai noted.
Code development is one area where agents are being used.
“If agents are writing code, that needs to be secure,” Lai warned.
“You need to test agent-generated code, as well as giving agents the right guardrails.”
Lai points out that developers are using agentic AI because it speeds up code production. Elsewhere, enterprises are using agents to improve customer service and automation.
Whereas earlier-generation customer service bots would run out of answers and be forced to pass users to a human agent, agentic AI systems are more likely to fix problems themselves.
“AI agents are rapidly reshaping how businesses interact with customers, automate operations, and deliver services,” said Eric Schwake, director of cybersecurity strategy at Salt Security.
But, he says, this depends on both the developers of AI tools and the IT teams deploying them ensuring that the APIs linking the AI tools together are also secure.
“These interfaces are not just technical connectors, they provide the lifelines through which AI agents access data, execute tasks and integrate across platforms. Without robust API security, even the most advanced AI becomes a vulnerability rather than an asset,” Schwake explained.
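What robust API security looks like at this level can be sketched briefly: authenticate the calling agent and validate its payload before anything is executed. The token store, request schema and function names below are hypothetical placeholders, not Salt Security's or any other vendor's implementation.

```python
# Minimal sketch of hardening an agent-facing API endpoint, assuming a small
# internal service; the token store, field schema and names are hypothetical.

import hmac
import json

VALID_TOKENS = {"agent-orders": "s3cr3t-token"}         # placeholder credential store
EXPECTED_FIELDS = {"customer_id": str, "action": str}   # minimal request schema

def authorize(agent_id: str, token: str) -> bool:
    expected = VALID_TOKENS.get(agent_id)
    # constant-time comparison to avoid leaking token contents via timing
    return expected is not None and hmac.compare_digest(expected, token)

def validate_request(raw_body: str) -> dict:
    body = json.loads(raw_body)
    for field, field_type in EXPECTED_FIELDS.items():
        if not isinstance(body.get(field), field_type):
            raise ValueError(f"Invalid or missing field: {field}")
    return body

def handle_agent_call(agent_id: str, token: str, raw_body: str) -> dict:
    """Authenticate the calling agent and validate its payload before acting."""
    if not authorize(agent_id, token):
        raise PermissionError("Unknown agent or bad credentials")
    return validate_request(raw_body)
```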
Read more about the risks of agentic AI: Gartner Warns Agentic AI Will Accelerate Account Takeovers
As Snyk’s Lai warns, the risk with agentic AI systems comes not just from the components themselves, but from how those components are used together. “The security risk is in the gaps,” he said.
Lai suggests AI red teaming to test that AI implementations are secure, using tools such as AI bills of materials to track what technology is being used where, and documenting the connections and handovers between AI agents.
“CISOs don’t have visibility,” Lai said.
“That is why we have AI bills of materials that allow you to see the models and datasets you are using and dependencies in your coding and applications.”
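As a rough illustration of what such an AI bill of materials might capture, the sketch below records models, datasets and agent-to-agent handovers in a simple inventory and flags any handover to an agent that has not been declared. The agent names, models and datasets are hypothetical examples, not a standardized AI-BOM format.

```python
# Minimal sketch of an AI bill of materials (AI-BOM), assuming a simple in-house
# inventory; the model names, datasets and agent links are hypothetical examples.

AI_BOM = {
    "agents": [
        {
            "name": "support-triage-agent",
            "model": "gpt-4o",                  # hypothetical model choice
            "datasets": ["support-tickets-2024"],
            "hands_off_to": ["refund-agent"],   # documented agent-to-agent handovers
        },
        {
            "name": "refund-agent",
            "model": "internal-policy-llm",
            "datasets": ["refund-policy-docs"],
            "hands_off_to": [],
        },
    ],
}

def undocumented_handovers(bom: dict) -> list[str]:
    """Flag handovers that point at agents not declared in the inventory."""
    known = {agent["name"] for agent in bom["agents"]}
    return [
        f"{agent['name']} -> {target}"
        for agent in bom["agents"]
        for target in agent["hands_off_to"]
        if target not in known
    ]

print(undocumented_handovers(AI_BOM))  # [] when every handover is accounted for
```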