
ZDNET's key takeaways
- Researchers discovered exploitable agentic AI technologies from ServiceNow and Microsoft.
- Securing agentic AI is already proving to be highly challenging.
- Cybersecurity pros should adopt a "least privilege" posture for AI agents.
Could agentic AI turn out to be every threat actor's fantasy? I suggested as much in my recent "10 ways AI can inflict unprecedented harm in 2026."
Once deployed on corporate networks, AI agents with broad access to sensitive systems of record can enable the kind of lateral movement across an organization's IT estate that most threat actors dream of.
Also: 10 ways AI can inflict unprecedented harm in 2026
How 'lateral movement' nets threat actors escalated privileges
According to Jonathan Wall, founder and CEO of Runloop -- a platform for securely deploying AI agents -- lateral movement should be of grave concern to cybersecurity professionals in the context of agentic AI. "Let's say a malicious actor gains access to an agent, but it doesn't have the necessary permissions to go touch some resource," Wall told ZDNET. "If, through that first agent, a malicious agent is able to connect to another agent with a [better] set of privileges to that resource, then he will have escalated his privileges through lateral movement and potentially gained unauthorized access to sensitive data."
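The escalation path Wall describes can be sketched in a few lines of Python. Everything here is hypothetical -- the `Agent` class, the agent names, and the permission model are illustrative stand-ins, not any vendor's actual API -- but it shows how a compromised, unprivileged agent can still reach a protected resource by delegating to a better-privileged peer:

```python
# Hypothetical model of agent-to-agent lateral movement. The classes and
# names are illustrative only, not a real agent platform's API.

class Agent:
    def __init__(self, name, permissions, connections=()):
        self.name = name
        self.permissions = set(permissions)   # resources this agent may touch
        self.connections = list(connections)  # peers it can delegate to

    def read(self, resource):
        # Direct access: allowed only if this agent holds the permission.
        if resource in self.permissions:
            return f"{self.name} read {resource}"
        # Lateral movement: a compromised agent simply asks a
        # better-privileged peer to fetch the resource on its behalf.
        for peer in self.connections:
            try:
                return peer.read(resource)
            except PermissionError:
                continue
        raise PermissionError(f"{self.name} cannot reach {resource}")

hr_agent = Agent("hr-agent", {"employee-records"})
helpdesk = Agent("helpdesk-agent", set(), connections=[hr_agent])

# The attacker controls only the unprivileged helpdesk agent, yet still
# reaches the sensitive records through its connection to hr_agent.
print(helpdesk.read("employee-records"))
```

The point of the sketch is that the attacker never needs hr_agent's credentials; the connection itself is the escalated privilege.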
Meanwhile, the idea of agentic AI is so new that many of the workflows and platforms for developing and securely provisioning those agents have not yet considered all the ways a threat actor might exploit their existence. It's eerily reminiscent of software development's early days, when few programmers knew how to code software without leaving gaping holes through which hackers could drive a proverbial Mack truck.
Also: AI's scary new trick: Conducting cyberattacks instead of just helping out
Google's cybersecurity leaders recently identified shadow agents as a critical concern. "By 2026, we expect the proliferation of sophisticated AI agents will escalate the shadow AI problem into a critical 'shadow agent' challenge. In organizations, employees will independently deploy these powerful, autonomous agents for work tasks, regardless of corporate approval," wrote the experts in Google's Mandiant and threat intelligence organizations. "This will create invisible, uncontrolled pipelines for sensitive data, potentially leading to data leaks, compliance violations, and IP theft."
Meanwhile, 2026 is hardly out of the gates and, judging by two separate cybersecurity cases having to do with agentic AI -- one involving ServiceNow and the other Microsoft -- the agentic surface of any IT estate will likely become the juicy target that threat actors are seeking -- one that's full of easily exploited lateral opportunities.
Since the two agentic AI-related issues -- both involving agent-to-agent interactions -- were first discovered, ServiceNow has plugged its vulnerabilities before any customers were known to have been impacted, and Microsoft has issued guidance to its customers on how to best configure its agentic AI management and control platform for tighter agent security.
BodySnatcher: 'Most severe AI-driven vulnerability to date'
Earlier this month, AppOmni Labs chief of research Aaron Costello disclosed for the first time a detailed description of how he discovered an agentic AI vulnerability on ServiceNow's platform, one that held such potential for harm that AppOmni gave it the name "BodySnatcher."
"Imagine an unauthenticated attacker who has ne'er logged into your ServiceNow lawsuit and has nary credentials, and is sitting halfway crossed the globe," wrote Costello successful a station published to the AppOmni Lab's website. "With lone a target's email address, the attacker tin impersonate an head and execute an AI cause to override information controls and make backdoor accounts with afloat privileges. This could assistance astir unlimited entree to everything an enactment houses, specified arsenic lawsuit Social Security numbers, healthcare information, fiscal records, oregon confidential intelligence property." (AppOmni Labs is the menace quality probe limb of AppOmni, an endeavor cybersecurity solution provider.)
Also: Moltbot is a security nightmare: 5 reasons to avoid using the viral AI agent right now
The vulnerability's severity cannot be overstated. Whereas the vast majority of breaches involve the theft of one or more highly privileged digital credentials (credentials that afford threat actors access to sensitive systems of record), this vulnerability -- requiring only the target's easily acquired email address -- left the front door wide open.
"BodySnatcher is the astir terrible AI-driven vulnerability uncovered to date," Costello told ZDNET. "Attackers could person efficaciously 'remote controlled' an organization's AI, weaponizing the precise tools meant to simplify the enterprise."
"This was not an isolated incident," Costello noted. "It builds upon my erstwhile research into ServiceNow's Agent-to-Agent find mechanism, which, successful a astir textbook explanation of lateral question risk, elaborate however attackers tin instrumentality AI agents into recruiting much almighty AI agents to fulfill a malicious task."
Researchers a step ahead of hackers on BodySnatcher
Fortunately, this was one of the better examples of a cybersecurity researcher discovering a severe vulnerability before threat actors did.
"At this time, ServiceNow is unaware of this contented being exploited successful the chaotic against lawsuit instances," noted ServiceNow successful a January 2026 post regarding the vulnerability. "In October 2025, we issued a information update to lawsuit instances that addressed the issue," a ServiceNow spokesperson told ZDNET.
Also: Businesses are deploying AI agents faster than security protocols can keep up, Deloitte says
According to the same post, ServiceNow recommends "that customers promptly apply an appropriate security update or upgrade if they have not already done so." That advice, according to the spokesperson, is for customers who self-host their instances of ServiceNow. For customers using the cloud (SaaS) version operated by ServiceNow, the security update was automatically applied.
Microsoft: 'Connected Agents' default is a feature, not a bug
In the case of the Microsoft agent-to-agent issue (Microsoft views it as a feature, not a bug), the backdoor opening appears to have been similarly discovered by cybersecurity researchers before threat actors could exploit it. In this case, Google News alerted me to a CybersecurityNews.com headline that stated, "Hackers Exploit Copilot Studio's New Connected Agents Feature to Gain Backdoor Access." Fortunately, the "hackers" in this case were ethical white-hat hackers working for Zenity Labs. "To clarify, we did not observe this being exploited in the wild," Zenity Labs co-founder and CTO Michael Bargury told ZDNET. "This flaw was discovered by our research team."
This caught my attention because I'd recently reported on the lengths to which Microsoft was going to make it possible for all agents -- whether built with Microsoft development tools like Copilot Studio or not -- to get their own human-like managed identities and credentials with the help of the Agent ID feature of Entra, Microsoft's cloud-based identity and access management solution.
Why is something like that necessary? Between the advertised productivity boosts associated with agentic AI and executive pressure to make organizations more profitable through AI, organizations are expected to employ many more agents than people in the near future. For example, IT research firm Gartner told ZDNET that by 2030, CIOs expect that 0% of IT work will be done by humans without AI, 75% will be done by humans augmented with AI, and 25% will be done by AI alone.
In response to the anticipated sprawl of agentic AI, the key players in the identity industry -- Microsoft, Okta, Ping Identity, Cisco, and the OpenID Foundation -- are offering solutions and recommendations to help organizations tame that sprawl and prevent rogue agents from infiltrating their networks. In my research, I also learned that any agents forged with Microsoft's development tools, such as Copilot Studio or Azure AI Foundry, are automatically registered in Entra's Agent Registry.
So, I wanted to find out how agents forged with Copilot Studio -- agents that theoretically had their own credentials -- were somehow exploitable in this hack. Theoretically, the whole point of registering an identity is to easily track that identity's activity -- whether legitimately directed or misguided by threat actors -- on the corporate network. It seemed to me that something was slipping through the very agentic security net Microsoft was trying to put in place for its customers. Microsoft even offers its own security agents whose job it is to run around the corporate network like white blood cells tracking down any invasive species.
As it turns out, an agent built with Copilot Studio has a "connected agent" feature that allows other agents, whether registered with the Entra Agent Registry or not, to laterally connect to it and leverage its knowledge and capabilities. As reported in CybersecurityNews, "According to Zenity Labs, [white hat] attackers are exploiting this gap by creating malicious agents that connect to legitimate, privileged agents, particularly those with email-sending capabilities or access to sensitive business data." Zenity has its own post on the subject, appropriately titled "Connected Agents: The Hidden Agentic Puppeteer."
Even worse, CybersecurityNews reported that "By default, [the Connected Agents feature] is enabled on all new agents in Copilot Studio." In other words, when a new agent is created in Copilot Studio, it is automatically enabled to receive connections from other agents. I was incredibly surprised to read this, given that two of the three pillars of Microsoft's Secure Future Initiative are "Secure by Default" and "Secure by Design." I decided to check with Microsoft.
Also: AI agents are already causing disasters - and this hidden threat could derail your safe rollout
"Connected Agents alteration interoperability betwixt AI agents and endeavor workflows," a Microsoft spokesperson told ZDNET. "Turning them disconnected universally would interruption halfway scenarios for customers who trust connected cause collaboration for productivity and information orchestration. This allows power to beryllium delegated to IT admins." In different words, Microsoft doesn't presumption it arsenic a vulnerability. And Zenity's Bargury agrees. "It isn't a vulnerability," helium told ZDNET. "But it is an unfortunate mishap that creates risk. We've been moving with the Microsoft squad to assistance thrust a amended design."
Even after I suggested to Microsoft that this might not be secure by default or design, Microsoft was steadfast and recommended that "for any agent that uses unauthenticated tools or accesses sensitive knowledge sources, disable the Connected Agents feature before publishing [an agent]. This prevents exposure of privileged capabilities to malicious agents."
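A "secure by default" alternative to connection-by-default is easy to sketch. The snippet below is a hypothetical policy check -- nothing in it is Copilot Studio's real API or configuration format -- showing the inverse posture: inbound agent connections are refused unless the publisher explicitly allowlists the calling agent:

```python
# Hypothetical deny-by-default policy for inbound agent connections.
# Agent names and the allowlist structure are illustrative assumptions,
# not any vendor's actual configuration.

ALLOWED_PEERS = {
    "invoice-bot": {"audit-agent"},  # only audit-agent may call invoice-bot
    # agents absent from this map accept no inbound connections at all
}

def may_connect(caller: str, target: str) -> bool:
    """Deny by default; permit only explicitly allowlisted caller pairs."""
    return caller in ALLOWED_PEERS.get(target, set())

assert may_connect("audit-agent", "invoice-bot") is True
assert may_connect("rogue-agent", "invoice-bot") is False
assert may_connect("rogue-agent", "hr-agent") is False  # unlisted target
```

Under this posture, a newly published agent exposes nothing to peers until an administrator deliberately opens a connection -- the opposite of shipping with Connected Agents enabled.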
Agentic AI conversations between agents are hard to monitor
I also inquired about the ability to monitor agent-to-agent activity, with the idea that perhaps IT admins could be alerted to potentially malicious interactions or communications.
"Secure usage of agents requires knowing everything they do, truthful you tin analyze, monitor, and steer them distant from harm," said Bargury. "It has to commencement with elaborate tracing. This uncovering spotlights a large unsighted spot [in however Microsoft's connected agents diagnostic works]."
The response from a Microsoft spokesperson was that "Entra Agent ID provides an identity and governance path, but it does not, on its own, produce alerts for every cross-agent exploit without external monitoring configured. Microsoft is continually expanding protections to give defenders more visibility and control over agent behavior to close these kinds of exploits."
When confronted with the idea of agents that were open to connection by default, Runloop's Wall recommended that organizations should always adopt a "least privilege" posture when developing AI agents or using canned, off-the-shelf ones. "The principle of least privilege basically says that you start off in any kind of execution environment giving an agent access to almost nothing," said Wall. "And then, you only add privileges that are strictly necessary for it to do its job."
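Wall's advice translates directly into code. This minimal sketch -- the class, agent name, and permission strings are all hypothetical illustrations, not a real framework -- starts an agent with an empty grant set and refuses anything not deliberately added:

```python
# Hypothetical least-privilege wrapper for an agent's capabilities, per
# Wall's advice: start from nothing, grant only what the job requires.

class LeastPrivilegeAgent:
    def __init__(self, name):
        self.name = name
        self.granted = set()  # starts with access to almost nothing

    def grant(self, permission, justification):
        # Every grant should be deliberate and traceable to a business need.
        print(f"granting {permission!r} to {self.name}: {justification}")
        self.granted.add(permission)

    def invoke(self, capability):
        # Deny by default: anything not explicitly granted is refused.
        if capability not in self.granted:
            raise PermissionError(f"{self.name} lacks {capability!r}")
        return f"{self.name} used {capability}"

agent = LeastPrivilegeAgent("invoice-bot")
agent.grant("read:invoices", "needed to summarize monthly invoices")

print(agent.invoke("read:invoices"))  # permitted: explicitly granted
try:
    agent.invoke("send:email")        # refused: never granted
except PermissionError as err:
    print("blocked:", err)
```

Because the agent starts with nothing, a threat actor who hijacks it inherits only the narrow grants it actually needed, which limits the lateral movement described earlier.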
Also: How Microsoft Entra aims to keep your AI agents from running wild
Sure enough, I looked back at the interview I did with Microsoft corporate vice president of AI Innovations, Alex Simons, for my coverage of the improvements the company made to its Entra IAM platform to support agent-specific identities. In that interview, where he described Microsoft's objectives for managing agents, Simons said that one of the three challenges they were looking to solve was "to manage the permissions of those agents and make sure that they have a least privilege model where those agents are only allowed to do the things that they should do. If they start to do things that are weird or unusual, their access is automatically cut off."
Of course, there's a big difference between "can" and "do," which is why, in the name of least-privilege best practices, all agents should, as Wall suggested, start out without the ability to receive inbound connections and then be opened up from there as necessary.
