ZDNET's key takeaways
- Not all "agentic AI" tools are genuinely agentic systems.
- Poor prompts and rogue agents can cascade into failures.
- Focus on measurable outcomes, not hype or ambition.
Imagine you're a chief executive. Your AI strategy task force has just presented you with two strategic options.
The first one is safe. You can use agentic AI to cut overhead and save 10% of overall human capital costs.
The second choice is daring. You can increase growth tenfold by using agentic AI to transform your company's operations.
Also: AI agents are fast, loose, and out of control, MIT study finds
The first choice will barely move the needle, but will help the AI initiative pay for itself. The second choice could blow the doors off your numbers and make you a legend in your board's eyes. It could also get you fired.
Know that the superlatives are off the charts. KPMG estimates that agentic AI will unlock $3 trillion in annual productivity gains. Accenture makes the case that agentic AI is "no less than a new kind of capital," and "marks a shift in economic history." Last fall, Gartner said, "organizations have a significant three- to six-month window to define their agentic AI product strategy, as the industry is at an inflection point."
So, what do you do?
Risk factors
Gartner may counsel that you need to take action right now. Accenture advises you to go for 10x growth wins rather than 10% cost-savings wins. My advice is to be chill. While there is undoubtedly a ton of upside to agentic AI initiatives, jumping in without a solid strategy can result in failure.
Also: 5 ways to use AI when your budget is tight
As it turns out, Gartner has a stat for that, too. The research said, "Over 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, unclear business value, or inadequate risk controls."
There are other reasons for these failures. Gartner said that most early-stage projects are experiments or proofs of concept, which is as it should be. But these sorts of tests are just that. Tests are not guaranteed to succeed. That's the point.
1. AI washing
On the other hand, organizations are often led astray by their vendors. Many vendors, jumping on the AI hype wagon, are engaging in what Gartner called "agent washing." No, this isn't James Bond in a shower. It's a term derived from greenwashing, the practice of falsely portraying products as eco-friendly.
In the case of agent washing, Gartner estimated that fewer than 13% of the thousands of agentic AI vendors are actually shipping agentic products. Most companies are rebranding existing products -- ranging from AI assistants, robotic process automation, and script-based services to chatbots -- as "agentic." The assumption that these tools can perform autonomous tasks is faulty, leading to pilot projects based on these products that are destined to fail.
2. Runaway costs
Another gotcha is cost. Most AI implementations rely on external large language models for cognitive processing services provided by the likes of OpenAI, Google, and Anthropic. These services get linked to your applications through an application programming interface (API).
Think of the API like the socket in your wall. You plug your coffee maker into that socket, and you get power to brew that sweet, sweet brown elixir. The socket and plug are standardized interfaces (like the API). Your coffee maker is your application. The cloud service is the power company, to whom you pay a fee for usage.
Also: Why AI led one company to abandon open source
AI companies measure metered usage based on a metric called "tokens." Generative AI uses tokens fairly sparingly. They're consumed when a question is asked, and that's it. Like a coffee maker making a cup of coffee, the power/token usage is minimal.
Now, contrast the power demands of a coffee maker with those of a server rack. The servers consume more power and use it constantly, 24/7. The power bill for a server rack will be considerably higher than for a coffee maker (even my overused coffee maker).
It's the same with agentic AI, which runs almost constantly, with multiple agents at once, consuming tokens voraciously. As companies scale up their use of agentic AI, they're finding their cloud bills are ballooning. There's a reason OpenAI went from zero revenue in late 2022 to more than $20 billion in 2025.
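The coffee-maker-versus-server-rack arithmetic is easy to sketch. Here's a minimal back-of-the-envelope estimate in Python; every price and call volume below is invented for illustration, not any vendor's actual rate:

```python
# Compare one-shot chat usage ("coffee maker") with an always-on
# multi-agent pipeline ("server rack"). All numbers are illustrative
# assumptions, not real pricing.

PRICE_PER_1K_TOKENS = 0.01  # assumed blended input/output price, USD

def monthly_cost(calls_per_day: int, tokens_per_call: int, days: int = 30) -> float:
    """Estimate monthly token spend for a given call volume."""
    total_tokens = calls_per_day * tokens_per_call * days
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS

# A person asking ~20 questions a day.
chat = monthly_cost(calls_per_day=20, tokens_per_call=1_500)

# A multi-agent workflow looping 24/7.
agents = monthly_cost(calls_per_day=5_000, tokens_per_call=4_000)

print(f"chat:   ${chat:,.2f}/month")    # $9.00/month
print(f"agents: ${agents:,.2f}/month")  # $6,000.00/month
```

Even with these made-up numbers, the shape of the problem is clear: the cost driver isn't the price per token, it's the always-on call volume.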
3. Unpredictable results
Another pitfall is that AI projects are "non-deterministic," meaning the same input can produce different outputs across runs, because the AI incorporates probability, randomness, and context sensitivity rather than following a fixed, repeatable execution path.
This lack of predictability can be brutal when building and testing solutions, debugging failures, validating outputs, ensuring compliance, and maintaining consistent behavior across updates and deployments.
Madhav Thattai, EVP & GM of Agentforce at Salesforce, told me this in an email: "Software used to be solely deterministic: same input, same output, easy to trust. AI agents break that model, with the same input producing different outcomes. That demands a hybrid approach. Context, control, and governance can't be bolted on post-deployment. The companies succeeding are designing those layers in from day one."
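That non-determinism comes down to how models pick their next token. This toy sketch of temperature sampling (the logits and token names are made up) shows why the same input can yield different outputs:

```python
import math
import random

def sample_next_token(logits: dict, temperature: float) -> str:
    """Pick a token the way an LLM does: temperature 0 is greedy
    (deterministic); higher temperatures sample from the softmax."""
    if temperature == 0:
        return max(logits, key=logits.get)
    scaled = {t: v / temperature for t, v in logits.items()}
    peak = max(scaled.values())
    weights = {t: math.exp(v - peak) for t, v in scaled.items()}
    r = random.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # numerical edge case: last token

# Hypothetical next-token scores for some prompt.
logits = {"approve": 2.1, "escalate": 1.9, "reject": 0.3}

# Same input, temperature 0: always the same answer.
print({sample_next_token(logits, 0) for _ in range(5)})   # {'approve'}

# Same input, temperature 1: the answer varies run to run.
print({sample_next_token(logits, 1.0) for _ in range(50)})
```

Production models add context windows, tool calls, and retrieval on top of this, which only multiplies the sources of run-to-run variation.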
4. Rogue agents
Think about what could happen when a trusted employee goes bad. The same could happen with agents, but agents are far faster than any employee. An unintended action, done at scale, can ripple through your entire organization at light speed.
My mom used to have a saying that frustrated me throughout my entire childhood. She said, "Do what I mean, not what I say." Her expectation was that she was raising me right, so I should really know what she wanted, regardless of whether or not she articulated it correctly.
Also: Why enterprise AI agents could become the ultimate insider threat
Goal misalignment can be a real issue if an employee prompts an agent incorrectly. While you could probably create a checks-and-balances agentic supervision system, the more likely reality is that if you prompt the agent incorrectly, it won't intuit your intent. It will just blast through your network, leaving rubble in its wake.
If you have a misinstructed agent somewhere in your logic chain, those failures will cascade into others, creating a domino effect that can leave you wishing you could hide out in the woods in a yurt for the next two years (or maybe that's just me).
5. Data security and privacy risk
Security and privacy are another issue. Almost all serious agentic AI deployments involve using an off-premises LLM. This means that your data has to be sent to the AI somewhere in the cloud.
Also: AI agents of chaos? New research shows how bots talking to bots can go sideways fast
The big AI companies do promise they won't use your enterprise data for training, but the fact is, you're still sending data to a system you don't control. This could trigger all sorts of privacy, regulatory, and governance issues. Be sure to dig deep here before making any permanent implementation decisions.
I could go on and on about risk factors. There are some scary stories out there. McDonald's lost hundreds of dollars on McNugget orders and also mixed bacon into ice cream. UT MD Anderson Cancer Center lost $62 million on a Watson deployment.
I'm not trying to scare you away from agentic AI. I want you to understand that deployment is risky. You need to be very strategic and deliberate. This is not a shiny new toy. This is a bet-your-company risk and opportunity.
Payoff strategies
You know what they say: "No risk, no reward," right? We've discussed the risks, so now let's look at how to reap the rewards of agentic AI installations.
Accenture identified a tiered approach to AI projects.
- Tier 1 - Agentic automation: This is the base level of AI implementation. Here, Accenture is talking about point solutions, or what they call "simple human substitution." This is where you might augment tech support with a subject-matter-trained chatbot, or put an agent on the task of processing certain forms or inputs.
- Tier 2 - Table stakes: This is Accenture's term for end-to-end process reinvention, designed to unlock value. The idea here is that you can save a lot and increase overall output, but you're not differentiating your business from competitors.
- Tier 3 - Strategic bets: Yep, they said "bets" in a strategy statement. Accenture is pitching the idea that if you take a big chance, you might get back big rewards using their 10x metric. This is essentially reinventing your business based on AI capabilities.
Also: AI agent adoption and budgets will rise significantly in 2026, despite challenges
Is this approach practical or attainable? Sure. Maybe. As much as anything, I guess.
I think this sort of so-called "strategic" analysis of AI opportunities is meant to generate excitement rather than tangible results. Accenture even said (and this is a direct quote), "If the company's agentic AI agenda doesn't excite investors, the ambition is not bold enough."
1. Start with reality, not ambition
Let's ease up on the gas pedal a little bit, shall we? Going full throttle right out of the gate will likely find you skidding off the road. Instead, use care and consideration. You can still find payoffs. Just do so in a way that has a better chance of overall success.
Start by looking at your current business processes. Almost all businesses have some processes that take too long, aren't responsive enough, are too expensive, break all the time, or otherwise cause headaches. You don't even need to do a business-wide deep-dive analysis. These problem areas are, and have been, obvious for a long time.
2. Choose the close starting points
Be selective about your choices for trying agentic AI. Look for internal processes that are expensive to run, happen frequently, and follow fairly predictable patterns. Workflows that leak revenue, create bottlenecks, or depend on repetitive manual effort are especially strong candidates.
Proceed cautiously when using agentic solutions to replace manual labor. You don't want to scare employees into thinking they're going to lose their jobs. Instead, you want to empower employees to make deeper contributions by freeing them up from tedious busywork. Start with non-critical systems where mistakes are manageable and won't ripple across the business.
Also: How to build better AI agents for your business - without creating trust issues
Look at those as low-hanging fruit. Some might be fixable using task-specific agents. Others might be mitigated by multiple agents working together in a single data environment. Still others might be solvable by simple algorithmic processes that don't need AI at all.
Avoid areas filled with edge cases, ambiguity, or constantly shifting rules. Those situations are far harder for agents to handle reliably and are more likely to create problems than deliver value.
3. Put guardrails in place
As you move from testing to production deployment, put guardrails in place. Be sure to consider and implement the guardrails before you scale.
Keep humans in the loop early on, especially for approvals and exception handling, so agents don't run unchecked. This might be harder than the AI companies promise. When Claude Code abruptly began splitting work among agents, I found that they ran far faster than I could track, often got stuck, and were otherwise troublesome. My fix was to eliminate simultaneous agents, at least until I could better manage them.
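A human-in-the-loop gate can start out very simple. Here's a hypothetical sketch: actions under an assumed spending threshold execute automatically, while anything larger waits in a queue for a person. The class, threshold, and action names are all invented for illustration:

```python
# Minimal human-in-the-loop approval gate. The policy here -- auto-run
# anything under a dollar threshold, queue the rest -- is an assumed
# example, not a recommendation for any specific business.

from dataclasses import dataclass, field

@dataclass
class ApprovalGate:
    max_auto_spend: float = 100.0              # assumed policy threshold
    pending: list = field(default_factory=list)

    def submit(self, action: str, cost: float) -> str:
        """Execute cheap actions; queue expensive ones for a human."""
        if cost <= self.max_auto_spend:
            return f"executed: {action}"
        self.pending.append((action, cost))    # waits for human review
        return f"queued for approval: {action}"

gate = ApprovalGate()
print(gate.submit("reorder office supplies", cost=40))
print(gate.submit("sign annual SaaS contract", cost=25_000))
print(len(gate.pending))  # 1 action awaiting a person
```

Real deployments would add audit logging, timeouts, and escalation paths, but the core design choice is the same: the agent proposes, and above some risk line, a person disposes.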
Increase autonomy gradually as you gain confidence in performance. Don't just rush in and try to turn on full agentic automation right away. This might require you to resist the pressures of investors and other key players, but hold your ground. You wouldn't turn over your production line to the impulsive ne'er-do-well nephew of your biggest investor. Likewise, you shouldn't hand over your process flow to AI agents before they're ready for prime time.
Also: Deploying AI agents is not your typical software launch - 7 lessons from the trenches
"Organizations need adaptable governance that evolves as AI advances. While human oversight remains important today, frameworks should anticipate greater AI autonomy and include clear, future-ready safeguards," Mudit Garg, CEO and co-founder of hospital AI software company Qventus, told ZDNET in an email. "Many health systems that developed AI governance frameworks a couple of years ago are already having to restructure them to accommodate today's AI capabilities."
Be sure to continuously monitor both behavior and costs, because with agentic AI, small issues can compound quickly if left unattended. Here's a corollary: If you can't monitor something, or haven't figured out how to yet, wait until you can before setting agentic AI loose.
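Cost monitoring doesn't have to be elaborate at first. A minimal sketch of a per-agent token budget (the limits and usage numbers are invented) illustrates the idea:

```python
# Sketch of a per-agent daily token budget -- the "continuously monitor
# costs" advice reduced to its simplest form. The limit and per-call
# token counts are hypothetical.

class TokenBudget:
    def __init__(self, daily_limit: int):
        self.daily_limit = daily_limit
        self.used = 0

    def record(self, tokens: int) -> bool:
        """Record usage; return False once the agent is over budget."""
        self.used += tokens
        return self.used <= self.daily_limit

budget = TokenBudget(daily_limit=1_000_000)
for call_tokens in (400_000, 400_000, 400_000):
    within = budget.record(call_tokens)

print(budget.used, within)  # 1200000 False -> time to pause the agent
```

In practice you'd wire the over-budget signal to an alert or a kill switch rather than a print statement, but the principle holds: don't let an agent spend tokens nobody is counting.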
Salesforce's Thattai also had thoughts on AI governance. "Businesses are assembling agents across models, vendors, and tools. Governance has to be open and composable enough to meet them there. But openness without oversight is just sprawl," he said. "Agents need to be built on standards with tight governance, consistent visibility, and monitoring across the full agent lifecycle. Trust is non-negotiable."
4. Scale what works
Once you've identified a viable use case, keep the first project very limited. Start with a single workflow. Make sure you can demonstrate clear, measurable ROI. From there, expand into closely related processes where the patterns and data are similar.
Wait until you've proven you can reliably execute on multiple projects before you attempt to scale more broadly across the organization.
5. Measure real payoff
How can you tell it's working? First, talk to your people. They'll tell you whether they love or hate the new systems. Once you've gotten the measure of user sentiment, look at other metrics that can gauge success in clear, operational terms. Look for reductions in cost per task, faster cycle times, fewer errors, and measurable revenue captured or recovered.
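The cost-per-task metric is simple arithmetic, which makes it a good first check. A quick sketch with hypothetical before-and-after numbers:

```python
# Did cost per task actually fall after the pilot? All figures below
# are made up for illustration.

def cost_per_task(total_cost: float, tasks: int) -> float:
    return total_cost / tasks

before = cost_per_task(12_000, 800)    # manual process: $15.00/task
after = cost_per_task(9_000, 1_200)    # with the agent: $7.50/task

reduction = (before - after) / before
print(f"cost per task down {reduction:.0%}")  # cost per task down 50%
```

Note that both the numerator and denominator moved: the agent cut total cost and handled more tasks. Tracking only the total bill would have understated the win.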
"The biggest challenge is proving ROI at scale. Many health systems lack clear performance benchmarks and face long implementation timelines, compounded by reliance on legacy EHR systems," said Qventus' Garg.
Keep in mind that if you can't tie a process to a tangible, measurable result, you can't prove you've added value.
"Success requires defining measurable outcomes early and prioritizing fewer, high-impact use cases, moving from 80% to 95% accuracy rather than spreading across 1,000 shallow applications," Garg said.
And what not to do
Keep these cautions in mind as well: Don't start by attempting a full transformation. Don't deploy across multiple systems at once. Don't assume that what a vendor tells you they can do is actually what they can deliver. Don't let anyone pressure you into moving faster than your organization can effectively absorb.
The path to rewards
At the beginning of this article, I gave you a choice. But it doesn't really make sense to pick between a safe 10% efficiency gain and a risky 10x transformation. The companies that win with agentic AI will implement solutions in the contexts where they will succeed, sometimes deriving incremental cost savings and sometimes hitting home runs.
Start with targeted improvements. If all goes well, they'll simply pay for themselves. Learn what works, what breaks, and what scales. Then, over time, expand those wins into broader systems that reshape how your business operates.
Also: AI magnifies your team's strengths - and weaknesses, Google study finds
Agentic AI is powerful. It can absolutely change a business's trajectory. That can be for good or not so good. Back in December, I discussed how AI is an amplifier, that it "magnifies the strengths of high-performing organizations and the dysfunctions of struggling ones."
So, what do you do?
My advice is to move cautiously so you don't unleash an untethered beast into your business model. Start with pilot projects, build on them, and slowly scale up over time. As you do, you may find opportunities that let you take your business to the next level, or even beyond.
If you could apply agentic AI to one frustrating workflow today, what would it be? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, on Bluesky at @DavidGewirtz.com, and on YouTube at YouTube.com/DavidGewirtzTV.








