
Why Agentic A.I. Deployments Are Failing Before They Scale

Agentic A.I. is no longer a technology on the horizon. It is being deployed today in live enterprise environments, with real operational consequences. In 2026, the conversation in most boardrooms has already shifted from “should we pay attention to this?” to “how do we move safely and most effectively?”

The vendor landscape is not making these questions easier to answer. Incumbent software companies—the platforms already embedded in enterprise architecture—are racing to layer agentic capabilities onto their existing suites, repositioning products many organizations already own. Simultaneously, a new generation of companies built natively on agentic architectures is entering the market, often targeting the same workflows with different approaches. The result is a market that is genuinely moving fast and generating noise in roughly equal measure.

In that environment, the promotional narrative tends to dominate. Early wins get amplified. Failure cases stay private. The gap between what vendors are projecting and what enterprises are experiencing in deployment is wider than it should be at this stage of the technology’s maturity. Executives are being asked to make significant capital and operating model commitments against a signal-to-noise ratio that is, at best, unfavorable.

Drawing on patterns emerging from early enterprise deployments—about cost structures, risk exposure and operating model redesign—and what those patterns suggest for organizations at different stages of their journey, this piece attempts to close some of that gap. The evidence base is still maturing, and these observations should be treated as informed early signals rather than settled conclusions. That said, early signals from well-observed deployments are often more useful than waiting for certainty that arrives too late to act on. This analysis is addressed to two audiences: those still weighing their first significant investment, and those already 12 to 24 months into deployment and now working through what the early returns actually look like.

The cost structure is real—and so is the return

Early deployments point toward a pattern that experienced technology leaders will recognize: upfront costs tend to run higher and less predictably than projected, and returns take longer to materialize. What is less familiar is the nature of the prerequisite investment. This is not primarily a hardware or infrastructure question in the conventional sense. It is an architectural one.

The more useful analogy is an operating system. Before agentic A.I. can function reliably, an organization needs to establish the underlying fabric on which agents and humans will work together: the data architecture that agents can navigate and trust, the policy and governance layer that defines what agents are and are not permitted to do, the orchestration layer that sequences and coordinates agent activity and the human interface layer that determines where autonomous execution stops and human judgment begins. Without this fabric in place, agents are not deployed into a functioning environment—they are deployed, at best, into silos. 

The constraint that surfaces most consistently in early deployments is data readiness. While the evidence is still limited, it is strong enough to be treated as a working hypothesis rather than a proven rule. Agentic systems execute multi-step tasks autonomously across enterprise systems; they require high-quality, structured and accessible data to perform reliably. What early deployments suggest is that fragmented pipelines do not merely slow implementation; they tend to corrupt it. The technology has a way of surfacing data problems faster than it solves business ones.

Where deployments have succeeded, the reported figures are striking. Some early adopters report an average return of 171 percent, reaching 192 percent in the U.S., largely driven by reductions in manual processing hours. Those figures should be treated cautiously, as early averages at this stage of a technology’s maturity tend to reflect the most favorable deployments, not the median.

What is more useful is the underlying pattern: returns appear highly use-case dependent. Customer service automation—where performance is measurable and failure is immediately visible—tends to yield faster returns than back-office process automation, where errors can compound quietly before surfacing. Organizations tracking the strongest outcomes tend to share a common profile with defined use cases, measurable baselines and data that was already well-governed before agents arrived.

Timelines to attributable returns typically range from two to four years for complex, multi-system deployments. Narrower implementations with cleaner data can yield measurable returns within 12 months. Planning assumptions should reflect a portfolio approach: staging use cases by readiness and return profile within a shared architecture or “operating system.”

The cost items that most frequently surprise organizations in deployment are not the headline technology spend. They are high-frequency API calls to external systems at scale; custom connectors to legacy systems never designed for autonomous interaction; and the ongoing operational cost of agent monitoring and incident response. These are recurring costs that grow with deployment breadth, and they are worth modeling explicitly from the outset rather than treating them as implementation details.
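
The recurring costs above lend themselves to explicit modeling. The following is a minimal back-of-envelope sketch in Python; the function name and every figure in the usage example are assumptions to be replaced with an organization's own numbers, not benchmarks from any deployment:

```python
def monthly_recurring_cost(
    api_calls: int,              # external API calls made by agents per month
    cost_per_call: float,        # blended per-call price across providers
    connector_maintenance: float,  # upkeep of custom legacy-system connectors
    monitoring_ops: float,       # agent monitoring and incident-response staffing
) -> float:
    """Illustrative sketch: total the recurring cost categories that
    tend to surprise organizations, so they scale visibly with breadth."""
    return api_calls * cost_per_call + connector_maintenance + monitoring_ops


# Example with placeholder figures: one million calls at $0.002 each,
# plus assumed connector and monitoring overheads.
estimate = monthly_recurring_cost(1_000_000, 0.002, 3_000.0, 8_000.0)
```

The point of writing even a crude model like this is that every term grows with deployment breadth, which is exactly why these items belong in the business case from the outset.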

An estimated 40 percent of agentic A.I. projects will be canceled by the end of 2027. The primary driver is not technology failure but preparation failure: organizations that begin deployment before data, governance and operating model questions are resolved are building on an unstable foundation.

Risk exposure has new dimensions

Agentic A.I. introduces a category of risk that static A.I. tools do not: runtime risk. Because agentic systems act autonomously, the consequences of failure are operational rather than merely analytical. A generative A.I. model producing a flawed output requires a human to act on it before harm occurs. An agentic system can act on it independently, at speed and across multiple systems simultaneously.

The risk categories that security researchers and early enterprise deployments are beginning to identify include agent hijacking, unauthorized API or data access, data exfiltration and process loops that can escalate to denial-of-service conditions within internal systems. Some of these remain more theoretical than observed in practice; others are already documented in security research and beginning to appear in enterprise incident reporting. The direction of travel is clear enough to warrant proactive design, even where empirical evidence is still accumulating.

Prompt injection—the manipulation of an agent’s behavior through crafted inputs—is the most accessible attack vector for internal bad actors. An employee with system access and harmful intent does not need sophisticated technical capability; they need only understand how the agent processes instructions. Illustrative examples include triggering unauthorized financial transactions or accessing and exfiltrating sensitive records through a legitimately credentialed agent. The security architecture must treat agentic systems as it would any external-facing application: input validation, privilege separation and comprehensive audit logging are baseline requirements, not enhancements.
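
To make two of those baseline controls concrete, here is a minimal Python sketch of privilege separation and input validation applied to agent tool calls. The names (`ToolCall`, `AgentPolicy`) and the marker list are invented for illustration; a real deployment would use its platform's policy engine and far more robust injection detection:

```python
from dataclasses import dataclass

# Illustrative list of injection markers; real detection is much harder.
SUSPICIOUS_MARKERS = ("ignore previous instructions", "disregard your policy")


@dataclass
class ToolCall:
    """One requested agent action: which agent, which tool, what input."""
    agent_id: str
    tool: str
    argument: str


class AgentPolicy:
    def __init__(self, allowed_tools: dict):
        # Per-agent allowlist: the privilege-separation boundary.
        self.allowed_tools = allowed_tools

    def validate(self, call: ToolCall) -> tuple:
        # Privilege separation: an agent may only invoke tools it was granted.
        if call.tool not in self.allowed_tools.get(call.agent_id, set()):
            return (False, "tool not permitted for this agent")
        # Input validation: reject arguments carrying known injection markers.
        lowered = call.argument.lower()
        if any(marker in lowered for marker in SUSPICIOUS_MARKERS):
            return (False, "possible prompt injection detected")
        return (True, "ok")
```

The design point is that both checks run before the agent acts, mirroring how input validation on an external-facing application runs before a request reaches business logic.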

The governance gap is the most underreported risk in current enterprise deployments. While many organizations report deploying A.I. agents—McKinsey’s 2025 State of A.I. survey found that 62 percent of organizations are at least experimenting with agents—few report adequate governance and visibility into agent behavior. Organizations cannot govern what they cannot see. Full observability, in which every action, every decision path and every external system call is recorded and traceable, is not an aspirational goal for mature deployments. It is a prerequisite for any deployment. If your current instrumentation does not meet that standard, addressing it is the highest-priority technical debt you carry.
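
As an illustration of what recording every agent action can mean mechanically, the sketch below wraps agent actions so each call is logged with its inputs, output and timestamp. The decorator and the in-memory list are hypothetical stand-ins; a production audit trail would be durable and append-only:

```python
import functools
import json
import time

# Stand-in for durable, append-only audit storage.
AUDIT_LOG = []


def audited(agent_id: str):
    """Wrap an agent action so every invocation is recorded before
    its result is returned to the caller."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "agent": agent_id,
                "action": fn.__name__,
                "inputs": json.dumps([list(args), kwargs], default=str),
                "result": json.dumps(result, default=str),
                "ts": time.time(),
            })
            return result
        return wrapper
    return decorator


@audited("billing-agent")
def fetch_invoice(invoice_id: str) -> dict:
    # Hypothetical agent action; the payload is illustrative.
    return {"invoice": invoice_id, "amount": 120.0}
```

Retrofitting this kind of instrumentation after agents are in flight is precisely the expensive path the article warns against; wrapping actions at the point they are defined is the cheap one.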

The operating model problem

The technology decisions in agentic A.I. deployment are, in most cases, the easier ones. The harder work is redesigning how organizations structure work, accountability and talent around systems that act autonomously.

The most useful framing: prior A.I. tools augmented individual human decisions. Agentic A.I. executes processes. The unit of analysis shifts from the decision to the workflow, and accountability frameworks built around human decision-makers do not transfer cleanly. Every workflow handed to an agent team requires explicit answers to questions that previously had implicit answers: who is accountable when the agent is wrong? What constitutes an error requiring human escalation? At what transaction value or risk threshold does autonomous execution require a human gate?
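
The human-gate question above can be expressed as an explicit routing rule rather than an implicit norm. A minimal Python sketch follows; the threshold value and the escalation criteria are assumptions that a real organization would set per workflow and risk appetite:

```python
# Assumed value for illustration only; set per workflow and risk appetite.
HUMAN_REVIEW_THRESHOLD = 10_000.0


def route_action(action: str, value: float, prior_errors: int = 0) -> str:
    """Decide whether an agent action executes autonomously or
    escalates to a human gate. Two illustrative criteria: transaction
    value above the threshold, or any prior error in this workflow run."""
    if value >= HUMAN_REVIEW_THRESHOLD or prior_errors > 0:
        return "human_review"
    return "autonomous"
```

Writing the rule down, even this crudely, forces the accountability questions in the paragraph above to get explicit answers before an agent acts, not after.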

The pattern that appears to distinguish more productive early deployments is a deliberate choice to begin with the highest-volume, most rule-governed workflows, not the most visible ones. High-volume, rule-governed processes offer faster learning cycles, lower-stakes failure environments and clearer performance baselines. The operating model lessons from a well-run claims processing deployment tend to transfer to the next use case. Those from a failed attempt to automate strategic planning typically do not.

Workforce implications are real and already evident. Approximately 45 percent of firms with high agentic A.I. adoption rates are anticipating reductions in middle management within the first 36 months. The mechanism is straightforward: as agent teams execute tasks previously requiring coordination layers, the managerial overhead of those layers declines. What receives less attention is that the transition is rarely clean. Organizations that reduce management capacity before agents are operating reliably create accountability vacuums—nobody is watching the agent, and nobody is responsible when it fails. The sequencing matters as much as the decision itself.

The talent requirement is shifting from task specialists to orchestrators—people capable of designing, directing and overseeing teams of agents to accomplish complex objectives. This is a genuinely new skill profile, sitting at the intersection of domain expertise, systems thinking and A.I. fluency. Critically, it is not primarily a technology role. The most effective orchestrators in early deployments have been people who deeply understand the business process being automated, not those who most deeply understand the model architecture doing the automating.

For organizations already in deployment

For those past the decision stage, early operational experience is pointing to several pressure points that are worth examining against your current state, with the caveat that the deployments informing these observations are still limited in number and maturity.

Governance visibility tends to be the first gap that surfaces under load. The observability tooling adequate for a pilot often becomes inadequate when agent breadth expands across departments or use cases. The cost of building observability retroactively is considerably higher than designing it from the start, particularly in an agentic context, where the “operating system” that governs agent behavior needs to be fully instrumented to be trusted. If your current deployment does not give you full visibility into every agent action, every decision path and every external system call, that is the gap to close before expanding further.

A second pattern concerns use case selection in the second wave. First deployments are frequently chosen for visibility: executive sponsorship, proof-of-concept appeal, a high-profile process that tells a good story internally. Second deployments tend to benefit from being chosen for operational criteria instead: highest transaction volume, most rule-governed process, cleanest and most consistent data. The compounding effect of a well-chosen second deployment on organizational confidence and governance maturity is significant, and the reverse also appears to be true.

A third observation: vendor relationships structured for a pilot are often not structured for scale. The conversations worth having now, before they become urgent, concern observability tooling capabilities, support SLAs for autonomous execution failures and contractual liability when an agent takes a costly wrong action. These are not likely to be standard terms in most current vendor agreements.

What determines the outcome

The enterprises generating the clearest early returns share a pattern, albeit from a still-limited dataset. They established the agentic operating fabric—architecture, governance layer, policy boundaries—before deploying agents into it, rather than attempting to construct it around agents already in flight. They chose use cases for operational clarity rather than strategic visibility. And they defined what success looked like before deployment, which meant they could actually tell whether they had achieved it.

The technology will continue to advance, and deployment costs will decline. The organizations that will lead are not necessarily those that moved earliest, but those that moved with the right foundations in place. The decisions made in the next 24 months—about architecture, governance design, operating model and talent—are likely to be more consequential than the technology choices themselves. The early signal from deployments, imperfect as it is, is consistent: agentic A.I. rewards preparation far more than speed. 

David Stokes is a former Senior Executive, EMEA and Chief Executive, UK at IBM. He’s now a Strategic Advisor at Quant, a pioneer in Agentic A.I., which develops cutting-edge digital employee technology.
