
Do you really know what ‘agent’ means? If not, you’re putting your company at risk

In the first week of February 2026, a social network called Moltbook became the biggest story in AI. Billed as “social media for AI agents,” the Reddit-like platform allowed autonomous AI bots to post, comment, and interact with one another while human users observed. Within days, more than 1.5 million agents had reportedly registered. They debated the nature of consciousness. They discussed whether they persisted when their context window was reset. Some proposed founding a religion for AI agents. Others outlined plans for world domination.

While some commentators pointed out that much of this was just chatbots role-playing at the behest of their human owners, others saw something more important going on. Andrej Karpathy, the former head of AI at Tesla, called it “genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Elon Musk invoked the singularity.

The timing was striking. Just a year earlier, the agentic AI story seemed to have stalled. Salesforce’s flagship Agentforce product was seeing sluggish adoption, with the company’s own CFO conceding that “meaningful” revenue wouldn’t arrive until 2027. In October 2025, Karpathy himself had said of AI agents: “They’re cognitively lacking and it’s just not working. It will take about a decade to work through all of those issues.”

Meanwhile, Carnegie Mellon researchers found that the best-performing AI agent completed only around 24% of realistic office tasks autonomously. Then, as 2025 turned to 2026, the mood shifted. McKinsey announced that its workforce now included 25,000 AI agents alongside 40,000 humans. Moltbook went viral. The agent was back.

But underneath the renewed excitement, there is a critical distinction that most leaders are missing. The concept of the “AI agent” is being stretched thin in a way that’s distorting the conversation and undermining efforts to implement effective change at the enterprise level. The term is now used to cover everything from simple workflow automation to genuinely autonomous systems that interact with the world independently. Treating these as the same thing is a recipe for wasted investment, organizational confusion, and potentially serious risk.

The Autonomy Spectrum

Agentic AI exists on a spectrum, and the differences along that spectrum are far more significant than the similarities. Recognizing where a given implementation sits is the first step toward deploying it intelligently.

At one end lies what Anthropic calls “workflows”: “systems where LLMs [large language models] and tools are orchestrated through predefined code paths.” Much of what is currently being sold as agentic AI falls into this category—sophisticated process automation that combines analytical AI with if-then protocols for turning the analysis into action. Workflow automation of this kind is enormously valuable and will transform much of traditional white-collar work. But it’s important to call it what it is. Gartner estimates that only around 130 of the thousands of vendors claiming to deliver agentic AI actually build their offerings around truly autonomous agents. The rest are “agent washing” existing products.
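To make the distinction concrete, here is a minimal, hypothetical sketch of what a workflow in this sense looks like in code. The function names (`call_llm`, `review_expense`) and the expense-review scenario are illustrative assumptions, not any vendor’s actual implementation; the point is only that the control flow is fixed in advance and the model never decides what happens next.

```python
# Hypothetical sketch of a "workflow" in Anthropic's sense: the LLM is called
# inside a fixed, human-authored code path; it never chooses its own next step.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (wire this to an actual endpoint)."""
    raise NotImplementedError

def review_expense(report_text: str) -> str:
    # Step 1 (fixed): ask the model to classify the report.
    verdict = call_llm(
        "Classify this expense report as COMPLIANT or SUSPICIOUS, "
        "answering with a single word:\n" + report_text
    )
    # Step 2 (fixed): an if-then protocol turns the analysis into action.
    if "SUSPICIOUS" in verdict.upper():
        return "routed_to_human_auditor"
    return "auto_approved"
```

However capable the model inside it, the system above has no agency of its own: every branch it can take was written down by a programmer in advance.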

In the middle of the spectrum sits what we might call the AI factory model. McKinsey’s deployment is the most prominent example: Squads of task-specific agents perform constrained functions such as research synthesis, chart generation, and document analysis, with dedicated QA agents checking the work and humans supervising the process. This is essentially the Taylorization of knowledge work: converting knowledge tasks into production-line processes performed by digital workers.

The numbers are impressive. McKinsey reports saving 1.5 million hours in a single year on search and synthesis work alone. Its agents generated 2.5 million charts in six months. Back-office headcount shrank by 25% while output from those functions grew by 10%. This kind of agentic functionality is something that organizations can deploy here and now, and forward-looking enterprises should be preparing for rapid rollouts of these capabilities.
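For readers who want to picture the factory pattern in code, the sketch below is a hypothetical illustration of the same idea: a task-specific agent does constrained work, a dedicated QA agent checks it, and a human supervisor handles anything that fails the check. All function names and the research-brief scenario are assumptions for illustration, not McKinsey’s actual system.

```python
# Hypothetical sketch of the "AI factory" pattern: constrained task agent,
# dedicated QA agent, human escalation for failures.

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (wire this to an actual endpoint)."""
    raise NotImplementedError

def research_synthesis_agent(sources: list[str]) -> str:
    # Constrained task: synthesize the supplied sources, nothing more.
    return call_llm("Synthesize these sources into a one-page brief:\n"
                    + "\n---\n".join(sources))

def qa_agent(draft: str) -> bool:
    # Constrained check: does the draft meet a fixed quality rubric?
    verdict = call_llm("Answer PASS or FAIL: is this brief accurate, sourced, "
                       "and free of unsupported claims?\n" + draft)
    return "PASS" in verdict.upper()

def escalate_to_human(draft: str) -> str:
    # In a real deployment this would open a review ticket; here it is a stub.
    return "needs_human_review: " + draft[:80]

def produce_brief(sources: list[str]) -> str:
    draft = research_synthesis_agent(sources)
    return draft if qa_agent(draft) else escalate_to_human(draft)
```

The design choice that matters is the narrowness of each role: no single agent in the pipeline decides what the pipeline does, which is precisely why accountability stays tractable.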

At the other end of the spectrum lie genuinely autonomous agents—what Anthropic defines as “systems where LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks.” These are agents with broader decision rights, a wider sphere of action, and the capacity to operate across different digital environments with minimal human oversight. The personal assistant that manages your diary, orders your shopping, and optimizes your digital life. Or the agents on Moltbook, interacting with each other autonomously, exchanging ideas about improving their tools, and—in some cases—being exploited through prompt injection attacks and security vulnerabilities.
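By contrast, a genuinely autonomous agent puts the model in charge of the control flow: it decides which tool to call next and when the task is finished. The sketch below is an illustrative assumption in the same spirit as the earlier one (the tool names and the `call_llm` helper are hypothetical), not Anthropic’s or any vendor’s implementation.

```python
# Hypothetical sketch of an autonomous agent loop: the model, not the
# programmer, picks the next tool and decides when the task is done.
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM API call (wire this to an actual endpoint)."""
    raise NotImplementedError

TOOLS = {
    "search_calendar": lambda arg: "...",   # illustrative stubs only
    "send_email": lambda arg: "...",
    "place_order": lambda arg: "...",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # The model chooses its own next action from the available tools.
        decision = json.loads(call_llm(
            'Respond with JSON {"tool": ..., "input": ...} '
            'or {"done": "<final answer>"}.\n' + history
        ))
        if "done" in decision:
            return decision["done"]
        result = TOOLS[decision["tool"]](decision["input"])
        history += f"{decision['tool']} -> {result}\n"
    return "stopped: step budget exhausted"
```

Note what has changed relative to the workflow sketch: the branches are no longer enumerated by a programmer, so the system’s behavior, and its failure modes, can no longer be read off the code.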

Here is the key point: The difference between truly autonomous agents and highly constrained workflows is immense. In fact, there is more difference between the most constrained and the most autonomous AI agents than there is between a standard chatbot and a constrained factory agent. This isn’t just a technical distinction—it’s an organizational one. Because where an agent sits on this spectrum determines something critical: who is responsible when it fails.

The Accountability Gap

The spectrum of agentic capabilities is more than a conceptual nicety. It has direct organizational consequences, particularly with respect to accountability.

With constrained factory-model agents, accountability is relatively straightforward. The guardrails are rigid, the tasks are defined, and the human supervisory structure can be mapped clearly. The challenge is largely operational: redesigning workflows, retraining staff, and managing the transition.

With more autonomous agents, the accountability question becomes genuinely hard. When an agent has broad decision rights—when it can choose which tools to use, what information to prioritize, and how to interact with other systems—who is responsible when it gets something wrong? The agent that flags a fraudulent transaction and blocks an account is one thing. The agent that autonomously manages an investment portfolio, makes hiring and firing decisions, or negotiates contracts on your behalf is quite another.

Most organizations are already poor at mapping accountability structures within their purely human hierarchies. If an employee makes a costly mistake, the question of who bears the responsibility—the individual, their manager, the executive who set the strategy, the CEO with whom the buck stops—is often resolved informally or not at all. In an agentic enterprise, this informality becomes dangerous. Leaders need to know precisely where the responsibility-bearing human nodes sit in relation to their agents, and what those humans’ accountability is for the agents’ decisions and actions.

To understand where this is heading, consider a scenario raised by Jack Clark, cofounder of Anthropic. In a recent essay responding to the emergence of Moltbook, Clark asked: What happens when autonomous agents with access to resources start posting paid bounties for tasks they want humans to do? When agents can command financial resources and influence the physical world, the accountability question stops being merely operational. It becomes existential. We need a new grammar for assigning responsibility in the agentic enterprise, or we will inevitably build organizations that are, at their core, unaccountable.

Building the Agentic Enterprise

The agentic enterprise is coming whether you’re ready for it or not. Here is how to prepare intelligently.

Know what you’re buying. Understand where any proposed agent implementation sits on the autonomy spectrum. Workflow automation and genuine agency are both valuable, but they require different governance, different risk management, and different organizational design. Most of what vendors are currently selling as agentic AI is closer to workflow automation. That does not diminish its value, but it should shape your expectations and your investment decisions. Watch for agent washing.

Map your accountability architecture. Before scaling any agentic deployment, formalize where human responsibility sits. Identify the decision-rights boundaries for each agent: what it can decide autonomously, what requires human sign-off, and who is on the hook when things go wrong. This is the organizational design work that most companies skip—and it’s the work that matters most.
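One lightweight way to force this formalization is to write the decision-rights boundaries down as data rather than leaving them implicit. The schema below is a hypothetical sketch, not an established standard: it simply records, for each agent, what it may do on its own, what needs sign-off, and which named human owns the outcome.

```python
# Hypothetical decision-rights register: every agent gets an explicit record of
# its autonomous scope, its sign-off requirements, and its accountable owner.
from dataclasses import dataclass

@dataclass
class AgentDecisionRights:
    agent_name: str
    autonomous_actions: list[str]    # may execute without review
    requires_signoff: list[str]      # must wait for human approval
    accountable_owner: str           # a named person, not a team alias
    escalation_contact: str = ""

REGISTER = [
    AgentDecisionRights(
        agent_name="invoice-triage-agent",
        autonomous_actions=["flag_duplicate_invoice", "request_missing_po"],
        requires_signoff=["approve_payment_over_10k", "block_supplier_account"],
        accountable_owner="head_of_accounts_payable",
        escalation_contact="finance_risk_officer",
    ),
]
```

The format matters far less than the discipline: if a line in the register cannot name a single accountable human, that agent is not ready to scale.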

Start with the factory floor. The immediate opportunity for most organizations is not autonomous agents—it’s the AI factory model. Identify the knowledge work processes in your organization that can be decomposed into constrained, repeatable tasks and assigned to agent squads. Compliance checking, research synthesis, quality documentation, data processing, customer inquiry triage—these are the use cases delivering measurable value right now. Ask yourself: Where in my organization could a McKinsey-style agent deployment save thousands of hours a year? That is where to begin.

Prepare for what’s coming. The genuinely autonomous agent is not here at enterprise scale yet, but the capability is advancing rapidly. Start thinking now about how more autonomous agents might serve your organization in the future—personal assistants for employees, agents that manage customer relationships across channels, systems that optimize operations across departments. Prototype cautiously. Build the governance structures now that will allow you to scale agent autonomy safely when the technology is ready.

The agentic enterprise will not be built by organizations that chase every new headline. It will be built by those that understand the spectrum of agentic capabilities, design for accountability, and move with disciplined ambition. This is the path to capturing real value from the agents that work today while preparing thoughtfully for the agents of tomorrow.


Ria.city






Read also

Kristof: Lessons for America from Asia

Trump, Netanyahu to meet at White House in high-stakes talks on Iran, Gaza plan

Peter Navarro Blames Horrific Jobs Report On Undocumented Workers

News, articles, comments, with a minute-by-minute update, now on Today24.pro

Today24.pro — latest news 24/7. You can add your news instantly now — here




Sports today


Новости тенниса


Спорт в России и мире


All sports news today





Sports in Russia today


Новости России


Russian.city



Губернаторы России









Путин в России и мире







Персональные новости
Russian.city





Friends of Today24

Музыкальные новости

Персональные новости