Do You Feel the AGI Yet?
Hundreds of billions of dollars have been poured into the AI industry in pursuit of a loosely defined goal: artificial general intelligence, a system powerful enough to perform at least as well as a human at any task that involves thinking. Will this be the year it finally arrives?
Anthropic CEO Dario Amodei and xAI CEO Elon Musk think so. Both have said that such a system could go online by the end of 2026, bringing, perhaps, cancer cures or novel bioweapons. (Amodei says he prefers the term powerful AI to AGI, because the latter is overhyped.) But wait: Google DeepMind CEO Demis Hassabis says we might wait another decade for AGI. And—hold on—OpenAI CEO Sam Altman said in an interview last month that “AGI kind of went whooshing by” already; that he’s now focused instead on “superintelligence,” which he defines as an AI system that can do better at specific, highly demanding jobs (“being president of the United States” or “CEO of a major company”) than any person could, even if that person were themselves aided by AI.
These differences of opinion illustrate just how squishy the notions of AGI, or powerful AI, or superintelligence really are. Developing a “general” intelligence was a core reason DeepMind, OpenAI, Anthropic, and xAI were founded. And not even two years ago, these CEOs had fairly similar forecasts that AGI would arrive by the late 2020s. Now the consensus is gone: Not only are the timelines scattered, but the broad agreement on what AGI even is, and on the immediate value it could provide humanity, has been scrubbed away.
The idea of a generally intelligent computer program first arose in the mid-20th century as a very distant goal for the then-nascent field of AI. It has always been a shaky idea. For instance, Alan Turing proposed his famous Turing Test in 1950 as a proxy for machine intelligence: He argued that if a machine could convince a human that they were talking with another person, then it would be displaying, or at least imitating, the equivalent of some sort of “thinking.” But the test has been passed a number of times by programs that nobody would call intelligent—they just happened to be convincing, to some humans, on this particular benchmark. In the early 2000s, the computer scientist Shane Legg, among others, helped establish the modern notion of AGI not as a threshold so much as a field of study—the study of generally intelligent algorithms, as opposed to narrow and targeted ones. There was never agreement on specific ways to test the presence of such general abilities in a machine. Even human intelligence itself is capacious and not well understood.
Yet the AI industry coalesced around the notion of AGI anyway—in large part because OpenAI, which kicked off today’s boom with the launch of ChatGPT in late 2022, wrote the goal of ensuring that AGI “benefits all of humanity” into its founding mission. At the time, the company invoked the concept constantly. (Ilya Sutskever, then its chief scientist, had a habit of encouraging employees to “feel the AGI.”) The term’s ambiguity has been a boon for OpenAI and other firms, which have been able to market “intelligence” without actually describing it in any meaningful way—hence the endless stream of questionable advertisements insisting that chatbots make ideal travel agents. Meanwhile, these companies have raised tremendous capital by showing the world that AI is getting better, and better at more things. As long as that seemed true—that their chatbots were progressing toward something—it was simple enough to argue that the ultimate destination was an all-powerful machine.
This case is getting harder to make. Large language models already exhibit impressive capabilities, especially in technical areas such as software engineering and competition-style math. But at the same time, AI models continue to struggle with seemingly trivial tasks, such as drawing clocks and completing simple logic puzzles. For much of last year, each new generation of bots yielded only marginal improvements, rather than leaps forward, on standard benchmarks. And those benchmarks are highly gameable: It is unclear whether AI labs are really measuring general capabilities at all, or just preparing their products for the right tests. Narrow excellence, after all, is not general intelligence: A human chess grandmaster might lack street smarts, and a literary theorist might struggle with algebra.
As impressive as they can be, chatbots are now a “normal technology,” as the AI researchers Arvind Narayanan and Sayash Kapoor have put it: an invention that will spread across society and change it in real but gradual ways—like other new products that people pay for and benefit from using. This is becoming conventional wisdom. The White House AI adviser Sriram Krishnan recently described AI as a “very useful technology” that “has nothing to do with ‘general intelligence.’” Satya Nadella, the CEO of Microsoft, has described AI as “a tool” and said that his benchmark for the technology’s success is not building AGI but achieving 10 percent global GDP growth.
Even the AI labs and start-ups are starting to openly embrace old-fashioned product development. Around San Francisco, billboards advertise AI accounting tools while founders pitch AI agents that will streamline back-office workflows and SOC 2 paperwork. Google DeepMind touted how its latest model, Gemini 3, can improve your “shopping experience” and organize your inbox. Both OpenAI and Anthropic have bragged about how their bots make corporate employees more efficient in such exciting areas as “writing sales emails.” OpenAI kicked off this year by announcing that it would begin rolling out ads in ChatGPT, and its CEO of applications, Fidji Simo, recently wrote on her Substack that the winning AI company will be the one that turns “frontier research into products.” Indeed, OpenAI has released a web browser, social-media apps, and many other AI products and features over the past several months. (OpenAI has a corporate partnership with The Atlantic.)
These product launches reflect another relevant dynamic: The major AI models are all converging on roughly the same capabilities, so firms need to carve out distinct identities based on how they’re weaving those models into various tools and services. OpenAI has all of those apps. Anthropic has Claude Code, a tool that caters to developers, and is now testing a version of the product, Claude Cowork, for everyday white-collar jobs. And xAI’s Grok isn’t just a chatbot; it’s a service that interacts with users on X. This also helps explain why the companies’ rhetoric on AGI—or powerful AI, or superintelligence—is moving in different directions.
At a moment when its technical lead has evaporated, OpenAI is asserting that it’s not prioritizing technical research anyway—that the company is now focused on products that help people appreciate the benefits of the AGI that’s apparently already here. (Mark Chen, OpenAI’s chief research officer, told me that pairing “long-term, foundational research” with “real-world deployment strengthens our science by accelerating feedback.”) Meanwhile, Amodei’s continued insistence that powerful AI is right around the corner bolsters Anthropic’s reputation as stern, responsible, and anxious—a key selling point for its enterprise customers.
Hassabis’s longer timeline for AGI reflects the reality that DeepMind is just one component of Google—a company with an enormous revenue stream that allows the lab to plod along, releasing AI features as they are ready. (This is akin to how the Google X lab spent years working on autonomous driving before publicly launching Waymo.) “I don’t think AGI should be turned into a marketing term for commercial gain,” Hassabis said in an interview on Thursday.
As for xAI: Musk has built a career on making grand promises and then delivering them far behind schedule, and AI seems to be his new fixation. Just this week, he announced that Tesla—the main source of his wealth—is abandoning some of its major car lines in order to produce humanoid robots, accelerating the firm’s pivot from car manufacturer to AI company. Tesla also recently said that it would invest $2 billion in xAI, and Musk is reportedly considering merging SpaceX with xAI. As Musk’s entire empire converges on intelligent machines, ratcheting up hype and expectations around the technology has become his standard playbook.
The AI industry is undergoing its biggest commercial swing yet amid mounting concerns about just how sustainable this boom is. Altman, Amodei, and Hassabis have all said that aspects of the current AI-spending craze are bubble-like—in other words, that the hundreds of billions of dollars being dumped into building godlike AGIs may not yield a commensurate return. The new justifications for all of these investments are much more concrete: sell products, sell ads, sell subscriptions. If AI is indeed a normal technology, then the labs developing it need to start making money like normal businesses.