DeepMind’s Demis Hassabis Warns AGI Remains Years Away Despite A.I. Breakthroughs
An A.I. model from Google DeepMind shocked mathletes from around the world last year when it secured a gold medal at the prestigious International Mathematical Olympiad. Why, then, do these same models also make mistakes on the most elementary math questions? Such inconsistencies are a sign of A.I.’s “jagged intelligence,” according to DeepMind CEO Demis Hassabis.
Today’s A.I. systems are “very good at certain things, but very poor at certain things,” Hassabis said while speaking at the India AI Impact Summit 2026 in New Delhi today (Feb. 18). That unevenness must be resolved before artificial general intelligence (AGI), a form of A.I. that rivals human intelligence, is achieved, he added, predicting that the milestone remains five to eight years away.
Like the rest of Silicon Valley, DeepMind is racing to become the first developer to unlock the potential of advanced A.I. Acquired by Google more than a decade ago, the lab was founded in 2010 by Hassabis and a small group of researchers with the goal of eventually “solving intelligence,” and in the process addressing some of the world’s biggest questions. “I don’t think we’re there yet,” Hassabis said.
Beyond smoothing out A.I.’s jagged edges, other hurdles on the road to AGI include expanding planning capabilities to handle long-term tasks rather than just short-term goals. Hassabis is also focused on improving continual learning—ensuring systems can adapt and personalize through experience, instead of only absorbing new knowledge before being released. For now, he said, models are “kind of frozen and put out into the world.”
How the arrival of AGI will one day be confirmed remains an open question across the tech industry. For Hassabis, the breakthrough will coincide with the emergence of “true creativity” in A.I. This isn’t limited to the arts but extends to science: whether systems can not only solve a conjecture but also formulate the right questions and hypotheses, a trait Hassabis said separates “great scientists from good scientists.”
The A.I. executive is especially excited about the prospect of models eventually serving as “co-scientists” as they grow more autonomous. His focus on scientific research is unsurprising given his own achievements, which include receiving a Nobel Prize in Chemistry for his work on AlphaFold, an A.I. system that predicts protein structures, and leading Isomorphic Labs, an Alphabet subsidiary using A.I. for drug discovery.
While Hassabis has long touted the scientific promise of A.I., his rivals—OpenAI CEO Sam Altman and Anthropic’s Dario Amodei—have placed more weight on the technology’s commercial and labor implications. Their differing priorities are reflected in their views on AGI timelines: Altman has suggested such systems could emerge by the end of the decade, while Amodei believes they may arrive even sooner.
One point of consensus among leading A.I. developers, however, is that AGI will bring new risks. Hassabis divides them into two categories: societal risks, in which bad actors misuse A.I., and technical risks, in which systems behave in unexpected and potentially harmful ways. Preparing for the former requires global dialogue and shared standards. “In order to mitigate some of the risks, we’re going to need international collaboration,” he said.