Is ‘artificial general intelligence’ an illusion? Yann LeCun thinks so
Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.
This week, I’m focusing on Big AI’s biggest sales pitch—the quest for AGI—and the idea that the industry should focus on more modest and achievable tasks for AI. I also look at Databricks’s new $4 billion-plus funding raise, and at Google’s new Gemini 3 Flash model.
Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.
Yann LeCun calls BS on “artificial general intelligence”
Big AI companies like OpenAI and Anthropic like to talk about their bold quest for AGI, or artificial general intelligence. The definition of that grail has proved to be somewhat flexible, but in general it refers to AI systems that are as smart as human beings at a wide array of tasks. AI companies have used this “quest” narrative to win investment, fascinate the tech press, and charm policymakers.
Now one of AI’s most important pioneers, Turing Award winner Yann LeCun, is calling the whole concept into question. LeCun, Meta’s outgoing chief AI scientist, argues that even human beings aren’t really generalists. They’re good at some physical tasks, and very good at social interactions, but they can easily be defeated at chess by a computer and can’t perform math as fast or as accurately as a calculator can. “There are tasks where many other animals are better than we are,” LeCun said on a recent Information Bottleneck webcast.
“We think of ourselves as being general, but it’s simply an illusion because all of the problems that we can apprehend are the ones that we can think of—and vice versa,” LeCun said. “So we’re general in all of the problems that we can imagine, but there’s a lot of problems that we cannot imagine. And there are lots of mathematical arguments for this. So this concept of general intelligence is complete BS.”
Lots of people in AI and neuroscience disagree with LeCun. They argue that just because humans aren’t the best at every task, or can’t conceive of every possible task, it doesn’t mean we aren’t generalists, especially compared with machine savants like calculators. I don’t know who’s right, but LeCun is making a broader point. He believes AI labs should focus on specific real-world things that AI can do (things that create value or reduce suffering, perhaps) and bring those solutions to market.
LeCun says the transformer-based large language models of today are useful enough to be applied in some valuable ways, but he also believes they aren’t likely to achieve the general, human-level intelligence needed to do high-value work tasks now reserved for human brains. To navigate real-world complexity the way humans do, LeCun argues, an AI would need a much higher-bandwidth training regimen than just words, images, and computer code, along with a different architecture to structure all that data. Notably, The Financial Times reports that LeCun is raising $585 million at a $3 billion valuation for a new AI startup that will look to build “world models”: AI systems that learn from images, video, and spatial data rather than only from text, as large language models do.
Databricks pulls in another $4B+, valuation rises to $134 billion
Data and AI company Databricks raised more than $4 billion in a new Series L funding round led by Insight Partners, Fidelity, and J.P. Morgan Asset Management, with Andreessen Horowitz, BlackRock, and Blackstone kicking in. The company’s valuation rose to $134 billion with the new round.
The valuation reflects Databricks’s positioning within the booming market for AI cloud services. For years the company’s primary offering was secure cloud storage for sensitive enterprise data, including data owned by companies in regulated industries such as healthcare and finance. Over the past five years, Databricks has gone deep on developing the AI side of its business. Its value proposition is allowing customers to run their data through powerful AI models hosted within the same secure cloud. More recently, the company has set up a secure platform for developing and deploying autonomous agents that can, for example, assemble complex business intelligence reports based on diverse datasets stored in the Databricks cloud.
The company also enables customers to run their data through third-party models from OpenAI and Anthropic, among others, hosting those models natively within its secure cloud. Databricks now says its data-warehousing business and its AI business each have revenue run rates of more than $1 billion. The company reported a revenue run rate of $4.8 billion during the third quarter of 2025, representing growth of about 55% from the same period in 2024.
Almost exactly a year ago, Databricks raised a massive $10 billion funding round, one of the largest ever for an AI company, and achieved a $62 billion valuation. (The valuation moved up to $100 billion when the company raised a $1 billion round in August.)
The San Francisco-based company says it’ll use the new capital to develop new AI-driven applications, fund future acquisitions, support R&D, and pay employees (most likely including expensive AI research talent). With hundreds of customers each contributing more than $1 million in annual revenue, and a high customer retention rate, Databricks is considered a strong IPO candidate. The company may be waiting for the optimal market conditions in which to file.
Google releases a Gemini 3 model, “Flash,” for the rest of us
Now even people who can’t afford a monthly subscription can enjoy the magic of Google DeepMind’s new Gemini 3 model. Google released the first Gemini 3 model, Pro, in November, but it was available only to paid subscribers. Its new Gemini 3 Flash variant is now the default in the Gemini app, and is available globally in Google Search’s AI Mode.
Flash is said to be three times faster at responding than Gemini 2.5 Pro, and almost as good at reasoning as the Gemini 3 Pro model. Flash is designed to be cost-effective, making it a great option for developers and businesses, according to Google.
The new model posts impressive marks on PhD-level reasoning and knowledge benchmarks such as GPQA Diamond (90.4%) and Humanity’s Last Exam (33.7% without tools). Those scores come close to those of larger models, including Gemini 3 Pro and OpenAI’s GPT-5.2. Flash also achieved the highest score of any model, 81.2%, on the MMMU Pro benchmark, which measures the ability to understand and reason over a mix of text and visual data.
When processing at its highest thinking level, Gemini 3 Flash can “modulate” how much it “thinks,” Google says: it spends more time reasoning over the context it has gathered for more complex questions, but uses 30% fewer tokens on average than Gemini 2.5 Pro to complete simpler, everyday tasks. Researchers at Big AI labs have been working hard to make AI models store the (often voluminous) contextual data they collect in memory more efficiently, and to use it more effectively.
More AI coverage from Fast Company:
- Every AI founder thinks they want a mega investing round. Trust me, you don’t
- 5 predictions for AI’s growing role in the media in 2026
- DOGE leader at Treasury is looking to buy thousands of ChatGPT licenses
- Who should pay for the power grid’s race to keep up with data centers?
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.