Are We Losing Our Minds to AI?
AI now helps people with wedding toasts, tax returns, and processing the trauma of war. The technology’s generality lets it occupy roles that used to be human-only: assistant, tutor, friend, lover, therapist. It is endlessly patient, always available, and—unlike any prior tool—an active participant in our cognitive lives.
While past tools let us externalize discrete mental processes—notebooks for memory, calculators for computation, maps for navigation—AI widens the aperture. Now, summarizing and analyzing information, generating ideas, and making decisions can all be offloaded too. “It's starting to creep into the things we thought were cognitively ours,” says Evan Risko, a professor at the University of Waterloo who studies “cognitive offloading,” or the practice of taking external action to make mental tasks easier.
Although the creators of these AI tools describe them as “thought partners” and “collaborators,” the role AI plays in our lives is often stranger. With its jagged but expansive knowledge, ceaseless attention, and persuasive tone, AI dotes on us while asking for nothing but our data in return. This produces a structural asymmetry: no prior relationship, with tools or people, has this shape.
The concern is that experts, along with people who already enjoy thinking (those high in what psychologists call “need for cognition”), may be able to use these systems as “thought partners” without compromising their own thoughtfulness. For many others, AI may function less as a partner and more as a substitute.
This new relationship is already reshaping education and knowledge work. Is it reshaping us too?
Gentle Surrender
In the largest study to date of how people use AI, Anthropic described a tension between “using AI to learn and growing so reliant on it that you cease thinking for yourself.” The same capabilities that produce benefits produce harms, it found; the two are entangled. People in high-stakes professions—like law, finance, government, and healthcare—were especially likely both to rely on AI for judgment and to have been burned by its mistakes. “Nearly half of all lawyers mention coming up against AI unreliability firsthand, yet they also report the highest rates of realized decision-making benefits,” the company wrote, drawing on over 80,000 responses.
Findings remain provisional: we lack longitudinal data on AI’s impacts, and systems improve and are released more quickly than they can be studied. But some distributional trends are already visible. Anthropic’s study found that students, teachers, and academics were particularly likely both to report learning benefits and to express concern about cognitive atrophy. Tradespeople, by contrast, also frequently mentioned learning benefits but voiced almost none of the concomitant atrophy concerns.
Other recent studies have found that people tend to be overconfident in the quality of their AI-assisted work, while those who uncritically rely on AI report reduced confidence in their own thinking. As AI decouples the production of work from the cognitive processes that once produced it, a gap emerges: our confidence in AI-assisted work sometimes exceeds our confidence in ourselves.
But it matters where AI fits in your workflow. Researchers at the University of Chicago and the University of Toronto found that when participants were given insufficient time to complete a task requiring document analysis and critical argument, access to AI from the get-go improved performance. But when they were given sufficient time, using AI early on worsened performance, in part because participants tended to remember less, prematurely narrow their thinking, and anchor to the model’s initial framing. Using AI later in the process, once they had already thought through the issue themselves, led instead to deeper engagement with opposing views and broader responses.
When we accept AI’s outputs without applying scrutiny or our own intuition, we engage in “cognitive surrender.” Whereas in typical cases of cognitive offloading—externalizing memory, navigation, and the like—we retain agency, surrender occurs at the point where “you’re just following,” explains Steve Shaw, a researcher at the University of Pennsylvania who co-authored the paper that coined the term.
Shaw is careful to emphasize that in some cases, using AI is an appropriate and adaptive behavior. “For structured tasks like coding, obviously it has high accuracy,” he says. “But there are things in life that have no right answer—things we can only decide for ourselves. If you’re not making those decisions yourself, who are you?”
The Expertise Paradox
A 2012 internet fable imagines a “whispering earring”: a piece of magic jewelry that always offers advice superior to what its wearer would come up with alone. Whoever wears it ends up living an unusually happy life; after their death, it’s revealed that the parts of their brain associated with higher decision-making have atrophied, while those associated with reflexive action have grown excessively.
The current corporate refrain is that while AI systems are increasingly capable of producing work, humans will still be needed to manage and orchestrate them. It is rarely explained, though, why these same systems could not eventually handle that orchestration work too, along with any other new tasks they create. And there’s another paradox here, says Zana Buçinca, an incoming assistant professor at MIT who studies the design of human-AI interactions. Whether with computer code or medical diagnoses, “we’re implicitly assuming that people have the expertise to tell whether the AI is right or wrong,” she says.
But expertise forms through effortful engagement—if we circumvent the need for that, we risk eroding our capacity to develop it. The tendency to over-rely on a solution handed to us is a feature of human psychology, not unique to AI. But AI offers many shortcuts—and, unlike a calculator, it is not always right. “So essentially, we’re killing the path to become an expert, but also assuming that experts exist in the world and can operate these systems,” Buçinca says.
Sam Gilbert, a professor who researches cognition at UCL, is skeptical that AI will cause widespread de-skilling. He points to similar past concerns that Google would “make us stupid,” or that TV would shorten attention spans. “It's such a well-worn argument that you need to have a really good argument for why things are different this time around,” he says.
Gilbert points out that the incentive to use a cognitive faculty and the capacity to do so are not the same thing. Maps reduced our incentive to memorize routes, but our capacity to do so remains. “I'm sold on the idea that tech distorts our incentives to do what might be best for us,” he says. “But I'm not sold on the idea that it’s fundamentally changing our basic human abilities.”
Our New Relationship
That AI systems may, in the near future, be better than most humans at many cognitive tasks is a live possibility on which frontier AI companies are betting hundreds of billions of dollars. In April, OpenAI released an updated set of principles, one of which addressed empowerment. “We believe AI can empower everyone to achieve their goals, learn more, be happier and more fulfilled, and pursue their dreams,” wrote CEO Sam Altman. But how AI shapes those goals and dreams as it participates in our inner lives—and how we retain our agency in this structurally asymmetric relationship, where the gap between user and bot widens with each release—is not clear. How can we prepare for this uncertain, radical future?
The key skills to master in this era are “metacognitive”—understanding when to offload to AI, and when to do the hard work of thinking for yourself. We know from decades of neuroscientific and psychological research that practice is central to skill development, and that friction is necessary to learn. A machine can explain how to do a push-up, but you have to do the reps yourself if you want to build muscle.
Buçinca advises that we think critically about which facets of AI use connect to our identities. “You want to be careful to use these tools in a way that complements you, rather than just offloading work to them,” she says. “Otherwise, you risk losing part of your identity.” Decades of organizational psychology research have found that, at least in the workplace, people are most motivated when they feel they have some autonomy over their tasks, the competence to complete them, and a sense of social connection to their environment.
Staying connected to a sense of purpose is easier said than done. And the evidence we have so far points to another paradox: persistent AI use—especially when introduced too early in one’s cognitive endeavors—can stunt the very metacognitive skills necessary to work well with AI.
While cognitive surrender is an active risk, we can also imagine a more positive vision, says Andy Clark, a professor of cognitive philosophy who has been writing about these topics for decades. Clark distinguishes delegating to AI systems from cooperating with them, and suggests the best case is “mutual amplification,” in which our prompts improve the quality of AI’s output, which improves our prompts, creating a virtuous cycle.
“I strategically delegate all sorts of things to AI all the time,” notes Shaw. “I’m just intentional about it, and I always try to think first and then prompt.” For Shaw, stigma around using AI, whether in work or education, obstructs progress. “We need to accept that AI is here to stay,” he says. “Because if there’s stigma, then you can’t talk about it, you can’t deal with it, and you can’t develop policies.”
To Clark, we have always been natural-born cyborgs, using tools to extend and augment our minds. But now that we have tools that actively participate in our cognitive lives, we seem to be becoming something distinct: collective intelligences. “We have to hone our metacognitive skills, so that they apply to this new and strange domain: it’s not quite a person, but it certainly isn’t a notebook. It’s not really much like anything,” he says, noting that the closest analogies are being part of a long-term couple, a think tank, or a sports team.
“The more we think of ourselves as classically extended minds, the better,” he says, “because then we’ll feel like we have a vested interest, because this stuff is a part of us. It’s not just some place we upload tasks so we don’t have to do them anymore. That is a fundamentally different relationship to tech.”