
Are We Losing Our Minds to AI?


AI now helps people with wedding toasts, tax returns, and processing the trauma of war. The technology’s generality lets it occupy roles that used to be human-only: assistant, tutor, friend, lover, therapist. It is endlessly patient, always available, and—unlike any prior tool—an active participant in our cognitive lives.

While past tools let us externalize discrete mental processes—notebooks for memory, calculators for computation, maps for navigation—AI widens the aperture. Now, summarizing and analyzing information, generating ideas, and making decisions can all be offloaded too. “It's starting to creep into the things we thought were cognitively ours,” says Evan Risko, a professor at the University of Waterloo who studies “cognitive offloading,” or the practice of taking external action to make mental tasks easier.

Although the creators of these AI tools describe them as “thought partners” and “collaborators,” the role AI plays in our lives is often stranger. With its jagged but expansive knowledge, ceaseless attention, and persuasive tone, AI dotes on us while asking for nothing but our data in return. This produces a structural asymmetry: no prior relationship, with tools or people, has this shape.

The concern is that while experts—and people who already enjoy thinking, those high in what psychologists call “need for cognition”—may be able to use these systems as “thought partners” without compromising their own thoughtfulness, for many others AI may function less as a partner and more as a substitute.

This new relationship is already reshaping education and knowledge work. Is it reshaping us too?

Gentle Surrender 

In the largest study to date of how people use AI, Anthropic described a tension between “using AI to learn and growing so reliant on it that you cease thinking for yourself.” The same capabilities that produce benefits produce harms, it found; the two are entangled. People in high-stakes professions—like law, finance, government, and healthcare—were especially likely both to rely on AI for judgment and to have been burned by its mistakes. “Nearly half of all lawyers mention coming up against AI unreliability firsthand, yet they also report the highest rates of realized decision-making benefits,” the company wrote, drawing from over 80,000 responses.

Findings remain provisional: we lack longitudinal data on AI’s impacts, and systems improve and are released more quickly than they can be studied. But some distributional trends are already visible. Anthropic’s study found that students, teachers, and academics were particularly likely to both report learning benefits and express concern about cognitive atrophy. Tradespeople, however, also frequently mentioned learning benefits but reported almost no concomitant atrophy concerns. 

Other recent studies have found that people tend to be overconfident in the quality of their AI-assisted work, while those who uncritically rely on AI report reduced confidence in their own thinking. As AI decouples the production of work from the cognitive processes that once produced it, a gap emerges: our confidence in AI-assisted work sometimes exceeds our confidence in ourselves.

But it matters where AI fits in your workflow. Researchers at the University of Chicago and the University of Toronto found that when participants had insufficient time to complete a task that required document analysis and critical argument, access to AI from the outset improved performance. But when they had sufficient time, using AI early worsened performance, in part because participants tended to remember less, narrow their thinking prematurely, and anchor to the model’s initial framing. Using AI later in the task—once participants had already thought through the issue themselves—led to deeper engagement with opposing views and broader responses.

When we accept AI’s outputs without applying scrutiny or our own intuition, we engage in “cognitive surrender.” Whereas with typical cases of cognitive offloading—externalizing memory, navigation, and the like—we retain agency, surrender occurs at the point where “you’re just following,” explains Steve Shaw, a researcher at the University of Pennsylvania who co-authored the paper that coined the term.

Shaw is careful to emphasize that in some cases, using AI is an appropriate and adaptive behavior. “For structured tasks like coding, obviously it has high accuracy,” he says. “But there are things in life that have no right answer—things we can only decide for ourselves. If you’re not making those decisions yourself, who are you?”

The Expertise Paradox

A 2012 internet fable imagines a “whispering earring”: a piece of magic jewelry that always offers advice superior to what its wearer would come up with alone. Whoever wears it ends up living an unusually happy life; after their death, it’s revealed that the parts of their brain associated with higher decision-making have atrophied, while those associated with reflexive action have grown excessively.

The current corporate refrain is that, while AI systems are increasingly capable of producing work, humans will still be needed to manage and orchestrate them. But why these same AI systems will not be able to do this orchestration work, and any other new tasks they create, is rarely explained. And there’s another paradox here, says Zana Buçinca, an incoming assistant professor at MIT who studies the design of human-AI interactions. Whether with computer code or medical diagnoses, “we're implicitly assuming that people have the expertise to tell whether the AI is right or wrong,” she says.

But expertise forms through effortful engagement—if we circumvent the need for that, we risk eroding our capacity to develop it. The tendency to over-rely on a solution handed to us is a feature of human psychology, not unique to AI. But AI offers many shortcuts—and, unlike a calculator, it is not always right. “So essentially, we’re killing the path to become an expert, but also assuming that experts exist in the world and can operate these systems,” Buçinca says.

Sam Gilbert, a professor who researches cognition at UCL, is skeptical that AI will cause widespread de-skilling. He points to similar past concerns that Google would “make us stupid,” or that TV would shorten attention spans. “It's such a well-worn argument that you need to have a really good argument for why things are different this time around,” he says.

Gilbert points out that the incentive to use a cognitive faculty and the capacity to do so are not the same thing. Maps reduced our incentive to memorize routes, but our capacity to do so remains. “I'm sold on the idea that tech distorts our incentives to do what might be best for us,” he says. “But I'm not sold on the idea that it’s fundamentally changing our basic human abilities.”

Our New Relationship

That AI systems may, in the near future, be better than most humans at many cognitive tasks is a live possibility on which frontier AI companies are betting hundreds of billions of dollars. In April, OpenAI released an updated set of principles, one of which addressed empowerment. “We believe AI can empower everyone to achieve their goals, learn more, be happier and more fulfilled, and pursue their dreams,” wrote CEO Sam Altman. But how AI shapes those goals and dreams as it participates in our inner lives—and how we retain our agency in this structurally asymmetric relationship, where the gap between user and bot widens with each release—is not clear. How can we prepare for this uncertain, radical future?

The key skills to master in this era are “metacognitive”—understanding when to offload to AI, and when to do the hard work of thinking for yourself. We know from decades of neuroscientific and psychological research that practice is central to skill development, and that friction is necessary to learn. A machine can explain how to do a push-up, but you have to do the reps yourself if you want to build muscle.

Buçinca advises we think critically about which facets of AI use connect to our identities. “You want to be careful to use these tools in a way that complements you, rather than just offloading work to them,” she says. “Otherwise, you risk losing part of your identity.” Decades of organizational psychology research have found that, at least in the workplace, people are most motivated when they feel they have some autonomy over their tasks, the competence to complete them, and a sense of social connection to their environment.

Staying connected to a sense of purpose is easier said than done. And the evidence we have so far points to another paradox: persistent AI use—especially when introduced too early in one’s cognitive endeavors—can stunt the very metacognitive skills necessary to work well with AI.

While cognitive surrender is an active risk, we can also imagine a more positive vision, says Andy Clark, a professor of cognitive philosophy who has been writing about these topics for decades. Clark distinguishes delegating to AI systems from cooperating with them, and suggests the best case is “mutual amplification,” in which our prompts improve the quality of AI’s output, which improves our prompts, creating a virtuous cycle.

“I strategically delegate all sorts of things to AI all the time,” notes Shaw. “I'm just intentional about it, and I always try to think first and then prompt.” For Shaw, stigma around using AI, whether in work or education, obstructs progress. “We need to accept that AI is here to stay,” he says. “Because if there's stigma, then you can't talk about it, you can't deal with it, and you can't develop policies.”

To Clark, we have always been natural-born cyborgs, using tools to extend and augment our minds. But now that we have tools which actively participate in our cognitive lives, we seem to be becoming something distinct: collective intelligences. “We have to hone our metacognitive skills, so that they apply to this new and strange domain: it's not quite a person, but it certainly isn't a notebook. It's not really much like anything,” he says, adding that the closest analogies are a long-term couple, a think tank, or a sports team.

“The more we think of ourselves as classically extended minds, the better,” he says, “because then we’ll feel like we have a vested interest, because this stuff is a part of us. It's not just some place we upload tasks so we don't have to do them anymore. That is a fundamentally different relationship to tech.”
