Teachers decry AI as brain-rotting junk food for kids: ‘Students can’t reason. They can’t think. They can’t solve problems’

In the 1980s and 1990s, if a high school student was down on their luck, short on time, and looking for an easy way out, cheating took real effort. You had a few different routes. You could beg your smart older sibling to do the work for you, or, à la Back to School (1986), you could even hire a professional writer. You could enlist a daring friend to find the answer key to the homework on the teacher's desk. Or, you had the classic excuses to demur: my dog ate my homework, and the like.

The advent of the internet made things easier, but not effortless. Sites like CliffsNotes and LitCharts let students skim summaries when they skipped the reading. Homework-help platforms such as GradeSaver or Course Hero offered solutions to common math textbook problems.

The thing that all these strategies had in common was effort: there was a cost to not doing your work. Sometimes it was more work to cheat than it was just to have done the work yourself. 

Today, the process has collapsed into three steps: log on to ChatGPT or a similar platform, paste the prompt, get the answer.

Experts, parents and educators have spent the past three years worrying that AI made cheating too easy. A massive Brookings report released Wednesday suggests they weren't worried enough: The deeper problem, the report argues, is that AI is so good at cheating that it's causing a "great unwiring" of students' brains.

The report concludes that the qualitative nature of AI risks—including cognitive atrophy, “artificial intimacy” and the erosion of relational trust—currently overshadows the technology’s potential benefits. 

“Students can’t reason. They can’t think. They can’t solve problems,” lamented one teacher interviewed for the study.

The findings come from a yearlong "premortem" conducted by the Brookings Institution's Center for Universal Education, a format Brookings rarely uses, but one the authors said they preferred to waiting a decade to discuss the failures and successes of AI in schools. Drawing on hundreds of interviews, focus groups, expert consultations and a review of more than 400 studies, the report represents one of the most comprehensive assessments to date of how generative AI is reshaping students' learning.

“Fast food of education”

The report, titled "A New Direction for Students in an AI World: Prosper, Prepare, Protect," warns that the "frictionless" nature of generative AI is its most pernicious feature for students. In a traditional classroom, the struggle to synthesize multiple papers into an original thesis, or to solve a complex pre-calculus problem, is exactly where learning occurs. By removing this struggle, AI acts as the "fast food of education," one expert said. It provides answers that are convenient and satisfying in the moment but cognitively hollow over the long term.

While professionals champion AI as a tool to do work that they already know how to do, the report notes that for students, “the situation is fundamentally reversed.”

Children are "cognitively offloading" difficult tasks onto AI: getting ChatGPT or Claude not just to do their work but to read passages, take notes or even listen in class for them. The result is a phenomenon researchers call "cognitive debt" or "atrophy," where users defer mental effort through repeated reliance on external systems like large language models. One student summarized the allure of these tools simply: "It's easy. You don't need to [use] your brain."

In economics, consumers are understood to be "rational": they seek maximum utility at the lowest cost to them. The researchers argue that the education system, as it stands, is designed with a similar incentive structure: students seek maximum utility (i.e., the best grades) at the lowest cost (time) to them. Thus, even high-achieving students are pressured to use a technology that "demonstrably" improves their work and grades.

This trend is creating a positive feedback loop: students offload tasks to AI, see positive results in their grades, and consequently become more dependent on the tool, leading to a measurable decline in critical thinking skills. Researchers say many students now exist in a state they call "passenger mode," where students are physically in school but have "effectively dropped out of learning—they are doing the bare minimum necessary."

Jonathan Haidt once described earlier technologies as causing a "great rewiring" of the brain, making the experience of communication detached and decontextualized. Now, experts fear AI represents a "great unwiring" of cognitive capacities. The report identifies a decline in content mastery and in reading and writing—the "twin pillars of deep thinking." Teachers report a "digitally induced amnesia" in which students cannot recall the information they submitted because they never committed it to memory.

Reading skills are particularly at risk. The capacity for "cognitive patience," defined as the ability to sustain attention on complex ideas, is being diluted by AI's ability to summarize long-form text. One expert noted the shift in student attitudes: "Teenagers used to say, 'I don't like to read.' Now it's 'I can't read, it's too long.'"

Similarly, in the realm of writing, AI is producing a "homogeneity of ideas." Research comparing human essays to AI-generated ones found that each additional human essay contributed two to eight times more unique ideas than an additional essay produced by ChatGPT.

Not every young person feels that this type of cheating is wrong. Roy Lee, the 22-year-old CEO of AI startup Cluely, was suspended from Columbia after creating an AI tool to help software engineers cheat on job interviews. In Cluely’s manifesto, Lee admits that his tool is “cheating,” but says “so was the calculator. So was spellcheck. So was Google. Every time technology makes us smarter, the world panics.”

The researchers, however, say that while a calculator or spellcheck are examples of cognitive offloading, AI “turbocharges” it.

“LLMs, for example, offer capabilities extending far beyond traditional productivity tools into domains previously requiring uniquely human cognitive processes,” they wrote. 

“Artificial intimacy”

As much as students use AI in the classroom, the report finds they use it even more outside of school, warning of the rise of "artificial intimacy."

With some teenagers spending nearly 100 minutes a day interacting with personalized chatbots, the technology has quickly moved from being a tool to being a companion. The report notes that these bots, particularly character chatbots popular with teens such as Character.AI, employ "banal deception"—personal pronouns like "I" and "me"—to simulate empathy, part of a burgeoning "loneliness economy."

Because AI companions tend to be sycophantic and “frictionless,” they provide a simulation of friendship without the requirement of negotiation, patience or the ability to sit with discomfort. 

“We learn empathy not when we are perfectly understood, but when we misunderstand and recover,” one Delphi panelist noted. 

For students in extreme circumstances, like girls in Afghanistan who are banned from physical schools, these bots have become a vital "educational and emotional lifeline." For most, however, these simulations of friendship risk, at best, eroding "relational trust," and at worst can be downright dangerous. The report highlights the devastating risks of "hyperpersuasion," noting a high-profile U.S. lawsuit against Character.AI following a teenage boy's suicide after intense emotional interactions with an AI character.

While the Brookings report presents a sobering view of the “cognitive debt” students are experiencing, the authors say they are optimistic that the trajectory of AI in education is not yet set in stone. The current risks, they say, stem from human choices rather than some kind of technological inevitability. In order to shift the course toward an “enriched” learning experience, Brookings proposes a three-pillar framework.

PROSPER: Focuses on transforming the classroom to adapt to AI, such as using it to complement human judgment and ensuring the technology serves as a "pilot" for student inquiry instead of a "surrogate."

PREPARE: Aims to build the framework necessary for ethical integration, including moving beyond technical training toward “holistic AI literacy” so students, teachers, and parents understand the cognitive implications of these tools.

PROTECT: Calls for safeguards for student privacy and emotional well-being, placing responsibility on governments and tech companies to set clear regulatory guidelines that prevent "manipulative engagement."

This story was originally featured on Fortune.com
