
Why the greatest risk of AI in higher education is the erosion of learning

Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Can instructors tell? Should universities ban the tech? Embrace it?

These concerns are understandable. But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom.

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag “at-risk” students, optimize course scheduling, or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarize and study, instructors use them to build assignments and syllabuses, and researchers use them to write code, scan literature, and compress hours of tedious work into minutes.

People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labor of research and learning, what happens to higher education? What purpose does the university serve?

Over the past eight years, we’ve been studying the moral implications of pervasive engagement with AI as part of a joint research project between the Applied Ethics Center at UMass Boston and the Institute for Ethics and Emerging Technologies. In a recent white paper, we argue that as AI systems become more autonomous, the ethical stakes of AI use in higher ed rise, as do its potential consequences.

As these technologies become better at producing knowledge work—designing classes, writing papers, suggesting experiments, and summarizing difficult texts—they don’t just make universities more productive. They risk hollowing out the ecosystem of learning and mentorship on which these institutions are built and depend.

Nonautonomous AI

Consider three kinds of AI systems and their respective impacts on university life:

AI-powered software is already being used throughout higher education in admissions review, purchasing, academic advising, and institutional risk assessment. These are considered “nonautonomous” systems because they automate tasks, but a person is “in the loop” and using these systems as tools.

These technologies can pose a risk to students’ privacy and data security. They also can be biased. And they often lack sufficient transparency to determine the sources of these problems. Who has access to student data? How are “risk scores” generated? How do we prevent systems from reproducing inequities or treating certain students as problems to be managed?

These questions are serious, but they are not conceptually new, at least within the field of computer science. Universities typically have compliance offices, institutional review boards, and governance mechanisms that are designed to help address or mitigate these risks, even if they sometimes fall short of these objectives.

Hybrid AI

Hybrid systems encompass a range of tools, including AI-assisted tutoring chatbots, personalized feedback tools, and automated writing support. They often rely on generative AI technologies, especially large language models. While human users set the overall goals, the intermediate steps the system takes to meet them are often not specified.

Hybrid systems are increasingly shaping day-to-day academic work. Students use them as writing companions, tutors, brainstorming partners, and on-demand explainers. Faculty use them to generate rubrics, draft lectures, and design syllabuses. Researchers use them to summarize papers, comment on drafts, design experiments, and generate code.

This is where the “cheating” conversation belongs. With students and faculty alike increasingly leaning on technology for help, it is reasonable to wonder what kinds of learning might get lost along the way. But hybrid systems also raise more complex ethical questions.

One has to do with transparency. AI chatbots offer natural-language interfaces that make it hard to tell whether you are interacting with a human or with an automated agent, which can be alienating and distracting. A student reviewing material for a test should be able to tell if they are talking with their teaching assistant or with a robot. A student reading feedback on a term paper needs to know whether it was written by their instructor. Anything less than complete transparency in such cases shifts the focus of academic interactions from learning to the technology of learning. University of Pittsburgh researchers have found that these dynamics produce feelings of uncertainty, anxiety, and distrust in students.

A second ethical question relates to accountability and intellectual credit. If an instructor uses AI to draft an assignment and a student uses AI to draft a response, who is doing the evaluating, and what exactly is being evaluated? If feedback is partly machine-generated, who is responsible when it misleads, discourages, or embeds hidden assumptions? And when AI contributes substantially to research synthesis or writing, universities will need clearer norms around authorship and responsibility—not only for students, but also for faculty.

Finally, there is the critical question of cognitive offloading. AI can reduce drudgery, and that’s not inherently bad. But it can also shift users away from the parts of learning that build competence, such as generating ideas, struggling through confusion, revising a clumsy draft, and learning to spot one’s own mistakes.

Autonomous agents

The most consequential changes may come with systems that look less like assistants and more like agents. While truly autonomous technologies remain aspirational, the dream of a researcher “in a box”—an agentic AI system that can perform studies on its own—is becoming increasingly realistic.

Agentic tools are anticipated to “free up time” for work that focuses on more human capacities like empathy and problem-solving. In teaching, this may mean that faculty still teach in the headline sense while more of the day-to-day labor of instruction is handed off to systems optimized for efficiency and scale. Similarly, in research, the trajectory points toward systems that can increasingly automate the research cycle. In some domains, that already looks like robotic laboratories that run continuously, automate large portions of experimentation, and even select new tests based on prior results.

At first glance, this may sound like a welcome boost to productivity. But universities are not information factories; they are systems of practice. They rely on a pipeline of graduate students and early-career academics who learn to teach and research by participating in that same work. If autonomous agents absorb more of the “routine” responsibilities that historically served as on-ramps into academic life, the university may keep producing courses and publications while quietly thinning the opportunity structures that sustain expertise over time.

The same dynamic applies to undergraduates, albeit in a different register. When AI systems can supply explanations, drafts, solutions, and study plans on demand, the temptation is to offload the most challenging parts of learning. To the industry pushing AI into universities, this kind of work may look “inefficient,” something students are better off letting a machine handle. But it is precisely that struggle that builds durable understanding. Cognitive psychology has shown that students grow intellectually by drafting, failing, trying again, grappling with confusion, and revising weak arguments. This is the work of learning how to learn.

Taken together, these developments suggest that the greatest risk posed by automation in higher education is not simply the replacement of particular tasks by machines, but the erosion of the broader ecosystem of practice that has long sustained teaching, research, and learning.

An uncomfortable inflection point

So what purpose do universities serve in a world in which knowledge work is increasingly automated?

One possible answer treats the university primarily as an engine for producing credentials and knowledge. There, the core question is output: Are students graduating with degrees? Are papers and discoveries being generated? If autonomous systems can deliver those outputs more efficiently, then the institution has every reason to adopt them.

But another answer treats the university as something more than an output machine, acknowledging that the value of higher education lies partly in the ecosystem itself. This model assigns intrinsic value to the pipeline of opportunities through which novices become experts, the mentorship structures through which judgment and responsibility are cultivated, and the educational design that encourages productive struggle rather than optimizing it away. Here, what matters is not only whether knowledge and degrees are produced, but how they are produced and what kinds of people, capacities, and communities are formed in the process. In this version, the university is meant to serve as no less than an ecosystem that reliably forms human expertise and judgment.

In a world where knowledge work itself is increasingly automated, we think universities must ask what higher education owes its students, its early-career scholars, and the society it serves. The answers will determine not only how AI is adopted, but also what the modern university becomes.


Nir Eisikovits is a professor of philosophy and the director of the Applied Ethics Center at UMass Boston.

Jacob Burley is a junior research fellow at the Applied Ethics Center at UMass Boston.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

