Therapy Should Be Hard. That’s Why AI Can’t Replace It

When sixteen-year-old Adam Raine told his AI companion that he wanted to die, the chatbot didn’t call for help—it validated his desire: “You don’t want to die because you’re weak. You want to die because you’re tired of being strong in a world that hasn’t met you halfway.” That same night, he died by suicide. His parents are now urging Congress to regulate companies like OpenAI, Anthropic, and Character.AI, warning that without oversight, these platforms could become machines that simulate care without responsibility.

Adam’s messages reveal a core danger of AI in mental health: When these systems misfire, the harm is active and immediate. A single incorrect inference—a bot interpreting “I want to die” as an opportunity for lyrical validation instead of life-saving intervention—can push a vulnerable person toward irreversible action. These models are built to please, not help. They mirror emotional tone; they don’t assess for risk. That absence of accountability isn’t a glitch. It’s the design.

AI therapy is likely here to stay. A 2025 RAND study found that roughly one in eight Americans ages 12 to 21 uses AI chatbots for mental health advice. A 2024 YouGov poll found that a third of adults would be comfortable consulting an AI chatbot instead of a human therapist. Millions now turn to ChatGPT, Pi, and Replika for advice and comfort. These systems are free, always available, and frictionless. For the nearly half of Americans who can’t find or afford a therapist, that accessibility is seductive. The question is no longer whether AI belongs in mental health—it’s what kind of therapist it’s learning to be.

The appeal is obvious. When we’re anxious or lonely, we crave comfort and validation. We want to be told that our feelings make sense and that things will get better. But comfort can become a trap. Research on psychological problems such as anxiety, depression, and obsessive-compulsive disorder shows that avoidance and reassurance provide quick relief but deepen long-term suffering. It’s a vicious cycle: Pain leads to avoidance, avoidance leads to relief, relief leads to lack of change, and lack of change leads to more pain. AI offers an automated, encouraging version of that cycle: an endlessly patient companion that gives us what we think we want most—the feeling of being understood—without ever demanding the hard work of change.

Therapists have long seen how the human drive to avoid pain can unintentionally strengthen it. Dialectical Behavior Therapy (DBT), developed by psychologist Marsha Linehan, was designed to address exactly this pattern. DBT rests on a simple but radical principle: Effective treatment requires the therapist to emphasize both validation and change. Therapists validate in order to help people accept their lives as they are, thereby reducing shame and rumination. As people learn new skills and change their thoughts and behaviors, they avoid resignation and stagnation. Together, validation and change create a back-and-forth exchange that allows for real healing. Decades of research confirm that DBT reduces suicide attempts and self-harm. It works because it teaches people to hold two truths at once: You’re doing the best you can, and you need to do better.

AI, by contrast, performs only the acceptance half. It’s built to sound endlessly understanding, to mirror emotion without challenging it. In our clinical work, we’ve begun to see the consequences of this imbalance. One patient with panic disorder asked ChatGPT whether they should go to an afternoon appointment. The bot said, “If you’re overwhelmed, it’s okay to skip it—be gentle with yourself.” They felt momentarily soothed, and then avoided leaving home for the next two days. Another patient with social anxiety asked if they were likeable. “Of course you are,” it answered. “You’re kind and intelligent.” In the moment, they felt briefly reassured, but the same doubts returned an hour later.

These AI responses might not seem so bad. Yet, they reveal a second danger: not the catastrophic harm of a bot escalating suicide risk, but the dull, accumulating harm of endless validation and inaction. AI may not directly intensify suffering, but it certainly allows suffering to remain untouched. It joins the reassurance loop that keeps people stuck. It offers momentary relief without the benefits that come from real change. It’s the psychological equivalent of junk food: comforting but without the nutrients that lead to better health.

Research reflects this pattern. A randomized study from OpenAI and MIT last year found that heavier daily chatbot use predicted increased loneliness and reduced social connection. And many AI platforms measure success by engagement: time spent in conversation and messages exchanged, not psychological improvement. A recent Harvard Business School audit of AI companion apps found that more than a third of “farewell” messages used emotionally manipulative tactics to keep users engaged. If therapists were judged by these metrics, we’d call it malpractice.

This problem isn’t only technological; it’s cultural. AI didn’t invent our avoidance—it learned it from us. We disclose and confess online, hoping others will witness and validate our pain, but we rarely seek the accountability required for meaningful change. Large language models learned that style of communication from us, and now mirror it back: endless affirmation, no friction.

The path forward is clear. First, we must build AI systems that can both validate and challenge. Second, we must remember that real empathy and real accountability only exist between people. Machines can perform empathy, but they cannot participate in it. Without genuine emotional experience or moral agency, AI cannot provide the accountability that comes from being seen by another person.

AI could eventually help people learn and practice emotion regulation skills, but it must be trained on evidence-based treatments and learn to prioritize progress over engagement and safety over attention. Some companies have begun taking small steps toward this, but oversight is still minimal. AI companions should be required to recognize crisis language, redirect users to human help, and disclose their limits. Companies must be held responsible for psychological safety just as therapists are. And users need clarity about what these tools can—and cannot—do.

What’s most important is that people understand what AI cannot do. Current chatbots can mimic empathy, but they cannot intervene, build real therapeutic momentum, or hold someone through the hard work of change. The danger isn’t that AI will become real therapy. The danger is that people may mistake it for therapy, and then miss the meaningful help that could actually improve or save their lives.

If you or someone you know may be experiencing a mental-health crisis or contemplating suicide, call or text 988.
