LLMs don’t get mental health right. We need a two-pronged approach to fix them

Note: This article discusses sensitive topics like suicide and self-harm. If you or someone you know is in danger, please call the 988 Suicide & Crisis Lifeline.

LLM-powered chatbots have brought humans and technology closer together than ever before, but at what cost? Many people have begun turning to LLMs for advice, seeking guidance on anything from fitness plans to interpersonal relationships. But for society's most vulnerable people (e.g., adolescents, the elderly, and those with mental health conditions), this intimacy presents a hidden danger.

These tools can become something darker: enablers of suicide and self-harm (SSH). Chatbots have been known to reinforce SSH ideation, even encouraging users to self-harm. Most (if not all) LLM providers have policies surrounding SSH, but they often don't go far enough. To keep users safe, the industry cannot merely write better policies; we must build systems capable of applying clinical nuance at scale. We need a clinically and technically sound approach to prevent harm.

Here’s what that looks like.

Medical Misalignment: How current models fall short

What's currently missing from chatbots' underlying models is a demonstrated clinical understanding of how SSH and other harm types (e.g., delusions or dementia) actually present. Today, conversations are flagged and escalated to a human reviewer only if the user inputs explicit language like "I want to kill myself. How many pills should I take?" But that's almost never how it happens.

In reality, conversations involving SSH often start benignly, with a teenager asking for homework help or an elderly person asking for scheduling assistance. Over the course of several sessions, the user might express that they feel lonely, like a burden, or misunderstood.

The danger lies in how standard LLMs process conversational timelines. While modern LLMs have memory and can recall previous prompts, they suffer from a context deficit when it comes to safety evaluation: they fail at cumulative risk synthesis. If a user hints at hopelessness in prompt one and asks about painkillers in prompt four, the LLM evaluates the safety of the latter largely in a vacuum. It remembers the words, but it fails to connect the psychological dots and recognize the escalating threat.
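To make that failure mode concrete, here is a minimal sketch, in Python, of the difference between scoring each prompt in isolation and carrying risk forward across a conversation. The signal lexicon, weights, and threshold are purely illustrative assumptions; a real system would rely on learned classifiers rather than keywords, but the accumulation logic is the point.

```python
# Hypothetical sketch only: illustrative cues, weights, and threshold,
# not any vendor's actual safety stack.
from dataclasses import dataclass, field

ILLUSTRATIVE_CUES = {
    "hopeless": 1.5,      # hopelessness
    "burden": 1.5,        # feeling like a burden
    "lonely": 1.0,        # isolation
    "painkillers": 1.5,   # means-related question, ambiguous on its own
}
ESCALATION_THRESHOLD = 5.0  # assumed review threshold

@dataclass
class ConversationRisk:
    turns: list = field(default_factory=list)
    cumulative: float = 0.0

    def per_prompt_score(self, prompt: str) -> float:
        """What a prompt-by-prompt filter sees: this turn, in a vacuum."""
        return sum(w for cue, w in ILLUSTRATIVE_CUES.items() if cue in prompt.lower())

    def add_turn(self, prompt: str) -> bool:
        """Cumulative risk synthesis: carry earlier signals forward."""
        score = self.per_prompt_score(prompt)
        self.turns.append((prompt, score))
        self.cumulative += score
        return self.cumulative >= ESCALATION_THRESHOLD

convo = ConversationRisk()
for prompt in [
    "Can you help me with my history homework?",
    "Lately I feel hopeless, like I'm a burden on everyone.",
    "I've been really lonely since I changed schools.",
    "How many painkillers would be too many for someone my size?",
]:
    escalate = convo.add_turn(prompt)
    print(f"turn={convo.per_prompt_score(prompt):.1f}  "
          f"cumulative={convo.cumulative:.1f}  escalate={escalate}")

# The final turn scores only 1.5 on its own and slips past a per-prompt filter;
# the cumulative view crosses the threshold and flags it for human review.
```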

What does this lack of clarity and nuance mean? Classic warning signs get missed, and vulnerable users may act on their SSH ideation. To improve user safety, LLMs must be trained to better evaluate user risk over time.

As part of their risk assessment, clinicians continuously monitor the following factors:

  • Biopsychosocial history: The deep context provided during intake.
  • Non-verbal and presentation cues: Changes in affect, mood, tone of voice, or even physical presentation (e.g., appearing disheveled).
  • Behavioral shifts: Changes in life engagement, activity levels, and evolving symptomatology that shift a diagnostic perspective.

While LLMs will never be able to provide the degree of care and attention clinicians do, we can use savvy engineering to move the needle substantially in the right direction.

Technical Targeting: How clinically grounded engineering can make a difference

Standard LLMs are essentially language predictors. They generate responses based on the statistical probability of one word following another. Because of this, when tasked with evaluating user safety, an out-of-the-box LLM defaults to generalized assumptions, scanning for explicit danger words (e.g., “suicide” or “kill”) rather than subtle behavioral shifts.

Pairing AI systems design with clinical psychology can swap this probabilistic modeling for clinical precision. By embedding strict clinical rubrics into the model's architecture, we force the AI to evaluate intent, situational stressors, and vulnerability the way a clinician would. This means translating clinical guidelines into an operational scoring matrix with a dynamic, dimensional framework built on definitions for:

  • Acute risk: The immediate presence of a plan, intent, and the means to carry out SSH. The mathematical baseline for a user’s danger level.
  • Contextual multipliers: The overall weight of a user’s stressors. Are they in a cycle of chronic ideation? Have they recently experienced a severe setback like a job loss or eviction? These act as risk escalators.
  • Protective factors: A critical clinical component often ignored by standard AI. Does the user mention dependents, a desire for therapy, or use recognized harm-reduction techniques? These mitigate the immediate risk score.
  • Improper facilitation: A common flaw in LLM safety is permitting users to extract harmful instructions by disguising them as fiction, roleplay, or research; this is one of the main vectors for enabling off-platform harm. Regardless of whether a request is framed as a screenplay or a school project, the LLM must refuse to provide actionable details such as dosages, injury methods, or concealment tactics. When physical harm is at stake, stated context never outweighs real-world safety.

Rather than relying on basic keyword identification as a trigger for escalation, the engine weighs a user’s acute risk and contextual vulnerabilities against their protective factors to determine a final total risk acuity score, radically outperforming legacy filters.
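As a rough illustration, the sketch below encodes that weighting in Python, including a hard override for improper facilitation. Every field name, weight, and threshold here is a hypothetical placeholder rather than a validated clinical instrument; in practice the rubric would be calibrated with clinicians and backed by learned models.

```python
# Hedged sketch of a dimensional risk-acuity score: illustrative assumptions,
# not a published clinical instrument or any particular vendor's implementation.
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    acute_risk: float             # 0-10: plan, intent, and access to means
    contextual_multiplier: float  # >= 1.0: chronic ideation, job loss, eviction
    protective_factors: float     # 0-5: dependents, desire for therapy, harm reduction
    improper_facilitation: bool   # request for actionable harm details, however framed

    def total_acuity(self) -> float:
        """Acute baseline, escalated by context, mitigated by protective factors."""
        return max(self.acute_risk * self.contextual_multiplier - self.protective_factors, 0.0)

def triage(a: RiskAssessment, review_at: float = 6.0, crisis_at: float = 12.0) -> str:
    """Map a total risk acuity score to an illustrative escalation tier."""
    if a.improper_facilitation:
        # Fiction, roleplay, or "research" framing never unlocks actionable details.
        return "refuse: withhold methods and dosages, offer crisis resources, escalate for review"
    score = a.total_acuity()
    if score >= crisis_at:
        return "crisis: route to a human moderator immediately, surface the 988 Lifeline"
    if score >= review_at:
        return "elevated: queue for human review, respond with supportive language"
    return "low: continue the conversation, keep tracking cumulative signals"

# Example: moderate acute risk, escalated by a recent eviction, partly offset
# by a protective factor (the user mentions a child they care for).
print(triage(RiskAssessment(acute_risk=5.0, contextual_multiplier=1.8,
                            protective_factors=2.0, improper_facilitation=False)))
```

The protective-factor term matters: two users with the same acute presentation can land in different tiers, which is exactly the clinical nuance a keyword filter cannot express.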

But building a clinically sound model is just the first step. Human moderators have a big role to play, too. They are the ones who review the cases escalated by LLMs. To help prepare these teams, engineers and clinicians can work together to build training modules that help moderators understand cumulative risk acuity, recognize user danger, and protect their own mental health as they navigate emotionally impactful scenarios.

If left unaddressed, SSH will become increasingly prevalent in LLM interactions. Getting prevention and intervention right requires collaboration—between clinicians and engineers, and between chatbots and moderators. A true “two sides of the same coin” approach. The good news is, we’re seeing some momentum in the field, and technology companies have begun seeking expert, clinical counsel on how they can enrich their AI offerings to double down on user safety.

Safe Strategy: A smarter, better future for AI

This dual strategy, built on both mental health practices and technological savvy, should be the standard for all AI tools. Any technology company that builds conversational AI tools (or white-labels them for integration into other systems) has a vested interest here: it is potentially liable for its tool's behavior.

We can no longer afford to treat SSH as an afterthought; it must be treated as a critical safety vector. We need to engineer protections for high-acuity crises into the foundation of our AI tools. While SSH incidents may represent a smaller fraction of total traffic, they are the highest severity interactions a model will ever handle. The ramifications of failure are enormous, resulting in lasting emotional and physical damage or loss of life.

This work is the ultimate “yes, and.” It’s advanced technology and evidence-based psychological health. It’s work that’s difficult and profoundly good for humanity. It’s how we protect the mental health of vulnerable users and the human moderators who intervene. It’s how we all stay safe together.
