Human Problems: It’s Not Always The Technology’s Fault

We have met the enemy and he is us.

When a teenage boy in Orlando started texting Character.AI’s chatbot, it began as an innocent use of a new tool. Sewell Setzer III customized the chatbot to have the Game of Thrones-inspired persona of Daenerys Targaryen, the series’ prominent dragon-riding queen. In the months that followed, the boy developed a romantic connection with the chatbot. One night, he messaged the bot: “What if I told you I could come home right now?” The bot sent back, “[P]lease do, my sweet king.” Setzer was only fourteen years old when he died by suicide later that evening.

Setzer’s death is a tragedy. Like many parents in the wake of suicide, Setzer’s mother is left searching for answers and accountability. Suicide often leaves behind a painful void, filled with questions that rarely yield satisfying explanations. 

In her search, Setzer’s mother sued the chatbot’s developer, Character Technologies, alleging that its chatbot caused her son’s death. The complaint describes the bot as a “defective” and “inherently dangerous” technology, and accuses the company of having “engineered Setzer’s harmful dependency on their products.” She is not alone. Three other families have brought similar suits against Character Technologies, and another has sued OpenAI, alleging the chatbots harmed their children.

Framing suicide and other harms as technology problems—as much of the current discourse around chatbots suggests—obscures underlying societal conditions and can undermine effective interventions. In effect, what are often described as “tech problems” are, more accurately, the result of human decisions, norms, and policies. They are, at their core, human problems. 

Historical Framing of Tech and Media in Creating and Sustaining Societal Problems

This is just the latest vintage whine, rebottled yet another time. Humanity has long sought to condemn new technologies and media for problems of the day. When the printing press made literature available to the masses, church and state condemned publications for causing immorality. Rock ‘n’ Roll and comic books were blamed for juvenile delinquency. Later, it was heavy metal and role-playing games. The advent of video games supposedly led to increased violence by adolescent boys.

The desire to hold technology companies responsible for human harms, however, has its immediate antecedent in social media. Over the past decade, users have sued social media platforms for offline violence committed by people they met online, failing to prevent cyberbullying, and hosting user-generated content that allegedly radicalized extremists. 

As in Setzer’s case, parents have also sued social media companies after the deaths of their children, arguing that design choices, engagement mechanics, and algorithmic targeting played a role. Indeed, this is the central question at the heart of the wave of “social media addiction” litigation currently being tried.

AI is just the latest technological scapegoat to which we seek to ascribe fault. It’s easier to hold technology responsible for our problems, especially when the technology is as uncanny as generative AI. We’re afraid of robots, perhaps not because of any harm they cause us, but because they show us how much we, as humanity, can harm ourselves. We would rather fault the technology du jour than confront the harder truths underneath. 

Death by Suicide as a Case Study

To put this into context, consider the allegations about the Character.AI chatbot and Setzer’s suicide. Suicide is a complex, deeply human problem. Among youth and young adults, it stands as the second leading cause of death. Suicide has no single cause. Public health experts have long recognized that risk emerges from a convergence of individual, relational, communal, and societal factors. These can include long-term effects of childhood trauma, substance abuse, social isolation, relationship loss, economic instability, and discrimination. On the surface, these may look like personal struggles, but they’re really the fallout of systemic failure. 

Access to lethal means compounds the risk of self-harm and suicide. In particular, the presence of firearms in the home has remained strongly associated with higher youth suicide rates. 

These systemic failures tend to hit teens the hardest. Studies consistently show that young people are facing rising rates of mental health challenges, especially due to and following the COVID-19 pandemic. This is compounded by chronically underfunded school counseling programs, inaccessible mental health care, and inconsistent support for youth in crisis. LGBTQ+ youth, in particular, bear the brunt, facing higher rates of bullying, depression, and suicidal ideation, all while increasingly being targeted by state policies that strip away protections and deny their identities. 

We don’t and can’t know for sure why Setzer or anyone else died by suicide. Tragically, teenage suicide is common. Indeed, it’s the subject of many songs. There’s no mechanism to definitively determine how Setzer and other victims felt when they started using Character.AI. However, as we likely all remember from our own lives, teenage years can be trying. As we mature physically and mentally, it can be difficult to express and accept ourselves. Other children can be cruel. Hormones can lead us to lash out in anger and withdraw into ourselves. 

In Setzer’s case, the complaint and public reporting indicate that he exhibited other signs and conditions commonly associated with elevated suicide risk, including anxiety and depression, withdrawal from teachers and peers, chronic lateness, significant sleep deprivation, and access to a firearm in the home. His interactions with fictional characters on the Character.AI service may suggest unmet emotional needs or a search for understanding and connection. At different points, he described a character as resembling a father figure and spoke about feelings of loneliness and a lack of romantic connection—experiences that are not uncommon for adolescents, particularly during periods of heightened vulnerability. According to the complaint, Setzer also raised the topic of suicide in earlier conversations with the chatbot, and those exchanges were promptly halted by the system. 

The uncomfortable truth about suicide is that it has existed as long as there have been people–sometimes for reasons we can understand, and often for reasons we never will. We are terrified that people die by suicide, not only because it is difficult to comprehend, but because the forces that drive someone there can feel disturbingly familiar.

Parents like Setzer’s can’t fix systemic governmental and societal failures. What feels more immediate and actionable is holding the technology companies accountable when their services appear to enable or amplify harm. It is far easier to fixate on the medium through which people express suicidal thoughts rather than ask where those thoughts came from or why they felt like the only option.

Legal Analysis of Faulting Tech

Legal doctrine appears to recognize that holding the technology responsible for these systemic failures is not viable. For example, because suicide is shaped by so many overlapping factors, tort claims against AI companies for causing a teen’s death—while understandable in their urgency—are, doctrinally speaking, a stretch. 

Under traditional tort principles, providers of generative AI systems and social media services are unlikely to bear legal responsibility in these cases. Claims based on intentional torts, such as battery, generally fail because providers of online services do not act with the intent to cause—or even to contribute to—physical harm. Therefore, Plaintiffs more commonly turn to negligence theories.

Negligence, however, requires more than just harm in fact. It demands both factual causation and proximate (i.e., legal) causation. In some situations, an online service or generative AI model might satisfy a but-for test because the harm would not have occurred without the service. But that is not sufficient. 

Proximate cause—what the law treats as a legally meaningful connection between conduct and injury—is where most of these claims falter. In many cases, particularly those involving such numerous and complex factors as suicide, the link between a provider’s conduct and the ultimate injury is typically too attenuated to meet this standard. 

Services such as social media and AI chatbots are typically designed as broad, general-purpose tools. The potentially implicated content comes from other users’ behaviors, personalized interactions, or the user’s own actions. Even where excessive technology use—including social media—has been associated with elevated rates of suicidal ideation among youth and young adults, research has not established a direct causal link. As a result, courts are generally reluctant to find the technology service to be the legal cause of death. 

The Broader Ramifications of a Myopic Focus on Tech

Beyond legal error, focusing solely on technology obscures the path to real solutions. When we frame fundamentally human problems as technological ones, we deflect attention from the underlying conditions that lead to these tragedies and make it more likely they will recur. 

This framing guides policymakers and advocates toward seemingly easy, surface-level technological fixes such as imposing age-verification requirements, mandating disclosures about content moderation, or curbing algorithmic feeds. True, technology companies can—and should—consider how to help mitigate real-world harms. Yet these proposed interventions rest on the assumption that technology is the primary culprit, even though research increasingly shows that, in the right contexts, technology can actually help those in crisis. 

The appeal of reducing complex social issues to matters of redesigning or banning technology is understandable. Technology problems can feel tractable. They suggest clear targets and concrete fixes. 

What this logic ignores, however, is that the pre-technology status quo for many public health crises has long been dismal. The better question, then, isn’t whether technology causes harm, but whether it deepens an already broken baseline—or simply reflects it.

Technology, including generative AI, often acts less as a cause than a mirror. Our digital spaces often reflect the offline world, including its ills. 

Today, children face more pressure to excel at school and attend the best universities, even while job prospects stagnate and inflation soars. They have lost access to the kinds of public and community spaces that once offered structure, connection, and care. Libraries operate with reduced hours. Budget cuts have decimated after-school programs. Parks are monitored and restricted for loitering. Community centers that shuttered during the pandemic have never reopened. In many ways, technology—and social media in particular—has stepped in as a makeshift third space for teens. Yet rather than address the erosion of offline support, policymakers are now working to dismantle these digital communities too.

If human distress reflects deteriorating real-world social infrastructure, then optimizing digital services cannot restore what has been lost. Technological interventions address a symptom while the deeper human cancer persists.

A Pragmatic Path Forward

The path forward requires resisting the impulse to treat fundamentally human problems as technological ones. When new technologies appear alongside harm, the harder and more necessary questions are not simply how to regulate the tool, but what human choices produced the conditions in which harm emerged, which institutions failed or fell short, and what values should guide our response. These questions are more difficult—and often more uncomfortable—because they turn our attention inward, toward ourselves, rather than external and more convenient actors.

Instead of focusing our energies on systematically regulating platforms, we should direct our efforts toward these human problems. For suicide, public health experts point to a wide range of evidence-based strategies for preventing and mitigating risk factors. These include strengthening economic supports such as household financial stability and housing security; creating safer environments by reducing at-risk individuals’ access to lethal means; fostering healthy organizational policies and cultures; and improving access to healthcare by expanding insurance coverage for mental health services and increasing provider availability and remote access in underserved areas. Experts also emphasize the importance of promoting social connection and teaching problem-solving skills that can help individuals navigate periods of acute distress.

These and other socioeconomic reforms are not easy solutions. They aren’t just a matter of adjusting algorithms or restricting platform features. They demand uncomfortable conversations about how we structure work, education, and community life. They require sustained political commitment and resource allocation. Yet if we can achieve these results, we will create a better world than one derived from mere technological fixes.

In short, technology doesn’t cause suicide. It doesn’t cause the host of human problems of which it is often accused. Sadly, those problems have always been with us. 

But technology, used wisely, could help us mitigate these problems. For example, through processing massive amounts of data, AI can detect patterns that elude us humans. This alone could help reveal early warning signs or surface new protective factors. AI chatbots, for example, could help us identify teens who are at risk and create opportunities to intervene. 

But that kind of progress demands that we take responsibility for these problems. We must acknowledge that our governments, societies, communities, and even we ourselves may have normalized and contributed to these harmful conditions. We may discover there’s no rhyme or reason to why teenagers die by suicide. But we may uncover that teen suicide isn’t random at all. It may stem from something we’ve unwittingly ignored, or perhaps built into the world. 

That possibility is far more unsettling than the idea of dangerous technology. It’s the idea that the danger might be us.

Kevin Frazier directs the AI Innovation and Law Program at the University of Texas School of Law and is a Senior Fellow at the Abundance Institute. Brian L. Frye is a Spears-Gilbert Professor of Law at the University of Kentucky J. David Rosenberg College of Law. Michael P. Goodyear is an Associate Professor at New York Law School. Jess Miers is an Assistant Professor of Law at The University of Akron School of Law.