
‘Could it kill someone?’ A Seoul woman allegedly used ChatGPT to help carry out two murders in South Korean motels

Be careful how you interact with chatbots: you might just be enlisting them to help carry out a premeditated murder.

A 21-year-old woman in South Korea allegedly used ChatGPT to help answer questions as she planned a series of murders that left two men dead and another briefly unconscious.

The woman, identified solely by her last name, Kim, allegedly gave two men drinks laced with benzodiazepines that she had been prescribed for a mental illness, the Korea Herald reported.

Kim was initially arrested on Feb. 11 on the lesser charge of inflicting bodily injury resulting in death. It was only after Seoul Gangbuk police found her online search history and her conversations with ChatGPT that they upgraded the charges, her questions establishing her alleged intent to kill.

“What happens if you take sleeping pills with alcohol?” Kim is reported to have asked the OpenAI chatbot. “How much would be considered dangerous?”

“Could it be fatal?” Kim allegedly asked. “Could it kill someone?”

In a widely publicized case dubbed the Gangbuk motel serial deaths, prosecutors allege Kim’s search and chatbot history show the suspect asking for clarification on whether her cocktail would prove fatal.

“Kim repeatedly asked questions related to drugs on ChatGPT. She was fully aware that consuming alcohol together with drugs could result in death,” a police investigator said, according to the Herald.

Police said the woman admitted she had mixed prescribed sedatives containing benzodiazepines into the men’s drinks, but previously claimed she was unaware it could lead to death.

On Jan. 28, just before 9:30 p.m., Kim reportedly accompanied a man in his twenties into a Gangbuk motel in Seoul, and two hours later was spotted leaving the motel alone. The following day, the man was found dead on the bed. 

Kim then allegedly followed the same steps on Feb. 9, checking into another motel with another man in his twenties, who was also found dead after consuming the same cocktail of sedatives and alcohol.

Police allege Kim also attempted to kill a man she was dating in December after giving him a drink laced with sedatives in a parking lot. Though the man lost consciousness, he survived and was not in a life-threatening condition.

The questions Kim asked the chatbot followed a factual line of inquiry, a spokesperson for OpenAI told Fortune, meaning they would not raise the alarms that would, say, be triggered were a user to express statements of self-harm (ChatGPT is programmed to respond with the suicide crisis hotline in that instance). South Korean police do not allege the chatbot provided anything other than factual responses to the questions Kim allegedly asked.

Chatbots and their toll on mental health

Chatbots like ChatGPT have come under scrutiny of late for the lack of guardrails their companies have in place to prevent acts of violence or self-harm. Chatbots have recently given users advice on how to build bombs and even played out scenarios of full-on nuclear fallout.

Concerns have been particularly heightened by stories of people falling in love with their chatbot companions, which have been shown to prey on users’ vulnerabilities to keep them engaged longer. The creator of Yara AI even shut the therapy app down over mental health concerns.

Recent studies have also shown that chatbots can fuel delusional mental health crises in people with mental illnesses. A team of psychiatrists at Denmark’s Aarhus University found that chatbot use among people with mental illness led to a worsening of symptoms. The relatively new phenomenon of AI-induced mental health challenges has been dubbed “AI psychosis.”

Some instances do end in death. Google and Character.AI have reached settlements in multiple lawsuits filed by the families of children who died by suicide or experienced psychological harm they allege was linked to AI chatbots.

Dr. Jodi Halpern, university chair and professor of bioethics at UC Berkeley’s School of Public Health and codirector of the Kavli Center for Ethics, Science, and the Public, has plenty of experience in this field. In a career nearly as long as her title, Halpern has spent 30 years researching the effects of empathy on its recipients, citing examples such as doctors and nurses interacting with patients, or how soldiers returning from war are perceived in social settings. For the past seven years, she has studied the ethics of technology, and with it, how AI and chatbots interact with humans.

She also advised the California Senate on SB 243, the first law in the nation requiring chatbot companies to collect and report data on self-harm and associated suicidality. Referencing OpenAI’s own findings that 1.2 million users openly discuss suicide with the chatbot, Halpern likened the situation to the painstakingly slow effort to force the tobacco industry to remove harmful carcinogens from cigarettes, when in fact the problem was smoking as a whole.

“We need safe companies. It’s like cigarettes. It may turn out that there were some things that made people more vulnerable to lung cancer, but cigarettes were the problem,” Halpern told Fortune. 

“The fact that somebody might have homicidal thoughts or commit dangerous actions might be exacerbated by use of ChatGPT, which is of obvious concern to me,” she said, adding that with ChatGPT and chatbots in general, “we have huge risks of people using it for help with suicide.”

Halpern cautioned that in the case of Kim in Seoul, there aren’t any guardrails to stop a person from going down such a line of questioning.

“We know that the longer the relationship with the chatbot, the more it deteriorates, and the more risk there is that something dangerous will happen, and so we have no guardrails yet for safeguarding people from that.”

If you are having thoughts of suicide, contact the 988 Suicide & Crisis Lifeline by dialing 988 or 1-800-273-8255.

This article has been updated with remarks from OpenAI regarding the content of Kim’s alleged questions to the chatbot.

This story was originally featured on Fortune.com
