New York lawmakers want AI chatbots to stop pretending to be doctors or lawyers

New York is the latest state to consider a bill that would prohibit AI chatbots from dispensing advice that licensed professionals would normally give, such as medical or legal advice. The bill would also allow people who believe they were harmed by such advice to sue the operator of the chatbot.

Senate Bill S7263, introduced by Democratic state Senator Kristen Gonzalez, passed out of a technology committee on a 6–0 vote last week and now advances to a reading on the floor of the Senate. Interestingly, the bill requires operators to clearly label their chatbots as AI, but stipulates that such a label isn’t enough to shield them from lawsuits under the statute.

The proposal reflects a growing shift in how policymakers are thinking about AI. While early efforts focused mostly on transparency, lawmakers are beginning to explore something arguably more consequential: whether companies should be legally liable when AI systems give advice that causes real-world harm.

The bill applies to chatbots that give advice in the fields of medicine, law, dentistry, veterinary medicine, physical therapy, pharmacy, nursing, podiatry, optometry, engineering, land surveying, geology, architecture, psychology, and social work.

New York isn’t acting alone. Other states have passed or are considering similar laws, though with varying scopes and enforcement methods, and they tend to focus primarily on healthcare:

  • California’s AB 489, enacted in 2025, does something similar but with a narrower scope, targeting AI systems that misrepresent their information as coming from licensed healthcare professionals. But AB 489 relies on state healthcare boards for enforcement and doesn’t provide a private right of action (the ability to sue) as legal recourse.
  • A new Nevada law, AB 406, which went into effect last July, prohibits the advertising and operation of AI systems designed to dispense professional mental and behavioral healthcare therapy. The law also limits how licensed professionals can use AI in their practices.
  • Last August, Illinois passed HB 1806, which prohibits licensed therapists in the state from using AI to make treatment decisions or communicate with clients. The law also prohibits tech companies from advertising or offering AI-powered therapy services in the state without the involvement of a licensed professional.
  • Utah passed a similar law, HB 452, that puts restrictions and disclosure requirements on chatbots that appear to offer an alternative to human mental health therapy or advice. The law went into effect in 2025.

Professional medical groups have also begun weighing in on the risks. The American Medical Association doesn’t call for a ban on AI chatbots dispensing health information, but it worries that consumer advice from LLMs might be false or misleading. “Notably, tools such as ChatGPT have shown a not-uncommon tendency to falsify references cited in response to these queries,” the AMA says in a policy paper, adding that AI tools have demonstrated the ability to generate fraudulent scientific or medical literature to support health advice.

Mental health advice is an especially sensitive area, perhaps because many chatbot users, especially younger ones, treat AI as a counselor or therapist. A 2025 JAMA Network study found that 13% of all respondents used chatbots for mental health advice, rising to 22% among those ages 18 to 21.

AI companies have recognized a large and growing addressable market for AI mental health chatbots, in part because many consumers struggle to find an affordable therapist. For some consumers, the argument goes, accessing mental health advice from a chatbot is better than receiving none at all.

The American Psychological Association argues that chatbots could actually be worse than nothing at all, because of AI models’ tendency to relate to humans in a sycophantic way. The group said in a presentation to the Federal Trade Commission that “A.I. chatbots ‘masquerading’ as therapists, but programmed to reinforce, rather than challenge a user’s thinking, could drive vulnerable people to harm themselves or others.”

Senator Gonzalez will likely have to explain why her S7263 bill targets chatbots as a source of legal or medical information but not traditional search engines. (She declined to comment for this story.) She might cite a 2025 experimental study showing that people “over-trust” AI: the researchers write that AI chatbots often sound more convincing and trustworthy than search engine results, even when they’re wrong, off-topic, or lacking context. The study also found that participants couldn’t reliably distinguish AI responses from those of real doctors, and often rated the AI responses as more trustworthy and complete.

Another study, conducted by Oxford and MLCommons with nearly 1,300 participants, found that using AI models to evaluate and analyze symptom scenarios led to health decisions (“Should I take some aspirin? Should I go to the ER?”) that were no better than decisions based on personal knowledge or a traditional internet search. The study also found that users often don’t know what details the AI needs to generate an answer, and that the AI’s outputs often blend correct and incorrect recommendations.

The AMA says it would like to see the Federal Trade Commission regulate AI chatbots that dispense health information, but believes the agency currently lacks the resources to take on that role. Instead, it calls on the tech companies behind the chatbots to continually review the accuracy of their models and give users an easy way to report when a chatbot outputs inaccurate health information.
