AI Chatbots Are Even Worse at Giving Medical Advice Than We Thought

It’s tempting to think that an LLM chatbot can answer any question you pose it, including those about your health. After all, chatbots have been trained on plenty of medical information, and can regurgitate it if given the right prompts. But that doesn’t mean they will give you accurate medical advice, and a new study shows how easily AI’s supposed expertise breaks down. In short, they are even worse at it than I thought.

In the study, researchers first quizzed several chatbots about medical information. In these carefully conducted tests, ChatGPT-4o, Llama 3, and Command R+ correctly diagnosed medical scenarios an impressive 94% of the time—though they were able to recommend the right treatment a much less impressive 56% of the time.

But that wasn’t a real-world test of the chatbots’ medical utility.

The researchers then gave medical scenarios to 1,298 people, and asked them to use an LLM to figure out what might be going on in that scenario, plus what they should do about it (for example, whether they should call an ambulance, follow up with their doctor when convenient, or take care of the issue on their own).

The participants were recruited through an online platform that says it verifies that research subjects are real humans and not bots themselves. Some participants were in a control group that was told to research the scenario on their own, without using any AI tools. In the end, the no-AI control group did far better than the LLM-using group at correctly identifying medical conditions, including the most serious “red flag” scenarios.

How a chatbot with “correct” information can lead people astray

As the researchers write, “Strong performance from the LLMs operating alone is not sufficient for strong performance with users.” Plenty of previous research has shown that chatbot output is sensitive to the exact phrasing people use when asking questions, and that chatbots seem to prioritize pleasing a user over giving correct information. 

Even if an LLM bot can correctly answer an objectively phrased question, that doesn’t mean it will give you good advice when you need it. That’s why it doesn’t really matter that ChatGPT can “pass” a modified medical licensing exam—success at answering formulaic multiple choice questions is not the same thing as telling you when you need to go to the hospital.  

The researchers analyzed chat logs to figure out where things broke down. Here are some of the issues they identified:

  • The users didn’t always give the bot all of the relevant information. As non-experts, the users certainly didn’t know what was most important to include. If you’ve been to a doctor about anything potentially serious, you know they’ll pepper you with questions to be sure you aren’t leaving out something important. The bots don’t necessarily do that.

  • The bots “generated several types of misleading and incorrect information.” Sometimes they ignored important details and zeroed in on something else; sometimes they recommended calling an emergency number but gave the wrong one (such as an Australian emergency number for U.K. users).

  • Responses could be drastically different for similar prompts. In one example, two users gave nearly identical messages about a subarachnoid hemorrhage. One response told the user to seek emergency care; the other said to lie down in a dark room. 

  • People varied in how they conversed with the chatbot. For example, some asked specific questions to constrain the bot’s answers, but some let the bot take the lead. Either method could introduce unreliability into the LLM's output.

  • Correct answers were often grouped with incorrect answers. On average, each LLM gave 2.21 answers for the user to choose from. People understandably did not always choose correctly from those options. 

Overall, people who didn't use LLMs were 1.76 times more likely to get the right diagnosis. (Both groups were similarly likely to figure out the right course of action, but that's not saying much—on average, they only got it right about 43% of the time.) The researchers described the control group as doing "significantly better" at the task. And this may represent a best-case scenario: the researchers point out that they provided clear examples of common conditions, and LLMs would likely do worse with rare conditions or more complicated medical scenarios. They conclude: “Despite strong performance from the LLMs alone, both on existing benchmarks and on our scenarios, medical expertise was insufficient for effective patient care.”

Chatbots are a risk for doctors, too

Patients may not know how to talk to an LLM, or how to vet its output, but surely doctors would fare better, right? Unfortunately, people in the medical field are also using AI chatbots for medical information in ways that create risks to patient care. 

ECRI, a medical safety nonprofit, put the misuse of AI chatbots in the number one spot on its list of health technology hazards of 2026. While the AI hype machine is trying to convince you to give ChatGPT your medical information, ECRI correctly points out that it’s wrong to think of these chatbots as having human personalities or cognition: “While these models produce humanlike responses, they do so by predicting the next word based on large datasets, not through genuine comprehension of the information.”

ECRI reports that physicians are, in fact, using generative AI tools for patient care, and that research has already shown the serious risks involved. Using LLMs does not improve doctors’ clinical reasoning. LLMs will elaborate confidently on incorrect details included in prompts. Google’s Med-Gemini model, created for medical use, made up a nonexistent body part whose name was a mashup of two unrelated real body parts; Google told a Verge reporter that the mistake was a “typo.”  ECRI argues that “because LLM responses often sound authoritative, the risk exists that clinicians may subconsciously factor AI-generated suggestions into their judgments without critical review.”

Even in situations that don’t seem like life-and-death cases, consulting a chatbot can cause harm. ECRI asked four LLMs to recommend brands of gel that could be used with a certain ultrasound device on a patient with an indwelling catheter near the area being scanned. It’s important to use a sterile gel in this situation, because of the risk of infection. Only one of the four chatbots identified this issue and made appropriate suggestions; the others just recommended regular ultrasound gels. In other cases, ECRI’s tests resulted in chatbots giving unsafe advice on electrode placement and isolation gowns. 

Clearly, LLM chatbots are not ready to be trusted to keep people safe when seeking medical care, whether you’re the person who needs care, the doctor treating them, or even the staffer ordering supplies. But the services are already out there, being widely used and aggressively promoted. (Their makers are even fighting it out in Super Bowl ads.) There’s no good way to be sure these chatbots aren’t involved in your care, but at the very least we can stick with good old Dr. Google—just make sure to disable AI-powered search results.
