What Doctors Really Think of ChatGPT Health and A.I. Medical Advice
Each week, more than 230 million people globally ask ChatGPT questions about health and wellness, according to OpenAI. Seeing vast, untapped demand, OpenAI earlier this month launched ChatGPT Health and made a swift $60 million acquisition of the health care tech startup Torch to turbocharge the effort. Anthropic soon followed suit, announcing Claude for Healthcare last week. The move from general-purpose chatbot to health care advisor is well underway.
For a world rife with health care inequities—whether skyrocketing insurance costs in the U.S. or care deserts in remote regions around the globe—democratized information and advice about one’s health is, at least in theory, a positive development. But the intricacies of how large A.I. companies operate raise questions that health tech experts are eager to interrogate.
“What I am worried about as a clinician is that there is still a high level of hallucinations and erroneous information that sometimes makes it out of these general-purpose LLMs to the end user,” said Saurabh Gombar, a clinical instructor at Stanford Health Care and the chief medical officer and co-founder of Atropos Health, an A.I. clinical decision support platform.
“It’s one thing if you’re asking for a spaghetti recipe and it’s telling you to add 10 times the amount [of an ingredient] that you should. But it’s a totally different thing if it’s fundamentally missing something about the health care of the individual,” he told Observer.
For example, a doctor might see left shoulder pain as a non-traditional sign of a heart attack in certain patients, whereas a chatbot might only suggest taking an over-the-counter pain medication. The reverse can also happen. If a patient comes to a provider convinced they have a rare disorder based on a simple symptom after chatting with A.I., it can erode trust when a human doctor seeks to rule out more common explanations first.
Google is already under fire for its AI Overviews providing inaccurate and misleading health information. ChatGPT, Claude and other chatbots have faced similar criticism for hallucinations and misinformation, even as they attempt to limit liability in health-related conversations by noting that they are “not intended for diagnosis or treatment.”
Gombar argues that A.I. companies must do more to publicly emphasize how often an answer may be hallucinated and clearly flag when information is poorly grounded in evidence or entirely fabricated. This is particularly important because extensive chatbot disclaimers can shield A.I. companies from legal recourse, whereas patients harmed by human providers can sue for malpractice.
The primary care provider workforce in the U.S. has shrunk by 11 percent annually over the past seven years, with rural areas hit hardest. Gombar suggests that physicians may no longer control how they fit into the global health care landscape. “If the whole world is moving away from going to physicians first, then physicians are going to be utilized more as an expert second opinion, as opposed to the primary opinion,” he said.
The inevitable question of data privacy
OpenAI and Anthropic have been explicit that their health tools are secure and compliant, including with the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., which protects sensitive patient health information from unauthorized use and disclosure. But for Alexander Tsiaras, founder and CEO of the A.I.-driven medical record platform StoryMD, there is more to consider.
“It’s not the protection from being hacked. It’s the protection of what they will do with [the data] after,” Tsiaras told Observer. “In the back end, their encryption algorithms are as good as anyone in HIPAA. But once you have the data, can you trust them? And that’s where I think it’s going to be a real problem, because I certainly would not trust them.”
Tsiaras points to the persistent techno-optimism of Silicon Valley elites like OpenAI CEO Sam Altman, arguing that they live in a bubble and have “proven themselves to not care.”
On a more tangible level, chatbots tend to be overly agreeable. xAI’s Grok recently drew criticism for agreeing to generate nearly nude photos of real women and children, though the company blocked this capability this week following public outcry. Chatbots can also reinforce delusions and harmful thought patterns in people with mental illness, triggering crises such as psychosis or even suicide.
Andrew Crawford, senior counsel for privacy and data at the nonpartisan think tank Center for Democracy and Technology, said that an A.I. company that prioritizes profit-driven personalization over data protection can put sensitive health information at serious risk.
“Especially as OpenAI moves to explore advertising as a business model, it’s crucial that the separation between this sort of health data and memories that ChatGPT captures from other conversations is airtight,” Crawford said in a statement to Observer.
Then there is the question of non-protected health data that users voluntarily input. Personal wellness companies such as MyFitnessPal and Oura already pose data privacy risks. “It’s amplifying the inherent risk by making that data more available and accessible,” Gombar said.
For people like Tsiaras, profit-driven A.I. giants have tainted the health tech space. “The trust is eroded so significantly that anyone [else] who builds a system has to go in the opposite direction of spending a lot of time proving that we’re there for you and not about abusing what we can get from you,” he said.
Nasim Afsar, a physician, former chief health officer at Oracle and advisor to the White House and global health agencies, views ChatGPT Health as an early step toward what she calls intelligent health, but far from a complete solution.
“A.I. can now explain data and prepare patients for visits,” Afsar said in a statement to Observer. “That’s meaningful progress. But transformation happens when intelligence drives prevention, coordinated action and measurable health outcomes, not just better answers inside a broken system.”