ChatGPT wants your medical records – these are the risks
Dr ChatGPT will see you now. The artificial intelligence (AI) chatbot is launching a new health mode to give medical advice.
ChatGPT Health, which is not yet available in the UK, is a tab within the app where users can ask health-related questions.
The function was announced yesterday by OpenAI, the tech giant behind the virtual assistant.
Health can analyse lab results, explain unclear messages from your doctor and summarise your clinical history, OpenAI said.
Other examples of use include creating ‘post-surgery food guides’, choosing a health insurance provider and outlining drug side effects.
But the company stressed it’s not putting the ‘GP’ in ChatGPT, saying: ‘Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment.’
The company is encouraging users to connect their medical records and wellness apps, such as Apple Health, to get ‘personalised’ responses.
Patient records will be encrypted, OpenAI added, and conversations won’t be used to train the chatbot.
When you link an app with the bot, OpenAI says, you’ll be in full control of how much access it has.
OpenAI said: ‘The first time you connect an app, we’ll help you understand what types of data may be collected by the third party. And you’re always in control: disconnect an app at any time and it immediately loses access.’
Can you join the ChatGPT Health waitlist?
ChatGPT Health is not available in the UK yet, with only a small group of early users able to test it.
Only people in the US can sign up for the waitlist right now.
How can I use ChatGPT Health?
Once you have access, a ‘Health’ tab will appear in the sidebar of the app and the desktop version of ChatGPT.
Medical files can be uploaded via the tools (+) menu or connected under ‘Apps’ in Settings.
This function, however, will only be available in the US, as the UK and the European Union have stricter privacy laws.
Why do people use AI for health questions?
ChatGPT is a type of generative AI that learns skills by analysing enormous amounts of digital data.
Around one in four patients turn to AI to better understand their health, according to a survey by the digital health system Semble.
Yana Welinder, head of AI at Amplitude, said on X that she has been using ChatGPT ‘constantly’ for health questions, both for herself and her family.
She added: ‘The only downside was that all of this lived alongside my very heavy other usage of ChatGPT. Projects helped a bit, but I really wanted a dedicated space… So excited about this.’
Would you use AI for health advice?
- Yes, it can be helpful.
- No, it's too risky and should be left to professionals.
- Maybe, but only under strict regulations.
- I'm not sure.
But medical experts have long expressed unease about this, warning it’s the latest incarnation of people Googling their health symptoms.
While the system can pass a medical licensing exam with ease, researchers say it can also spit out inaccurate or outright false information.
AI chatbots’ medical advice has also caused real harm: one man was taken to hospital with bromism, a condition rarely seen since the 19th century, after reportedly acting on ChatGPT’s advice to replace the salt in his diet with sodium bromide.
Chatbots also often try to please us by reinforcing what we say, meaning a leading question such as ‘Don’t you think I have the flu?’ could prompt them to agree with you, regardless of the facts, experts warn.
People, especially young adults, are also using the system as a form of therapy, to the alarm of psychological experts Metro spoke with.
Christoph Lippuner, co-founder and CEO of Semble, said there are many reasons why people turn to AI for health questions.
Many say they seek advice from chatbots due to frustrations with the health system, such as limited GP appointments or lengthy waiting lists.
‘AI gives immediate access to knowledge drawn from enormous datasets, in this case, about health,’ he said. ‘It’s convenient, non-judgmental and free, and the combination of accessibility and privacy is drawing patients in.
‘The question is no longer if patients will use AI, but how to make that use responsible and effective.’
Using ChatGPT for health ‘could be dangerous’, says expert
Sophie McGarry, a solicitor at the medical negligence law firm Patient Claim Line, warned that AI health advice can be ‘very dangerous’.
She told Metro: ‘On one end of the spectrum, it could lead to some people being over-diagnosed or informed that their symptoms are indicative of serious, sinister conditions.
‘This, in turn, could lead to potentially unnecessary stress and worry; it could lead people to urgently seek medical attention from their GP, urgent care centres or A&E departments, which are already stretched, adding more unnecessary pressure; or it could lead to people attempting to treat their AI-diagnosed conditions themselves.’
McGarry added that the bots may underdiagnose or misdiagnose people instead, giving them a false sense of reassurance.
‘As a clinical negligence solicitor, I see far too many cases of people’s lives being turned upside down because of misdiagnosis or delays in diagnosis where earlier, appropriate input would have led to a better, often life-changing, sometimes life-saving outcome,’ she said.
‘False reassurances from AI health advice could lead to the same devastating outcomes.’
OpenAI said it worked with more than 260 physicians, who provided feedback on the health model’s outputs over the last two years, including teaching it to recommend a follow-up with a doctor when needed and not to oversimplify.