How AI is changing the patient journey
AI is no longer the future of healthcare; it’s already reshaping how patients are diagnosed and treated. Some of the most interesting developments involve systems that sense and respond to human emotion. Cedars-Sinai’s Connect platform, for example, adapts care based on patient sentiment; CompanionMx interprets vocal and facial cues to detect anxiety; and Feel Therapeutics uses emotion-sensing wearables to tailor interventions in real time.
At the same time, clinical tools are evolving. Hospitals are pairing large language models (LLMs) with AI note-taking apps such as Nabla and Heidi, which listen to doctor–patient conversations, summarize them, and capture their nuances. Investment in medical scribing technologies alone hit around $800 million last year.
A SHIFT FROM AUTOMATION TO ADAPTATION
All of this points to a bigger shift from AI that automates tasks to AI that adapts. Traditional AI sped up paperwork and crunched data. Adaptive AI helps clinicians make better judgements, understand patients more deeply, and respond in context. You can already see this shift in breast cancer screening, genomics, and drug discovery, where high-quality data and constant validation are driving real progress.
Emotionally aware tools, when designed responsibly, can strengthen the connection between clinicians and patients, personalize care, and ease pressure on overstretched systems. But as adaptive AI becomes more widely available, success will depend less on technical brilliance and more on how systems are built. The tools that succeed will flex around people, fitting patients' needs, clinicians' workflows, and the realities of care. Good AI needs to be anticipatory, sensitive to context, and built for the full diversity of patients.
Even the most empathetic AI cannot, of course, erase the imperfections of human systems. Recent studies, for example, show that medical AI tools and LLM-based assistants routinely downplay symptoms in women and respond to Black and Asian patients with less empathy than they show white men. AI does not cleanse the biases of the real world; it carries them forward and often widens their impact. We have seen this pattern before.
DEPLOYMENT MATTERS
That’s why deployment conditions matter as much as technology. A system that mimics empathy does not automatically grasp nuance, context, or risk. Without firm ethical boundaries, so-called emotional intelligence can give a false sense of security. Clinicians still need to make the final calls, protecting patients and maintaining trust. AI can be a helpful care partner, but it cannot take on the weight of human responsibility.
Building trust requires strengthening the foundations on which these tools are built and used. Involving patients, families, and carers from the start surfaces blind spots early and helps balance compassion with practicality. It also clarifies where automation should step back and where human care should step in. Our Cancer Platform, developed with the Cancer Awareness Trust, illustrates this in practice, showing how empathetic design creates dependable, genuinely helpful tools.
AI isn’t here to replace people. It’s here to support their expertise and scale their impact. Ideally, we will build machines to handle complexity and pattern recognition, freeing clinicians to focus on what humans do best: exercise judgement, build connection, and provide care. Machines might learn to care, but it is up to us to create the ecosystem where that care is trustworthy, fair, and meaningful. That is a challenge, yes, but one full of opportunity.
Nicki Sprinz is CEO of ustwo.