Google Kills AI Health Feature After Safety Backlash
Google has removed “What People Suggest,” an experimental Search feature that used AI to organize health-related perspectives from online discussions.
The feature had been introduced as a way to help users quickly see how other people described living with certain conditions, but it drew criticism because the material came from forum-style conversations and social posts rather than from medical professionals.
Google told The Guardian that the removal was part of a broader simplification of the Search results page, not a response to quality or safety concerns.
Google first unveiled the feature at its March 2025 Check Up event. At the time, the company said AI could organize online discussions into readable themes so people could learn from others with similar lived experiences, such as how patients with arthritis manage exercise. The feature was initially available on mobile in the US.
Why Google pulled it
The feature’s core issue was straightforward: it packaged health advice from ordinary internet users into a cleaner, more authoritative-looking format inside Google Search. That raised concern about whether anecdotal guidance could be mistaken for medically sound information. Android Authority reported that the tool summarized health tips from internet users, not clinicians, even though it appeared within a product many people already rely on for medical searches.
The removal also arrived after scrutiny of Google’s AI-generated health answers elsewhere in Search. The Verge reported in January that Google’s AI Overviews had delivered misleading medical information, including advice telling some pancreatic cancer patients to avoid high-fat foods, the opposite of what experts said should be recommended. The same report also cited incorrect information tied to liver function tests. Google told The Verge that it invests heavily in the quality of AI Overviews for health topics and updates results when additional context is needed.
What the removal signals
Google has not said it is stepping away from AI in health search more broadly. But removing “What People Suggest” shows how narrow the margin for error becomes when AI tools package sensitive medical information into quick summaries. Search features that work for travel, shopping, or product discovery face a different standard when the subject is medical guidance.
For Google, the change closes off one more experiment that tried to blend AI with health information from the open web. For users, it is a reminder that search products built around lived experience and community discussion can still create risk when they appear in a format that looks polished, organized, and authoritative. The feature may be gone, but the broader challenge for AI health search has not gone away.
The post Google Kills AI Health Feature After Safety Backlash appeared first on eWEEK.