Grammarly’s AI Expert Review Sparks Backlash Over Consent
Grammarly’s latest AI feature has landed in uncomfortable territory. Its “Expert Review” tool offers writing feedback framed through the voices of real authors, journalists, and academics, including people who never agreed to participate and at least one scholar who recently died.
That is what makes this more than a routine AI product dispute. Grammarly is not just summarizing publicly available writing or suggesting edits in a generic voice. It is packaging those suggestions around real identities, turning familiar names into a product feature without clear permission from the people behind them.
What Expert Review does
Grammarly says Expert Review is designed to help users refine drafts through expert-informed feedback. The company describes the feature as drawing on publicly available expert content, and says the tool can identify relevant subject-matter experts from a user’s text and suggest edits from those experts’ perspectives.
That pitch sounds straightforward enough. The trouble starts when those “experts” are presented in ways that feel less like a general AI summary and more like direct feedback from a real person.
The Verge found Grammarly generating comments tied to real newsroom staffers, including Nilay Patel, Sean Hollister, Tom Warren, and David Pierce, none of whom had given permission. Its testing also surfaced inaccurate or outdated expert descriptions, along with source links behind the feature that pointed to archived, spammy, or mismatched pages. The publication also noted that comments shown in Google Docs could look a lot like feedback from an actual person rather than an AI-generated suggestion.
That last detail helps explain why the feature has rubbed so many people the wrong way. Grammarly is not simply borrowing ideas from public work. It is borrowing credibility from real people, then delivering suggestions in a format that can feel personal, specific, and human.
When public work becomes a product persona
One of the most striking examples involved historian David Abulafia. The Verge, citing earlier Wired reporting, said Grammarly’s expert list included him. According to the University of Cambridge, Professor Abulafia died on January 24, 2026, which makes his inclusion in a feature built around AI-generated expert feedback especially hard to defend.
In a statement to The Verge, Alex Gay, vice president of product and corporate marketing at Grammarly’s parent company, Superhuman, said the feature does not claim endorsement or direct participation and that the experts appear because their published works are publicly available and widely cited.
That explains Grammarly’s reasoning, but it does not settle the consent question at the center of the story. The company is not just drawing from public work. It is turning real people into product personas, including one who is no longer here to object.
The post Grammarly’s AI Expert Review Sparks Backlash Over Consent appeared first on eWEEK.