What Was Grammarly Thinking?
To me, the best first sentence of any piece of journalism is the one in Joan Didion’s 1987 book, Miami, which begins like this: “Havana vanities come to dust in Miami.”
I love that sentence and that propulsive first chapter so much that I once sat down to try to figure out how she did it. I looked at the sentences one at a time to assess what purpose each one was serving, and I counted how many of them Didion had needed to accomplish each thing she wanted to accomplish. Then I thought about how she figured out what order to put them in to have maximum page-turning impact. And then I compared all of it unfavorably with the flailing and feeble way in which I would have pursued the same goals. I marked up my copy of the book in a somewhat desperate fashion and then became depressed.
That type of copying is pretty normal, and they teach it in school. It’s how you learn (and how you become depressed). But in the age of generative AI, there are many new kinds of copying. For instance, Wired reported last week on a Grammarly tool that briefly offered users the opportunity to put their writing through something called “Expert Review.” This produced AI-generated advice purportedly from the perspective of a bunch of famous authors, a bunch of less-famous working journalists (including me, per The Verge’s reporting), and a bunch of academics (including some who had recently died).
I say “briefly” because the company deactivated the feature today. A lot of people got really mad about it because none of the experts had agreed for their work to be used in such a way, or to serve as uncompensated marketing for an app that people use to help them write more legible emails. “We hear the feedback and recognize we fell short on this,” the company’s CEO, Shishir Mehrotra, wrote on his LinkedIn page yesterday. Not long after, Wired reported that one of the journalists whose name had been used in the feature, Julia Angwin, was filing a class-action lawsuit against Grammarly’s owner, Superhuman Platform. In a statement forwarded by a spokesperson, Mehrotra repeated apologies made in his LinkedIn post and added, “We have reviewed the lawsuit, and we believe the legal claims are without merit and will strongly defend against them.”
Before the tool went down, I spent a few hours experimenting with it, trying to see what it might be like to be edited by myself. I was hesitant to do this, because I had once asked ChatGPT to write something as if it were me (just for fun!) and found the experience humiliating. The result was sentimental and ditzy—it was studded with cloying rhetorical questions, had a bizarre number of unnecessary exclamation points, and sounded exactly like me.
But I still wondered, out of self-obsession, how an AI imitation of me might advise the real me if I fed it prose that I had written, and whether it could possibly make that prose better. Clearly, this experiment was sort of a gimmick. I assumed the suggestions would exist on a spectrum from obvious to dumb, though I was open to being surprised. If I’m being honest, what I was most interested in was seeing who I am in this latest iteration of The Computer. I also wanted to see whether the tool was good enough that someone might someday use it instead of hiring a human editor. If it was, I would have to have a difficult but compassionate conversation with my boss.
To my dismay, I was unable to summon the AI version of myself. I pasted in numerous articles I’d written and numerous fake articles that I had asked a chatbot to make up. But Grammarly seemed to think other writers were more expert in these articles’ subject matter and therefore more qualified to advise me. It suggested tech journalists, pop-culture academics, and legendary practitioners of narrative nonfiction. I wouldn’t appear. My boss tried too. He messaged me: “i have both claude and chatgpt writing fake essays in an attempt to fool a different AI into presenting me with an unauthorized simulacrum of one of my writers.” He failed. We both felt bad about the way we were spending our time.
So I gave up on that and started engaging with the experts I had been given. The tool was really pretty funny. It was not impersonating people in exactly the way that I’d imagined it would. I wasn’t getting a message from a bot pretending to be the New Yorker writer Susan Orlean. At no point did Grammarly say, “Hi, I’m Susan Orlean.” Instead, it would say, “Taking inspiration from Susan Orlean,” “Applying ideas from John McPhee,” “Using concepts from Bruce V. Lewenstein” (an undergraduate professor of mine, coincidentally), and so on.
The inspiration, ideas, and concepts that the tool drew from these writers and thinkers were, without exception, incredibly stupid and unhelpful (thank God). When I pasted in a story that I had written about TikTok, for instance, Grammarly told me it was drawing inspiration from my co-worker Charlie Warzel’s Galaxy Brain newsletter and then suggested changing the headline from “TikTok’s New Paranoia Problem” to “TikTok’s Zeroed-Out Voices: The New Paranoia Problem.”
When I asked it to look at an excerpt from my 2022 book on One Direction fans, it told me that it was going to improve the first sentence with a suggestion inspired by Joan Didion’s The White Album. Amazing! But then the idea was just to open with a quote from a young woman I had written about, which didn’t seem uniquely Didion-esque. The bot clarified. “In The White Album, Joan Didion emphasizes the importance of personal narratives in understanding reality, stating, ‘We tell ourselves stories in order to live.’” (As you may know, this super famous and often-misquoted line actually refers to how we have to delude ourselves constantly in order to stave off the certainty that all is meaningless.) Then it made up a fake quote that I might consider using.
I was sometimes offered suggestions inspired by the sociologist Sherry Turkle or by the famed memoirist Mary Karr. But for some reason, Grammarly offered suggestions inspired by the essayist Leslie Jamison over and over, almost insistently. I heard from both “Gia Tolentino” and the New Yorker writer Jia Tolentino. None of the suggestions was about structure, organization, or trimming the fat from a story. All of the suggestions were wordy additions. Some were needlessly floral elaborations and fabricated details clearly meant to add color and voice. For instance, a long and fake story about my late grandmother appeared in the middle of one draft. Others were stilted explainer-y tangents that seemed written for readers with no preexisting knowledge of the world. One idea, inspired by the philosopher Amia Srinivasan, was to pop a several-sentence capsule history of the entire feminist movement into the middle of a paragraph that mentioned the “girlboss” trope.
I tried to talk with the chatbot integrated into Grammarly about the situation, but it had no idea what I was asking about. It insisted that Expert Review was done by anonymous human editors, none of whom was famous, and assured me that Grammarly would never claim to be Joan Didion while giving me advice. We had a confusing exchange about that for a while before it revealed that its knowledge of the world and its own platform went up only to June 2024. Soon after, I learned that someone else had asked the tool to do an Expert Review on a bunch of “lorem ipsum” nonsense text and that it had obliged with recommendations inspired by Stephen King. (And then, as mentioned, the CEO killed it via LinkedIn.)
Now that I’ve looked more closely at this not-very-useful feature, and now that it’s shut down, the whole situation seems a little absurd. This was just a weird and inappropriate thing that a company tried to do to make money without putting in very much effort. The primary reason it became a news story at all was that it touched on widespread anxiety about whose work is worth what, whose skills will continue to be marketable in the age of AI, and whether any of us are really as complex, singular, and impossible-to-imitate as we might hope we are.
When I started working in journalism, in 2015, commenters (usually men) would reply to my stories and tell me to “learn to code.” This was a common taunt and catchphrase of the era (Gamergate), and it was a nod to the massive cultural, political, and economic shifts under way at that time. Tech was ascendant in every sphere, its hard skills were worth more money than ever before, and people like me—people who knew only words—seemed soft and useless in such a world.
Lately, there have been rumblings about a reversal. Large language models are very good at things such as coding, programming, and dealing with numbers. Users on X recently resurfaced a 2024 interview clip in which one of the most influential technologists of our time, Peter Thiel, said he thought the post-AI labor market would actually be “much worse for the math people than the word people.”
You might think I’m bringing that up to boast about how I came out on top in the end—it all worked out for me, and the latest AI failure proves that no bot can do what I do and no bot ever will. That’s not what I’m saying. What I’m saying is that the “learn to code” guys committed the crime of hubris, but I won’t.