AI writing is the technology’s bleakest use case
Will artificial intelligence enable an even higher level of creativity, or turn us into cognitively deflated Sims spitting out chatbot responses?
This question has polarized much of the internet. On one hand, you have the ardent defenders who believe that AI writing speeds up their process, allowing them to quickly transform bullets of information into elegant and typo-free copy. On the other hand, there are the critics who contend that AI writing seems to violate something sacred, and that by using a large language model (LLM) to write, you’re not only degrading the craft, but also yourself.
AI, in its ideal form, is a technology that allows us to off-load or complete a range of tasks in a smarter, faster way. In the past month, I have used AI in a variety of extremely helpful ways, including translating pages of a table of contents from Russian to English so that I could better find intersecting sections in a book; building and modifying a graphic that would have taken far longer to create myself; and producing code that could, in turn, ingest a bunch of data into an easy-to-read data frame. Even when used as a search engine, Claude and ChatGPT produce far more expansive and helpful results, especially for niche topics, than the search platforms I once relied on. (Sorry, Google!)
But writing, in its ideal form, is a finalized and set ordering of your thoughts, the penultimate step in the creative process before sending something off to be read and digested by other people. Writing isn’t just thinking; it’s the thinking you commit to when you’re finally ready to speak from a place of authority.
AI has a ton of real and powerful use cases, including in the research process. But to outsource the writing to AI is to lose the part of the process that requires you to declare your command over the material, including your understanding of how you might have used AI to reach your conclusions. To rely on AI writing is to work under a model rather than over it, and to turn to an insurgent and powerful technology for possibly its least astonishing application.
The woes and worries of the AI writing wars
Some of these arguments against AI writing are very practical. The first, and most significant, is that AI hallucinates, making statements, often quite confidently, that are flat-out wrong. It’s true that models are getting better, but it’s also true that they can still insert mistakes into their responses. Google announced that in recent tests, Gemini (the same AI model that generates your summarized Google results) was wrong about 10% of the time. I catch AI making factual errors frequently, especially in domains that are less discussed on the internet and likely less well represented in training material.
Another worry is that AI writing, and reading AI writing, will cause deleterious sociological and psychological effects. For instance, some research suggests that AI writing is homogenizing our language, pushing us toward a sort of digital common denominator that strips us of our cultural, individual, even grammatical context. As Meghan O’Rourke noted in a New York Times op-ed last year, reading AI writing can sometimes feel like the equivalent of processed food: it’s good, but something feels eerily off.
There’s also the even more alarming anxiety that AI is making us much stupider. Some research suggests this might actually be happening. A group of researchers based at MIT and nearby universities suggests that relying on LLMs could reduce our neural activity. (The research on this question is ongoing, though.)
And there’s another rub: AI writing is bad, many argue, because it lacks the inventiveness of human writing. For a while, some alleged, AI writing could be identified by certain grammatical features. (Remember the AI em dash apocalypse?)
Still, as AI has evolved, critiques of AI-generated prose have evolved too, becoming ever more convoluted. AI proponents, for their part, have a million responses to these arguments and their shifting goalposts. These are ongoing debates that force us to wrestle with what AI writing is, what human thinking is, and other questions that genuinely matter.
AI produces errors, but so do humans. AI writing may strike you as bad right now, but the models keep getting better, and it’s getting harder and harder to tell AI prose from human prose. The tech industry also often suggests it can engineer away the flaws in its products. And maybe it’s true that AI is making us stupid, but we’re also naturally incentivized to seek efficiency and use tools that help us. In a lot of ways, using AI feels like the smart thing to do.
Navigating how to use AI tools for research requires, critically, an understanding of deterministic and stochastic processes, a healthy grasp of false positives and false negatives, and a sense of the limits of search engines and data. You need to be fluent in the kinds of errors AI can make, in the same way you need to know that your office intern can make things up and mess up. If you aren’t, then you shouldn’t be using AI for anything serious.
Great uses of artificial intelligence exist above the model. In other words, you need to be the one in charge. No, you might not be particularly literate in the weights and biases that make an LLM tick, or the inner layers of a neural network. But you need to have some sense of an AI’s innards. You need to know what it’s doing so that you can coach it to be better, whether that’s catching its mistakes or pushing it toward more productive outputs. AI is your bloodhound, not your sentinel.
Ted Chiang argued in The New Yorker a few years back that “if an A.I. generates a ten-thousand-word story based on your prompt, it has to fill in for all of the choices that you are not making.” There’s a corollary here. The process of writing is messy and frustrating and often quixotic, but in the end, you need to arrive at a sort of final peace between the thoughts tumbling around in your brain and the words on the page. This is one of the most satisfying, and important, parts of the process.
If you are not working through your thoughts, and writing them up, you are not coming to this peace, and you have not participated in the critical steps of deciding that something is true, even if only true to you. In writing, there’s an old adage: Write what you know. This, of course, requires actually knowing something.