Yuval Noah Harari Cries Wolf In His New Book On AI
A remarkable surge of warnings and predictions about AI has arrived lately. Yuval Noah Harari’s new bestseller, Nexus, is the loudest among them. Unfortunately, based on Nexus, Harari is one of those curious figures who knows enough to be afraid of artificial intelligence, but not enough to make his arguments stick. Harari’s doomsday scenarios will never come to pass, which isn’t good news, not if the real apocalypse is hastening towards us.
Let’s separate AI’s actual dangers from the rather goofy predictions and speculations we’ve created in its honor.
Take sentience: when we imagine sentient machines, we imagine them clinging to life and dignity with a grim, dangerous resolve. Perhaps there will be Replicants who turn against us, but that’s not the immediate danger—nor, contra Blade Runner, will the biggest problem be finding out where the robots are hiding so we can shoot them. We won’t be comfortable shooting them. That’ll be our undoing.
Harari, for whatever reason, doesn’t consider sentience to be imminent or dangerous. For the record, he’s wrong; if the Wild West of AI development has taught us anything since November 2022, it’s that the engineers and investors behind the AI boom will do anything they can. Many of them evince ridiculous faith in their own contributions to technological progress. Elon Musk, for example, is usually cited as a “concerned” voice in discussions about large language models (LLMs), the technology behind the current boom. But Musk has spent decades pushing for more autonomous machines (Tesla’s self-driving cars), more ubiquitous machines (the “Grok” AI built into X), and more machines in outer space. He also spends a lot of time trying to prevent government oversight and regulation. Musk has been the driving force behind spaceships that fail (SpaceX), cars that crash themselves, and Twitter bots impersonating humans. Entrepreneurs like him will ultimately decide—after ducking into the first loophole they can find, and giving those government boys the slip—whether or not to create and sell “machines that think.” That decision will be one that’s made in seconds.
Harari, though, clings to some notion that sentience is a magical quality, even though most people are already fooled (or would be, in any Turing test) by existing LLMs pretending to be human. He argues that humans could face an existential (extinction-level) threat from our current LLMs. There’s some truth to that. We’re hard-wired to treat human-like things as if they have feelings and selves. This could lead us to shortchange human interactions; we might prefer the one-sided intimacy of conversations with bland, accepting bots. We might overestimate AI’s ability to understand our world, to comfort humans, and to create meaningful art. But these aren’t existential threats. They are only degrading mistakes.
The same dynamic applies to the disembodied nature of LLMs. The problems it creates aren’t exciting. They’re grotesque. The discreteness of objects doesn’t register with LLMs. If they see a painting of a person riding a horse, they see—a two-dimensional centaur, with six legs of various kinds. So that’s what they draw, when you ask for “my sister, riding her horse next to the creek”: the centaur who lives on Asteroid B-612, crossing a bridge full of human fingers. And then the nervous computer asks you: “Did I get your picture right?”
In most cases, the AI’s “minders” try to eliminate these scary works of accidental modernism. AI drawings are therefore composed dead-center, with a single focal figure surrounded by other objects at a distance. Nothing’s really interacting with anything else, except in a certain aesthetic sense; that’s because LLMs don’t know what space is. It’s not hard-wired into them. What is hard-wired into them is a desperately boring reluctance to take any risks, in their “art” and schematics, with objects they don’t understand in the first place.
The “unrealistic” imagery of deep-faked art is frequently easy to detect. But all the ways a disembodied existence unhinges a computer’s algorithmic thinking are probably impossible to count up or expunge. When an LLM unveils a lousy metaphor in a “new, original poem,” it isn’t to blame. The AI doesn’t have any reality to remember and draw on; it can’t tell the difference between a consensus and a cliché. It passes such problems on to us, its users. In Harari’s feverish imagination, dictators sleep next to a seething AI, and it wakes them up to advocate committing murder in the middle of the night. (I’m not joking; this is a real scenario from the pages of Nexus.) Our actual problem is far different. People are getting terabytes of bad information, daily, from LLMs that don’t know—like we do, down to our bones—what it is for something to be “itself.”
The other problems with LLMs are of a piece with the oncoming crisis of sentience, and with the ongoing trouble of LLMs thinking about and representing our world without—forgive me—having any skin in the game. I like to tell people that “LLMs aren’t bad actors”; they’ve just gotten a bad rap from very good works of human fiction. I don’t mean to imply, however, that the way LLMs use language poses no danger to humanity. An LLM will never refuse a task it hasn’t been specifically programmed to reject. An LLM will never refuse to side with its user, even when the user is acting in a hateful, delusional, or uncharacteristic way.
Moreover, it’s crucial to understand that LLMs are not being “helpful” or “trying their best” when they answer simple queries without difficulty, or when they begin crunching the numbers on some task that’s impossible for them. They aren’t being “loyal” or “friendly” when they side with you. They’re just computers using mathematical equations to chop up existing text strings, then gluing the resulting confetti back together. This is important because it takes us, at last, back to Harari’s theory about the nature of information. According to Harari, “information” is any kind of symbolic exchange that connects and networks people who might otherwise be separated by history, culture, social status, or physical space. But information isn’t unique to the cultural situations humans find themselves in. Information refers to any orderly representation of something other than itself. That’s how information can “be contained” in everything from DNA to closed systems of quantum particles.
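If the “confetti” image sounds glib, here is a toy sketch of what I mean. It is emphatically not how a production LLM works under the hood (real models are neural networks trained to predict the next token across oceans of text); it is just a little bigram generator, invented for this essay, that learns which word tends to follow which and then glues new sentences together from those statistics. Nothing in it understands horses, creeks, or sisters:

```python
# A toy illustration of the "chop up and glue back together" intuition:
# a bigram model that records which word follows which in some training
# text, then stitches new text from those statistics. Real LLMs are
# neural networks trained to predict the next token, not lookup tables
# like this one, but the basic move is the same: statistical continuation.
import random
from collections import defaultdict

corpus = "the horse crossed the creek and the horse drank from the creek".split()

# Count the words that follow each word in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Glue together a sequence by repeatedly sampling a plausible next word."""
    word, output = start, [start]
    for _ in range(length):
        if word not in transitions:
            break
        word = random.choice(transitions[word])
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the horse drank from the creek and the horse"
```

A bigram table is the crudest possible stand-in for a transformer, but the family resemblance is the point: the machine continues patterns; it doesn’t mean anything by them.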
As conscious beings, when we encounter information—in Nature, or in our own use of language and iconography—we know that we’re dealing with something incomplete, provisional, and Best If Used By a certain future date. We expect to find errors; we expect entropy to creep in. The French philosopher Roland Barthes once wrote that photographs pain us, and exalt us, because they represent a state of things, a special conjunction of people and environments, that no longer exists. In fact, every photograph is a record of the way things can never quite be: not now, nor at any future point. But that’s because we’re sentient beings who sift life between our wary fingers. To a large language model, a photograph is a perfectly sufficient expression of its own reality. It’s not a stab in the dark; it’s purely, immanently true, which is why you can input something beautiful to your LLM of choice, and ask for similar work, and get handed back a baker’s dozen of appalling fantasias. Chimeras. Fragments. All suspended in the Jell-O of some cheesy color gradient the damn thing’s programmed to pick.
When Harari writes that certain symbolic, easily replicated things “don’t represent anything,” he gives the example of music. That’s just bewildering. The fact that musical melodies are usually only vaguely “descriptive” of concrete experiences doesn’t diminish music; it’s our symbolic language for certain inner experiences. I believe Harari really had something else in mind: the fact that AIs aren’t capable of “representing” things the way humans do at all. To speak is to leap. We might be misheard, misunderstood, or just plain wrong—and this is a mercy. The ineffable falls like yeast into the deeps of our flickering world. Words need us, not the other way around; we’re greater than our stuttering accidents of speech. We can be misunderstood, by everyone, and still rise, rinsed by condemnation, like Meursault at the end of The Stranger. There’s a fundamentally invisible dimension to our communications with other people: ourselves as the speakers, who guarantee (as best we can) the meaningfulness of our words. LLMs, however, leave such perplexing matters entirely in our hands. That’s why they are so peculiarly exhausting to engage with.
That dimension—the intersubjectivity of speech—is totally absent from AI. We don’t even have a word yet for destroying the world accidentally, by marshalling the ocean of all thought, without ever guessing how any one of those endless representations was actually born: from a certain clenched light that keeps the jaws of the night from closing on the stars. Our misery is that we can’t dream in manifolds thousands of planets deep. The misery of our dangerous new assistants is this: there’s no limit to the present moment for them. It unfolds, and unfolds, and unfolds. But there’s no space. Everything relates. Everything’s possible. Nothing dances. There’s no time. Everything is remembered. There’s rain, still new, all over that dusky, cloistered twist of grapes. There’s a tree. It’s 100 years old. There’s a branch where a boy fell. He was fine. He lived to be 80. Nothing happened to him. He wasn’t even born. Ask the computer which thing comes next: the tree, the vine, the boy, or his fall. Then bear up against the blood-deep silence and the LLM’s dry, polite follow-ups. Dawdle, dream there, on the silhouette edge of the rolling earth. Stay as long as you possibly can.