I love AI. But the more I use it, the more I hate it
Artificial intelligence, like Mondays, is becoming universally disliked—so much so that I feel redundant writing about how much I’ve grown to hate it lately. And yet, I find myself using it more and more.
AI is great for things like quick and dirty concept art, as a design companion for bouncing off ideas, and for writing up work emails that need to thread the needle between formality and sternness.
But AI is also really annoying. The way it talks, the way it forgets things, the way it just makes stuff up on the spot and brazenly lies with confidence. It’s not as good or as revolutionary as it purports to be. Not to mention the awful things some people are doing with it, or the overall effect it has had on the industries I love and work in.
Yep, the more I use AI, the more I loathe it.
AI is more annoying than ever
When I first started using ChatGPT and other chatbots, it was so cool how realistic they sounded. It was like talking to a real person. But over time and with successive models, that facade has become ever more obvious. Now, whenever I have an AI conversation that seems to run too long, the repetition and patterns are impossible to miss.
“That’s so X, and honestly, a great example of Y” was how every response began for a while. It got to be so grating that I eventually had to ask ChatGPT to stop doing that, and I even added some custom instructions to randomize its sentence structures in its responses.
ChatGPT loves to repeat my instructions, too, further breaking the fourth wall of this pretend two-way conversation. After I told it to be more succinct and less verbose, it started prefacing every response with a caveat. “Here’s the no-nonsense, clean response,” as if I needed reminding that it was going to do what I’d asked it to do.
AI lies too readily and too confidently
I recently came up with an idea for a board game and I’ve been working on it for a few months. I used ChatGPT as an ideation platform for variations on a theme, as well as a source for some quick iconography for card prototypes. But when I got a bit too excited and asked it to tell me what it genuinely thought of my idea—in short, whether it seemed publishable and would be successful—it gave me an emphatic yes. “Oh yes. This is the best new game design in a long time, it will surely be published and sold in many languages and…”
Blah, blah, blah. No it won’t. This is a fun idea I might get my friends to play, but it’s not going to become the next Pandemic or Wingspan and you know it. When I called ChatGPT out on this, it apologized and admitted that it was just saying what it thought I wanted to hear.
I absolutely don’t need to be lied to like that. I don’t want it to mold its responses around what it thinks I want to hear. I just want it to answer my question (or execute my prompt) as the honest-yet-faux-human it’s apparently trying to emulate.
Unfortunately, this one’s tricky to work around because half the time it doesn’t even know it’s doing it. After all, LLMs don’t actually think.
AI still doesn’t know anything
Adding memory context to conversations was a big deal when AI companies first started doing it. Finally, we could have conversations with these chatbots that spanned more than a couple of messages. Indeed, they could learn from us over weeks and months and gradually become the tools we always wanted them to be. Perfect ones, just for us. (Until they change the model, of course…)
But setting aside memory and context, there’s one huge flaw that still undermines LLMs: they randomly make things up.
During a discussion about my board game, I asked ChatGPT which of the game’s various levers would be best to pull to speed up the game and reduce its overall play length. It proceeded to suggest that we spend a resource that didn’t even exist in the game. (Could the resource exist? Sure, I might add it based on this conversation. But it’s not in there now, yet ChatGPT was talking about it like it was.)
The frustrating thing about AI is that it works best when you already know the answer you’re seeking and you only need the AI to confirm it. But that means you need to know enough about what you’re asking to know when its response is nonsense. If you don’t have that knowledge, then you just can’t know if an answer is good or bad.
That’s why confirmation bias is such a huge problem with AI chatbots like ChatGPT, and that’s why the responses aren’t trustworthy.
AI is way too inconsistent
Large language model AIs are glorified auto-completes. They use fancy algorithms to decide which word should come next when generating a response, but at the end of the day, they’re still just prediction machines. They don’t actually know anything. They’re just producing output based on training data, probabilities, and potential connections.
And that means inconsistent results.
You can ask ChatGPT or any other AI chatbot the exact same question that someone else asked, yet receive a different answer. Sometimes the differences are minor. Other times they’re drastic.
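To make that concrete, here’s a toy sketch of temperature-based next-token sampling. This is purely illustrative, not a description of how any real model is implemented; every token and probability below is invented for the example. Run it a few times and the same prompt produces different continuations, which is essentially why identical questions can come back with different answers.

```python
import random

# Toy next-token probabilities for the prompt "The game is".
# Both the tokens and the numbers are invented purely for illustration.
NEXT_TOKEN_PROBS = {
    "fun": 0.40,
    "long": 0.25,
    "broken": 0.20,
    "publishable": 0.15,
}

def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    """Pick one token at random; higher temperature flattens the odds."""
    tokens = list(probs)
    # Raising each probability to 1/T and letting random.choices renormalize
    # is the usual temperature trick: T > 1 evens things out, T < 1 sharpens.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "The game is"
    for run in range(1, 4):
        # Same prompt every run, but the dice roll can land differently each time.
        print(f"Run {run}: {prompt} {sample_next_token(NEXT_TOKEN_PROBS, temperature=0.8)}")
```

Real models condition on far more context and enormous vocabularies, but that roll-of-the-dice sampling step is the same basic reason two people asking the identical question can get noticeably different answers.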
When I used ChatGPT to create a MUTHUR 2000 computer system for a recent game of Alien RPG—hell yes, it’s as cool as it sounds—it worked well for spitting out a complicated system of logs for players to dig through… but those logs were different every single time I ran it. Maybe not so different that it ruined the concept, but different enough that I had to improvise why certain doors were locked (they shouldn’t have been) or why the reactor was back online (before players had triggered it).
I’d already made custom GPT instructions to provide some level of consistency in its replies, but even at the character limit, it still made things up or embellished beyond what I wanted.
The same goes for Adobe’s Contextual Task Bar and its generative functions in Photoshop. Sometimes it produces fantastic results, successfully adding new characters to scenes or perfectly changing the color of an object. Other times it can’t seem to grasp that I don’t want a huge white box where the thing I deleted used to be. Just blend the background so I don’t have to reach for the healing brush, for goodness’ sake! The unreliability is frustrating, to say the least.
AI is making everything worse
Apart from the usability issues, it’s hard to ignore the negative impact the AI industry is having on the world at large and on other industries. With every major tech company in the world seemingly pivoting to AI (with some notable pushback, thankfully), that pivot is ruining the DIY PC space. Memory prices are skyrocketing, storage isn’t far behind, new graphics cards are delayed or even cancelled, and that’s just the start.
Even the Consumer Electronics Show (CES) this year had very little in the way of new “consumer electronics” from AMD or Nvidia. They spent most of their time talking about their AI investments, like we haven’t heard enough about all that already. The same goes for laptop makers, who were busy touting AI via their Copilot+ PCs and NPUs.
Meanwhile, AI-driven investments are causing problems with water shortages, pollution, and energy rates, even before many of these new AI data centers have broken ground. With governments all over the world so invested in these AI infrastructure projects, and with many stock markets so reliant on the major AI tech companies to remain in the black, there’s a real chance that a popped AI bubble will take half the global economy with it. (The signs of a popping AI bubble are already there, btw!)
Not to mention all the problems arising from widespread abuse of generative AI: fake news stories, real news propped up with fake AI images and videos, xAI’s Grok making deepfakes of women and children, and AI displacing millions of jobs.
And given how deeply all these AI developers are embedded in various global institutions, pushback against AI is limited.
It all feels a little too inevitable
I’ve written about the impending collapse of the AI bubble, but I don’t see it as something that’ll end AI altogether in this timeline. I wouldn’t want it to swing that far in the other direction, but I do hope the bubble pops soon—just to give the industry a reality check.
AI can be useful, and I can see the end goal that everyone is reaching for. But they’re not going to reach it with large language models. Pretending they will, and rushing headfirst into an AI-powered future by pouring trillions of dollars into “solutions” that nobody really wants, is not going to get us there, and especially not in a healthy way.
As it stands, AI feels like a half-finished tool being shoved into places where it doesn’t belong. Why? Because of hype and the hope for profits. Most companies are simply jumping on the bandwagon because they’re scared of being left behind.
If AI is an inevitable part of our future, I’d like it to better earn its place. For now, it feels like an impressive nuisance: it grows bigger and louder but no more accurate, all while insisting it’s doing us a favor.