Is AI Born Biased?
Poster image for Steven Spielberg’s AI. (Detail.)
The bias problem with Elon Musk’s Grok AI chatbot was easy to see last summer when it started spewing antisemitism and calling itself “MechaHitler.” This happened because Musk reportedly disliked some accurate answers Grok had given about right-wing political violence, so he and his team rewrote Grok’s instructions, telling it to discount the traditional mainstream media and “not shy away from making claims which are politically incorrect.” The result was an explicitly antisemitic AI.
It’s an extreme case, to be sure – but it should raise bigger questions about the limits of this still-emerging technology. One might assume that as long as no one as ideological as Elon Musk is manipulating the settings behind the scenes, an AI model’s responses are purely fact-based. That would be wrong. Large language models like ChatGPT, Claude, Gemini, and DeepSeek are all biased. In fact, they can be said to be born biased.
This fact was made clear in recently published research by scholars from the University of Oxford and the University of Kentucky, who revealed biases buried in ChatGPT that are far more subtle than Grok praising Adolf Hitler. As one commentator noted, some of these biases seem to stem from anti-Black racist ideas: Mississippi, the state with the largest Black share of its population, is rated the laziest state, while African nations are rated the least intelligent countries. The limited data available on AI systems harming particular groups suggests that race-based harms are the most common.
Why is this so? Consider the old computer science saying: garbage in, garbage out. If you feed bad data into a computer program, the output will be bad as well. AI chatbots like ChatGPT are trained on massive troves of documents created by human beings, some of whom are antisemitic, racist, and/or sexist. All of humanity’s biases are present in the training data of these AI products. It doesn’t take an Elon Musk for those biases to shape what the AI produces.
These types of AI are good at stereotyping because, in essence, that is what they are designed to do. The scholars from Oxford and the University of Kentucky describe the “averaging bias” of AI systems: when an AI model creates an averaged response from the documents in its training data, it is, in effect, stereotyping. ChatGPT and similar models are not designed to find the truth; they are designed to produce stereotypical output that a typical user would find agreeable.
Some computer scientists believe the bias problems in AI systems can be fixed with a few instructions to counteract overt bias. But the evidence suggests the problem of bias can’t be completely fixed. The only way to counteract AI bias is the same way we counteract human bias: strong and effective diversity, equity, and inclusion policies. New technologies that merely automate existing biases risk embedding racism even more deeply in our society.
This first appeared in the Detroit News.
The post Is AI Born Biased? appeared first on CounterPunch.org.