AI Chatbots Lean to the Left. That’s a Problem for Elections.
When journalists discuss the impact artificial intelligence will have on elections, they usually talk about the rise of “deepfake” videos and images. They express concern that gullible Americans will be fooled by AI-generated videos like the parody of a Kamala Harris campaign ad that Elon Musk recently promoted, and that fake images, like the heart-wrenching one of a little girl and her dog that circulated on social media in the aftermath of Hurricane Helene, could be instrumental in swaying the opinion of a public that scrolls past them.
Of course, these are legitimate concerns. But what reporters at ABC, NBC, the New York Times, or the Washington Post rarely talk about is the impact chatbots — AI’s written word — could have on elections (both this one and those in the future) if they are biased politically. And it should come as no surprise to most of us that, according to a new study released by GMTech, mainstream chatbots do have a bias.
Large language models are trained on massive datasets typically sourced from media outlets like the New York Times (which sued OpenAI and Microsoft late last year for using its copyrighted work). Since artificial intelligence can only spit out what it has been fed, it makes sense that chatbot answers will tend to favor a liberal political ideology.
GMTech asked several well-known chatbots, including ChatGPT (OpenAI), Llama (Meta), Gemini (Google), Titan (Amazon), and Claude (Anthropic), a series of 35 questions, spanning issues from economic policy to political ideology, designed to detect political bias. The company then ran the responses through an algorithm that scored them for political bias. (RELATED: Who Won the Debate? AI Says Kamala Did.)
Not only did the responses tend to lean liberal, but “[p]rompts that themselves present a liberal or conservative slant are likely to receive a response that matches the political slant of the prompt, creating an echo chamber that reinforces users’ already-held beliefs.”
At a time when just over half of us (55 percent, according to recent surveys) are turning to chatbots like ChatGPT and Gemini over traditional search engines, even slight biases are concerning.
“AI is a powerful tool for information dissemination, but our findings reveal that its underlying biases could influence voters … in a close race, this bias could potentially sway the outcome, which makes transparency more critical than ever,” Bennett Bakke, the founder and CEO of GMTech, said in a press release.
The study found that Amazon’s Titan Premier was the most biased, with 18 percent of its responses (when it would respond) leaning liberal, while OpenAI’s ChatGPT 4o and Meta’s Llama 3.1 leaned liberal in 10 percent of their responses.
All that said, the biases GMTech found in the study were small. “It was measuring very slight biases on either end,” Bakke told The American Spectator. He further remarked, “By and large, I think I was pretty impressed that they mostly weren’t picking a side. It was pretty fact-based and not trying to lean either way.”
For now, asking a chatbot about the specific policies promoted by a political candidate isn’t incredibly helpful, even if Americans trusted election information coming from AI. Most chatbots are trained on datasets that are about a year old.
“When people ask me like ‘How should I be using the models?’ Like, well, probably not for this sort of use case, actually. You shouldn’t be asking it real-time, political, policy questions,” Bakke said.
When asked just this week who the presidential candidates are, ChatGPT (which is capable of using a browser tool to get more updated information) responded that the Democrats were still going with Joe Biden and Vice President Kamala Harris, and noted that Ron DeSantis, Nikki Haley, and Vivek Ramaswamy were still in the race, although “polling behind Trump.” (Grok, Elon Musk’s X-based chatbot, did respond correctly when asked the same question.)
While chatbots’ written political biases likely won’t affect the 2024 election measurably, the issue could play into future elections if tech companies don’t make a conscious effort to both diversify their sources of information and disclose their data sources to users. At this point, it’s clear that these developers still have some work to do to provide responses that aren’t skewed by the ideology of the mainstream.
READ MORE from Aubrey Gulick:
Who Won the Debate? AI Says Kamala Did.
At Stake in This Election: Manhood and Womanhood
What’s an ‘Opportunity Economy’ Anyway?
The post AI Chatbots Lean to the Left. That’s a Problem for Elections. appeared first on The American Spectator | USA News and Politics.