People who use chatbots for news consider them unbiased and “good enough,” new study finds
Chatbots are already a burgeoning source of news. Seven percent of respondents in the U.S. use chatbots for news every week, according to a report last summer from the Reuters Institute for the Study of Journalism (RISJ). In India, that number jumps to nearly 20 percent.
As AI chatbot usage continues to grow, there’s real potential for these products to become a fixture in the way many people discover and read the news — and upend traditional search traffic in the process. So far, though, little is known about the people who have already turned to ChatGPT to keep up with the headlines.
A new report from the Center for News, Technology, & Innovation (CNTI) takes a deeper look at the news consumption habits of these users. CNTI interviewed 53 participants across the U.S. and India, the two largest markets for ChatGPT, which have seen similarly early adoption of other chatbot products.
CNTI recruited only interviewees who use AI chatbots at least once a week and who said they “keep informed about issues and the events of the day” at least somewhat closely. The most popular products among them were ChatGPT, Microsoft’s Copilot, and Google’s AI products, including Gemini and AI Mode in search.
While there is helpful survey research out there about chatbots, for most people the distinction between “getting news” through chatbots and “seeking out information” is slippery at best. That makes self-reporting chatbot usage especially difficult. In this study, though, researchers asked participants not only to discuss their own usage in detail, but also to demonstrate specific use cases and walk through their chat histories.
“A lot of the people we interviewed didn’t necessarily think of what they were doing as ‘news’ even if the output cited dozens of news stories,” said Jay Barchas-Lichtenstein, the senior research manager at CNTI, and one of the co-authors of the report. “You can’t design a good survey question until you’re fairly certain you know what the behaviors are and how people talk about them.”
The study found that, right now, even regular chatbot users aren’t relying on chatbots alone for news. Most interviewees continued to access traditional news sources, using chatbots to supplement those habits. One U.S. interviewee, a ChatGPT loyalist, continues to use Google News as their main news aggregator and also pays for a subscription to The New York Times and two local news outlets in their hometown.
The study also found interviewees in the U.S. and India rarely used chatbots as a basic news digest. While six respondents in India prompted chatbots broadly for the “top headlines,” just one interviewee in the U.S. said they regularly asked a chatbot about the “latest headlines in [their zip code].” Instead, interviewees tended to seek out information with chatbots “to inform their choices and actions.” That could include information needed to make financial investments, purchase consumer products, book travel, or navigate government bureaucracy.
For example, respondents in the U.S. asked chatbots how they could best respond to the recent federal government shutdown, as well as how changing tariffs would affect them professionally. One person asked a chatbot to tell them who to vote for in an election. Another toggled between ChatGPT, Gemini, and linked law firm websites “to better understand their rights when ICE agents are active in their area.”
While chatbot users in the U.S. tended to ask for information to inform their own predictions, users in India were more likely to ask chatbots to make predictions for them outright. For example, users asked chatbots to predict Bitcoin prices and generate astrological charts. One Indian interviewee asked a chatbot for “predictive analyses of stock sectors” using quarterly earnings reports, despite admitting that they had lost money on chatbot investment advice in the past.
Broadly speaking, the researchers compared this chatbot usage to “service journalism” — providing readers with practical, actionable information and advice to make decisions. In both countries, chatbots were less likely to keep folks plugged into the latest breaking news, and more likely to inform their actions.
One of the most interesting findings from the report was just how much interviewees trusted chatbots. Even when they talked about chronic factual errors and complained about chatbots failing to provide up-to-date information, interviewees were quick to forgive and continue using them to stay informed. No matter how many wrong answers they’d received, these shortcomings weren’t dealbreakers. That is in stark contrast to the general distrust they expressed about the news media.
In the U.S., interviewees told CNTI they were concerned about political bias from traditional news sources; in India, they spoke about sensationalism and the news media’s alignment with corporate interests. In last year’s RISJ Digital News Report, 43 percent of respondents in India expressed overall trust in news. In the U.S., only 30 percent of respondents said the same.
Chatbots, in comparison, were often framed by interviewees as a “neutral” source of information — balanced and opinionless. Interviewees rarely discussed algorithmic bias or acknowledged that news publishers were often cited in their conversations with chatbots. In fact, many took the mere existence of citations and sources in their chatbot conversations as “an assurance of accuracy.” The researchers, though, rarely observed users clicking through to check whether the cited source was being portrayed accurately. One interviewee in the U.S. said “you can always double check if you want to,” but told researchers they never felt the need.
As one interviewee in the U.S. described it, AI chatbots are an “80/20” proposition. “It’ll give me 80% of the information in 20% of the effort, but you know, you gotta be careful about taking that 80% to the bank, and there’s always 20% missing,” they said.
In Barchas-Lichtenstein’s view, users like this appear to “have more positive attitudes in part because they’re expecting good enough, rather than perfect.”
Another reason the study points to is the sense of collaboration interviewees feel with chatbots in conversations they frame and prompt themselves. Chatbots are responsive, and users feel like partners in soliciting information. “Interviewees gently chide the AI chatbot to modify, clarify, or correct the output,” the researchers write. “The interactivity seems to allow for second chances, while the interviewees have no such patience for fixed text.”
This responsiveness also allows users to feel a sense of “decision-making power,” as the researchers put it. “The information environment is, frankly, overwhelming. Broadly, we’re seeing that people feel like they have to do a tremendous amount of work to determine what sources of information are credible,” Barchas-Lichtenstein told me. “People want help to make those judgments but don’t want it made for them — that is the decision-making power we’re talking about.”
In other words, what chatbots lack in accuracy for these users, they compensate for with a sense of control.