Pressured by chatbots, newsrooms push past the one-story-fits-all model
This annual feature has always focused on what happens to journalism, but the better question might be what’s happening to audiences. Because they’re changing, and quickly.
Are we ready to adapt?
Chatbots are coming to news — strike that; they’re here already — and we’re going to have to find a way to work in that world, despite all their well-documented limitations. This isn’t an argument that chatbots are a good source of news, but a prediction that more and more people will turn to them because they provide something legacy newsrooms don’t: personalized information that’s more useful to each user. A person reading about a tax proposal in Washington wants to know how it’ll affect their small business in Ohio, not about political infighting in D.C. or what it’ll mean for the midterms; a young parent in Florida is more interested in how a school board vote affects their child’s education than in debates over diversity.
A recent Reuters Institute study found that about 7% of adults already use chatbots as a source of news — but that figure more than doubles, to 15%, among the under-25 crowd. How soon before it hits Facebook levels (36%) or more — and what will that mean for news, journalism, society, democracy, and more?
Large language models are language models, not fact models, which partly explains why they spew hallucinations with depressing regularity. But they are impressively good — and getting better all the time — at parsing, “understanding,” and creating language. Some news organizations are harnessing those capabilities and tying them to verified information — using retrieval-augmented generation, or RAG — to produce far more precise and accurate answers than the major commercial models do. The results are mixed, but the technology is improving quickly.
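To make that concrete, here is a minimal sketch of the RAG pattern in Python. It is a toy, not any newsroom’s actual pipeline: the keyword-overlap retriever stands in for real embedding search, and the final model call is left as a placeholder.

```python
# Minimal RAG sketch: retrieve verified newsroom reporting, then ask
# the model to ground its answer in it. The retriever here is a toy
# keyword-overlap scorer; a real system would use embedding search.

ARCHIVE = [
    "The city council approved a 2% property tax increase on Tuesday.",
    "The school board voted 4-3 to extend the district's reading program.",
    "The downtown bridge will close for repairs starting next month.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k archive passages sharing the most words with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(ARCHIVE,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt that confines the model to the retrieved facts."""
    context = "\n".join(f"- {p}" for p in retrieve(question))
    return (
        "Answer using ONLY the verified reporting below. "
        "If the answer isn't there, say so.\n\n"
        f"Verified reporting:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The call to an actual LLM is omitted; any provider's API would
# receive this prompt. Here we just print the grounded prompt itself.
print(build_prompt("How did the school board vote on the reading program?"))
```

The grounding instruction is the key move: the model is told to answer from the retrieved, verified reporting rather than from its own memory, which is what narrows the gap between fluent language and reliable facts.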
But there’s a deeper, and more structural, change that LLMs will drive in the news industry. As LLMs’ language-handling capabilities improve, our role will shift from writing narratives to asking questions, gathering facts, collecting and structuring information, and understanding our readers; generative AI systems will likely be the ones creating — at scale — “stories” that take into account users’ needs, situations, and levels of information. Our business models will invert from trying to get as many people as possible to read one version of our story to superserving a tight community of users with the information most relevant to them.
There are real dangers to this world, not least that we’ll fall into filter bubbles of one. But there’s promise, too — that communities that were never seen, or catered to, by the media’s one-size-fits-all business model might get news that is more useful to them, and from their perspectives.
This won’t happen overnight in 2026. But we can already see the shifts in the landscape, and news organizations need to start preparing for that world.
To be sure, there will always be a need for high-end, high-quality investigations, analysis, and commentary that isn’t personalized, work whose value lies precisely in the universality of its narratives and ideas. But most news isn’t that. It’s the day-to-day coverage of school board meetings, fires and floods, city hall votes, and business announcements.
News organizations need to start figuring out how to collect, store, and reuse structured information, with an eye to letting LLMs turn that information into personalized narratives; they need, at the same time, to understand what readers need and care about — at an individual level — so that those narratives are truly personalized. They need, too, to keep investing in the kind of investigative and analytical work that truly differentiates them from simply writing the news of the day.
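One hypothetical shape for that pipeline is sketched below, with field names and helpers that are purely illustrative rather than any existing newsroom schema: facts are stored as structured records, a reader profile selects and frames them, and only then does an LLM write the narrative.

```python
# Hypothetical sketch: reported facts stored as structured records,
# selected and framed per reader before an LLM writes the narrative.
# Field names and helpers are illustrative, not an existing schema.
from dataclasses import dataclass, field

@dataclass
class FactRecord:
    event: str          # what happened
    location: str       # where it happened
    details: dict       # verified, structured specifics
    topics: list = field(default_factory=list)

@dataclass
class ReaderProfile:
    location: str
    interests: list

def select_facts(facts: list, reader: ReaderProfile) -> list:
    """Keep facts matching the reader's location or interests."""
    return [f for f in facts
            if f.location == reader.location
            or set(f.topics) & set(reader.interests)]

def personalization_prompt(facts: list, reader: ReaderProfile) -> str:
    """Build an LLM prompt asking for a brief framed around this reader."""
    lines = "\n".join(f"- {f.event} ({f.location}): {f.details}"
                      for f in select_facts(facts, reader))
    return (f"Write a short news brief for a reader in {reader.location} "
            f"who cares about {', '.join(reader.interests)}. "
            f"Use only these verified facts:\n{lines}")

facts = [
    FactRecord("School board extends reading program", "Tampa",
               {"vote": "4-3", "cost": "$1.2M"}, topics=["education"]),
    FactRecord("Council debates federal tax proposal", "Washington",
               {"proposed rate change": "2%"}, topics=["small business"]),
]
reader = ReaderProfile(location="Tampa", interests=["education"])
print(personalization_prompt(facts, reader))
```

The point isn’t this particular schema; it’s that verified facts live separately from any single narrative, so the same reporting can be recombined for each reader.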
And we, as an industry, need to think about how to combat the smaller and smaller filter bubbles we’ll be falling into. Arguing that everyone has to read the same story isn’t a solution; that reality has failed to serve so many communities. How can we insert enough friction into systems so that, whatever a reader’s biases and perspectives, they’re exposed to different ideas and points of view? Can we build tools — as I’ve experimented with — that help surface uncomfortable facts and alternative explanations in palatable ways?
Next year is the year we should be taking those experiments and ideas seriously; if we don’t, we risk ceding the space to companies that will optimize for engagement, not civic participation.