The AI winners will recognize that knowledge needs humans
In January 2026, Wikipedia will turn 25 years old, a reminder that one of the internet’s most‑visited sites began as an experiment many experts assumed would fail. It grew from a simple idea to more than 65 million articles in over 300 languages not by ignoring criticism, but by listening: adding quality controls, building policies, and keeping humans at the center of oversight and content creation. That path offers a useful lens on how AI‑driven search and open knowledge will evolve in the coming year.
By 2026, a substantial share of search will be handled by AI assistants and large language models that summarize and curate answers directly rather than pointing users to a page of links. Research already shows that AI overviews and answer boxes capture user attention and often satisfy information needs with zero clicks, depressing referral traffic for publishers across news, reference, and niche information sites.
This is not just an SEO story; it is a power shift. When discovery happens inside opaque “answer engines,” the visible marketplace of links becomes a single, pre‑digested verdict where ranking, sourcing, and even dissenting views are hidden. For media organizations, that means less direct audience contact, weaker brand recognition, and fewer chances to explain context, nuance, and methodology in their own voice. In this environment, the real leverage lies in shaping the knowledge that AI systems rely on behind the scenes.
AI companies must lean on structured, high‑reliability sources to reduce hallucinations, curb the spread of misinformation, manage reputational risk, and improve the quality of the information they share. Many are building knowledge discovery platforms that pull in reliable public data sources (encyclopedias, open databases, government records) and plug them directly into their own AI tools and services. Without these well‑governed datasets, their models become less accurate, less diverse, and harder to defend when they get information wrong.
Wikipedia is one example of this shift: it increasingly functions as background infrastructure for everything from voice assistants and search engines to AI chatbots, even when users never encounter a Wikipedia link in their results. The same is true for many newsrooms that publish structured election results, investigative databases, and explainers that quietly feed AI systems. This dependence shows that open, human‑generated knowledge has become critical infrastructure for the AI era and its pace of innovation, not just a nice‑to‑have public good.
My prediction for 2026 is that at least some leading AI companies will be pushed — by technical necessity, public pressure, and regulation — to adopt something closer to Wikipedia’s playbook. They will need stronger quality controls that reach down into their data, not just safety filters on outputs, and clearer policies for what gets included, corrected, or removed. Growing scrutiny around training data, attribution, and legal accountability will force a shift from “scrape now, fix later” toward partnering with, and supporting, the sources they depend on.
Wikipedia’s story suggests why this is the logical progression. It did not become trusted because it was perfect; it became trusted because people could see how it tried to correct itself (and still does), through transparent policies, visible edit histories, and a culture of human stewardship. As AI systems intermediate more of the audience relationship, newsrooms, archives, and reference sites will similarly need to signal not only what they know, but how they know it — and insist that AI companies preserve that provenance instead of flattening it into a single “authoritative” answer.
For the media ecosystem, this means two things in 2026. First, value will increasingly lie in underlying, reusable knowledge: structured data, open licenses, and machine‑readable archives that can flow into many products, AI systems included. Second, organizations that treat knowledge quality and transparency as core pillars will become indispensable partners to AI companies trying to manage risk and maintain trust.
If the early 2000s were about building the web of links, the late 2020s will be about negotiating the web of answers. Projects like Wikipedia show that open, human‑governed knowledge can survive paradigm shifts in technology. The opportunity now is for AI companies and the wider media ecosystem to recognize that their long‑term business depends on nurturing, not strip‑mining, the human knowledge they rely on.
Maryana Iskander is the CEO of the Wikimedia Foundation, the global nonprofit that supports Wikipedia and other free knowledge projects.