ChatGPT is now indexing Grok’s AI slop
More and more of the web is filling up with LLM-generated text, images, and even videos and music. It’s an even bigger problem than it seems, because the “AI” systems that scoured the web to build their large language models are now re-indexing all that output. It’s an ouroboros of AI slop… and now ChatGPT—which, by most measures, is the most popular LLM—is indexing Grokipedia.
Grokipedia is an AI-generated encyclopedia created last year by xAI, sister company to Elon Musk’s social media site. It’s almost entirely auto-generated with the Grok LLM, which has been integrated into the social network as well. Grokipedia is positioned as a conservative alternative to Wikipedia, which Musk considers “woke” and “propaganda.”
Grokipedia is filled with inaccuracies and AI hallucinations—at an apparently higher rate than even typical LLM systems—as Grok itself has been intentionally tweaked to conform to Musk’s dictates. The system has been observed promoting conspiracy theories and other material that ranges from merely delusional to actively harmful.
Now it appears that OpenAI’s ChatGPT is indexing Grokipedia to answer at least some users’ queries. According to an investigation in The Guardian, ChatGPT 5.2 is selective about when it returns material gleaned from Grokipedia—it won’t give you immediate Grok-generated answers for the site’s most well-known and documented falsehoods, such as HIV and AIDS misinformation. But when users pushed ChatGPT for more detail on controversies surrounding the Iranian government or Holocaust denier David Irving, the system did return information drawn from Grok-generated pages.
The massive volume of text spat out by LLMs—estimated at more than half of all newly published articles as of late 2025—is becoming a problem. “AI” errors (or “hallucinations”) can be spread, replicated, and repeated, essentially overwriting established knowledge with a copy error. The fundamentally iterative nature of large language models can also be weaponized. Google’s Gemini AI has been seen repeating the Chinese Communist Party’s official positions on the country’s human rights abuses (or, according to Gemini, its lack thereof), and some security researchers believe Russia is pumping out LLM-generated propaganda text with the specific aim of having it integrated into other large language models.
Grok itself has been observed repeating explicitly hateful material, with the chatbot referring to itself as “MechaHitler.” It also generated millions of sexualized images of minors via tools accessible on X, starting in December 2025. The tool was disabled for free users in early January and restricted on X so that it can no longer be applied to images of real people in revealing clothing. Countries around the world have opened investigations into Grok and X following the incident, citing possible violations of various laws. Indonesia and Malaysia have blocked access to Grok outright.
Exactly why OpenAI chose to integrate Grok’s output into ChatGPT—not only surfacing auto-generated text but training its own systems on a rival’s product—is not clear. It may simply be that the ever-hungry nature of large language models, which depend on new input to iteratively adapt and change, means OpenAI cannot afford to be selective about its training data.