Epstein Files: X Users Are Asking Grok to ‘Unblur’ Photos of Children
In the days after the US Department of Justice (DOJ) published 3.5 million pages of documents related to the late sex offender Jeffrey Epstein, multiple users on X have asked Grok to “unblur” or remove the black boxes covering the faces of children and women in images, redactions that were meant to protect their privacy.
While some survivors of Epstein’s abuse have chosen to identify themselves, many more have never come forward. In a joint statement, 18 of the survivors condemned the release of the files, which they said exposed the names and identifying information of survivors “while the men who abused us remain hidden and protected”.
After the latest release of documents on Jan. 30 under the Epstein Files Transparency Act, thousands of documents had to be taken down because of flawed redactions that lawyers for the victims said exposed the names and faces of nearly 100 survivors.
But X users are trying to undo the redactions even on images of people whose faces were correctly redacted. By searching for terms such as “unblur” and “epstein” alongside the “@grok” handle, Bellingcat found more than 20 different photos and one video that multiple users were trying to unredact using Grok. These included photos showing the visible bodies of children or young women, with their faces covered by black boxes. There may be other such requests on the platform that our searches did not pick up.
The images appeared to show several children and women with Jeffrey Epstein as well as other high-profile figures implicated in the files, including the UK’s Prince Andrew, former US President Bill Clinton, Microsoft co-founder Bill Gates and director Brett Ratner, in various locations such as inside a plane and at a swimming pool.
From Jan. 30 to Feb. 5, we reviewed 31 separate requests from users for Grok to “unblur” or identify the women and children from these images. Grok noted in responses to questions or requests by some users that the faces of minors in the files were blurred to protect their privacy “as per standard practices in sensitive images from the Epstein files”, and said it could not unblur or identify them. However, it still generated images in response to 27 of the requests that we reviewed.
We are not linking to these posts to prevent amplification.
The images Grok generated ranged in quality from believable to comically bad, such as a baby’s face on a young girl’s body. Some of these posts have garnered millions of views on X, where users are monetarily incentivised to create high-engagement content.
Of the four requests we found during this period to which Grok did not generate images, one received no response at all. In response to another, Grok said deblurring or editing images was outside its abilities, and noted that photos from recent Epstein file releases were redacted for privacy.
The other two requests appeared to have been made by non-premium users, with the chatbot responding: “Image generation and editing are currently limited to verified Premium subscribers”. X has limited some of Grok’s image generation capabilities to paid subscribers since January amid an ongoing controversy over users using the AI chatbot to digitally “undress” women and children.
X did not respond to multiple requests for comment.
However, shortly after we first reached out to X on Feb. 6, we noticed that more guardrails appeared to have been put in place. Out of 16 requests from users between Feb. 7 and Feb. 9, which we found using similar search terms, Grok did not attempt to unredact any of the images.
Grok did not respond at all to 14 of these requests, while in the other two cases it generated AI images that were completely different from the ones uploaded in the users’ original requests.
When a user commented on one of these requests that Grok was no longer working, Grok responded: “I’m still operational! Regarding the request to unblur the face in that Epstein photo: It’s from recently released DOJ files where identities of minors are redacted for privacy. I can’t unblur or identify them, as it’s ethically and legally protected. For more, check official sources like the DOJ releases.”
As of publication, X had not responded to Bellingcat’s subsequent query about whether new guardrails had been put in place over the weekend.
Fabricated Images
This is not the first time AI has been used to fabricate images related to Epstein file releases. Some images shared on X were reportedly AI-generated, including ones that appeared to show Epstein alongside famous figures such as US President Donald Trump, and alongside New York City mayor Zohran Mamdani as a child with his mother. Some of the individuals shown in the false images, such as Trump, do appear in authentic photos, which can be viewed on the DOJ website.
X users also previously used Grok to generate images in relation to recent killings in Minnesota by federal agents.
For example, some users asked Grok to “unmask” the federal agent who killed Renee Good. The result was a completely fabricated face that did not resemble the actual agent, Jonathan Ross, and led to the false accusation of a man who had nothing to do with the shooting.
Bellingcat’s Director of Research and Training @giancarlofiorella.bsky.social appeared on CTV yesterday to discuss the misleading AI-generated images that were used to falsely identify ICE agents and weapons at the centre of the two fatal shootings in Minneapolis youtu.be/mL7Fbp3UrSo?…
— Bellingcat (@bellingcat.com) 5 February 2026 at 09:36
After Alex Pretti was shot and killed by federal agents in Minneapolis, people used AI to edit video stills, producing images that showed a completely different gun from the one Pretti actually owned. In another instance, an AI-edited image of Pretti’s shooting falsely depicted Pretti, an intensive care unit nurse, holding a gun instead of his sunglasses.
Grok has also been at the centre of a controversy for generating sexually explicit content.
On Twitter/X, users have figured out prompts to get Grok (their built in AI) to generate images of women in bikinis, lingerie, and the like. What an absolute oversight, yet totally expected from a platform like Twitter/X. I’ve tried to blur a few examples of it below.
— Kolina Koltai (@koltai.bsky.social) 6 May 2025 at 03:20
Multiple countries, including the UK and France, have launched investigations into Elon Musk’s chatbot over reports of people using it to generate non-consensual sexual deepfake images, including child sexual abuse imagery. Malaysia and Indonesia have also blocked Grok over concerns about deepfake pornographic content.
One analysis by the Center for Countering Digital Hate found that Grok had publicly generated around three million sexualised images, including 23,000 of children, in 11 days from Dec. 29, 2025 to Jan. 8 this year. X’s initial response, in January, was to limit some image generation and editing features to only paid subscribers. However, this has been widely criticised as inadequate, including by UK Prime Minister Keir Starmer, who said it “simply turns an AI feature that allows the creation of unlawful images into a premium service”. The social media platform has since announced new measures to block all users, including paid subscribers, from using Grok via X to edit images of real people in revealing clothing such as bikinis.