EU Investigates Musk’s Grok Over Sexual AI Deepfakes
European Union regulators have opened a new investigation into Elon Musk’s social media platform X, focusing on its AI chatbot Grok and claims that it can be used to generate sexualized deepfake images of women and minors.
The probe, announced today (Jan. 26), follows growing international criticism of the tool and concerns that users can manipulate ordinary photos into explicit content using basic text prompts such as “put her in a bikini” or “remove her clothes.”
The move marks an escalation in the EU’s crackdown on platforms it believes are failing to curb illegal or harmful content, and it raises broader questions about how fast-moving generative AI tools will be governed under existing digital safety laws.
What prompted the EU’s latest investigation
The European Commission said it is examining whether X has adequately addressed “risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material.”
European Commission President Ursula von der Leyen framed the issue as both a legal and moral red line.
“In Europe, we will not tolerate unthinkable behaviour, such as digital undressing of women and children,” she said. “It is simple – we will not hand over consent and child protection to tech companies to violate and monetize. The harm caused by illegal images is very real.”
The case centers on how Grok, a chatbot integrated into X, is allegedly being used to generate sexualized imagery from photos using minimal instructions. Critics argue the ease of producing such content increases the risk of harassment, exploitation, and the spread of illegal material, particularly when minors are involved.
A backlash fueled by watchdog research
The investigation comes days after research published by the Center for Countering Digital Hate (CCDH), a nonprofit watchdog, claimed Grok generated an estimated three million sexualized images of women and children in a matter of days.
That figure has intensified pressure on regulators and on X to demonstrate that its safeguards are effective. While the methods used to estimate the scale of image generation were not detailed in the brief announcement, the headline number has sharpened concerns that generative systems can be misused faster than moderation systems can keep up.
For critics of the platform, the controversy is another example of how AI products can introduce new forms of abuse by automating actions that previously required higher technical skill—turning deepfake creation into something closer to everyday social media behavior.
Why the Digital Services Act matters here
Henna Virkkunen, the European Commission’s point person for technological sovereignty, said the probe would “determine whether X has met its legal obligations” under the EU’s Digital Services Act (DSA), a landmark rulebook designed to rein in the risks posed by the world’s largest online platforms.
She added that the rights of women and children in the EU should not be “collateral damage” of X’s services.
The DSA requires major platforms to identify systemic risks and take action to reduce harms such as illegal content distribution. In practical terms, the EU will be looking for evidence that X has done enough to prevent Grok from producing and spreading sexually explicit manipulated content—especially if it crosses the line into child sexual abuse material.
If regulators conclude that X failed to manage these risks, the company could face further penalties, demands for product changes, or deeper compliance scrutiny. The case could also become a significant test of whether the DSA can effectively address generative AI-driven harms, even though the law was not designed specifically for modern image-generating tools.
Deepfakes as a growing threat
The controversy reflects a rapidly expanding problem: “digital undressing” tools and sexual deepfakes have increasingly been used to target women, public figures, and minors.
Even when manipulated images do not meet the strict legal definition of child sexual abuse material, they can still cause severe harm, including harassment, blackmail, reputational damage, and long-term psychological trauma. Victims often have little control once content spreads online, and takedown processes can be slow, fragmented, or inconsistent across platforms and jurisdictions.
The EU’s language suggests it is treating the issue not as an edge case, but as a direct safety failure with real-world consequences—especially because the images can be generated and shared at scale.
A widening probe tied to earlier enforcement against X
Brussels said it was widening an existing investigation into X that originally focused on illegal content and information manipulation.
X has been under EU scrutiny since December 2023 under the bloc’s digital content rules. The Commission previously fined X €120 million ($142 million) in December for violating DSA transparency obligations, citing breaches including the “deceptive design” of its “blue checkmark” verification system and a failure to provide access to public data for researchers.
That fine triggered sharp criticism from the administration of US President Donald Trump, highlighting how Europe’s approach to tech regulation has become a flashpoint in transatlantic politics.
For EU officials, the Grok investigation is likely to be framed as a continuation of a consistent enforcement agenda: imposing obligations on platforms regardless of political pressure, company influence, or the popularity of specific technologies.
What comes next for Grok and X
The investigation will likely scrutinize how Grok handles sensitive prompts, whether X has adequate detection and removal systems for sexualized manipulated imagery, and how quickly the company responds to risks once they become evident.
The outcome could shape how other generative AI tools are treated under the DSA, and whether platforms will be expected to implement stronger default restrictions, improved monitoring, and clearer accountability for AI-driven content.
For Musk’s X, the case adds to mounting regulatory pressure in Europe at a time when authorities are signaling that AI-enabled harms—especially those involving women and children—will be treated as an enforcement priority, not a policy debate.
In a related development, Grok AI reportedly remained accessible in Indonesia and Malaysia despite nationwide bans in both countries.
The post EU Investigates Musk’s Grok Over Sexual AI Deepfakes appeared first on eWEEK.