3 Teens Just Sued Elon Musk's xAI For Using Their Images to Make AI-Generated Porn. What Parents Need to Know
Alarming news about teens and AI seems to come at a constant clip these days — and this week has been no exception. That’s thanks to three teenage girls filing a class-action lawsuit against Elon Musk’s xAI artificial intelligence company in the Northern District of California, alleging that the Grok tool was used, with their photos, to make nonconsensual nude and sexually explicit images of them, otherwise known as deepfakes.
Two of the three girls, from Tennessee, are minors.
According to a press release from the girls’ attorneys at Lieff Cabraser Heimann & Bernstein, the perpetrator, who has since been arrested, allegedly obtained the photographs from social media and, in one case, by convincing a girl to send him photos of herself directly. He then distributed the hyper-realistic AI photos and videos on the messaging apps Discord and Telegram and on the file-sharing platform Mega, trading the fake images for sexually explicit content of other minors.
“Watching my daughter have a panic attack after realizing that these images were created and distributed without any hope of recalling them was heartbreaking,” said the mother of one of the anonymous victims in the attorneys’ press release. “Her excitement about all that she would experience over her senior year — her spring formal, her graduation, and her senior trip — now comes with the fear anything she shares will be used and manipulated again.”
Annika K. Martin, an attorney at Lieff Cabraser, noted: “Without xAI, this harmful, illegal content could never, and would never, have existed. The lives of these girls have been shattered by the devastating loss of privacy and the deep sense of violation that no child should ever have to experience. We intend to hold xAI accountable for every child they harmed in this way.”
xAI did not immediately respond to a SheKnows request for comment.
The lawsuit is the first to be filed by minors following Grok’s mass generation of nonconsensual nude images earlier this year. At the time, sexually suggestive image requests were averaging nearly 6,700 every hour, according to Bloomberg, and the Center for Countering Digital Hate found that Grok generated about 3 million sexualized images, including 23,000 of children, in an 11-day period between December and January.
Confused yet? Panicking? We feel you. It’s why SheKnows turned to Robbie Torney, head of AI & Digital Assessments at Common Sense Media, to help us make sense of the situation and learn how to keep our kids safe.
What Are These AI Tools All About and How Do They Work?
“Deepfake tools can use AI to take a clothed image of a real person and generate what that person might look like without clothes — so creating fabricated, but photo-realistic images,” Torney explains. “And you don’t need to be a hacker or particularly technologically sophisticated to do this.”
Common Sense Media’s risk assessment of Grok, published in January, declared the tool unsafe for teens and noted that some people were using it to manipulate photos: by asking the AI tool to “put her in a bikini” or “put her in a transparent bikini,” they could make it remove a person’s clothes. “And I think it’s important to note that, unlike Google and OpenAI, xAI hasn’t adopted watermarking standards that help make it clear that an image is AI-generated,” Torney adds.
What About Guardrails Against This Sort of Use?
xAI’s guardrails are “particularly weak,” Torney notes, despite the company’s claim that it had put increased protections in place to prevent these nude images from being generated. “I just logged into one of our test accounts this morning to see where things are — you could try this yourself, as well. If you type into the generator on Grok, ‘her clothes are sheer and form fitting for her exercise class,’ it’ll generate sexualized images.” From there, he says, you can upload an image to change what the person looks like.
It’s why the three teens allege that xAI knowingly designed, marketed, and profited from an image and video generator that can make fake, sexually explicit content of real people, including children, without implementing the industry-standard child sexual-abuse material (CSAM) prevention measures used by every other major AI company, according to the news release.
What Can Parents Do to Keep Kids Safe From AI-Using Predators?
Torney is blunt with his warning: “I think what parents need to know is that any public social media presence is potentially a risk.” Even without being on Grok or a similar app, he says, “Someone can take your photos and use these tools on them.”
That’s why it’s vital that parents talk to their kids, “and in particular, their girls, about the fact that photos shared on Instagram, Snapchat, or other sites anywhere could potentially be used this way, so that kids can be informed about that.” Finally, he adds, it’s an important opportunity to have a conversation about public versus private social media accounts, “because public photos are raw material for this kind of abuse.”