People are using Grok to create lewd images of women and young girls
Elon Musk took over X and folded in Grok, the generative AI tool from his sister company xAI, with the aim of making his social media ecosystem a more permissive, "free speech maximalist" space. What he's ended up with is the threat of multiple regulatory investigations after people began using Grok to create explicit images of women without their permission, sometimes veering into images of children.
The problem, which surfaced in the past week as people began weaponizing the image-generation abilities of Grok on innocuous posts by mostly female users of X, has raised the hackles of regulators across the world. Ofcom, the U.K.’s communications regulator, has made “urgent contact” with X over the images, while the European Union has called the ability to use Grok in such a way “appalling” and “disgusting.”
In the three years since the release of ChatGPT, generative AI has faced numerous regulatory challenges, many of which are still being litigated, including alleged copyright infringement in the training of AI models. But the use of artificial intelligence to target women in such a harmful way marks a major moral moment for the future of the technology.
“This is not about nudity. It’s about power, and it’s about demeaning those women, and it’s about showing who’s in charge and getting pleasure or titillation out of the fact that they did not consent,” says Carolina Are, a U.K.-based researcher who has studied the harms of social media platforms, algorithms, and AI to users, including women.
For its part, X has said: "Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content," echoing the wording of its owner, Elon Musk, who posted the same statement on January 3.
The fact that it's possible to create such images at all shows how harmful it is to strip guardrails from generative AI and let users do essentially whatever they want. "This is yet another example of the wild disparities, inequalities, and double standards of the social media age, particularly during this period of time, but also of the impunity of the tech industry," Are says.
Precedented
While the scale and power of AI-created images feel unprecedented, some experts push back on the idea that they represent the first real morality test for generative AI.
“AI—I’m using it here for an umbrella term—has long been a tool of discrimination, misogyny, homophobia and transphobia, and direct harm, including encouraging people to end their lives, causing depression and body dysmorphia, and more,” says Ari Waldman, a law professor at the University of California, Irvine. “Creating deepfakes of women and girls is absolutely horrible, but it is not the first time AI has engaged in morally reprehensible conduct,” he adds.
But the question of who bears legal responsibility for the production of these images is less clear than Musk’s pronouncements make it seem.
Eric Goldman, a professor at Santa Clara University School of Law, points out that the recently enacted Take It Down Act, which in the coming months will require platforms to have measures in place to remove nonconsensual intimate imagery within 48 hours, added new criminal provisions against "intimate visual depictions," a category that would include AI-generated images. But whether that category covers the bikini images Grok is churning out by the load is uncertain.
“This law has not yet been tested in court, but using Grok to create synthetic sexual content is the kind of thing the law was designed to discourage,” Goldman says. “Given that we don’t know if the Take It Down Act has already put in place the regulatory solution necessary to solve the problem at hand, it would be premature to make yet more laws.”
Experts like Rebecca Tushnet, a First Amendment scholar at Harvard Law School, say the necessary laws already exist. “The issue is enforcing them against the wrongdoers when the wrongdoers include the politically powerful or those contemptuous of the law,” she says.
In recent years, many new anti-deepfake and explicit-image laws have been passed in the U.S., including a federal law punishing the distribution of sexually explicit digital forgeries, explains Mary Anne Franks, an intellectual property and technology law expert at George Washington University Law School.
But the recent developments with Grok show the existing measures aren’t good enough, she says. “We need to start treating technology developers like we treat other makers of dangerous products: hold them liable for harms caused by their products that they could and should have prevented.”
Ultimate responsibility
This question of ultimate responsibility, then, remains unanswered. And it's the one Musk may be trying to head off by expressing his distaste for what his users are doing.
“The tougher legal question is what, if any, liability Grok may have for facilitating the creation of intimate visual imagery,” explains Santa Clara University’s Goldman, pointing to the “voluntary” imposition of guardrails as part of firms’ trust and safety protocols. “It’s unclear under U.S. law if those guardrails reduce or eliminate any legal liability,” he says, adding that it’s “unclear if the model’s liability will increase if a model has obviously inadequate guardrails.”
UC Irvine’s Waldman argues that lawmakers in Washington should pass a law that would hold companies legally responsible for designing and building AI tools capable of creating child pornography or pornographic deepfakes of women and girls. “Right now, the legal responsibility of tech companies is contested,” he adds.
While the Federal Trade Commission has statutory authority to take action, Waldman worries that it won’t. “The AI companies have aligned themselves with the president, and the FTC doesn’t appear to be fulfilling its consumer protection mandate in any real sense.”