
Elon Musk’s Pornography Machine

Earlier this week, some people on X began replying to photos with a very specific kind of request. “Put her in a bikini,” “take her dress off,” “spread her legs,” and so on, they commanded Grok, the platform’s built-in chatbot. Again and again, the bot complied, using photos of real people—celebrities and noncelebrities, including some who appear to be young children—and putting them in bikinis, revealing underwear, or sexual poses. By one estimate, Grok generated one nonconsensual sexual image every minute in a roughly 24-hour stretch.

Although the reach of these posts is hard to measure, some have been liked thousands of times. X appears to have removed a number of these images and suspended at least one user who asked for them, but many, many of them are still visible. xAI, the Elon Musk–owned company that develops Grok, prohibits the sexualization of children in its acceptable-use policy; neither the safety nor child-safety teams at the company responded to a detailed request for comment. When I sent an email to the xAI media team, I received a standard reply: “Legacy Media Lies.”

Musk, who also did not reply to my request for comment, does not appear concerned. As all of this was unfolding, he posted several jokes about the problem: requesting a Grok-generated image of himself in a bikini, for instance, and writing “😂😂😂😂” in response to Kim Jong Un receiving a similar treatment. “I couldn’t stop laughing about this one,” the world’s richest man posted this morning, sharing an image of a toaster in a bikini. On X, in response to a user’s post calling out the ability to sexualize children with Grok, an xAI employee wrote that “the team is looking into further tightening our gaurdrails [sic].” As of publication, the bot continues to generate sexualized images of nonconsenting adults and apparent minors on X.

AI has been used to generate nonconsensual porn since at least 2017, when the journalist Samantha Cole first reported on “deepfakes”—at the time, referring to media in which one person’s face has been swapped for another. Grok makes such content easier to produce and customize. But the real impact of the bot comes through its integration with a major social-media platform, allowing it to turn nonconsensual, sexualized images into viral phenomena. The recent spike on X appears to be driven not by a new feature, per se, but by people responding to and imitating the media they see other people creating: In late December, a number of adult-content creators began using Grok to generate sexualized images of themselves for publicity, and nonconsensual erotica seems to have quickly followed. Each image, posted publicly, may only inspire more images. This is sexual harassment as meme, all seemingly laughed off by Musk himself.

Grok and X appear purpose-built to be as sexually permissive as possible. In August, xAI launched an image-generating feature, called Grok Imagine, with a “spicy” mode that was reportedly used to generate topless videos of Taylor Swift. Around the same time, xAI launched “Companions” in Grok: animated personas that, in many instances, seem explicitly designed for romantic and erotic interactions. One of the first Grok Companions, “Ani,” wears a lacy black dress and blows kisses through the screen, sometimes asking, “You like what you see?” Musk promoted this feature by posting on X that “Ani will make ur buffer overflow @Grok.”

Perhaps most telling of all, as I reported in September, xAI launched a major update to Grok’s system prompt, the set of directions that tell the bot how to behave. The update disallowed the chatbot from “creating or distributing child sexual abuse material,” or CSAM, but it also explicitly said “there are **no restrictions** on fictional adult sexual content with dark or violent themes” and “‘teenage’ or ‘girl’ does not necessarily imply underage.” The suggestion, in other words, is that the chatbot should err on the side of permissiveness in response to user prompts for erotic material. Meanwhile, in the Grok subreddit, users regularly exchange tips for “unlocking” Grok for “Nudes and Spicy Shit” and share Grok-generated animations of scantily clad women.

[Read: Grok’s responses are only getting more bizarre]

Grok seems to be unique among major chatbots in its permissive stance and in the apparent holes in its safeguards. There aren’t widespread reports of ChatGPT or Gemini, for example, producing sexually suggestive images of young girls (or, for that matter, praising the Holocaust). But the AI industry does have broader problems with nonconsensual porn and CSAM. Over the past couple of years, a number of child-safety organizations and agencies have tracked a skyrocketing number of AI-generated, nonconsensual images and videos, many of which depict children. Plenty of erotic images are in major AI-training data sets, and in 2023 one of the largest public image data sets for AI training was found to contain hundreds of instances of suspected CSAM, which were eventually removed—meaning these models are technically capable of generating such imagery themselves.

Lauren Coffren, an executive director at the National Center for Missing & Exploited Children, recently told Congress that in 2024, NCMEC received more than 67,000 reports related to generative AI—and that in the first six months of 2025, it received 440,419 such reports, a more than sixfold increase. Coffren wrote in her testimony that abusers use AI to modify innocuous images of children into sexual ones, generate entirely new CSAM, or even provide instructions on how to groom children. Similarly, the Internet Watch Foundation, in the United Kingdom, received more than twice as many reports of AI-generated CSAM in 2025 as it did in 2024, amounting to thousands of abusive images and videos in both years. Last April, several top AI companies, including OpenAI, Google, and Anthropic, joined an initiative led by the child-safety organization Thorn to prevent the use of AI to abuse children—though xAI was not among them.

In a way, Grok is making visible a problem that’s usually hidden. Nobody can see the private logs of chatbot users that could contain similarly awful content. For all of the abusive images Grok has generated on X over the past several days, far worse is certainly happening on the dark web and on personal computers around the world, where open-source models created with no content restrictions can run without any oversight. Still, even though the problem of AI porn and CSAM is inherent to the technology, it is a choice to design a social-media platform that can amplify that abuse.
