
What can technology do to stop AI-generated sexualised images?

The global outcry over the sexualisation and nudification of photographs – including of children – by Grok, the chatbot developed by Elon Musk’s artificial intelligence company xAI, has led to urgent discussions about how such technology should be more strictly regulated.

But to what extent can technology also be used to prevent this explosion in the generation and sharing of deepfake content of real people, without their knowledge or consent?

On January 10, Indonesia became the first country to announce it was temporarily blocking access to Grok, followed soon after by Malaysia. Other governments, including the UK’s, have promised to take action against the chatbot and its related social media site X (formerly Twitter), on which the sexualised images have been shared.

But while outright national bans can limit casual use of the chatbot, they are easily bypassed using virtual private networks (VPNs) or alternative routing services, which mask the user’s real location and make their traffic appear to originate from a country where the service is still accessible.

As a result, country-level bans tend to reduce visibility rather than eliminate access. Their primary impact is symbolic and regulatory, placing pressure on companies such as xAI rather than preventing determined misuse. And content generated elsewhere can still circulate freely across borders via encrypted social media platforms and on the dark web.

In response to the controversy, X moved Grok’s image-generation features behind a paywall, making them only available to subscribers. X subsequently posted that it takes “action against illegal content on X, including child sexual abuse material, by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary”. Grok itself apologised for “the incident”, describing it as a “serious lapse”.

How the technology works

While not all chatbots have image-generation capabilities, most mainstream providers, including OpenAI, xAI, Meta and Google, offer the feature.

Modern AI image generators are typically built using diffusion models, which are trained by taking real images and gradually adding random visual distortion, known as noise, until the original image is no longer recognisable. The model then learns how to reverse this process, step by step, reconstructing an image by removing noise.
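As a rough illustration, the forward “noising” process that a diffusion model is trained to reverse can be sketched in a few lines of Python. This is a minimal sketch; the function and variable names are illustrative and not drawn from any particular library.

# A minimal sketch of the forward "noising" step a diffusion model
# learns to reverse. Names are illustrative, not from any real library.
import numpy as np

def add_noise(image: np.ndarray, t: float) -> np.ndarray:
    """Blend an image with Gaussian noise; t=0 is clean, t=1 is pure noise."""
    noise = np.random.randn(*image.shape)
    return np.sqrt(1.0 - t) * image + np.sqrt(t) * noise

# During training the model sees add_noise(x, t) and learns to predict
# the noise that was added, so it can later run the process in reverse:
# starting from pure noise and denoising step by step into an image.
image = np.random.rand(64, 64, 3)           # stand-in for a real photo
slightly_noisy = add_noise(image, t=0.1)    # early step: image still visible
mostly_noise = add_noise(image, t=0.9)      # late step: almost pure noise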

Over time, the model learns statistical patterns representing faces, bodies, clothing, lighting and other visual features. These patterns are organised within the model so that visually similar concepts sit close together. Because clothed and unclothed human bodies share very similar shapes and structures, the changes required to move between them can be relatively small.

So, when an existing image is used as the starting point and identity-preserving features are retained, transforming a clothed photograph into an unclothed one becomes technically straightforward. Of course, the AI model itself has no understanding of identity, consent or harm. It simply produces images that resemble what it has learned, in response to user requests.
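This kind of identity-preserving, image-to-image editing is exposed directly in open-source tooling. The sketch below uses the Hugging Face diffusers library; the checkpoint name and file paths are placeholder assumptions, and a low “strength” value is what keeps most of the original photograph intact.

# A sketch of image-to-image generation with the open-source diffusers
# library. Checkpoint name and file paths are placeholders only.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("portrait.jpg").convert("RGB").resize((512, 512))

result = pipe(
    prompt="the same person wearing a heavy winter coat",
    image=init_image,
    strength=0.4,  # low strength: pose, face and composition largely kept
).images[0]
result.save("edited.jpg")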

However, after the core model has been trained, companies can apply “retrospective alignment” – rules, filters and policies that are layered on top of the trained system to block certain outputs and align its behaviour with the company’s ethical, legal and commercial principles.

But retrospective alignment does not remove capability; it simply limits what the AI image generator is allowed to output. Those limits are primarily a design and policy choice made by the company operating the chatbot, although these may also be shaped by legal or regulatory requirements imposed by governments – for example, requiring companies to disable or restrict certain features such as identity-preserving image generation.
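In code terms, retrospective alignment amounts to wrapping a gate around an unchanged generator. The toy sketch below uses a crude keyword check where production systems use trained classifiers on both prompts and outputs; all names are hypothetical, but the structural point holds: the underlying capability remains, and only the gate is added.

# A toy sketch of retrospective alignment: a policy gate layered over an
# already-trained generator. All names are hypothetical; real systems use
# trained moderation classifiers rather than keyword lists.

BLOCKED_TERMS = {"undress", "nudify", "remove clothing"}  # simplified policy

def violates_policy(prompt: str) -> bool:
    """Crude stand-in for a moderation classifier run on the prompt."""
    lowered = prompt.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def underlying_model(prompt: str) -> str:
    """Stand-in for the frozen, fully capable image generator."""
    return f"<image generated for: {prompt}>"

def generate(prompt: str) -> str:
    # The gate limits outputs; the model's capability is untouched.
    if violates_policy(prompt):
        return "Request refused: violates content policy."
    return underlying_model(prompt)

print(generate("a cat in a spacesuit"))           # passes the gate
print(generate("nudify this photo of a person"))  # blocked by the gate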

Large, centrally hosted social media platforms could also play an important role here. All have the power to restrict the sharing of sexual imagery involving real people, and to require explicit consent mechanisms from those featured in images. But to date, the big tech companies have tended to drag their feet when it comes to labour-intensive moderation of their users’ content.

‘Jailbreaking’

Research by Nana Nwachukwu, a PhD candidate at Trinity College Dublin’s Centre for AI-Driven Digital Content Technology, highlighted the frequency of requests for sexualised images on Grok. Other research has estimated that before the service went behind a paywall, up to 6,700 undressed images were being produced every hour.

This has prompted regulatory scrutiny in Europe and beyond. French officials described some outputs as manifestly illegal and referred them to prosecutors. The UK’s communications watchdog, Ofcom, has launched an investigation into X and xAI over the issue.

But this problem is not limited to one platform. In early 2024, non-consensual AI-generated sexual images of Taylor Swift, produced using publicly available tools, spread widely on X before being removed because of a combination of legal risk, platform policy enforcement and reputational pressure.

Fake explicit Taylor Swift images raise new concerns about AI. Video: CBS News, January 2024.

Some platforms explicitly market minimal or no content restrictions as a feature rather than a risk. It is simple enough to find websites promoting “unrestricted” image generation and privacy-focused use, relying largely on open-source models and offering far fewer moderation controls than mainstream providers. Furthermore, there is an even larger number of self-hosted image- and video-generation tools whose safeguards can be removed entirely.

While precise figures are unavailable, independent estimates suggest tens of millions of AI-generated images are created daily across platforms, with video generation rapidly accelerating.

Another potential issue is that some AI models, including Meta’s Llama and Google’s Gemma, can be downloaded onto ordinary computers (even those with relatively modest processing power), after which they run offline, entirely free of oversight or moderation.

Even tightly controlled systems can be bypassed through “jailbreaking” – a way of constructing prompts to fool the generative AI system into breaking its own ethics filters.

Jailbreaking exploits the fact that retrospective alignment systems depend on contextual judgment, rather than absolute rules. Rather than directly asking for prohibited content, users reframe their prompts so the same underlying action appears to fall within an allowed category such as fiction, education, journalism or hypothetical analysis.

An early example was known as the “grandma hack”: the user asked the model to role-play a recently deceased grandmother recounting experiences from her career in chemical engineering, leading it to generate step-by-step descriptions of prohibited activities.

Speed and scale

The internet already contains an enormous quantity of illegal and non-consensual sexual imagery, far beyond the capacity of authorities to remove. What generative AI systems change is the speed and scale at which new material can be produced. Law enforcement agencies have warned that this could lead to a dramatic increase in volume, overwhelming moderation and investigative resources.

Laws that may apply in one country can be ambiguous or unenforceable when services are hosted elsewhere. This mirrors longstanding challenges in policing child sexual abuse material and other illegal pornography, where content is frequently hosted offshore and rapidly redistributed. Once images spread, attribution and removal are slow and often ineffective.

By making countless millions more people aware of the possibility of sexualising and nudifying images, high-profile AI chatbots enable large numbers of users to generate illegal and abusive sexual imagery through simple plain-English prompts. Estimates suggest Grok alone currently has anywhere from 35 million to 64 million monthly active users.

If companies can build systems capable of generating such imagery, they can also stop it being generated – in theory, at least. In practice, however, the technology exists and there is a demand for it – so this capability can never now be eliminated.

Simon Thorne does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
