Keir Starmer says tackling deepfake abuse is on par with government’s approach to terrorist content
Metro’s Anna Staddon spoke with Keir Starmer about the impact of image-based abuse on women and what the government is doing to combat this, from the 48-hour takedown rule to a ‘one and done’ approach.
When X added a new ‘nudify’ feature to its AI image generator Grok in January this year, millions of sexually explicit deepfakes flooded the platform, and reports showed that 99% of the images created were of women and girls.
A standoff between the UK Government and the big tech company led to the function being removed after 11 days. By that point, however, the damage was irreversible.
‘Let me first acknowledge the damage this does,’ Starmer told Metro at an International Women’s Day event at Downing Street. ‘It affects so many people, predominantly women and girls. What Grok did was absolutely disgusting. We were determined to take them on, and to be absolutely clear that no platform gets a free pass.’
‘X’s initial response was to say that the feature would be made part of a premium service, which was an appalling response.’
After this huge surge, the government brought into force a law which, although passed in mid-2025, had not previously been enforceable. The amendment to the Crime and Policing Bill was officially announced on February 18, and requires tech companies to take down non-consensual intimate images within 48 hours or face a fine.
Starmer said this timeframe is ‘equivalent’ to the government’s approach to ‘terrorist-related material’. ‘48 hours is the maximum,’ explained the PM.
‘We battled on and we won that battle,’ he added. ‘We have to keep winning those battles because too many women and girls feel that they have to have the battle on their own, and they need the government alongside them.’
Dr. Sophie Nightingale, a senior lecturer in psychology at Lancaster University specialising in digital technology and behaviour, told Metro that combating deepfake abuse is as much about legislation aimed at tech firms as it is about shifting public perception.
She explained: ‘What we hear time and time again is that people don’t seem to understand the hurt that creating non-consensual sexual imagery causes. They say it’s not real, there’s no harm, there’s no real victim here. That could not be further from the truth.
‘The trauma caused is massive. Psychologically, the incorrect feelings of shame, the embarrassment.’
Aside from the huge emotional toll on victims, Dr. Nightingale also highlighted the real-life impacts.
‘I’ve heard women say they don’t want to apply for jobs because when the recruiter searches their name online, a deepfake scandal is the first thing they will see.
‘They don’t know who to trust anymore, as often they don’t know who has created the content or shared it. It can be really good friends, or people’s partners. They never know when the images are going to come back.’
Dr. Nightingale’s research focuses on digital wellbeing, investigating women’s physical and emotional safety while interacting virtually.
‘Deepfakes are pushing women to think they are not safe online. It is incorrectly encouraging women to be reluctant to go to these spaces, to share images online. It’s such a terrible outcome, and more needs to be done to ensure that women and girls do feel safe online.
‘There is some curriculum change happening at the moment on AI generated content, but we need more education in schools about what the actual harms are, to stop the next generation creating them.
‘The government are working hard on this,’ she added. ‘48 hours sounds really quick, but it’s not. The second that somebody shares something, it gets screenshotted and taken somewhere else. So, we need to prevent these non-consensual images from being created in the first instance.’
Metro also asked Starmer whether giving tech companies 48 hours to remove the abusive content was enough.
‘We need to get it taken down as quickly as possible, and that’s why we stipulated 48 hours.’
Andrea Simon, London’s Victim Commissioner and former director of End Violence Against Women, a coalition of feminist groups across the country, says the timeframe is key for improving police force responses to reports of deepfake abuse.
‘Victims of this abuse often struggle with uncooperative tech companies and inconsistent police responses when they report, which has meant victims lack confidence that reporting will lead to action,’ Simon tells Metro.
‘Tech enabled abuse is in many ways the new frontier of violence against women and girls, and the rapid development of generative AI has escalated this threat.
‘Although this is a significant step forward, online offences will proliferate and tech platforms will continue to amplify harms unless there is strong enforcement of requirements like this.’
While this timeframe has been the topic of heated discussion online, the two-day limit was the number put forward by Baroness Owen, who spearheaded the ‘Stop Image Based Abuse’ campaign, a coalition of individuals and organisations aiming to hold the government to account over deepfakes.
Clare McGlynn, a professor of law at Durham University and a leading expert on violence against women and girls (VAWG), was one of the coalition’s key campaigners.
‘48 hours comes from a precedent in the US act,’ she told Metro. ‘Many of these tech platforms operate in the US, so it made sense to follow that practice. However, every minute images are online is harmful, and increases the real risk that they are copied and shared.’
On the 48-hour takedown policy, Starmer added: ‘That’s not all.
‘There is a really important secondary provision we are introducing, to say ‘one and done’. In other words, once it’s been taken down, it can’t be put up elsewhere. What’s happened in the past is that non-consensual images have come down in one place and gone up in another.’
McGlynn notes that this secondary provision is essential for victims.
‘It’s called a hash register,’ she said. ‘It’s a kind of digital footprint attached to a photo, and what our amendment is trying to do is to make sure that tech platforms all share those hashes, so that victims don’t have to contact, for example, Meta, and X, and porn sites, but all images with the same hash get taken down in the 48 hours.
‘We’ve got to make sure this becomes compulsory, as this would prevent images from being more widely spread, which can be a matter of life and death for survivors.’
The VAWG expert also said that the government should look to British Columbia in Canada and follow in the province’s footsteps. ‘They have a swift, easy online court process, which we have in this country for small claims, where victims can go online and make a claim against a perpetrator.
‘It allows a court order to be produced really easily, and means you don’t have to be a rich celebrity with an expensive, knowledgeable specialist lawyer to open a case. We also need to make the transfer of copyright more straightforward, as right now the copyright of an AI-generated photo belongs to the person who created it.’
Starmer also told Metro: ‘This stipulation is [another element that is] equivalent to what we do with terrorist-related material. This is really important as so many women have said to me that they feel they are the ones who have to chase the whole thing. We’re absolutely committed to this, and if we can do more, we will.’