'Chilling': News broadcaster shocked to hear her voice used to spread right-wing hate
An online news personality was shocked to hear her own voice being copied by artificial intelligence — and used to push far-right hate. She wrote about that experience for The Guardian in an article published on Tuesday.
"y brother held his phone up to my ear. 'You’re gonna find this creepy,' he warned," wrote Georgina Findlay of the independent online media group TLDR News. "An Instagram reel showing a teenage boy at a rally featured a voiceover in the style of a news broadcast. A calm, female voice, with an almost imperceptible Mancunian accent, said: 'The recent outcry from a British student has become a powerful symbol of a deepening crisis in the UK’s educational system.'
"I sat bolt upright, my eyes wide open," she said. She was hearing her own voice.
"'They are forcing us to learn about Islam and Muhammad in school,' it continued. 'Take a listen. This is disgusting.'"
"It was chilling to hear my voice associated with far-right propaganda — but more than that, as I dug further into how this scam is perpetrated, I discovered just how far-reaching the consequences of fake audio can be," Findlay wrote.
AI "deepfake" technology has become widespread. Last year, AI-generated images circulated of megastar Taylor Swift in sexually compromised positions, prompting fury from fans and a push for legislation to stop it. But as Findlay learned, there are other insidious ways this technology can be used for harm.
"After some digging, I was able to trace it back to a far-right YouTube channel with around 200k subscribers," wrote Findlay. "It was purportedly an American channel, but many of the spelling errors on the videos were typical of non-native-English-speaking disinformation accounts. I was horrified to find that eight out of 12 of the channel’s most recent videos had used my voice. Scrolling back even further, I found one video using my voice from five months ago showing a view count of 10m. The voice sounded almost exactly like mine. Except there was a slightly odd pacing to my speech, a sign the voice was AI-generated."
Many AI companies are aware that their products can be used for impersonation, identity theft, and scams, and have tried to build their own mitigations. "But these mitigation steps don’t go far enough," wrote Findlay. "In the U.S., concerned senators have proposed a draft bill that would crack down on those who reproduce audio without consent. In Europe, the European Identity Theft Observatory System (Eithos) is developing four tools to support police in identifying deepfakes, which they hope will be ready this year. But tackling our audio crisis will be no easy feat."
"While I willingly shared my voice on screen when I became a presenter, I did not agree to sign away this integral, precious, part of myself to anyone who wants to use it," Findlay concluded. "My recent experience has given another, far more sinister meaning to the concept of losing one’s voice."