YouTube Expands AI Deepfake Detection Tool to Journalists, Politicians
YouTube is launching a new pilot program giving government officials, journalists and political candidates access to a likeness detection tool designed to protect their identities as AI-generated content rapidly evolves.
The tool works similarly to Content ID, scanning for a participant’s likeness in AI-generated content. If a match is found, such as a deepfake of their face, the individual can review the content and request removal if it violates YouTube’s privacy guidelines.
To use the tool, participants must verify their identity before enrolling. The data provided during setup is used strictly for identity verification and is not used to train Google’s generative AI models. Users can also opt out at any time, and YouTube will delete the data provided during setup.
“While this tool provides a powerful way to manage unauthorized AI-impersonation, detection does not guarantee removal,” YouTube’s vice president of government affairs & public policy Leslie Miller and vice president of creator products Amjad Hanif wrote in a Tuesday blog post. “YouTube has a long history of protecting free expression and content in the public interest — including preserving content like parody and satire, even when used to critique world leaders or influential figures. We’ll continue to carefully evaluate these exceptions when we receive requests for removal.”
YouTube said it would start with this group to ensure the tool meets their unique needs, with plans to significantly expand access over the coming months. The update comes after YouTube enlisted 5,000 creators in October to test the tool, following an initial collaboration with CAA clients in December 2024.
Beyond safety, the Google-owned video platform is exploring opportunities for creators and artists to find new revenue streams through the management and authorization of their AI likenesses.
“Ultimately, YouTube wants a future where AI helps creativity thrive and that means building the legal and technical frameworks to ensure that creators and artists stay in the driver’s seat,” YouTube creator liaison Rene Ritchie said in a video.
YouTube also said it would continue to advocate for the NO FAKES Act, which would establish a federal right of publicity and serve as a blueprint for international adoption to ensure the technology serves, and never replaces, human creativity.