YouTube Expands AI Likeness Detection to Celebrities
YouTube is expanding its AI likeness detection tool to talent agencies, management firms, and the celebrities they represent, widening a program that had already started rolling out to creators, politicians, government officials, and journalists.
The update puts the company more directly into a growing fight over AI impersonation: unauthorized videos that mimic a real person’s face and spread before that person can respond.
The move also comes with an important qualifier. YouTube has said that actual removals generated by the system have been “very small” so far, which means wider access does not yet translate to proven protection at scale.
According to TechCrunch’s report on the expansion, the tool scans uploaded videos for AI-generated visual matches of an enrolled person’s face and sends likely matches to that person or their representatives for review.
The company does not require entertainers to have their own YouTube channels to use it, which makes the rollout easier to scale through agencies and managers. Early support from CAA, UTA, WME, and Untitled Management suggests YouTube is trying to broaden enrollment quickly.
How it works
YouTube says the system is modeled on Content ID, but adapted for a person’s likeness rather than a copyrighted work. If a likely match is found, the enrolled participant can review it and decide whether to request removal through YouTube’s privacy process.
The company has also said parody, satire, and other public-interest uses will remain protected, so a match does not automatically trigger a takedown.
That makes the tool a detection and review system, not an automatic enforcement tool. In its earlier expansion to civic leaders and journalists, YouTube said participants must verify their identity before enrolling and that the submitted data is used for verification and to operate the feature, not to train Google’s generative AI models.
The rollout also fits with YouTube’s broader effort to curb AI slop and respond to the kinds of fake celebrity videos and scam ads that have drawn growing scrutiny.
Where it falls short
The most obvious limit is that the tool is still focused on faces. TechCrunch said voice support is planned for a future version, leaving one of the most common forms of AI impersonation outside the system for now.
There is also a gap in the removal path itself. The University of Baltimore Law Review argues that likeness claims do not map onto copyright law as neatly as Content ID claims do, leaving platforms that detect potential harm without a clean legal route in every case. That helps explain why the small-removals figure matters so much.
YouTube has built a stronger detection layer, but the harder part is still turning those matches into action. The same problem appears across the broader deepfake backlash, where spotting abuse is often easier than stopping it.
For now, the expansion matters because it gives celebrities and their teams a better way to find potential abuse. It is still too early to call it a strong remedy. Until YouTube shares more about accuracy, false positives, and removal rates, the tool looks more useful as an alert system than a complete answer.
Also read: Meta boosts creator protection on Facebook as it targets impersonation and copycats.
The post YouTube Expands AI Likeness Detection to Celebrities appeared first on eWEEK.