YouTube has just made its AI-powered likeness detection tool available to the entire entertainment industry. The platform says the technology is now open to celebrities, talent agencies, and management firms, allowing them to identify AI-generated videos that feature fake versions of their faces and take action against them.
Here’s how it works: a celebrity, creator, or public figure (or, more likely, someone on their team) opts in to the program and uploads their likeness to the system. They don’t even need a public-facing YouTube channel. The system then scours YouTube and flags potential clones for the celebrity’s team to review, who can either leave the content up or request its removal.
The technology works similarly to YouTube’s existing Content ID system, which detects copyright-protected material in users’ uploaded videos. Likeness detection does the same, but for simulated faces.
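YouTube hasn’t published how the matching works under the hood, but conceptually this kind of system compares a reference representation of a person’s face against faces found in uploaded videos and flags close matches for human review. Here is a minimal, hypothetical sketch of that idea using cosine similarity between face embeddings; the function names, the four-dimensional vectors, and the 0.85 threshold are all illustrative assumptions, not YouTube’s actual implementation:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_potential_likeness(reference_embedding, frame_embeddings, threshold=0.85):
    """Return indices of frames whose face embedding closely matches
    the opted-in person's reference embedding (hypothetical pipeline)."""
    return [i for i, emb in enumerate(frame_embeddings)
            if cosine_similarity(reference_embedding, emb) >= threshold]

# Toy example with made-up 4-dimensional embeddings:
ref = np.array([1.0, 0.0, 0.0, 0.0])          # the enrolled celebrity's face
frames = [np.array([0.99, 0.05, 0.0, 0.0]),   # near match -> flagged
          np.array([0.0, 1.0, 0.0, 0.0])]     # unrelated face -> ignored
print(flag_potential_likeness(ref, frames))   # → [0]
```

In a real deployment the flagged matches would not trigger automatic removal; as the article notes, they go to the person’s team, and ultimately to YouTube’s policy review, for a decision.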

Not every flagged video gets pulled
A removal request doesn’t guarantee YouTube will take down the video. “There are a number of cases like parody and satire where our community guidelines would allow that to remain on the platform,” said a YouTube executive.
Major industry players including Creative Artists Agency (CAA), United Talent Agency (UTA), and William Morris Endeavor (WME) have supported the rollout and provided feedback on the system.
This expansion didn’t happen overnight. The tool was first rolled out to creators in the YouTube Partner Program as an “industry-first” system in September 2025. YouTube then expanded access in March 2026 to a pilot group of journalists, government officials, and political candidates.
The timing
The timing tracks with growing alarm across Hollywood. In February, Irish director Ruairí Robinson created a strikingly realistic clip of Brad Pitt fighting Tom Cruise on a rooftop using just a two-sentence prompt. That widely shared video was generated with Seedance 2.0, an AI tool owned by ByteDance. Charles Rivkin, the chairman and CEO of the Motion Picture Association, called on ByteDance to “immediately cease its infringing activity,” accusing it of disregarding copyright law.
Alon Yamin, CEO and co-founder of Copyleaks, told AFP that YouTube’s move “reflects a turning point in how platforms approach identity protection in the age of generative AI,” adding that “the technology to replicate a person’s face, voice, and mannerisms has advanced faster than the safeguards around it.”
The expansion also comes as Google, YouTube’s parent company, faces mounting pressure to address AI-generated content across its platforms, with legislators in multiple countries pushing for stricter rules around synthetic media disclosure and removal.
Yamin stressed that detection systems “must be highly accurate, continuously updated, and paired with clear policies and swift takedown processes to be effective,” noting the tool won’t eliminate deepfakes entirely but “can significantly reduce their reach and impact.”
Fan creators will feel the shift most directly. Content like AI-generated movie trailers or fan films casting real actors now risks removal if a flagged celebrity requests a takedown.
Processing removal requests from thousands of celebrities requires significant human review, especially since automated systems can’t fully judge context like satire or fair use. YouTube’s balancing act is only getting harder from here.