YouTube on Tuesday began offering a free deepfake detection tool for government officials, journalists and political candidates, giving them a way to identify and request removal of AI-generated videos that mimic their appearance.
The feature is designed to help people involved in public discourse detect manipulated videos circulating on the platform. Those who enroll can receive alerts when YouTube identifies content that appears to use their likeness.
The rollout builds on a likeness detection system YouTube first introduced in October 2025 for creators in its YouTube Partner Program. The company said the new expansion focuses on individuals who are frequently targeted by deceptive content tied to breaking news and political activity.
“YouTube has a long history of protecting free expression and content in the public interest — including preserving content like parody and satire, even when used to critique world leaders or influential figures,” the company said in a blog post announcing the update.
Deepfake videos have become more common as AI tools improve at generating realistic images and video. While technology companies have broadly adopted AI video tools, they have also faced growing concerns over misleading content that spreads misinformation or enables scams.
Public figures have been especially vulnerable to this kind of misuse, with scammers and other bad actors using AI-generated footage to impersonate well-known individuals.
Under the new program, YouTube will contact eligible journalists and politicians on the platform to invite them to enroll, according to a company spokesperson. Participants must submit a video of themselves along with government identification. Once registered, they will receive notifications through YouTube Studio if the system detects videos that resemble them.
If a flagged video appears to misuse their likeness, participants can request that it be removed from the platform. Individuals who have not received an invitation can contact YouTube directly to request access to the tool.
The company said the submitted data will not be used to train AI models owned by Google, YouTube’s parent company. Instead, the information will be used only to support the detection system.
“Our goal is to get this technology into the hands of the people who need it, and we have plans to significantly expand access over the coming year,” the spokesperson said.
This analysis is based on reporting from NBC News.
This article was generated with AI assistance and reviewed for accuracy and quality.