YouTube is making it easier for politicians and journalists to take down AI deepfakes from its platform ahead of this year’s midterm elections, but it’s keeping quiet about who now has access to this tool. The video streaming giant announced today that it is expanding access to its likeness detection tool to journalists, government officials, and political candidates. The tool flags videos that feature a user’s likeness in AI-generated content and allows them to request that unauthorized videos be taken down.

“YouTube is where the world comes to understand the events shaping their lives—from breaking news to the debates that drive civic discourse,” wrote Amjad Hanif, YouTube vice president of creator products, and Leslie Miller, vice president of government affairs and public policy, in a blog post. “As AI-generated content evolves, the individuals at the center of these conversations need reliable tools to protect their identities.”

The expansion comes as AI deepfakes have grown increasingly convincing, raising concerns about their potential to spread misinformation, especially around elections. The news also comes as YouTube has been leaning further into AI. Last year, the company brought a custom version of Google’s video-generation model, Veo 3, to Shorts, YouTube’s TikTok- and Instagram Reels-style feed.
YouTube Expands AI Deepfake Detection Tool to Politicians, Won’t Say If Trump Is Included