YouTube Allows Users to Request Removal of AI-Generated Content Impersonating Appearance or Voice

YouTube, the world’s largest video platform, has quietly updated its Privacy Guidelines to give users the ability to request the removal of AI-generated content that imitates their appearance or voice. The move responds to concerns that generative AI makes it easy for bad actors to impersonate individuals, and it expands on YouTube’s existing guardrails for the technology, which were previously quite light.

Although the change was made last month, it only came to light when TechCrunch reported on it this week. YouTube now treats the use of AI to alter or create synthetic content that looks or sounds like an individual as a potential privacy violation rather than an issue of misinformation. However, submitting a removal request does not guarantee that the content will be taken down, as YouTube’s evaluation criteria leave room for considerable ambiguity.

YouTube’s stated factors for consideration include whether the content is disclosed as altered or synthetic, whether the person can be uniquely identified, and whether the content appears realistic. The criteria also include vaguer qualifications, however, such as whether the content can be deemed parody or satire, or whether it holds some value to the “public interest.” These nebulous criteria indicate that YouTube is taking a relatively soft stance on the matter and is not necessarily anti-AI.

In line with its standards for privacy violations, YouTube will only entertain first-party claims. Third-party claims will only be considered in exceptional cases, such as when the impersonated individual does not have internet access, is a minor, or is deceased. If a claim is successful, the uploader will have 48 hours to address the complaint by either editing or blurring the video to remove the problematic content or deleting the video entirely. Failure to comply within the given timeframe will result in further review by the YouTube team.

While these guidelines appear comprehensive, the real question lies in how YouTube enforces them in practice. As TechCrunch points out, YouTube has its own vested interests in AI, including the release of music generation tools and a bot that summarizes comments on short videos. Additionally, Google’s significant role in the broader AI race may influence YouTube’s approach. This new ability to request removals appears to be a tepid continuation of YouTube’s “responsible” AI initiative, which was announced last year and is only now taking effect; YouTube officially mandated the disclosure of realistic AI-generated content in March.

It remains to be seen whether YouTube will be as proactive in removing problematic AI-generated content as it is with enforcing strikes. Nevertheless, this development is a somewhat encouraging gesture and a step in the right direction.