YouTube will have two sets of content guidelines for AI-generated deepfakes: a very strict set of rules to protect the platform’s music industry partners, and another, looser set for everyone else.
That’s the explicit distinction laid out today in a company blog post, which walks through the platform’s early thinking about moderating AI-generated content. The basics are fairly simple: YouTube will require creators to label “realistic” AI-generated content when they upload videos, and it says the disclosure requirement is especially important for topics like elections or ongoing conflicts.
The labels will appear in video descriptions and, for sensitive material, on top of the videos themselves. There is no specific definition…