YouTube Implements AI Content Disclosure Policy for Creators

YouTube's new policy requires creators to disclose realistic AI-generated content, aiming to ensure transparency and protect viewers. The rule applies to content that could be mistaken for real, with prominent labels for sensitive topics.

YouTube has introduced a new policy requiring creators to disclose when realistic content has been generated or modified using artificial intelligence (AI) or other synthetic media. The move aims to foster transparency and prevent viewer deception, especially in sensitive areas such as health, news, elections, and finance.

Transparency and Viewer Protection

In November 2023, YouTube announced plans to introduce rules for generative AI in response to the growing use of AI tools in content creation. Creators must now use a new tool in Creator Studio to inform viewers when realistic content was created with AI or synthetic media. For sensitive topics, YouTube will add a visible label on the video itself; for other content, the label will appear in the video description.

What Content Falls Under the New Policy?

The disclosure requirement applies to "realistic" content that could easily be mistaken for real people, places, or events. Examples include digitally altering a person's likeness or voice, modifying footage of real events or locations, and creating realistic depictions of fictitious events.

Exemptions and Clarifications

The policy does not extend to all AI-generated content. AI used for productivity purposes, such as script generation or subtitle automation, and clearly unrealistic content, such as fantastical scenes, are exempt. Minor AI-assisted edits, like color correction or special effects, also do not require disclosure.

YouTube also continues to refine its privacy request process, which allows people to ask for the removal of misleading AI-generated or AI-modified content that simulates them, seeking a balance between innovation and ethical content presentation.
