By Vaughn Cockayne - The Washington Times - Tuesday, November 14, 2023

YouTube is putting warning labels on realistic artificial intelligence-generated videos as the company moves to protect users from misleading content.

According to the company, content creators are required to label any video they upload that was made, in whole or in part, with realistic AI generation tools.

“This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials,” YouTube Vice Presidents of Product Management Jennifer Flannery O’Connor and Emily Moxley said Tuesday.

The labels will be required only on content that appears realistic by YouTube's standards, meaning videos that depict fictitious events as though they happened, or show real people doing things they never actually did.

Citing its privacy policy, YouTube also announced it will let users request the removal of AI-generated content that replicates the face of another person.

Creators who fail to adhere to the policy risk having their videos removed and their accounts suspended.

The new labels come as tech platforms wrangle with rapidly developing AI technology. Companies such as TikTok and Meta have added tags to better inform users about the authenticity of the content they're watching, and Facebook has even limited the use of AI in advertising.

• Vaughn Cockayne can be reached at vcockayne@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
