Tech giant Meta announced it will expand its artificial intelligence labeling program in a further effort to crack down on misleading images on its platforms.
In a Monday blog post, Nick Clegg, Meta's president of global affairs, said the new labeling policy is meant to create an industry standard for detecting AI-generated images.
“We’ve been working with industry partners to align on common technical standards that signal when a piece of content has been created using AI. Being able to detect these signals will make it possible for us to label AI-generated images that users post to Facebook, Instagram and Threads,” he wrote.
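The common standards Mr. Clegg refers to embed provenance information in the image file itself, such as an IPTC metadata field declaring that the picture came from a generative model. As a rough illustration of the idea, and not Meta's actual pipeline, the sketch below scans a file's raw bytes for the standard IPTC "trainedAlgorithmicMedia" marker; a real detector would parse the metadata properly rather than search bytes, and the file name is hypothetical.

```python
from pathlib import Path

# IPTC NewsCodes term that several image generators write into the
# metadata of pictures their models produce.
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Crudely flag a file whose embedded metadata carries the AI marker."""
    return AI_MARKER in Path(path).read_bytes()

print(looks_ai_generated("photo.jpg"))  # hypothetical local file
```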
While Meta already tracks and labels images generated with its own “Imagine With Meta” AI, the new policy will also apply labels to images created with rival technologies.
While Mr. Clegg didn’t specify when the policy will take full effect, he said he expects Meta to be able to detect AI images from Adobe, Google, OpenAI, Microsoft and others by the end of the year.
“During this time, we expect to learn much more about how people are creating and sharing AI content, what sort of transparency people find most valuable, and how these technologies evolve,” he wrote. “What we learn will inform industry best practices and our own approach going forward.”
Mr. Clegg added, “We’re working hard to develop classifiers that can help us to automatically detect AI-generated content, even if the content lacks invisible markers. At the same time, we’re looking for ways to make it more difficult to remove or alter invisible watermarks.”
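The “invisible markers” Mr. Clegg describes are watermarks woven into the pixels themselves rather than into the file’s metadata. The toy sketch below is illustrative only, not Meta’s technique: it hides a short tag in the least-significant bits of an image array, a change of at most one brightness level per pixel, which is imperceptible to the eye.

```python
import numpy as np

TAG = "AI"  # hypothetical payload; a real watermark carries more data

def embed(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Hide the tag's bits in the least-significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.flatten()  # flatten() returns a copy, so the input is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_chars: int = len(TAG)) -> str:
    """Read the tag back out of the least-significant bits."""
    bits = pixels.flatten()[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode()

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
marked = embed(img)
assert extract(marked) == TAG  # the marker reads back
assert np.max(np.abs(marked.astype(int) - img.astype(int))) <= 1  # visually identical
```

A marker this fragile disappears as soon as an image is re-encoded or resized, which is exactly the weakness Mr. Clegg says Meta is working to close by making watermarks harder to remove or alter.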
Since its detection methods aren’t foolproof, Meta is requiring users to notify the company if they’re posting synthetic audio or video content. If users fail to notify Meta, they could face suspension under Facebook, Instagram or Threads’ community guidelines.
• Vaughn Cockayne can be reached at vcockayne@washingtontimes.com.