Meta announced this week that content generated by artificial intelligence made up less than 1% of what it classified as election misinformation on its platforms.
In a blog post Tuesday, Meta said fears that erroneous generative AI content would sow chaos on Election Day didn’t play out.
“Our existing policies and processes proved sufficient to reduce the risk around generative AI content,” Meta wrote. “During the election period in the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation.”
Meta said it investigated posts concerning elections in the U.S., U.K., France, India, Indonesia, Pakistan and Bangladesh.
The company attributed the low volume of AI-generated misinformation on its platforms to its focus on account behavior rather than content alone. Meta said its content moderation teams concentrated on taking down “covert influence” operations that relied on manufactured audiences. The Facebook parent company said it dismantled 20 such operations around the world.
Additionally, Meta said its AI image generator, Imagine, rejected nearly 60,000 requests to generate images featuring President-elect Donald Trump, Vice President-elect J.D. Vance, President Biden, Vice President Kamala Harris and her running mate, Tim Walz.
While Meta claimed victory over what it deemed erroneous election information, the company left the door open to possible policy changes.
“As we take stock of what we’ve learned during this remarkable year,” the company wrote, “we will keep our policies under review and announce any changes in the months ahead.”
Meanwhile, Meta took shots at its social media rivals. According to the blog post, some of the covert influence operations that Meta’s content moderation teams broke up remain active on X and Telegram.
• Vaughn Cockayne can be reached at vcockayne@washingtontimes.com.