The Washington Times - Thursday, November 30, 2023

Meta is putting retooled artificial intelligence systems in charge of policing political content on Facebook and Instagram before next year’s elections, changing the way content gets amplified or hidden online. 

Meta said this week it has spent the last few years working to reduce the amount of political content people see on its platforms in response to users’ feedback, and it intends to avoid recommending political content. 

Changes to its AI systems mean Meta’s platforms rely less on engagement signals, such as likes and shares, and more on personalized signals, such as users’ predicted reactions, according to updates Meta published this week.

“When ranking political content in Feed, our AI systems consider personalized signals, like survey responses, that help us understand what is informative, meaningful, or worth your time,” Meta said in an update. “We also consider how likely people are to provide us with negative feedback on posts about political issues when they appear in Feed.”
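Meta has not published the underlying formula, but the description suggests a ranking model that down-weights raw engagement and blends in personalized signals, with predicted negative feedback counting against a post. The Python sketch below is a hypothetical illustration of that trade-off; the signal names, weights, and example numbers are assumptions, not Meta’s actual system.

```python
from dataclasses import dataclass

@dataclass
class PoliticalPost:
    predicted_likes: float              # expected engagement signal, 0-1
    predicted_shares: float             # expected engagement signal, 0-1
    survey_value: float                 # modeled "worth your time" score from surveys, 0-1
    predicted_negative_feedback: float  # chance the user hides or reports the post, 0-1

def rank_score(post: PoliticalPost,
               w_engagement: float = 0.2,
               w_survey: float = 0.6,
               w_negative: float = 0.8) -> float:
    """Blend signals: engagement is weighted down, survey-based value is
    weighted up, and predicted negative feedback subtracts from the score.
    All weights are illustrative guesses."""
    engagement = (post.predicted_likes + post.predicted_shares) / 2
    return (w_engagement * engagement
            + w_survey * post.survey_value
            - w_negative * post.predicted_negative_feedback)

# A highly "viral" post that users are likely to flag ranks below a
# quieter post that survey models score as informative.
viral = PoliticalPost(0.9, 0.8, 0.3, 0.7)
informative = PoliticalPost(0.4, 0.2, 0.8, 0.1)
assert rank_score(informative) > rank_score(viral)
```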

The tech titan said it would preserve people’s ability to find and interact with political content while still looking to stop the spread of unwanted material. 

Meta’s shift away from engagement and toward personalization has been a long time coming, chronicled by the platform in statements on its websites. Meta tested reducing the distribution of political content in 2021 and has since published regular updates about the results and progress of its experiments.

In April, Meta said tests were ongoing and the company was incorporating user survey results and “direct and indirect feedback” to personalize people’s online experiences. 

The algorithms that decide which content to recommend or ignore are far from unsupervised. About 40,000 people at Meta work on safety and security for global elections, according to a fact sheet Meta published about its approach to next year’s U.S. elections.

Alongside the tens of thousands of Meta workers focused on elections, the Big Tech company is partnering with nearly 100 fact-checking organizations to “address viral misinformation.”

“When they rate content as false, we move it lower in Feed by default and show additional information so people can decide what to read, trust, and share,” the Meta fact sheet said. “We apply additional penalties when false content is repeatedly shared.”
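The fact sheet describes demotion rather than removal: rated-false content is pushed lower in Feed, and repeated sharing of false content draws extra penalties. A minimal sketch of that kind of two-tier demotion, assuming hypothetical penalty multipliers:

```python
def demoted_score(base_score: float,
                  rated_false: bool,
                  repeat_sharer: bool,
                  false_penalty: float = 0.1,
                  repeat_penalty: float = 0.5) -> float:
    """Demote rather than delete: multiply the ranking score down when
    fact-checkers rate the content false, and again when the sharer has
    repeatedly posted false content. Multipliers are illustrative."""
    score = base_score
    if rated_false:
        score *= false_penalty       # "we move it lower in Feed by default"
        if repeat_sharer:
            score *= repeat_penalty  # "additional penalties when ... repeatedly shared"
    return score

print(demoted_score(0.46, rated_false=True, repeat_sharer=False))  # ~0.046
print(demoted_score(0.46, rated_false=True, repeat_sharer=True))   # ~0.023
```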

Meta’s approach to censoring political speech has proven fluid. Facebook banished former President Donald Trump after the Jan. 6, 2021, riot at the U.S. Capitol, as other social media platforms took similar actions.

The company then restored Mr. Trump’s access earlier this year, and he has frequently posted on the platform from the presidential campaign trail as the 2024 election season approaches. 

Meta has also changed its approach to political advertising on its platforms. Within the past year, the company decided to permit advertising questioning the legitimacy of the 2020 election, according to The Wall Street Journal.

Concerns about the use of AI in ads have also prompted Meta to change its policies. Nick Clegg, Meta’s president of global affairs, said this week that the company is imposing new disclosure requirements on advertisers, including for content that alters footage of real events or realistically depicts events that did not happen.

“Starting in the new year, advertisers will also have to disclose when they use AI or other digital techniques to create or alter a political or social issue ad in certain cases,” Mr. Clegg wrote. 

Mr. Clegg portrayed the changes to Meta’s plans as not disruptive, writing on the company’s website that its comprehensive approach to elections is broadly consistent with past years. He said the company has spent more than $20 billion since 2016 on teams and technology focused on the safety and security of elections.

“No tech company does more or invests more to protect elections online than Meta — not just during election periods but at all times,” he wrote.

Meta also published a quarterly threat report on Thursday revealing the challenges it faces in policing its platforms. The company said it exposed China-based efforts to influence American politics.

The new report said Meta removed nearly 4,800 Facebook accounts originating in China for coordinated inauthentic behavior. The fake accounts criticized “both sides of the U.S. political spectrum” and shared social media posts from real people, including Elon Musk, and links from mainstream U.S. media.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.
