- The Washington Times - Wednesday, March 4, 2020

Twitter is set to implement new tools Thursday to crack down on what it deems harmful and manipulative content on its platform.

The San Francisco-based social media company has spent months developing its new enforcement policy aimed at synthetic and manipulated media, including “deepfakes,” or doctored media intended to deceive an audience into accepting a false depiction as reality.

Beginning Thursday, Twitter will label tweets that “have been significantly and deceptively altered or fabricated,” show users a warning message before they retweet or like that content, reduce the visibility of such harmful tweets, and provide additional explanations and clarification about the information at issue.

“You may not deceptively share synthetic or manipulated media that are likely to cause harm,” Twitter’s new rule states. “In addition, we may label Tweets containing synthetic and manipulated media to help people understand the media’s authenticity and to provide additional context.”

How Twitter intends to screen for the content isn’t clear. The company’s website identifies those responsible for making such decisions as “our teams,” and Twitter would not provide The Washington Times with more information about who will make the decisions involving labeling and shadowbanning.

The criteria Twitter says it will use to evaluate content include whether the tweets contain synthetic or manipulated media, whether the media was shared in a deceptive manner, and whether the tweets are “likely to impact public safety or cause serious harm.”

Twitter is not simply taking aim at fabricated media or deepfakes, but intends to target tweets in which “the content has been substantially edited in a manner that fundamentally alters its composition, sequence, timing, or framing.” Tweets that add information to original content, such as modified subtitles and overdubbed audio, also will be categorized as synthetic and manipulated.

To determine whether the material is shared deceptively, the platform is not just looking at a user’s intent or the content in a tweet. Twitter says it also will examine other “context” about users, including the information provided in their profile and the websites linked in the profile of the person sharing the questionable media.

Twitter acknowledges it “will make errors” as it begins enforcing the new rules, but some users think the company has already deliberately quieted voices it dislikes rather than those that it deems harmful.

The Internet Accountability Project, a conservative group aiming to get the government to rein in “Big Tech” companies, believes Twitter has taken aim at conservatives.

Mike Davis, the group’s founder, said Twitter is thumbing its nose at conservatives with the new enforcement mechanisms.

“Twitter has a pattern and practice of unfairly targeting and censoring conservatives, and this is going to give Twitter a much more powerful tool to do it,” Mr. Davis said. “This new system will make Twitter’s censorship of conservatives a lot more efficient and a lot more effective and that is a very bad thing for conservatives.”

In determining how to craft its new policy, Twitter surveyed more than 6,500 users around the world and found people wanted more information and thought harmful content should be labeled.

Twitter’s results are consistent with other independent pollsters’ findings. A Gallup and Knight Foundation poll released Monday found a majority of people support banning false ads on social media and favor more transparency in political advertising.

More than 80% of respondents said political ads containing outright falsehoods should be banned and 59% of those surveyed think political ads should be required to disclose who paid for the ads. The poll surveyed more than 1,600 people online Dec. 3-15.

John Sands, the Knight Foundation’s director of learning and impact, said the steps social media companies are taking to address the problem show they are making the issue a priority heading into the November elections.

“The fact that there are a variety of different approaches to [political advertising] seem to suggest that [social media companies] also recognize some of the concern that we’ve identified and they’re trying to figure out ways that address the concern but also protect some of the free speech issues that are involved, free political speech issues that are involved, as well,” Mr. Sands said.

• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.