Facebook has declared a digital war against Antifa and others encouraging rioting on its platform, removing 980 groups they used to sow chaos and discord.
In a rare step by a social media company, Facebook explicitly sought to disrupt Antifa’s operations and said it restricted 1,400 hashtags related to the militant leftists and tore down 520 pages and 160 ads.
The crackdown is a harbinger of things to come, Facebook said, with policy changes under way to more aggressively go after militia groups supporting violent protests.
“We are expanding our Dangerous Individuals and Organizations policy to address organizations and movements that have demonstrated significant risks to public safety but do not meet the rigorous criteria to be designated as a dangerous organization and banned from having any presence on our platform,” Facebook said in a statement. “While we will allow people to post content that supports these movements and groups, so long as they do not otherwise violate our policies, we will restrict their ability to organize on our platform.”
Facebook’s policy shift aims at the radical organizations’ funding, their ability to attract new followers and the visibility of their content. Beyond shuttering accounts, Facebook has blocked groups from running ads and monetizing content and has prohibited nonprofits from soliciting funds on their behalf.
Facebook also blocked the radical organizations from being discovered via searches on the site, lowered their rankings in Facebook’s News Feed, and began reviewing hashtag functionality on Instagram to further thwart the groups’ spread.
Facebook’s decision to explicitly call out Antifa for inciting violence stands in stark contrast to other social media companies. This summer, Twitter limited the visibility of some of President Trump’s tweets and tagged one about protests and riots over concerns that the president was potentially inciting violence.
Other social media companies, such as Snapchat, followed suit and took action to diminish Mr. Trump’s presence on their platforms.
Some companies took a broader approach. Reddit enacted new policies in June that yielded “ban waves” eliminating 7,000 subreddit communities from its platform, the company said.
Still, Reddit said that 40,000 pieces of content it deems “hateful” appear on its platform each day and garner nearly 6.5 million views daily.
By publicizing its actions targeting Antifa, Facebook signaled that its more aggressive approach to content moderation affects all political ideologies.
While Facebook’s latest actions struck a blow against Antifa and radical leftists, the company’s new posture also damaged the conspiratorial QAnon community sympathetic to the political right and Mr. Trump.
Mr. Trump acknowledged QAnon on Wednesday, saying he didn’t know much about the group but that he knew its followers “like me very much,” which he appreciates, and that they “love America.” He did not disavow the group, drawing criticism from both Democrats and Republicans.
Facebook said it already has removed 790 QAnon groups and imposed restrictions on an additional 1,950 QAnon groups on its platform and 10,000 accounts on Instagram. Another 1,500 ads, 100 pages and 300 hashtags affiliated with QAnon were also scrubbed.
“Any non-state actor or group that qualifies as a dangerous individual or organization will be banned from our platform,” Facebook’s statement says. “Our teams will also study trends in attempts to skirt our enforcement so we can adapt. These movements and groups evolve quickly, and our teams will follow them closely and consult with outside experts so we can continue to enforce our policies against them.”