Under more scrutiny than ever, Facebook finds itself caught in a no-man’s land between activists who say it needs to adopt much stricter definitions governing hate speech and critics on the right who feel the social media giant is censoring conservative voices.
The company’s policy now largely depends on humans reviewing content flagged by others as offensive — a system Facebook CEO Mark Zuckerberg told Congress he hopes to change within 10 years by integrating artificial intelligence that can identify questionable content immediately.
Critics say the current policy has been a failure and that Mr. Zuckerberg’s claim that the site doesn’t house hateful posts is simply wrong. They also contend that Facebook needs to become much more aggressive, perhaps tying its definition of hate speech to the one used by the controversial Southern Poverty Law Center.
“We’re shocked by Zuckerberg’s claim that Facebook does not allow any hate groups on their platform. For years, civil rights groups have been urging Facebook to address the discrimination and bigotry on its platform, and, for years, the company has done little to meaningfully address our concerns,” said Madihha Ahussain, special counsel for anti-Muslim bigotry at Muslim Advocates.
Referring to the Southern Poverty Law Center’s compilation of anti-Muslim hate groups, she added: “Today, to our knowledge, at least 23 of them are still organizing on Facebook. That list only accounts for Southern Poverty Law Center’s compilation of anti-Muslim groups and doesn’t include the thousands of others organized to hate against other communities.”
But conservatives say adopting the Southern Poverty Law Center’s definition of a hate group would lead to even bigger problems and more bias. The organization, for example, classifies the Family Research Council as a hate group because of its stand on same-sex marriage and other LGBT issues.
Using that definition would deepen the fear and anger among conservatives toward the Silicon Valley behemoth. Facebook already has faced intense criticism from the political right for its suspected censoring of posts from the popular pro-Trump duo Diamond and Silk, along with a host of other content that Republicans say is filtered on purely political grounds.
Diamond and Silk, whose posts in the past have been considered “unsafe” by Facebook, will appear Thursday before the House Judiciary Committee.
Facebook’s handling of the duo has become a rallying point for conservative critics, and it was the latest in a string of controversial steps. The company two years ago came under fire for appearing to suppress conservative news sources in its trending topics feed, and its actions since then have done little to calm those who say Facebook’s liberal bias is out of control.
“I think this is going to be a controversial topic perpetually, for several reasons. First and foremost, they can’t get out of their own bubble, and until they do they won’t even realize they have a problem,” said Christie-Lee McNally, founder of the conservative group Free Our Internet and a former Trump campaign official. She believes Facebook’s human review system is inherently flawed because of the company’s progressive leanings.
On Capitol Hill, the issue of whether Facebook suppresses conservative content has raised the ire of Republican lawmakers, some of whom argue that Facebook has become so big and powerful that its handling of speech — such as what to censor and what to allow — creates ripples across the American cultural and political landscape.
“If they’re behaving like Big Brother and censoring political speech, I think that raises very serious legal questions that I expect to see a whole lot more scrutiny devoted to,” Sen. Ted Cruz, Texas Republican, said last week.
Facebook says it defines hate speech as “content that attacks people based on their actual or perceived race, ethnicity, national origin, religion, sex, gender or gender identity, sexual orientation, disability or disease.”
It specifies that it does allow satire, comedy, music and other types of performance art that some people may find offensive.
Issues arise, of course, because what some consider to be offensive, racially tinged attacks are seen by others as political statements. Rhetoric surrounding illegal immigration, for example, often falls into that category.
“At the end of the day, it will be tough to keep 2 billion people happy all of the time,” said Emma Llanso, director of the Freedom of Expression Project at the Center for Democracy and Technology.
Ms. Llanso said she believes that Facebook and other massive social media firms, now ubiquitous parts of pop culture, may end up adopting more stringent standards on speech, while other companies could cast themselves as more open and, in some cases, willfully controversial.
“I’d rather see a situation with multiple different competing platforms that each have their own tailor-made content policies,” she said. “I feel like that creates less of a risk that an entire group of speakers or topic won’t find anywhere on the internet that will host their speech.”
Mr. Zuckerberg told lawmakers that he expects the human element of flagging offensive content to be phased out within the next decade, though that doesn’t mean Facebook’s automated system won’t also ruffle feathers.
“Hate speech — I’m optimistic that over a five, 10-year period we will have AI tools that get into some of the nuances, the linguistic nuances, of different types of content to be more accurate in flagging things for our systems,” he said this month. “But today we’re just not there on that, so a lot of this is still reactive. People flag it to us, we have people look at it, we have policies to try and make it as not subjective as possible, but until we get it more automated there’s a higher error rate than I’m happy with.”
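In rough terms, the reactive workflow Mr. Zuckerberg describes can be pictured as a flag-and-review queue with an automated pre-screening pass layered on top. The Python sketch below is purely illustrative, not Facebook’s actual system; every name, threshold and scoring rule in it is hypothetical, and the keyword-based scorer stands in for the far more sophisticated AI tools he predicts.

```python
# Illustrative toy of a reactive moderation loop: users flag posts, flagged
# posts enter a human-review queue, and an automated scorer surfaces the
# likeliest violations first. All names and rules here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Post:
    post_id: int
    text: str
    flags: int = 0  # number of user reports


@dataclass
class ReviewQueue:
    pending: List[Post] = field(default_factory=list)

    def flag(self, post: Post) -> None:
        """A user reports a post; it joins the human-review queue."""
        post.flags += 1
        if post not in self.pending:
            self.pending.append(post)

    def prescreen(self, score_fn: Callable[[Post], float]) -> List[Post]:
        """Automated pass: order the queue so reviewers see the posts the
        model scores highest before anything else."""
        return sorted(self.pending, key=score_fn, reverse=True)


def toy_score(post: Post) -> float:
    """Placeholder scorer: user reports plus crude keyword hits. A real
    system would rely on trained language models, not keyword matching."""
    keywords = ("attack", "hate")
    hits = sum(word in post.text.lower() for word in keywords)
    return post.flags + hits


if __name__ == "__main__":
    queue = ReviewQueue()
    satire = Post(1, "A satirical sketch some viewers may find offensive")
    violation = Post(2, "An explicit attack on a protected group")
    queue.flag(satire)
    queue.flag(violation)
    queue.flag(violation)
    for post in queue.prescreen(toy_score):
        print(post.post_id, toy_score(post), post.text)
```

Even in this toy form, the design mirrors the tension in his remarks: the queue is still fed by human reports and cleared by human reviewers, and the automated scorer only reorders the work rather than deciding it.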
• Ben Wolfgang can be reached at bwolfgang@washingtontimes.com.