OPINION:
James Lankford, the Republican senator from Oklahoma, was rightly incensed Monday by an ABC News online headline reading “Jeff Sessions addresses ‘anti-LGBT hate group’ but DOJ won’t release his remarks.”
The item, more rant than news, echoed the Southern Poverty Law Center’s designation of the Alliance Defending Freedom as a “hate group” in its account of the attorney general’s speech to the alliance, which litigates religious-liberty cases. “In this country,” the senator told the president of the network, “we have the freedom to disagree. However, disagreement is not the same as hate.”
That’s a distinction with a significant difference, and it’s one that Mark Zuckerberg would do well to contemplate as Facebook weighs how it can do a better job of policing so-called “hate speech” on his social network. Facebook employs about 4,500 “content moderators,” including third-party contractors, and has promised to hire 3,000 more this year. Even with that many monitors, policing content is a daunting task for a site that says it has nearly 2 billion users.
Even with a 15,000-word internal manual to guide them, content moderators still have to make subjective judgment calls. Whether something qualifies as “hate speech” isn’t always clear, and because of the sheer volume, moderators don’t have the luxury of weighing nuance or the redeeming social value of a post that sits on the margins.
Facebook deletes about 288,000 posts each month as “hate speech,” but critics call its standards arbitrary and capricious, and users whose posts have been taken down describe those standards as unclear and inconsistently applied.
That’s often unavoidable, because posts that strike some people as offensive or otherwise objectionable seem innocuous to others. “Snowflakes” among racial, religious and sexual minorities, who have raised umbrage-taking to an art form, complain the loudest. Freedom of speech, our most precious right, should not be routinely silenced on Facebook (or anywhere else) merely at snowflake behest, as it is on many college campuses.
Facebook in the internet age is something like the Founders’ “public square,” and it must be kept open to all but the most obviously extreme speech: nudity (to protect the children who use Facebook), violent live videos, and terrorist recruitment and incitement.
But where to draw the line? What constitutes online “shouting ‘fire’ in a crowded theater”? It’s an argument that has been simmering, and sometimes raging, in the United States for 240 years. Minority groups say they are disproportionately censored when they use Facebook to call out racism (often with racist anti-white rhetoric of their own) or anything they regard as anti-LGBT or anti-Islam. But Facebook doesn’t offer specific explanations of why posts are pulled down, or make public data on what gets excised.
Facebook is understandably wary of being the arbiter or gatekeeper of the public discourse on its site, and technology companies that host such speech aren’t legally responsible for the content posted by third parties.
Courts have made that clear in rulings favoring people sued over scathing reviews, posted on sites such as Yelp or Trip Advisor, of their bad experiences with a business’s goods and services. Some states even have anti-SLAPP laws meant to deter lawsuits aimed at silencing opinions someone doesn’t like.
Mr. Zuckerberg should take his cue from Supreme Court Justice Louis Brandeis’ admonition 90 years ago that the remedy for “falsehood and fallacies” is “more speech, not enforced silence.” Right on.