OPINION:
When Twitter announced its recent ban on political advertising, the Trump campaign leveled heavy criticism at the platform, suggesting in an official statement that walking away from “hundreds of millions of dollars of potential revenue” was “a dumb decision for their stockholders.” And when Facebook recently rejected a blanket ban on political ads, actor Sacha Baron Cohen made headlines for proclaiming that the platform would have agreed to run ads for the Nazis (something a look at Facebook’s community standards quickly proves wrong).
When platforms change their policies on political advertising, it tends to cause quite a stir. But such changes simply show competing online platforms experimenting with how best to remove harmful content from their services.
It’s important that Congress continue to allow social media businesses to innovate on content moderation in this way, as they have with political advertising. That means maintaining the current legal structure these platforms rely on, including Section 230 of the Communications Decency Act. Changing that structure would put the benefits of future competition and innovation at risk in the name of nixing bad online content.
The national conversation on content moderation is sharply divided. Many have demanded that it be stricter, while others have called for a more relaxed approach. These complaints echo within the halls of Congress, too. At a recent House Energy and Commerce hearing on social media, the main criticisms of mainstream content moderation practices were on full display.
Gretchen Peters, executive director of the Alliance to Counter Crime Online, claimed that online platforms are doing next to nothing to prevent the spread of crime on their services. On the other hand, Rep. Greg Gianforte, Montana Republican, berated social media representatives for blocking an ad about hunting, claiming that content moderation practices are too heavy-handed.
Dissatisfied parties on both sides are calling for platforms to be punished through regulation if their demands aren’t met. Perhaps most alarming of all are the cries for Section 230 to be fundamentally changed.
But that’s a really bad idea, because content moderation is far from simple. In reality, it’s infinitely more difficult and nuanced than many legislators and commentators realize. Criminal activity is often difficult to distinguish from legitimate activity, especially online. A picture of a naked minor could be child pornography or just an innocent family photo.
Criminals who camouflage their activity make finding the line even more difficult. A gun case listed at an above-market price could be a legitimate listing or an intentionally camouflaged black-market gun sale. Without actually buying the gun case, it would be very hard to know which is which.
Despite this hurdle, though, platforms are getting significantly better at moderating content. The Big Three social media networks — Facebook, Twitter and YouTube — removed more than 5 billion posts and accounts in the last six months of 2018 alone. With progress in AI and greater investment in content moderation, this year’s numbers are likely to be even higher. That translates into real gains, such as the fight against white supremacist content.
Members of the tech industry have tried to demonstrate how difficult enforcing speech rules on their websites is, but many inside and outside of Congress have refused to listen. During the House Energy and Commerce hearing, Ms. Peters said, “If they can keep genitalia off of these platforms, they can keep drugs off these platforms.” She added that Section 230 should be substantially reformed to force platforms to improve, but never specified how.
A reform suggested by fellow hearing panelist Danielle Citron would tack a “duty of care” requirement onto Section 230, forcing platforms to adopt “reasonable” content moderation practices if they wish to keep the law’s protections. But this vague reform would have devastating effects. Rather than improving processes, a “reasonableness” requirement creates an unpredictable standard for content moderators and online platforms, likely pushing platforms to over-police content in order to avoid costly legal battles.
Through their updated political advertising policies, online platforms have demonstrated they are diligently utilizing the freedom granted in current content moderation law to improve their practices. Yet, according to Ms. Peters and Mr. Baron Cohen, if they don’t reach perfection soon, they should lose their ability to host and moderate user-created content altogether.
By ignoring the enormous efforts online platforms have already made to filter out the bad stuff, the debate over moderation on social media threatens to undermine the very structure on which current content moderation and responsible social media sites rely. In demanding perfection, Congress risks making an enemy of a better, well-moderated Internet.
• Robert Winterton is the director of communications for NetChoice and a Tech Policy Fellow for Young Voices.