- The Washington Times - Tuesday, April 11, 2023

One man’s terrorist is another man’s freedom fighter on Twitter.

And then there’s Donald Trump.

As the Supreme Court ponders the liability of Big Tech companies for their treatment of terrorists who use their platforms, the justices have been reminded of the words of one Twitter official who, in 2014, was reluctant to police users.

“One man’s terrorist is another man’s freedom fighter,” the Twitter employee, who remained anonymous, told Mother Jones about the company’s passive approach to defining “terrorism.”

Iran’s supreme leader, Ayatollah Ali Khamenei, who has tweeted that Israel is “a malignant cancerous tumor” that must be “removed and eradicated,” remains on the platform. So does Afghanistan’s Taliban.

Yet Mr. Trump was ousted in the wake of events of Jan. 6, 2021, for what Twitter called “incitement to violence.”

“It does seem odd they would feel justified in removing Donald Trump but not the leaders of groups that the State Department has designated as foreign terrorist organizations,” said Max Abrahms, a specialist in national security and terrorism at Northeastern University.

The freedom fighter quote came up during oral arguments at the Supreme Court on Feb. 22 in a case testing whether families of victims of terrorism can sue Twitter for failing to sift out terrorist content.

It was paired with a similar case, with Google as the defendant, that examined whether platforms are liable for the algorithms they use to promote content.

The technology companies insist they try to cull terrorist content but acknowledge they can’t be perfect.

Eric Schnapper, the lawyer representing the families in the Twitter case, said the companies didn’t always seem to be trying.

He cited the Twitter official’s comment in the 2014 interview when asked why Twitter wasn’t taking down Islamic State content three months after ISIS militants executed two Americans.

The quote dates back to at least the 1970s and has become a trite way of absolving oneself of responsibility for sorting right from wrong.

Mario Diaz, a lawyer with Concerned Women for America, introduced the quote as part of a “friend of the court” brief. He said he was trying to puncture the tech companies’ contentions that they are honest arbiters.

“They actively chose to ban the president of the United States in the aftermath of January 6 but refused to suspend known international terrorist accounts, even after being explicitly alerted to them,” Mr. Diaz said. “They are, therefore, knowingly, willingly and actively aiding and abetting these organizations in conducting terrorist attacks, as the families are alleging in this case.”

The justices didn’t pause long over the quote.

Immediately after Mr. Schnapper mentioned it, Justice Ketanji Brown Jackson moved on. She asked the attorney about the intricacies of blame and how much assistance an enterprise had to give to a terrorist operation before it becomes liable.

The tech companies say they aren’t liable for what users post on their platforms under Section 230 of the Communications Decency Act. Their opponents say that 1990s-era provision has been stretched beyond its breaking point, particularly when tech companies use algorithms to promote content.

The case involving Twitter centers on liability under the Anti-Terrorism Act.

In response to an email inquiry for this report, Twitter replied with a “poop” emoji. That has been the company’s standard reply to press inquiries since Elon Musk took over the platform in October.

Inquiries to Google and Facebook went unanswered.

Each of the companies scolded Mr. Trump in January 2021.

Twitter ousted the president for two posts on Jan. 8, one of which said his followers “will not be disrespected” and another that announced he wouldn’t attend the inauguration. Twitter said it read the tweets “in the context of broader events in the country and the ways in which the president’s statements can be mobilized by different audiences, including to incite violence, as well as in the context of the pattern of behavior from this account in recent weeks.”

The company said it banned Mr. Trump for glorifying violence.

Mr. Trump’s account has been reinstated since Mr. Musk’s takeover.

Figuring out who stays and who goes has always been an art.

In the heady days of social media early in the last decade, the companies often took a hands-off approach — as the Twitter official’s quote to Mother Jones suggests.

As evidence mounted that ISIS was sustaining itself through online recruiting, the platforms began to take more active roles in cleansing content.

“Facebook, Twitter, Google and the others already have ‘a national security or terrorism unit’ doing a lot of work in these areas, and they also rely on lots of algorithms to help patrol and eliminate terrorist content,” said James Forest, a professor at the University of Massachusetts Lowell.

Twitter announced that it had suspended nearly 1 million accounts linked to terrorist content.

In 2022, Twitter was pressured to cancel accounts linked to Iran’s Islamic Revolutionary Guard Corps. At first, it told the Counter Extremism Project that @irgciran hadn’t violated its policies despite posting a tweet threatening to assassinate Mr. Trump.

The account was eventually axed.

Yet several top Taliban officials remain active on the platform, broadcasting the oppressive regime’s doings in Afghanistan.

A spokesperson from the Counter Extremism Project told The Washington Times that terrorist content has decreased on social media platforms since 2014 but has not disappeared.

In a report issued in January, the group said it found 813 links to extremist content in 2022.

“The companies did not suddenly become concerned that their platforms were misused by terrorists, only that it could cost them financially and legally,” the spokesperson said. “Any progress was the result of public and advertiser pressure and the prospect of regulation from authorities.”

Jason Blazakis, a professor at the Middlebury Institute of International Studies in Monterey, California, said Congress should amend laws to make clear where liability would fall for tech companies because the definition of terrorism varies from company to company and even among government departments.

“This inevitably results in subjective determinations,” he said. “This is clearly a problem, yet the problem is not one that social media companies can fix.”

• Stephen Dinan can be reached at sdinan@washingtontimes.com.

• Alex Swoyer can be reached at aswoyer@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
