Associated Press - Monday, May 11, 2020

CHICAGO (AP) - Twitter announced Monday it will start alerting users when a tweet makes disputed or misleading claims about the coronavirus.

The new rule is the latest in a wave of stricter policies that tech companies are rolling out to confront an outbreak of virus-related misinformation on their sites. Facebook and Google, which owns YouTube, have already put similar systems in place.

The announcement signals that Twitter is taking its role in amplifying misinformation more seriously. But how the platform enforces its new policy will be the real test, with company leaders already tamping down expectations.

Yoel Roth, Twitter’s head of site integrity, acknowledged as much: “We will not be able to take enforcement action on every tweet with incomplete or disputed information about COVID-19.”

Roth said Monday the platform has historically applied a “lighter touch” when enforcing similar policies on misleading tweets, but added that the company is working to improve the technology behind the labels.

In February, Twitter said it would add warning labels to doctored or manipulated photos and videos after a recording of Democratic House Speaker Nancy Pelosi was slowed down to make it appear as though she slurred her words. But even with obviously fake videos, such as one showing Joe Biden lolling his tongue and grinning that was shared by President Donald Trump, the company has since used the label only twice, in part because of technical glitches.

And Twitter has not added any warning labels to politicians’ tweets that violate its policies but are deemed in the “public interest” under a policy the company announced in June 2019.

Under the newest COVID-19 rules, Twitter will decide which tweets are labeled, and will take down posts only if they are deemed harmful.

Politicians’ tweets will be subject to the notices, which will be available in roughly 40 languages.

Some of the questionable tweets will run with a label underneath that directs users to a link with additional information about COVID-19. Other tweets might be covered entirely by a warning label alerting users that “some or all of the content shared in this tweet conflict with guidance from public health experts regarding COVID-19.”

Twitter won’t directly fact check or call tweets false on the site, said Nick Pickles, the company’s global senior strategist for public policy. The warning labels might send users to curated tweets, public health websites or news articles.

“People don’t want us to play the role of deciding for them what’s true and what’s not true but they do want people to play a much stronger role providing context,” Pickles said.

The notices, which could start appearing as soon as today, could also apply retroactively to past tweets.

The fine line is similar to one taken by tech rival Facebook, which has said it doesn’t want to be an “arbiter of the truth” but has arranged for third-party fact checkers to review falsehoods on its site. The Associated Press is part of Facebook’s fact-checking program.

Examples of disputed tweets that might be labeled include claims about the origin of COVID-19, which remains unknown. Conspiracy theories about how the virus started and whether it is man-made have swirled around social media for months.

Twitter will continue to take down COVID-19 tweets that pose a threat to the safety of a person or group, along with attempts to incite mass violence or widespread civil unrest. For several weeks, the company has been removing bogus coronavirus cures and claims that social distancing or face masks do not curb the virus’ spread.

__

AP Technology Writer Barbara Ortutay contributed to this story from Oakland, Calif.
