By Andrew Blake - The Washington Times - Wednesday, August 19, 2020

Facebook came under fire Wednesday for falling short in stopping the spread of medical misinformation on its social network during the ongoing novel coronavirus pandemic.

Avaaz, a global nonprofit activist group, released a 33-page report scrutinizing how Facebook handles user content that includes bogus and potentially dangerous medical claims.

Facebook groups and pages that regularly share medical misinformation on the platform generated an estimated 3.8 billion views over the last year, according to Avaaz’s report.

Only 16% of content determined by Avaaz to contain medical misinformation was flagged with a warning label, while the other 84% remained online without warnings, the report found.

And in April, when the outbreak effectively shuttered the U.S. economy, legitimate medical sites generated only a quarter as many views on Facebook as fake ones, Avaaz also noted.

“Content from the top 10 websites spreading health misinformation reached four times as many views on Facebook as equivalent content from the websites of 10 leading health institutions,” such as the World Health Organization and U.S. Centers for Disease Control and Prevention, the authors of Avaaz’s report wrote.

“This suggests that just when citizens needed credible health information the most, and while Facebook was trying to proactively raise the profile of authoritative health institutions on the platform, its algorithm was potentially undermining these efforts and helping to boost content from health misinformation spreading websites at a staggering rate,” they added.

Rep. Anna Eshoo, California Democrat and chair of the House Energy and Commerce Health Subcommittee, was quick to condemn Facebook after learning of Avaaz’s findings.

“The fact that health misinformation has been viewed 4 billion times on Facebook in the last year is utterly inexcusable & dangerous,” she said on Twitter. “The findings underscore the fundamental flaw in FB’s business model: it values lies over lives.”

Facebook, for its part, pushed back on the findings while touting the social network’s success at combating misinformation about COVID-19, the disease caused by the novel coronavirus.

“We share Avaaz’s goal of limiting misinformation, but their findings don’t reflect the steps we’ve taken to keep it from spreading on our services,” Facebook said in a statement.

Facebook applied warning labels to 98 million pieces of COVID-19 misinformation between April and June, it said in the statement. It also removed 7 million pieces of content that could lead to imminent harm during that same span, in addition to directing “over 2 billion people to resources from health authorities.”

Indeed, Facebook removed a post from President Trump’s account weeks earlier in which he falsely claimed kids are “virtually immune” to the coronavirus.

More recently, Facebook was among several platforms that responded to Tuesday’s release of a film containing bogus claims about COVID-19 by blocking users from sharing links to it.

• Andrew Blake can be reached at ablake@washingtontimes.com.

Copyright © 2024 The Washington Times, LLC.
