The National Science Foundation spent millions of taxpayer dollars developing censorship tools powered by artificial intelligence that Big Tech could use “to counter misinformation online” and “advance state-of-the-art misinformation research.”
House investigators on the Judiciary Committee and Select Committee on the Weaponization of Government said the NSF awarded nearly $40 million, including $13 million to three universities and a software company, to develop AI tools that could censor information far faster and at a much greater scale than human beings.
The University of Michigan, for instance, was awarded $750,000 from NSF to develop its WiseDex artificial intelligence tool to help Big Tech outsource the “responsibility of censorship” on social media. University researchers promoted the tool as a way to “get people off our backs for how we act on misinfo and … do things we know work without backlash.”
In an interim staff report released Tuesday by lawmakers on the two panels, investigators say the NSF “forged ahead” with the project despite evidence it clearly understood its actions amounted to censorship.
Lawmakers say NSF officials tried to hide their actions from the media and to curb negative scrutiny by blacklisting conservative outlets and legal scholar Jonathan Turley, who were all writing about or investigating the foundation’s funding of the development of social media censorship tools.
Foundation officials intentionally removed videos and hid public information about funding for the program in response to requests from news outlets they disliked. NSF officials also rejected media requests from outlets that produced coverage they deemed negative.
In a Feb. 2, 2023, email to officials at six universities participating in the AI censorship tool program, NSF Program Director Michael Pozmantier warned about “groups that want to frame the projects … in a negative light.”
He told the institutions that NSF “is not responding to requests from people who are interested in attacking our programs or your projects. … It’s best if you also ignore it.”
Media outlets shunned by the NSF included The Daily Caller and Just the News, both conservative online news sites that wrote about the project.
The release of the interim report follows new revelations that the Biden White House pressured Amazon to censor books about the COVID-19 vaccine and comes months after court documents revealed White House officials leaned on Twitter, Facebook, YouTube and other sites to remove posts and ban users whose content they opposed, even threatening the social media platforms with federal action.
House investigators say the NSF project is potentially more dangerous because of the scale and speed of censorship that artificial intelligence could enable.
“AI-driven tools can monitor online speech at a scale that would far outmatch even the largest team of ‘disinformation’ bureaucrats and researchers,” House investigators wrote in the interim report.
“The NSF-funded projects threaten to help create a censorship regime that could significantly impede the fundamental First Amendment rights of millions of Americans and potentially do so in a manner that is instantaneous and largely invisible to its victims.”
Massachusetts Institute of Technology received $750,000 from NSF for its Search Lit platform and the University of Wisconsin-Madison took $5.75 million in federal funds to develop its CourseCorrect tool.
The foundation paid the nonprofit software company Meedan $5.75 million to build its Co-Insights program that would use AI to combat so-called misinformation. When Meedan applied for the hefty grant, company officials pitched its relationships with WhatsApp, Telegram and Signal as a way to develop a tool that would proactively “identify and limit susceptibility to misinformation” and “pseudoscientific information online.” Methods would include open-web crawling and “controversy detection identifying possible content for fact-checking.”
The team at Meedan boasted to the NSF it was using AI to monitor 750,000 blogs and media articles daily in addition to mining data from major social media platforms, the report said.
The presentation slides used in the company's pitch, obtained by House lawmakers, provided insight into what Meedan had determined to be misinformation. The slides listed “undermining trust in the mainstream media” and cited as an example criticism of the New York Times for “ignoring Black-on-Asian hate crimes” in its coverage.
The company said it would also monitor and respond to “fearmongering and anti-Black narratives, glorifying vigilantism and weakening political participation.”
Scott Hale, director of research at Meedan, wrote to Mr. Pozmantier at NSF in November 2022 and expounded on his “dream world” for an even larger AI censorship network that could have full access to all of the data social media platforms had removed from their sites. The “data enclave,” Mr. Hale said, could be used by researchers to “run code against to produce aggregate analyses and benchmark different automated detection approaches without ever having direct access to the data.”
Lawmakers on the two committees said the NSF has stonewalled their investigation, providing only roughly 300 pages of the requested documents. The panels will weigh “legislative solutions,” including blocking the NSF from funding AI censorship projects.
The technology is poised to expand.
Lawmakers pointed out that Meedan officials, in their presentation to NSF, determined the “content moderation solutions market” in 2022 was $10 billion.
The Washington Times reached out to NSF, but did not immediately receive a response.
• Susan Ferrechio can be reached at sferrechio@washingtontimes.com.