Microsoft developed new technology to detect “deepfake” disinformation that it will share with political campaigns and journalists focused on how such false images could affect the upcoming election.
The new tool to verify photos and videos online, called Microsoft Video Authenticator, will be initially distributed via the AI Foundation with the goal of thwarting malicious influence campaigns aimed at the 2020 elections.
Tom Burt, Microsoft corporate vice president, and Eric Horvitz, Microsoft’s chief scientific officer, wrote on the company blog that the company is testing its new technology in partnership with The New York Times, the BBC, and CBC/Radio-Canada.
“Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated,” Mr. Burt and Mr. Horvitz wrote. “In the case of a video, it can provide this percentage in real-time on each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye.”
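Microsoft has not published Video Authenticator's internals, but the per-frame scoring Mr. Burt and Mr. Horvitz describe can be illustrated with a toy sketch. The function below is purely hypothetical: it stands in for a trained detector by measuring greyscale gradient energy, loosely mimicking a search for blending boundaries, and maps it to a 0-100 "confidence" percentage per frame.

```python
import numpy as np

def frame_confidence(frame: np.ndarray) -> float:
    """Toy stand-in for a per-frame manipulation score.

    The real tool uses a trained model; this illustrative sketch just
    measures greyscale gradient energy, loosely echoing the "blending
    boundary" and "greyscale elements" cues the article mentions.
    """
    grey = frame.mean(axis=2)  # collapse RGB channels to greyscale
    gx = np.abs(np.diff(grey, axis=1)).mean()  # horizontal gradients
    gy = np.abs(np.diff(grey, axis=0)).mean()  # vertical gradients
    # Map gradient energy to a 0-100 "confidence" percentage
    return round(100.0 * (1.0 - np.exp(-(gx + gy) / 32.0)), 1)

def score_video(frames) -> list:
    """Emit one running score per frame, as the blog post describes."""
    return [frame_confidence(f) for f in frames]
```

A flat, unedited frame yields a near-zero score, while frames with sharp intensity transitions score higher; an actual detector would of course learn far subtler features than raw gradients.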
As part of Microsoft’s new tech, users will also be able to add digital hashes and certificates to content that will serve as a watermark on the content.
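The hash-and-certificate scheme can be sketched in a few lines. This is not Microsoft's implementation; it is a minimal illustration using a SHA-256 content hash signed with an HMAC key as a stand-in for a publisher's certificate, showing how a downstream reader could confirm content has not been altered.

```python
import hashlib
import hmac

# Hypothetical stand-in for the private key behind a publisher's certificate
SIGNING_KEY = b"publisher-secret"

def attach_provenance(content: bytes) -> dict:
    """Hash the content and sign the hash, mimicking the
    hash-plus-certificate watermark the article describes."""
    digest = hashlib.sha256(content).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"hash": digest, "signature": signature}

def verify_provenance(content: bytes, record: dict) -> bool:
    """Re-hash the content and check that both the hash and the
    signature still match; any tampering breaks the check."""
    digest = hashlib.sha256(content).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["hash"] and hmac.compare_digest(expected, record["signature"])
```

In practice Microsoft's scheme would use public-key certificates rather than a shared secret, so verifiers would not need access to the signing key.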
The AI Foundation, an artificial intelligence company, will provide the tech through its “Reality Defender 2020” initiative to political campaigns, news outlets, and other organizations with a stake in the political process.
Alongside its new software tools, Microsoft is forming partnerships to help voters spot misinformation and disinformation that the tech titan believes they would not detect on their own.
The company has partnered with the University of Washington and USA Today on a media literacy effort that teaches people how to recognize deepfakes and that will air public service announcements about truth on radio stations.
“Practical media knowledge can enable us all to think critically about the context of media and become more engaged citizens while still appreciating satire and parody,” Mr. Burt and Mr. Horvitz wrote. “Though not all synthetic media is bad, even a short intervention with media literacy resources has been shown to help people identify it and treat it more cautiously.”
Whether Microsoft will prove better at distinguishing disinformation from content posted by social media trolls seeking to motivate supporters, provoke an emotional response, or instigate conflict remains to be seen.
In the past week, Twitter has branded posts from House Minority Whip Steve Scalise and White House social media director Dan Scavino with the label “manipulated media.” The videos on both Mr. Scalise and Mr. Scavino’s accounts are no longer visible on Twitter, with Mr. Scavino’s video removed in response to a copyright claim.
Rather than adopting Twitter’s aggressive content enforcement approach, Microsoft appears more focused on raising awareness than forcing users to change their behavior. Microsoft said it has recently expanded its implementation of NewsGuard, which uses a team of journalists to rate online news websites with a “nutrition label.”
By partnering with leading media companies in the United States, Canada, and the United Kingdom, and sharing its new tech with other journalists, Microsoft may be less likely to face withering scrutiny from journalists over malicious influence efforts using Microsoft platforms such as LinkedIn.
Earlier this year, U.S. government officials warned of LinkedIn as a platform increasingly used by American adversaries, particularly China, in mounting influence campaigns and intelligence collection efforts.
Executives from Microsoft and LinkedIn have more recently participated in meetings with federal law enforcement and intelligence agencies to discuss malicious influence operations from adversaries looking to leverage their platforms.
Alongside other Big Tech companies, Microsoft and LinkedIn have met with representatives from the FBI, Department of Justice, Office of the Director of National Intelligence, and the Cybersecurity and Infrastructure Security Agency.
• Ryan Lovelace can be reached at rlovelace@washingtontimes.com.