Google will soon require that political ads using artificial intelligence be accompanied by a prominent disclosure if imagery or sounds have been synthetically altered.
The policy takes effect in November, about a year before Election Day. In an update to its political content policy, Google said that disclosure of AI used to alter imagery or audio must be clear and conspicuous and placed where users are likely to notice it.
Though fake images, videos or audio clips are not new to political advertising, generative AI tools are making them easier to produce and more realistic. Some presidential campaigns in the 2024 race, including that of Florida GOP Gov. Ron DeSantis, are already using the technology.
The Republican National Committee in April released an entirely AI-generated ad meant to show the future of the United States if President Joe Biden is reelected. It employed fake but realistic photos showing boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic.
In June, DeSantis’ campaign shared an attack ad against his GOP primary opponent, Donald Trump, that used AI-generated images of the former president hugging infectious disease expert Dr. Anthony Fauci.
Last month the Federal Election Commission began a process to potentially regulate AI-generated deepfakes in political ads ahead of the 2024 election.
Congress could pass legislation creating guardrails for AI-generated deceptive content, and lawmakers, including Senate Majority Leader Chuck Schumer, have expressed intent to do so.
Several states also have discussed or passed legislation related to deepfake technology.
Google is not banning AI outright in political advertising. Ads are exempt from the disclosure requirement when synthetic content is altered or generated in a way that is inconsequential to the claims made in the ad. AI can also be used for editing techniques such as image resizing, cropping, color or defect correction, or background edits.