Experts ‘Terrified’ of AI-Driven Misinformation ‘Tsunami’ in 2024 Election

“If people don’t ultimately trust information related to an election, democracy just stops working,” said a senior fellow at the Alliance for Securing Democracy.

By Olivia Rosane. Published December 26, 2023 by Common Dreams

An ad for a deepfake video maker. Screenshot: OpenAI

As 2024 and the next U.S. presidential election approach, experts and advocates are warning that the spread of artificial intelligence technology will increase both the volume and the sophistication of misinformation directed at voters.

While falsehoods and conspiracy theories have circulated ahead of previous elections, 2024 marks the first time that it will be easy for anyone to access AI technology that could create a believable deepfake video, photo, or audio clip in seconds, The Associated Press reported Tuesday.

“I expect a tsunami of misinformation,” Oren Etzioni, an AI expert and University of Washington professor emeritus, told the AP. “I can’t prove that. I hope to be proven wrong. But the ingredients are there, and I am completely terrified.”

Subject matter experts told the AP that three factors make the 2024 election an especially perilous time for the rise of misinformation. The first is the availability of the technology itself. Deepfakes have already been used in elections: the Republican primary campaign of Florida Gov. Ron DeSantis, for example, circulated an ad in June featuring AI-generated images of former President Donald Trump hugging Dr. Anthony Fauci, a leading member of the White House Coronavirus Task Force.

“You could see a political candidate like President [Joe] Biden being rushed to a hospital,” Etzioni told the AP. “You could see a candidate saying things that he or she never actually said.”

The second factor is that social media companies have reduced both the number of policies designed to control the spread of false posts and the number of employees devoted to monitoring them. When billionaire Elon Musk acquired Twitter in October 2022, he fired nearly half of the platform’s workforce, including employees who worked to control misinformation.

Yet while Musk has faced significant criticism and scrutiny for his leadership, Jesse Lehrich, co-founder of Accountable Tech, told the AP that other platforms appear to have used his actions as an excuse to be less vigilant themselves. A report published by Free Press in December found that Twitter (now X), Meta, and YouTube rolled back 17 policies targeting hate speech and disinformation between November 2022 and November 2023. For example, X and YouTube retired policies addressing misinformation about the 2020 presidential election and the lie that Trump in fact won, and X and Meta relaxed policies aimed at stopping Covid-19-related falsehoods.

“We found that in 2023, the largest social media companies have deprioritized content moderation and other user trust and safety protections, including rolling back platform policies that had reduced the presence of hate, harassment, and lies on their networks,” Free Press said, calling the rollbacks “a dangerous backslide.”

Finally, Trump, a leading proponent of the lie that he won the 2020 presidential election against Biden, is running again in 2024. With 57% of Republicans now believing his claim that Biden did not win the last election, experts are worried about what could happen if large numbers of people accept similar lies in 2024.

“If people don’t ultimately trust information related to an election, democracy just stops working,” Bret Schafer, a senior fellow at the nonpartisan Alliance for Securing Democracy, told the AP. “If a misinformation or disinformation campaign is effective enough that a large enough percentage of the American population does not believe that the results reflect what actually happened, then Jan. 6 will probably look like a warm-up act.”

The warnings build on the alarm sounded by watchdog groups like Public Citizen, which has been advocating for a ban on the use of deepfakes in elections. The group has petitioned the Federal Election Commission to establish a new rule governing AI-generated content, and has called on the body to acknowledge that the use of deepfakes is already illegal under a rule banning “fraudulent misrepresentation.”

“Specifically, by falsely putting words into another candidate’s mouth, or showing the candidate taking action they did not, the deceptive deepfaker fraudulently speaks or act[s] ‘for’ that candidate in a way deliberately intended to damage him or her. This is precisely what the statute aims to proscribe,” Public Citizen said.

The group has also asked the Republican and Democratic parties and their candidates to promise not to use deepfakes to mislead voters in 2024.

In November, Public Citizen announced a new tool tracking state-level legislation to control deepfakes. To date, laws have been enacted in California, Michigan, Minnesota, Texas, and Washington.

“Without new legislation and regulation, deepfakes are likely to further confuse voters and undermine confidence in elections,” Ilana Beller, democracy campaign field manager for Public Citizen, said when the tracker was announced. “Deepfake video could be released days or hours before an election with no time to debunk it—misleading voters and altering the outcome of the election.”

This work is licensed under Creative Commons (CC BY-NC-ND 3.0). 
