
AI-generated election disinformation can subtly influence voters, deepen political divide

Disinformation generated by artificial intelligence and amplified by social media threatens to subtly influence voters and deepen political divides. File Photo by Bonnie Cash/UPI

Sept. 18 (UPI) -- Experts warn that false images, video and audio created by artificial intelligence to spread disinformation about the 2024 elections can subtly influence voters and worsen the nation's political divide.

A survey by Elon University's Imagining the Digital Future Center found that 69% of participants are not confident that most voters can detect fake photos, audio or video. The survey examined attitudes about AI and the role it is playing in U.S. politics in 2024.


Janet Coats, managing director of the Consortium on Trust in Media and Technology at the University of Florida, told UPI that social media remains the primary channel for transmitting disinformation, and that AI is increasingly being used to create and enhance these misleading and false messages.

"We used to call it propaganda. It's been around as long as people have been communicating in some kind of organized way," Coats said. "What we're starting to see in this election cycle is the rise of artificial intelligence as a really easy and pretty sophisticated, user-friendly tool for creating disinformation, then using social media platforms to push it out to consumers. You're almost reaching everyone."


"Things move by so fast. It might make a quick impression on you and maybe you file it away in the back of your mind and you don't really dig into it, but it colors your perceptions of other things you're seeing."

Prominent figures have shared AI-created content in ways that spread false information as the general election nears. Former President Donald Trump, the Republican nominee, shared AI-generated images depicting a Taylor Swift endorsement in August. Swift later officially endorsed Trump's opponent, Vice President Kamala Harris.

Weeks later, X owner Elon Musk shared an image of Harris dressed in stereotypical communist garb, falsely claiming that Harris "vows to be a communist dictator on day one."

"Can you believe she wears that outfit!?" Musk posted on X.

Lisa Fazio, associate professor of psychology at Vanderbilt University, studies how people learn both true and false information from the world around them, how that information shapes their views and what makes them more receptive to falsehoods. She told UPI that posts like those from Trump and Musk signal to their followers what they should believe.

"One of the things about Trump sharing AI-generated images of Taylor Swift is it doesn't need to convince voters she actually did the endorsement," Fazio said. "It's just another sign of, 'Trump is popular and you're with the winning side.'"


AI-generated disinformation is not likely to cause a voter to switch their vote, Fazio and Coats said, but it can increase polarization, creating a greater divide between political ideologies.

Social media companies like X and Meta widely adopted policies for flagging and downgrading posts and accounts that spread disinformation after such content played a role in sparking the Jan. 6, 2021, riot at the U.S. Capitol.

In the years since, these companies -- X in particular -- have scaled back the teams and mechanisms designed to combat disinformation and misinformation. One of Musk's first moves after taking over the company in 2022 was firing its head of trust policy, and layoffs in X's trust and safety roles continued.

About 73% of respondents to the Elon University survey believe it is very or somewhat likely that AI will be used to influence the outcome of the election by manipulating social media. About 70% say the election will be affected by the use of AI and about 62% worry that it is likely to convince some voters not to cast ballots.

False information about how, where and when people can vote, and about voter eligibility, is a bigger threat to the election than AI-generated posts like those from Trump and Musk, Coats said.


Voters in New Hampshire received robocalls from an artificially generated voice purporting to be President Joe Biden leading up to the state's primary earlier this year. The calls urged voters to "save your vote" for the general election.

"There could be impacts in swing states that create a perception that there's no point in me voting or I have been disqualified from voting or sending out false information on voting," Coats said. "Depressing turnout is one way in the close states to tip things one way or the other."

There are some telltale signs of AI-created images. One apparent limitation of the technology is its inability to render realistic hands, which often have the wrong number of fingers or are posed unnaturally.

However, the technology will continue to grow more sophisticated, and the content it produces will become more convincing.

AI-generated voices are more difficult to detect.

"Some people's voices are easy to duplicate," Coats said. "That's one of the things with Biden is he has a very distinctive speech cadence."

The University of Florida and other universities are researching ways to harness AI itself to spot AI-generated disinformation.


Researchers at the University of Washington and the Allen Institute for Artificial Intelligence introduced AI software called Grover in 2019, which detects AI-generated fake news stories with 92% accuracy. There are also tools like Check by Meedan, fact-checking software used to combat misinformation in Africa and South America.

"They're turning the machine on itself," Coats said.

There have long been concerns about how technology, most recently AI, can influence society. Fazio said society has tended to adapt over time to recognize things like digitally edited photos and deepfakes, which gives her some optimism that the same can be true for AI-generated disinformation.

"Pay attention to the source and who it's coming from. Think about if they might have motivated reasons for pushing things consistent with their point of view," she said. "One of my concerns about this type of misinformation isn't just that the existence of it might change people's minds. It's also that it might have people doubt real things."

Fazio referred to a photo of a Harris campaign rally that left some users on social media skeptical about whether the crowd was real or if the images were AI-generated.

Bills attempting to regulate the use of AI have been introduced in Congress since the launch of ChatGPT in 2022, but none have gained enough traction to suggest meaningful enforcement is on the way.


Pursuing regulation will be a long and difficult road, Coats said.

"We're kind of always fighting the last war. There's a regulatory approach we have to think through," Coats said. "Part of the issue is this is global. This is not something you can just regulate in one place and not have there be complications somewhere else."

It will also raise some interesting questions.

"Does the machine have free speech rights? Intellectual property rights?" Coats asked. "When they're regulating this, where are the free speech boundaries and the First Amendment boundaries? How does it all intersect? It is just a big complicated ball of string here that we're trying to figure out as that ball keeps moving."
