
AI-generated voices in robocalls now illegal, FCC says

Ruling comes after AI recently used to impersonate President Joe Biden's voice in fraudulent political calls

By Ehren Wynder
Chair of the Federal Communications Commission Jessica Rosenworcel (pictured in 2022) said the agency's new ruling expands enforcement of the Telephone Consumer Protection Act to include voice cloning with artificial intelligence. File Photo by Oliver Contreras/UPI

Feb. 8 (UPI) -- The Federal Communications Commission said Thursday that robocalls using artificial intelligence are now illegal, giving state law enforcement more legal avenues to prosecute scammers.

The ruling, effective immediately, declares that calls using AI-generated voices are illegal under the Telephone Consumer Protection Act, which restricts the use of "artificial or prerecorded voice" in phone calls.


"The Telephone Consumer Protection Act is the primary law we have to help limit unwanted robocalls," FCC Chair Jessica Rosenworcel said in the ruling. "It means that AI technologies like voice cloning fall within this law's existing prohibitions and that calls that use this technology to simulate a human voice are illegal, unless callers have obtained prior express consent."

While state attorneys general can already target the outcome of an unwanted robocall, the new ruling allows them to prosecute bad actors specifically for using AI-generated voices in their schemes.

The FCC in November launched a notice of inquiry to build a record for how the agency can combat AI-generated robocalls. The agency asked questions such as how AI might be used to mimic someone's voice and how AI can be used to identify robocalls before they even reach people's phones.


"Responsible and ethical implementation of AI technologies is crucial to strike a balance, ensuring that the benefits of AI are harnessed to protect consumers from harm rather than amplify the risks they face in an increasingly digital landscape," FCC Commissioner Anna Gomez said in a statement.

The agency's ruling comes after the New Hampshire Attorney General's Office found that a Texas-based organization used AI to impersonate President Joe Biden's voice in calls urging Democrats not to vote in the state's Jan. 23 primary.

Attorney General John Formella said his office's investigation found that between 5,000 and 25,000 robocalls imitating Biden had been made before the primary.

Even without the legal and ethical complications brought by AI, robocalls have been an ongoing consumer problem for some time.

The FCC in December proposed a nearly $300 million fine against Roy Cox Jr. and Michael Aaron Jones, the scammers behind the infamous robocalls claiming "we've been trying to reach you about your car's extended warranty."

The agency said Cox and Jones placed more than 5 billion calls to over 550 million people in 2021. The pair had been operating since 2018, and it is unknown how many calls were made over that span.


The FCC said it would allow the operators to respond and submit evidence before taking further action.
