US Bans AI-Generated Voices in Robocalls Following Biden Deepfake Scam

The United States has banned the use of AI-generated voices in robocalls following a rise in fraudulent schemes. The FCC's decision aims to protect consumers from deceptive practices and uphold telecommunications integrity.

Responding to a surge in AI-generated voice scams, the Federal Communications Commission (FCC) has classified artificial intelligence-generated voices in robocalls as illegal under existing telemarketing law. The decision targets fraudulent robocall schemes that exploit vulnerable individuals.

The FCC's ruling, announced on Feb. 8, classifies calls made with AI-generated voices as "artificial" under the Telephone Consumer Protection Act (TCPA). This designation grants state attorneys general new tools to pursue perpetrators of illicit robocall campaigns. The move comes in the wake of a fake robocall campaign in New Hampshire featuring a voice impersonating President Joe Biden, urging recipients not to vote in the state's primary election.

While robocall scams were already illegal under the TCPA, the new ruling explicitly extends the prohibition to the "voice cloning technology" used in AI-generated voice scams. The ban took immediate effect, signaling a crackdown on fraudulent robocall practices.

FCC Chair Jessica Rosenworcel emphasized the need to address the proliferation of AI-generated voice scams, which have targeted individuals with extortion attempts, celebrity impersonations, and misinformation campaigns. The ruling aims to protect consumers from deceptive robocalls by holding perpetrators accountable for using AI to generate fraudulent voices.

The TCPA, enacted in 1991, safeguards consumers from unwanted telemarketing communications and restricts automated calls without prior consent. The FCC's decision extends these protections to encompass AI-generated voice calls, ensuring that telemarketers obtain explicit consent before engaging in robocalling activities.

The FCC's action underscores the escalating threat posed by AI-enabled robocalls and the urgency of regulatory intervention as voice-cloning tools become cheaper and more convincing.

In a related development, the Texas-based firm Life Corporation and an individual named Walter Monk were implicated in the Biden robocall scam, prompting a cease-and-desist order from the New Hampshire Attorney General's Election Law Unit. The order demands immediate compliance with state statutes on bribery, intimidation, and voter suppression, signaling a broader crackdown on fraudulent robocall activities.

The FCC's ruling marks a significant step in the broader effort to curb fraudulent robocalls, giving regulators and state attorneys general clearer authority to pursue those who weaponize AI-generated voices.