FCC Chair Proposes Banning AI Robocalls Under the Telephone Consumer Protection Act

Federal Communications Commission (FCC) Chairwoman Jessica Rosenworcel has proposed outlawing AI-generated robocalls, subjecting them to regulations and penalties defined by the Telephone Consumer Protection Act (TCPA).

Rosenworcel announced the proposal on January 31, in response to a recent incident in which AI-generated calls imitating the voice of President Joe Biden spread misinformation ahead of the 2024 presidential election.

The proposal targets the growing wave of AI-driven robocalls that impersonate celebrities, political figures, and even family members, deceiving consumers and violating their privacy.

The Telephone Consumer Protection Act (TCPA) is a United States law enacted in 1991 that restricts telemarketing calls and the use of automated dialing systems and artificial or prerecorded voice messages made without the recipient's consent. Its objective is to protect consumers from unwanted and intrusive communications, including unsolicited telemarketing calls and automated messages.

By adopting this proposal, the FCC would give State Attorneys General across the country additional tools to pursue those responsible for malicious AI-generated robocalls and to enforce legal consequences.

The FCC's move follows a Notice of Inquiry the agency launched in November, which sought comment on combating illegal robocalls and the potential involvement of AI. The inquiry asked about the role of AI in scams and voice mimicry, whether such calls should be regulated under the TCPA, and how AI could be harnessed positively to detect and prevent illegal robocalls.

The proliferation of deepfake technology has raised concerns about AI-generated content, leading to calls for legislation to criminalize the creation of deepfake images and videos. The recent incident involving AI-generated calls imitating President Biden highlights the need for regulatory measures to address the misuse of AI in deceptive communications.

The White House also recently released a fact sheet outlining key actions on AI, citing "substantial progress" toward safeguarding Americans from risks associated with AI systems. Internationally, organizations such as the World Economic Forum and intelligence agencies such as Canada's CSIS have raised concerns about the use of AI deepfakes in disinformation campaigns and the need to mitigate these threats.

As AI continues to advance, policymakers and regulators are working to strike a balance between harnessing its positive potential and addressing the risks and challenges it poses, including the proliferation of AI-generated content and deceptive communications.