US FCC Prohibits AI-Generated Robocalls

The United States Federal Communications Commission (FCC) has declared the use of AI-generated voices in robocalls illegal, a move that allows the government to prosecute those responsible for such calls. The ruling, announced on Thursday, took effect immediately.

The FCC’s decision comes in response to a growing trend of robocalls that use AI-generated voices to mimic celebrities, political figures, and even family members. FCC Chairwoman Jessica Rosenworcel said that “bad actors” are using these AI-generated voices to extort vulnerable individuals, impersonate celebrities, and spread misinformation to voters.

The move follows a recent incident in New Hampshire, where voters received robocalls impersonating US President Joe Biden, discouraging them from participating in the state’s presidential primary. An estimated 5,000 to 25,000 such calls were placed, leading to an ongoing criminal investigation.

The FCC emphasized the potential for confusion and misinformation caused by these calls, which imitate public figures and, in some cases, close family members. While state attorneys general could previously prosecute companies and individuals behind robocalls for crimes like scams or fraud, the new ruling explicitly makes the use of AI-generated voices in such calls illegal by classifying them as “artificial” voices under the Telephone Consumer Protection Act (TCPA).

The FCC’s decision was prompted, in part, by a letter from attorneys general representing 26 states in mid-January. The letter urged the FCC to take action to restrict the use of AI in marketing phone calls, emphasizing the need to prevent new technologies from being exploited for deceptive purposes.

The move builds on the FCC’s Notice of Inquiry from November 2023, which sought nationwide input on the use of AI technology in consumer communications. The agency aims to keep pace with the evolving technological landscape and ensure that these advancements are not misused to deceive or manipulate consumers.

The rise of deepfake technology, utilizing AI to create realistic video or audio impersonations, has raised concerns globally, particularly in the context of elections. In the UK and other nations, senior politicians have been targeted with audio deepfakes, prompting warnings from cybersecurity authorities about the potential threats these AI fakes pose to the integrity of elections.