FCC Considering New Protections Against AI-Generated Robocalls Next Month
The Federal Communications Commission will soon consider a measure that would allow the agency to develop new rules for AI-generated robocalls.

Next month, the agency will decide whether to continue its efforts to formally define AI-powered calls and adopt new rules that would require callers to disclose to consumers whether they are using AI in such conversations. The proposal would also seek to preserve "positive" use cases in which an AI-generated voice can help people with disabilities use their phones.

The agency has already ruled the use of voice-cloning technology in scam calls illegal, after a robocall deploying an AI-generated imitation of President Joe Biden's voice urged voters not to go to the polls during this year's New Hampshire primary. But not all AI-generated phone messages are used to scam people, and the rule would aim to clarify that distinction while protecting consumers from deceptive messages.

The FCC will vote on whether to pursue the measure at its public meeting on Aug. 7, the agency said in a news release. The proposal builds on a survey the agency launched in November to gather additional information about the implications of AI for the telecommunications industry, including how the technology’s evolution will impact robocalls and automated text messages.

Spam and robocall operations have traditionally relied on human handlers to oversee call patterns. AI technologies have automated some of those tasks, and robocall networks have begun to leverage the speech- and voice-generation capabilities of consumer-facing AI tools available online or on the dark web.

The expected vote also comes amid heightened concerns that AI systems could amplify the spread of misinformation and disinformation about the November election and beyond. AI-generated material has already infiltrated election campaigns around the world.

The new rules follow a related proposal announced in May that would consider requiring disclosure of AI-generated content in political ads that air on radio or television. The agency has not said whether it will also address that issue next month.

A May survey by cloud call center provider Talkdesk found that 21% of respondents expect their vote to be influenced by election deepfakes and misinformation, while 31% said they fear they won’t be able to reliably distinguish real election content from fake.

A new proposal from Rep. Rick Allen, R-Ga., would make calls created by artificial intelligence tools "subject to the same regulations and standards as traditional telephone systems," Nextgov/FCW reported last week.