FCC proposes $6M fine for AI-generated robocall spoofing Biden’s voice
The Federal Communications Commission (FCC) has proposed a hefty $6 million fine against a political consultant for allegedly using AI-generated voice cloning and caller ID spoofing to spread election-related misinformation.
“Political consultant Steve Kramer was responsible for the calls and now faces a $6 million proposed fine for perpetrating this illegal robocall campaign on January 21, 2024,” the FCC said in a statement.
The FCC alleged that Kramer orchestrated a robocall campaign featuring a deepfake of President Joe Biden’s voice that urged New Hampshire voters not to participate in the January primary, asking them to “save your vote for the November election.”
Kramer’s action, conducted just two days before the presidential primary, violated the “Truth in Caller ID Act,” the FCC said. This law prohibits the transmission of false or misleading caller ID information with the intent to defraud, cause harm, or wrongfully obtain value.
“We will act swiftly and decisively to ensure that bad actors cannot use U.S. telecommunications networks to facilitate the misuse of generative AI technology to interfere with elections, defraud consumers, or compromise sensitive data,” Loyaan A. Egal, chief of the FCC’s Enforcement Bureau and chair of its Privacy and Data Protection Task Force, said in the statement.
The FCC is also taking action against Lingo Telecom for its role in facilitating the illegal robocalls, the statement added.
“Lingo Telecom transmitted these calls, incorrectly labeling them with the highest level of caller ID attestation, making it less likely that other providers could detect the calls as potentially spoofed. The Commission brought a separate enforcement action today against Lingo Telecom for apparent violations of STIR/SHAKEN for failing to utilize reasonable ‘Know Your Customer’ protocols to verify caller ID information in connection with Mr. Kramer’s illegal robocalls.”
The Commission has made clear that calls made with AI-generated voices are “artificial” under the Telephone Consumer Protection Act (TCPA), confirming that the FCC and state Attorneys General have the needed tools to go after bad actors behind these nefarious robocalls, the statement added. “In addition, the FCC launched a formal proceeding to gather information on the current state of AI use in calling and texting and ask questions about new threats, like robocalls.”
Echoes of a wider debate
This incident reignites concerns over the potential misuse of deepfakes, a technology that can create realistic and often undetectable audio and video forgeries.
Earlier this month, actress Scarlett Johansson raised similar concerns, alleging that OpenAI used her voice without consent in its AI application. She alleged that the “Sky” chat voice sounded “eerily similar” to her own. OpenAI, however, quickly denied the allegation.
“The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” OpenAI CEO Sam Altman said in a statement. “We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products.”
“Johansson’s case highlights broader ethical and legal challenges surrounding AI-generated content and the need for stringent regulations to protect individuals’ privacy and identities,” said Faisal Kawoosa, founder and chief analyst at the technology research and consulting firm, Techarc.