Most AI voice cloning tools aren't safe from scammers, Consumer Reports finds

AI voice cloning technology has made remarkable advances in the last few years and can now produce realistic-sounding audio from just a few seconds of sample speech. Although this has many positive applications, such as audiobooks and marketing materials, the technology can also be exploited for elaborate scams, fraud, and other harmful purposes.
To learn more about the safeguards currently in place for these products, Consumer Reports assessed six of the leading voice cloning tools: Descript, ElevenLabs, Lovo, PlayHT, Resemble AI, and Speechify. Specifically, Consumer Reports was looking for proper safeguards that prevent someone's voice from being cloned without their knowledge.
Also: Got a suspicious E-ZPass text? It’s a trap – how to spot the scam
Consumer Reports found that four of the six products, those from ElevenLabs, Speechify, PlayHT, and Lovo, lacked the technical mechanisms needed to prevent someone's voice from being cloned without their knowledge or to limit cloning to the user's own voice.
Instead, the protection was limited to a box users had to check off, confirming they had the legal right to clone the voice. The researchers found that Descript and Resemble AI were the only companies with additional steps in place that made it more challenging for customers to do non-consensual cloning.
Descript asked the user to read and record a consent statement and used that audio to generate the clone. Resemble AI takes a different approach, ensuring that the first voice clone created is based on audio recorded in real time. Neither method is foolproof, since a user could simply play an AI-cloned snippet or an existing video from a different device during the recording.
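As a rough illustration of how a consent-statement check like the one described above could work in principle, here is a minimal Python sketch that compares a transcribed recording against an expected script using fuzzy matching. The script wording, similarity threshold, and the separate speech-to-text step it assumes are all illustrative assumptions; this is not Descript's or Resemble AI's actual implementation.

```python
# Hypothetical sketch of a consent-statement check, loosely modeled on the idea
# of asking a user to read a specific script before cloning. Not any vendor's
# actual code; the script text and threshold are assumptions for demonstration.
import re
from difflib import SequenceMatcher

CONSENT_SCRIPT = (
    "I consent to the creation of a synthetic copy of my voice "
    "and confirm that this is my own voice."
)

def normalize(text: str) -> str:
    """Lowercase and drop punctuation so minor transcription noise is ignored."""
    return re.sub(r"[^a-z0-9 ]+", "", text.lower()).strip()

def consent_matches(transcript: str, script: str = CONSENT_SCRIPT,
                    threshold: float = 0.85) -> bool:
    """Return True if the transcribed recording is close enough to the script.

    `transcript` is assumed to come from a separate speech-to-text step (not shown).
    """
    ratio = SequenceMatcher(None, normalize(transcript), normalize(script)).ratio()
    return ratio >= threshold

# A transcript with small recognition errors still passes; unrelated audio does not.
print(consent_matches("i consent to the creation of a synthetic copy of my "
                      "voice and confirm this is my own voice"))              # True
print(consent_matches("hey can you send me five hundred dollars right now"))  # False
```

Even a check like this only verifies that the expected words were spoken, which is why it can be defeated by playing back synthetic or pre-recorded audio from another device, as noted above.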
A common use of non-consensual cloning is scamming people. For example, a popular attack involves cloning the voice of a family member and then using that recording to contact a loved one to request that money be sent to help them out of a dire situation. Because the victim thinks they are hearing the voice of a family member in distress, they are more likely to send whatever funds are necessary without questioning the situation.
Also: Tax scams are getting sneakier – 10 ways to protect yourself before it’s too late
Voice cloning has also been used to influence voters, as seen in the 2024 election cycle when someone cloned former President Joe Biden's voice in robocalls discouraging people from going to the polls.
Consumer Reports also found that Speechify, Lovo, PlayHT, and Descript only required an email and name for a user to create an account. Consumer Reports recommends that these companies also collect customers’ credit card information to trace fraudulent audio back to the bad actor.
Other Consumer Reports recommendations include mechanisms to verify ownership of the voice, such as reading off a unique script, watermarking AI-generated audio, creating a tool that detects AI-generated audio, detecting and preventing the cloning of the voices of influential or public figures, and prohibiting audio containing scam phrases.
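To illustrate one of those recommendations, the sketch below shows a naive scam-phrase screen that a synthesis service could run on a requested script before generating audio. The phrase list and simple substring matching are purely illustrative assumptions, not any vendor's real policy; a production system would need far more robust detection.

```python
# Illustrative sketch of screening a requested script for common scam phrases
# before audio is synthesized. Phrase list and matching rule are assumptions.
SCAM_PHRASES = [
    "wire the money",
    "gift card",
    "don't tell anyone",
    "i'm in jail",
    "send bail money",
    "urgent, i need cash",
]

def flags_scam_language(text: str) -> list[str]:
    """Return any scam phrases found in the requested script."""
    lowered = text.lower()
    return [phrase for phrase in SCAM_PHRASES if phrase in lowered]

request = "Grandma, I'm in jail and need you to wire the money tonight. Don't tell anyone."
hits = flags_scam_language(request)
if hits:
    print("Blocked synthesis request; matched phrases:", hits)
```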
The biggest departure from the current system would be Consumer Reports' proposal to have a person supervise voice cloning rather than the current do-it-yourself approach. Consumer Reports also said contractual agreements should emphasize that the relevant actors understand their liability should the voice model be misused.
Also: How Cisco, LangChain, and Galileo aim to contain ‘a Cambrian explosion of AI agents’
Consumer Reports believes companies have a legal obligation under Section 5 of the Federal Trade Commission Act to protect their products from being used for harm, which it says can only be done by adding more protections.
If you receive an urgent call from someone you know demanding money, don’t panic. Use another device to directly contact that person to verify the request. If you cannot make contact with that person, you can also ask the caller questions to verify their identity. For a full list of how to protect yourself from AI scam calls, check out ZDNET’s advice here.