1 in 4 people have experienced identity fraud – and most of them blame AI
As technology evolves, so does fraud, and that is as true of the most-hyped innovation of the moment, artificial intelligence (AI), as of anything else. AI is all the rage these days, whether as a new feature on the latest cell phone, a tool that generates images, or a way to create podcasts in seconds.
Unfortunately, AI is also popular in the criminal underworld. Many scams use AI, from phone calls featuring a sobbing, realistic-sounding family member who claims to have been kidnapped, to fake invoices for cryptocurrency purchases, to an AI-powered scam that has infiltrated the knitting and crocheting world.
Also: What is Microsoft’s Copilot Labs, and how does it compare to Google Labs?
According to a recent study from Censuswide, more than 25% of people surveyed said they had fallen victim to an identity fraud scam, and the majority of those victims blamed AI.
While the scams these people fell victim to might not have involved AI, the potential for deepfakes was their biggest concern about the future. When asked about the biggest threat to identity security, 78% of respondents pointed to the misuse of AI.
Also: Can synthetic data solve AI’s privacy concerns? This company is betting on it
As many as 70% of respondents said they encounter deepfaked material at least once a week. Fewer than half said they felt confident they could tell when something was an AI-created fake.
These numbers highlight that, while AI is seemingly everywhere, many people realize it brings significant risks, too.
So, what’s the solution? More than half (55%) of respondents believed that current technology isn’t enough to protect our identities and that better detection technology is the best way to spot deepfakes. Another large proportion (45%) believed legal measures are the best route, including stricter regulations and laws governing AI.
Also: Google’s AI podcast tool transforms your text into stunningly lifelike audio – for free
The research makes clear that people understand the downsides of AI. However, they also believe they lack the knowledge to overcome those downsides, and they don’t fully trust other technology to help. The results are a warning that we should all be more vigilant.