Cybersecurity: Watch out for these unique fraudster tricks Loki would be proud of
Online fraud is getting sneakier and stealthier as mischievous operatives evolve their techniques. Learn some of the unique tricks afoot today and how to spot them.
My family and I enjoyed the TV show “Loki,” which follows the god of mischief through various shenanigans. It made me think about how many “variants” of Loki are online, albeit less charismatic ones, trying to pull off fraudulent tricks to bilk victims out of money or identity information. Unfortunately, the tricks are growing more inventive as the related tools become more sophisticated and widely available.
SEE: Security incident response policy (TechRepublic Premium)
I spoke about unique fraudster gimmicks with Rick Song, co-founder and CEO of Persona, an identity verification solution provider, and Johanna Baum, founder and CEO of security provider Strategic Security Solutions.
Song pointed out that it’s now easier for malicious players with minimal technology skills to engage in fraudulent activity. “Deepfakes were once only deployed by sophisticated tech users, but are now as simple as downloading an app,” he said.
Song said there is a huge increase in deepfake content on the internet, and amateurs have started to produce impersonation videos of celebrities that are believable to the untrained eye. “Fraudsters can use deepfakes for bribery, to spread lies and misinformation or to pose as everyday individuals—such as someone you trust—in order to get personal information and login credentials. This enables them to open fake credit card accounts, take over existing accounts, steal money from unsuspecting victims or access entire databases of user information to sell on the dark web,” he said.
Another example is synthetic fraud, in which a fraudster steals a Social Security number and combines it with fake information, such as a false name and date of birth, to create a new, false identity. Fraudsters can also use artificial intelligence and other technology to mimic facial features and defeat biometric ID verification and facial recognition software. “Companies need to take into account multiple signals, like behavioral patterns, to ensure they’re evaluating a complete picture that a fraudster can’t easily replicate,” Song said.
Baum said most information is available without any victim input: “Personal information is readily available from multiple social media platforms. Fraudster tricks don’t have to involve the victim directly. So much information can be obtained without any contact. The threats will continue to increase in size, frequency and damage with significant elapsed time for the victim to identify it.
“Vacation and pandemic-related scams are continuing to increase as individuals re-enter mainstream activities,” she added. “Homes are being rented to multiple parties by imposters who then cancel the rentals and steal the deposits. There are similar issues with multiple transactions placed on the same autos or autos not owned by the so-called seller.”
Song said users and IT departments should pay close attention to personally identifiable information to safeguard against these attacks.
SEE: How to manage passwords: Best practices and security tips (free PDF) (TechRepublic)
“Consumers should pay attention to how companies handle their PII,” he said. “They should read privacy policies, check their privacy settings on any apps, websites or social networks they use, and know their data rights. Additionally, they can stay on top of the latest phishing campaigns, especially as these get more sophisticated with AI. If you Google your name, you may be able to uncover brokers who are selling your data and opt out. The more of your data that is out there for fraudsters to find, the more ammo they have to steal your identity.”
He added that IT departments need to rethink their strategies from identity verification systems to data storage in order to keep consumers safe and block bad actors.
“IDV solutions can mitigate account takeovers and fraud that can happen during customer onboarding, changing account information and high-risk transactions. However, businesses might be reluctant to use them if they think it will hurt the user experience—asking the customer to jump through hoops could cause them to abandon the transaction before it can be completed. Businesses should shop around for identity verification solutions that are customizable to their needs. IT departments should make sure they store data in as few places as possible, utilize encryption and build a framework that is compliant with global security and privacy standards.”
Song said it can be hard to anticipate what’s coming but advised: “Companies can protect themselves by using a multi-factor identity verification approach. Companies should collect several points of information (photo ID, address, date of birth, SSN) while evaluating passive signals such as IP address or browser fingerprints and check collected information against third-party data sources (e.g. phone and email risk lists). Performing continuous checks throughout the entire customer lifecycle where there’s a potential point of compromise is critical.”
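To make the multi-signal approach Song describes concrete, here is a minimal sketch of how declared information, passive signals and third-party risk lists might feed one risk decision. The field names, risk lists and threshold are illustrative assumptions, not Persona's actual product or API:

```python
# Illustrative sketch of multi-signal identity verification.
# All field names, risk lists and thresholds below are hypothetical.

HIGH_RISK_IPS = {"203.0.113.7"}               # e.g. known proxy/VPN exit nodes
HIGH_RISK_EMAIL_DOMAINS = {"mailinator.com"}  # disposable-email providers

def verify_applicant(applicant: dict) -> dict:
    """Combine declared data and passive signals into a risk decision."""
    risk = 0
    reasons = []

    # 1. Declared information should be complete (photo ID, address, DOB, SSN).
    for field in ("photo_id", "address", "date_of_birth", "ssn"):
        if not applicant.get(field):
            risk += 2
            reasons.append(f"missing {field}")

    # 2. Passive signals: IP address and browser fingerprint.
    if applicant.get("ip") in HIGH_RISK_IPS:
        risk += 3
        reasons.append("high-risk IP")
    if not applicant.get("browser_fingerprint"):
        risk += 1
        reasons.append("no browser fingerprint")

    # 3. Third-party data sources, e.g. email risk lists.
    domain = applicant.get("email", "").rpartition("@")[2]
    if domain in HIGH_RISK_EMAIL_DOMAINS:
        risk += 3
        reasons.append("risky email domain")

    # Low total risk passes; anything else gets stepped up for review.
    return {"approved": risk < 3, "risk": risk, "reasons": reasons}
```

The point of the sketch is that no single check decides the outcome: a fraudster who fakes one signal still has to replicate all the others at once.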
But things will change, he added. “In the future the identity verification experience will need to evolve based on what services are seeing with real-time signals. For example, as the service detects a suspicious IP address or keystroke behavior, it should adapt its requirement automatically and perform more rigorous checks.”
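The adaptive flow Song describes, where suspicious real-time signals trigger more rigorous checks, might be sketched as a simple step-up policy. The signal names and verification tiers here are invented for illustration:

```python
# Hypothetical step-up verification: escalate checks as live signals look riskier.

def required_checks(signals: dict) -> list:
    """Map real-time risk signals to the verification steps to demand."""
    checks = ["password"]                    # baseline for every session
    if signals.get("new_device"):
        checks.append("email_otp")           # one-time code on unknown devices
    if signals.get("suspicious_ip") or signals.get("odd_keystroke_timing"):
        checks.append("government_id_scan")  # most rigorous tier
    return checks
```

A low-risk session sees only the baseline check, while a suspicious IP or unusual keystroke behavior automatically adds the most rigorous tier, without changing the experience for ordinary users.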