See No Evil, Hear No Evil: Deepfakes’ Use in Social Engineering Attacks


Artificial Intelligence (AI) is one of the most high-profile technology developments in recent history. It would appear that there is no end to what AI can do. From driverless cars, dictation tools, translator apps, predictive analytics, and application tracking to retail tools such as smart shelves and carts and apps that help people with disabilities, AI can be a powerful component of wonderful tech products and services. But it can also be used for nefarious purposes, and ethical considerations around the use of AI are in their infancy.

In their book Tools and Weapons, Brad Smith and Carol Ann Browne discuss the need for ethics in AI, and with good reason. Many AI services and products have come under scrutiny because they have negatively impacted certain populations, for example by exhibiting racial and gender bias or by making flawed predictions.

Voice Cloning and Deepfakes

Now, with AI-powered voice technology, anyone can clone a voice. This is exactly what happened to Bill Gates, whose voice was cloned by Facebook engineers, probably without his consent. Voice cloning is already being used for fraud. In 2019, fraudsters cloned the voice of a chief executive and successfully tricked a CEO into transferring a substantial sum of money. Similar crimes using the same technology have emerged since.

Voice cloning is not the only AI-related concern. The combination of voice cloning and video has given rise to what are known as deepfakes. With the help of software, anyone can create convincing and often hard-to-authenticate images or videos of someone else. This has cybersecurity experts worried, both because the technology is open source, making it available to anyone with skill and imagination, and because it is still largely unregulated, making it easy to use for nefarious purposes.

Similar to the Bill Gates voice cloning demonstration, a deepfake of Belgian Prime Minister Sophie Wilmès speaking about COVID-19 was released by a political group. One potential harm associated with deepfakes is the spread of misinformation. Another is that they can influence the opinions of ordinary people who may trust and look up to public figures. The person who is cloned can also suffer reputational damage, leading to loss of income or opportunities as well as psychological harm.

Deepfakes on LinkedIn

Recently, an article raising awareness about deepfake LinkedIn profiles told the story of a deepfake account that managed to gather hundreds of LinkedIn connections. The article also noted that profiles of cybersecurity professionals seemed to be of specific interest to this account. This is not surprising, as cybersecurity professionals often trust each other when it comes to security recommendations. Once one of these fake accounts is accepted as a LinkedIn connection, the information on the account could be used by a malicious actor to research the person and commit fraud. Once an account is added by a few cybersecurity professionals, it becomes easier for the fraudulent account to connect with similar people, as “social proof” lends authenticity to the new connection. Any future phishing attempt may then be more successful because it will mimic real life and appear benign. A new LinkedIn connection with common interests, knowledge, or expertise might, for example, ask for recommendations. From those recommendations, malicious actors could harvest valuable security insights that can be used against organizations.

As humans, we are socialized to trust and help friends, family, colleagues, and/or acquaintances. This is part of our social norms, and it helps us to thrive in life. We help people we know, and they help us in return. Scammers know and weaponize this by orchestrating scams that exploit social norms. The use of deepfakes could make their job a lot easier.

These new, sophisticated cyberattacks are worrying because what we see and hear is typically accepted as proof. For most people, distinguishing between a deepfake and a real voice or image is extremely hard, and even deepfake detectors can be evaded by those who know how. Until now, cybercriminals were hidden figures who eagerly avoided real-life touchpoints with other humans. Even with phone-based fraud (vishing), the fraudster was a stranger, so trust might not be readily extended. But with the help of deepfakes, fraudsters can orchestrate social engineering attacks that appear to come from a friend or colleague, that is, someone we know and trust and whose motives we do not think to question. This is precisely why the use of this technology seems to be rising, even though orchestrating a quality deepfake is not cheap and takes some skill. The returns on the original investment are potentially quite high, as more sophisticated scams tend to yield big gains for cybercriminals.

Deepfakes and the Future

One has to wonder what the fraudulent use of deepfakes will mean for society. As humans, we behave according to social and cultural norms. In most societies, people are taught to form friendships and social networks. At work, we are expected to collaborate and help our colleagues. But with this emerging threat, how will our norms and expected behaviors change? Suddenly, we need to treat each social interaction with the same caution we would apply to an interaction with a stranger.

If this heightened state of vigilance becomes the norm in order to detect fraud, how will it hinder collaboration, productivity, and camaraderie at work and among friends? What if such fraud becomes even more mainstream, so that a carefully orchestrated scam, such as a spoofed number combined with a deepfake of a family member's voice, becomes something to fear because there is no telling whether it is real? Will this change how we socialize and bond with others? How we trust? Perhaps only time will tell.
