Will deepfake threats undermine cybersecurity in 2025?

Deepfake technology is a new and unsettling reality. When embedded within phishing emails and collaboration chats, deepfakes represent a vicious and growing threat. To combat deepfakes, organizations urgently need to implement fresh approaches to both employee education and cybersecurity.

To convey the gravity of the situation: in February of 2024, a multinational firm's Hong Kong office lost $25 million due to deepfake impersonation of the firm's CFO and other high-level executives. An employee, taken in by the deepfake ruse on a video call, sent the funds to illegitimate accounts.

Similarly, in the second half of 2024, a North Korean threat actor used a deepfake identity to convince KnowBe4, the security awareness training company, to hire him. That's right: a deepfake fooled a cybersecurity company.

These examples point to threat actors' deliberate and effective attempts to dupe employees and to infiltrate global organizations and their supply chains. Motives range from financial gain to espionage.

Addressing responsibility 

One of the key issues in combating deepfake threats is that some of the protective technologies simply don't exist yet. It also remains unclear which security provider, and which mechanisms, should be responsible for protection. Is it endpoint security? Is it the telecommunications platform?

The ambiguity means that organizations need to be proactive about deepfake prevention, starting with employee awareness and training. Organizations should also deploy best-in-class technologies across their cybersecurity stack, from email to endpoint to mobile security, to block attackers from infiltrating and then impersonating both employees and business partners. As I mentioned in my 2025 cyber predictions, hackers won't just steal your data or your access credentials; they'll disrupt financial transactions, corporate decisions, and brand reputation.

For example, if attackers send a meeting-bridge link via email, advanced email security can identify the potential impersonation attempt and block the message before it ever reaches the employee.
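To make that concrete, here is a minimal sketch of the kind of heuristic such a layer might apply. Everything in it is illustrative: the trusted-domain list, the similarity threshold, and the helper names are assumptions, and real email security products combine far richer signals than this.

```python
import re
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example-corp.com"}  # assumption: the org's real domains
MEETING_LINK = re.compile(r"https?://\S*(zoom|teams|meet|webex)\S*", re.I)

def is_lookalike(sender_domain: str) -> bool:
    """Flag domains that closely resemble, but don't match, a trusted domain."""
    return any(
        sender_domain != trusted
        and SequenceMatcher(None, sender_domain, trusted).ratio() > 0.8
        for trusted in TRUSTED_DOMAINS
    )

def should_quarantine(sender: str, subject: str, body: str) -> bool:
    """Hold a message for review when it pairs a meeting link with red flags."""
    domain = sender.rsplit("@", 1)[-1].lower()
    has_meeting_link = bool(MEETING_LINK.search(body))
    urgency = any(w in subject.lower() for w in ("urgent", "wire", "confidential"))
    # A meeting invite from a lookalike domain, or one pushing urgency, is a
    # classic setup for a deepfake video call.
    return has_meeting_link and (is_lookalike(domain) or urgency)
```

A message that trips the check can be quarantined or tagged with a warning banner rather than silently dropped, so that legitimate invites are easy to recover.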

Mastering the mind

As a society, by and large, we are used to trusting what we see with our eyes and what we hear with our ears. Many people continue to trust rich media: video and audio. With deepfake technologies, all of that can be faked. We therefore need to teach our employees to be more suspicious and to give them new heuristics with which to assess what they see and hear.

Several years ago, when text message scams ramped up and began to impersonate commonly used services, like UPS, FedEx or banks, we learned to stop trusting text messages at face value. We will need to do the same with video and audio in order to prevent the hijacking of our perceptions.

Within organizations, cybersecurity training should teach employees to think in terms of “zero trust”. Encourage employees to suspect and question everything that they see.

It is imperative that we train employees to implement a mental two-factor authentication, where they verify any request through a trusted and independent channel — one that a hacker is unlikely to compromise.

Furthermore, employees need to consider what they are being asked to do and how they can verify that the request is genuine. It could be as simple as phoning the person back after looking up their number in a separate, verified source.
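As a rough illustration, the sketch below encodes that callback habit as an explicit procedure. The corporate_directory lookup is a hypothetical stand-in for whatever trusted, independently maintained contact source an organization actually uses, such as an HR system.

```python
# Sketch: the "verify out-of-band" habit written down as a procedure.
# `corporate_directory` is a hypothetical stand-in for a trusted,
# independently maintained source of contact details.
corporate_directory = {"cfo@example-corp.com": "+1-555-0100"}

def verification_plan(request_channel: str, requester: str) -> str:
    """Suggest an independent channel for confirming a sensitive request."""
    known_number = corporate_directory.get(requester)
    if known_number is None:
        return "Requester not in the directory: escalate to the security team."
    if request_channel == "phone":
        # Never call back the number that called you; it may be spoofed.
        return f"Confirm in person or via chat, then call {known_number}."
    return f"Call {known_number}, taken from the directory, to confirm."
```

The point is not the code itself but the rule it encodes: the verification channel must be chosen independently of the request, never supplied by it.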

Comprehensive and advanced

Organizations not only need comprehensive cybersecurity, as mentioned previously, but also advanced technology to prevent deceptive threats.

For example, AI-based stylometry can detect attacks that would bypass human observation: emails or documents that look ordinary but are actually inauthentic. A person wouldn't be able to distinguish the fake from the real, but a software solution would.
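As a toy version of the idea, the sketch below fingerprints a sender's writing style with character n-grams and scores new messages against that baseline. It assumes scikit-learn is available; the feature choices and any alert threshold are illustrative, not a production detector.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def build_profile(known_messages):
    """Fit a character n-gram model on a sender's known-good messages."""
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    baseline = np.asarray(vectorizer.fit_transform(known_messages).mean(axis=0))
    return vectorizer, baseline

def style_score(vectorizer, baseline, new_message):
    """Cosine similarity to the sender's baseline; low scores are suspicious."""
    vec = vectorizer.transform([new_message])
    return float(cosine_similarity(vec, baseline)[0, 0])
```

A score well below the sender's historical range (the cutoff would need tuning on real data) would route the message for closer inspection rather than block it outright.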

More information

Experts anticipate that, over time, communication platforms will improve their security so as to detect and block deepfakes. I foresee that, within a year or two, we'll see more of these technologies embedded into telecommunications platforms.

Further, communications platforms will likely increase their capacity to confirm real identities while blocking impersonators. That said, additional security from cybersecurity vendors will prove critical; historically, default security hasn't stopped sophisticated and targeted attacks.

A deep-dive into DLP

Over the last decade, we've seen an increase in DLP (data loss prevention) technology adoption, especially when it comes to monitoring sensitive content in cloud-based repositories. According to Statista's projection, the DLP market will nearly triple, growing from $1.24 billion in 2019 to $3.5 billion by 2025.

However, implementing DLP inside live conversations is still a challenge, and it's not yet clear what the value and user experience would be. It also remains to be seen whether enterprise customers will look to expand DLP capabilities into live conversations.

Nonetheless, this may represent a strategic approach that can protect employees from sharing sensitive content, should they fall victim to a deepfake scam.
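As a sketch of what inspecting a live conversation could look like, the snippet below pattern-matches each chat message or transcript segment before it leaves the platform. The patterns are illustrative placeholders; real DLP engines rely on validated detectors, context, and fingerprints of known sensitive documents.

```python
import re

# Illustrative patterns only; production DLP is far more sophisticated.
PATTERNS = {
    "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "wire detail": re.compile(r"\b(IBAN|SWIFT|routing number)\b", re.I),
}

def scan_utterance(text: str) -> list[str]:
    """Return the sensitive-data categories found in one message or segment,
    so the platform can warn the user or block the content in real time."""
    return [label for label, rx in PATTERNS.items() if rx.search(text)]

# Example: warn before a message posts to the meeting chat.
hits = scan_utterance("Sure, the routing number is 021000021.")
if hits:
    print(f"Hold on: this message appears to contain a {', '.join(hits)}.")
```

Even a coarse check like this could interrupt the moment of compromise, giving an employee on a deepfaked call a chance to pause and verify.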

Further thoughts

As deepfake technology continues to advance, organizations need not only to track trends, but also to stay flexible and adaptable in their cybersecurity strategies.

By focusing on a combination of employee training, comprehensive security measures, and agile security tooling, businesses will be better able to protect themselves from sophisticated AI and deepfake threats.


