#ISC2Congress: Cybersecurity Pros Must Prepare for Emerging Deepfake Threats

Deepfakes pose an emerging security risk to organizations, stated Thomas P. Scanlon, CISSP, technical manager – CERT Data Science, Carnegie Mellon University, during a session at the (ISC)2 Security Congress this week.

Scanlon began his talk by explaining how deepfakes work, knowledge he emphasized cybersecurity professionals need in order to protect against the threats the technology poses. He noted that organizations are starting to become aware of this risk. “If you’re in a cybersecurity role in your organization, there is a good chance you will be asked about this technology,” commented Scanlon.

He believes deepfakes are part of a broader ‘malinformation’ trend, which differs from disinformation in that it “is based on truth but is missing context.”

Deepfakes can encompass audio, video and image manipulations, or they can be completely fake creations. Examples include face swaps of individuals, lip syncing, puppeteering (synthetically controlling a person’s movements and speech) and creating people who don’t exist.

Currently, the two neural network architectures used to create deepfakes are auto-encoders and generative adversarial networks (GANs). Both require substantial amounts of data to be ‘trained’ to recreate aspects of a person. Creating accurate deepfakes therefore remains very challenging, but “well-funded actors do have the resources.”
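
To make the mechanics concrete, below is a minimal sketch, assuming PyTorch and toy dimensions, of the shared-encoder, two-decoder auto-encoder design used in many face-swap deepfakes: a single encoder learns a common facial representation, each identity gets its own decoder, and swapping decoders at inference performs the face swap. All module names and sizes here are illustrative, not from Scanlon’s session.

```python
# Minimal sketch (PyTorch) of the shared-encoder / two-decoder
# auto-encoder design behind many face-swap deepfakes. All sizes
# and names are illustrative placeholders.
import torch
import torch.nn as nn

LATENT = 128  # size of the shared facial representation

def make_encoder():
    # Maps a flattened 64x64 RGB face crop to the shared latent code.
    return nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, LATENT), nn.ReLU())

def make_decoder():
    # Reconstructs a face image from the shared latent code.
    return nn.Sequential(nn.Linear(LATENT, 64 * 64 * 3), nn.Sigmoid())

encoder = make_encoder()                               # shared by both identities
decoder_a, decoder_b = make_decoder(), make_decoder()  # one decoder per person

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in batches; real systems need thousands of varied face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for _ in range(10):  # a real run trains for many epochs
    opt.zero_grad()
    recon_a = decoder_a(encoder(faces_a)).view_as(faces_a)
    recon_b = decoder_b(encoder(faces_b)).view_as(faces_b)
    loss = loss_fn(recon_a, faces_a) + loss_fn(recon_b, faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode with person B's decoder.
swapped = decoder_b(encoder(faces_a[:1])).view(1, 3, 64, 64)
```

The data requirement Scanlon mentions lives in exactly this training loop: both decoders need many varied images of their targets before the swap becomes convincing.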

Increasingly, organizations are being targeted through deepfakes in numerous ways, particularly fraud. Scanlon highlighted the case of a CEO who was duped into transferring $243,000 to fraudsters after deepfake voice technology convinced him he was speaking with his parent company’s chief executive. This was the “first known instance of somebody using deepfakes to commit a crime.”

He also noted that there have been a number of cases of malicious actors using video deepfakes to pose as candidates in virtual job interviews, for example by using the LinkedIn profile of someone who would be qualified for the role. Once employed, they planned to use their access to the company’s systems to steal sensitive data, a threat the FBI recently warned employers about.

While deepfake detection technologies are being developed, they are currently not as effective as they need to be. In 2020, AWS, Facebook, Microsoft, the Partnership on AI’s Media Integrity Steering Committee and others organized the Deepfake Detection Challenge, a competition that allowed participants to test their deepfake detection technologies.

In this challenge, the best model detected deepfakes from Facebook’s collection 82% of the time. When the same algorithm was run against previously unseen deepfakes, just 65% were detected. This shows that “current deepfake detectors aren’t practical right now,” according to Scanlon.

Companies like Microsoft and Facebook are creating their own deepfake detectors, but these are not commercially available yet.

Therefore, at this stage, cybersecurity teams must become adept at identifying practical cues for fake audio, video and images. These include flickering, lack of blinking, unnatural head movements and mouth shapes.
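
As an illustration of how one of these cues could be checked automatically, the sketch below estimates blink rate using the eye aspect ratio (EAR), a common landmark-based heuristic that drops sharply when the eye closes. The landmark layout, thresholds and simulated trace are assumptions for demonstration, not a production detector.

```python
# Illustrative blink-rate check based on the eye aspect ratio (EAR).
# The ratio falls sharply when the eye closes; an unusually low blink
# rate is one possible deepfake cue. Thresholds here are assumptions.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye contour."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count runs of at least min_frames consecutive frames below threshold."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            blinks += run >= min_frames
            run = 0
    return blinks + (run >= min_frames)

# A real pipeline would extract per-frame landmarks with a face-landmark
# library; here we simulate a 30-second EAR trace at 30 fps instead.
ear_trace = [0.3] * 900
for start in (100, 400, 700):            # three simulated blinks
    ear_trace[start:start + 3] = [0.1] * 3

blinks_per_minute = count_blinks(ear_trace) * 2   # 900 frames = 30 s
# People typically blink around 15-20 times per minute on camera.
if blinks_per_minute < 10:
    print(f"Suspiciously low blink rate: {blinks_per_minute}/min")
else:
    print(f"Blink rate looks normal: {blinks_per_minute}/min")
```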

Scanlon concluded his talk with a list of actions organizations can start taking now to tackle deepfake threats, which he expects to surge as the technology improves:

  • Understand the current capabilities for creation and detection
  • Know what can be done realistically and learn to recognize indicators
  • Be aware of practical ways to defeat current deepfake capabilities, such as asking the person on a video call to turn their head
  • Create a training and awareness campaign for your organization
  • Review business workflows for places deepfakes could be leveraged
  • Craft policies about what can be done through voice or video instructions
  • Establish out-of-band verification processes (a minimal sketch follows this list)
  • Watermark media – literally and figuratively
  • Be ready to combat MDM (misinformation, disinformation and malinformation) of all flavors
  • Eventually use deepfake detection tools
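
To make the out-of-band verification item concrete, here is a minimal sketch, using assumed channel names and a hypothetical wire-transfer scenario, of issuing a one-time code over a second, pre-registered channel before acting on a voice or video instruction:

```python
# Illustrative out-of-band verification: before acting on a voice or
# video instruction, deliver a one-time code over a second,
# pre-registered channel and require it back. All names are placeholders.
import hmac
import secrets

# Pre-registered second channels (illustrative directory).
OUT_OF_BAND_CHANNEL = {"ceo@example.com": "+1-555-0100 (SMS)"}

_pending = {}

def start_verification(requester):
    """Generate a one-time code and deliver it over the second channel."""
    code = f"{secrets.randbelow(10**6):06d}"
    _pending[requester] = code
    # Stand-in for a real SMS or phone-call delivery integration.
    print(f"[out-of-band] code {code} sent via {OUT_OF_BAND_CHANNEL[requester]}")

def confirm(requester, supplied_code):
    """Approve the request only if the read-back code matches."""
    expected = _pending.pop(requester, None)
    return expected is not None and hmac.compare_digest(expected, supplied_code)

# Usage: a "CEO" calls asking for an urgent transfer. Hold the request,
# trigger verification, and only proceed once the code is read back.
start_verification("ceo@example.com")
code_read_back = _pending["ceo@example.com"]  # simulate the real CEO replying
print("transfer approved" if confirm("ceo@example.com", code_read_back)
      else "transfer rejected")
```

The value is the second channel: even a flawless voice clone cannot read back a code delivered to the real executive’s registered device.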


