The Deepfakes Lab: Detecting & Defending Against Deepfakes with Advanced AI
Detrimental lies are not new. Even misleading headlines and text can fool a reader. However, the ability to alter reality has taken a leap forward with “deepfake” technology, which allows for the creation of images and videos of real people saying and doing things they never said or did. Deep learning techniques are escalating the technology’s finesse, producing ever more realistic content that is increasingly difficult to detect.
Deepfakes began to gain attention when a fake pornographic video featuring a “Wonder Woman” actress was released on Reddit in late 2017 by a user with the pseudonym “deepfakes.” Several doctored videos featuring high-profile celebrities have since been released, some purely for entertainment value and others portraying public figures in a demeaning light. This presents a real threat: the internet already distorts the truth, as information on social media is presented and consumed through the filter of our own cognitive biases.
Deepfakes will intensify this problem significantly. Celebrities, politicians, and even commercial brands can face unique forms of threats, intimidation, and personal image sabotage. The risks to our democracy, justice system, politics, and national security are serious as well. Imagine a dark web economy in which deepfakers produce misleading content that is released to the world to influence which car we buy, which supermarket we frequent, and even which political candidate receives our vote. Deepfakes can touch all areas of our lives; hence, basic protection is essential.
How are Deepfakes Created?
Deepfakes are a cutting-edge application of Artificial Intelligence (AI), often leveraged by bad actors to generate increasingly realistic and convincing fake images, videos, voice, and text. These videos are created by superimposing existing images, audio, and video onto source media files using an advanced deep learning technique called “Generative Adversarial Networks” (GANs). GANs are a relatively recent concept in AI that aims to synthesize artificial images indistinguishable from authentic ones. The GAN approach sets two neural networks to work simultaneously: one network, called the “generator”, draws on a dataset to produce a sample that mimics it, while the other network, known as the “discriminator”, assesses the degree to which the generator succeeded. Iteratively, the discriminator’s assessments feed back into the generator’s next attempts. The increasing sophistication of GAN approaches has led to deepfakes that are ever more convincing and nearly impossible to expose, produced at a speed, scale, and nuance that far exceed what human reviewers could match.
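For readers curious about the mechanics, the sketch below shows a minimal version of this adversarial loop in PyTorch. The architectures, dimensions, and hyperparameters are illustrative toy choices, not those of any production deepfake system.

```python
# Minimal GAN training loop in PyTorch, illustrating the generator/discriminator
# interplay described above. All architectures and hyperparameters are toy examples.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),          # produces a fake "image"
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),             # probability the input is real
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator to tell real samples from generated ones.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_images), real_labels) + \
             loss_fn(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to fool the discriminator: the discriminator's
    #    feedback (its "assessment") drives the generator's next update.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```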
McAfee Deepfakes Lab Applies Data Science Expertise to Detect Bogus Videos
To mitigate this threat, McAfee today announced the launch of the McAfee Deepfakes Lab to focus the company’s world-class data science expertise and tools on countering the deepfake menace to individuals, organizations, democracy, and the overall integrity of information across our society. The Deepfakes Lab combines computer vision and deep learning techniques to exploit hidden patterns and detect the manipulated elements of a video, which plays a key role in authenticating original media files.
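To make the general shape of such a pipeline concrete (this is a hedged sketch under our own assumptions, not the Deepfakes Lab’s actual implementation), the snippet below samples frames from a video, scores each with a hypothetical binary “real vs. manipulated” classifier, and averages the per-frame scores into a video-level verdict.

```python
# Illustrative frame-level detection pipeline (not McAfee's actual system):
# sample video frames, score each with a binary classifier, aggregate the scores.
import cv2                     # pip install opencv-python
import torch
from torchvision import models, transforms

# Hypothetical classifier: a ResNet-18 with a single-logit head, assumed to have
# been fine-tuned elsewhere on labelled real/deepfake face crops.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 1)
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def score_video(path, every_n=30):
    """Return the mean manipulation probability over sampled frames."""
    cap, scores, idx = cv2.VideoCapture(path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            with torch.no_grad():
                logit = model(preprocess(rgb).unsqueeze(0))
            scores.append(torch.sigmoid(logit).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else None
```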
To ensure that the deep learning framework’s predictions, and the reasoning behind each one, are understandable, we spent a significant amount of time visualizing the layers and filters of our networks, then added a model-agnostic explainability framework on top of the detection framework. Having an explanation for each prediction helps us make an informed decision about how much to trust the image and the model, and provides insights that can be used to improve the model itself.
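One common model-agnostic way to produce such explanations is occlusion analysis: mask regions of the input and observe how the prediction changes. The sketch below illustrates that general idea for a hypothetical single-logit detector; it is not the Deepfakes Lab’s explainability module.

```python
# A minimal, model-agnostic occlusion heatmap: zero out patches of the input and
# measure how the detector's score changes. Patches whose removal lowers the
# "fake" score the most are the ones driving the prediction.
import numpy as np
import torch

def occlusion_heatmap(model, image, patch=16):
    """image: float tensor of shape (1, C, H, W), already preprocessed."""
    with torch.no_grad():
        base = torch.sigmoid(model(image)).item()
    _, _, H, W = image.shape
    heat = np.zeros((H // patch, W // patch))
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            masked = image.clone()
            masked[:, :, i:i + patch, j:j + patch] = 0    # occlude one patch
            with torch.no_grad():
                score = torch.sigmoid(model(masked)).item()
            heat[i // patch, j // patch] = base - score   # importance of patch
    return heat
```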
We also performed detailed validation and verification of the detection framework on a large dataset and tested its detection capability on deepfake content found in the wild. Our framework was able to detect a recent deepfake video of Facebook’s Mark Zuckerberg giving a brief speech about the power of big data. The tool not only provided an accurate detection score but also generated heatmaps, via the model-agnostic explainability module, highlighting the parts of his face that contributed to the decision, thereby adding trust to our predictions.
Such easily available deepfakes reiterate the challenges that social networks face when it comes to policing manipulated content. As advancements in GAN techniques produce increasingly realistic-looking fake images, more advanced computer vision techniques will need to be developed to identify and detect these sophisticated forms of deepfakes. Additionally, steps need to be taken to defend against deepfakes by making use of watermarks or authentication trails.
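As a simple illustration of an authentication trail (a sketch of one possible approach, not a deployed standard), a publisher could sign a cryptographic hash of a media file at release time, so that any later manipulation breaks verification.

```python
# One simple authentication-trail idea: sign a cryptographic hash of the original
# file at publication time; any subsequent edit invalidates the signature.
# Uses the `cryptography` package (pip install cryptography); keys are illustrative.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held by the publisher
public_key = private_key.public_key()        # distributed to verifiers

def sign_media(path):
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return private_key.sign(digest)          # distribute alongside the media

def verify_media(path, signature):
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    try:
        public_key.verify(signature, digest)  # raises if the file was tampered with
        return True
    except InvalidSignature:
        return False
```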
Sounding the Alarm
We realize that the news media have considerable power in shaping people’s beliefs and opinions, and as a consequence, their truthfulness is often compromised to maximize impact. The dictum “a picture is worth a thousand words” underscores the significance of the deepfake phenomenon: credible yet fraudulent audio, video, and text will have a far greater impact, and can be used to ruin celebrity and brand reputations or sway political opinion, with terrifying implications. Computer vision and deep learning detection frameworks can authenticate and detect fake visual media and text content, but the damage to reputations and the influence on opinion can remain even after detection.
In launching the Deepfakes Lab, McAfee will work with traditional news and social media organizations to identify malicious deepfake videos during this crucial 2020 national election season and help combat the new wave of disinformation associated with deepfakes.
In our next blog on deepfakes, we will demonstrate our detailed detection framework. With this framework, we will be helping to battle disinformation and minimize the growing challenge of deepfakes.
To engage the services of the McAfee Deepfakes Lab, news and social media organizations may submit suspect video for analysis by sending content links to media@mcafee.com.