How To Teach Your Kids About DeepFakes
Is it real? Is it fake?
Deepfake technology has certainly made everything far more complicated online. How do you know for sure what’s real? Can you actually trust anything anymore? Recently, a Hong Kong company lost A$40 million in a deepfake scam after an employee transferred the money following a video call with a scammer who looked like his boss! Even Oprah Winfrey and Taylor Swift have been affected, with deepfake scammers using their likenesses to promote dodgy online schemes. So, how do we get our heads around it, and just as importantly, how do we help our kids understand it? Don’t stress – I got you. Here’s what you need to know.
What Actually Is Deepfake Technology?
Deepfake technology is essentially photoshopping on steroids. It’s when artificial intelligence is used to create videos, voice imitations, and images of people doing and saying things they never actually did. The ‘deep’ comes from the type of artificial intelligence that is used – deep learning. Deep learning trains computers to process data and make predictions in a way that’s loosely inspired by how the human brain works.
When it first emerged around 2017, it was clunky and many of us could easily spot a deepfake. However, the technology has become increasingly sophisticated and convincing. And that’s the problem. It can be used to create great harm and disruption. Not only can it be used by scammers and dodgy operators to have celebrities appear to promote their products, but it can also be used to carry out image-based abuse, create pornographic material, and manipulate the outcome of elections.
How Are DeepFakes Made?
When deepfakes first emerged they were clunky because they used a type of AI model called a Generative Adversarial Network (or GAN). With this approach, specific parts of existing video footage or pictures are manipulated – quite commonly the mouth. You may remember when Australian mining magnate Andrew Forrest was ‘deepfaked’ into spruiking a bogus ‘get rich quick’ scheme. That deepfake used a GAN – only his mouth was manipulated.
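For the technically curious (totally optional!), here’s a tiny, purely illustrative Python sketch of the GAN idea using the PyTorch library. It works on toy numbers rather than images, so it can’t make a deepfake – it just shows how a ‘generator’ network learns to produce fakes while a ‘discriminator’ network learns to catch them, each pushing the other to improve:

```python
import torch
import torch.nn as nn

# Toy example only: the "real" data is random numbers centred on 3.0, not images.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(32, 1) * 0.5 + 3.0   # "real" samples the generator tries to imitate
    fake = generator(torch.randn(32, 8))    # the generator turns random noise into fake samples

    # 1. The discriminator learns to label real samples 1 and fakes 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # 2. The generator learns to fool the discriminator into answering 1.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()

print("Average fake value:", fake.mean().item())  # should drift towards 3.0 as the generator improves
```

Real deepfake systems apply the same adversarial idea to faces and voices rather than toy numbers, which is why the results keep getting more convincing.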
But deepfakes are now even more convincing thanks to the use of a newer type of generative AI called a diffusion model. This technology means a deepfake can be generated from scratch, without even having to manipulate original content, making the result even more realistic.
Until around 2023, experts and skilled scammers were really the only ones with access to this technology; it is now widely available, and anyone with a computer or phone and the right app can make a deepfake.
While a novice might knock out a crude deepfake in just a few minutes, skilled operators can produce very realistic deepfakes in only a few hours.
Why Are Deepfakes Made?
As I mentioned before, deepfakes are generated to either create harm or cause disruption. But a flurry of recent research shows that creating deepfake pornographic videos is where most scammers are putting their energy. A 2023 study into deepfakes found that deepfake pornography makes up a whopping 98% of all deepfake videos found online. And not surprisingly, 99% of the victims are women. The report also found that it now takes less than 25 minutes and costs nothing to create a 60-second deepfake pornographic video of anyone, using just one clear image of their face! Wow!!
Apart from pornography, deepfakes are often used for election tampering, identity theft, scam attempts, and spreading fake news. In short, nothing is off limits!
How To Spot A Deepfake
The ability to spot a deepfake is something we all need, given the potential harm they can cause. Here’s what to look out for:
- If it’s a video, check that the audio matches the video, i.e. is the audio synced to the lip movements? Check for unnatural blinking, odd lighting, misplaced shadows, or facial expressions that don’t match the tone of the voice. These signs tend to give away the ‘older’ style of deepfakes, created using the GAN or ‘face-swap’ model.
- Deepfake videos and pictures created with the ‘face swap’ model may also look ‘off’ around the area where the new face has been blended onto the original forehead. Check for colour and texture differences, or perhaps an unusual hairline.
- The newer diffusion model means deepfakes can be harder to spot; however, look for asymmetries like mismatched earrings or eyes that are different sizes. These models also don’t do hands very well, so check for the right number of fingers and ‘weird’-looking hands.
- A gut feeling! Even though the technology is becoming very sophisticated, it’s often possible to sense when something isn’t quite right. There could be an awkwardness in body movement, a facial feature that seems slightly off, an unusual background noise, or even weird-looking teeth!!
How To Protect Yourself
There are two main ways you could be affected by deepfakes. First, as a victim, e.g. being ‘cast’ in a deepfake pornographic video or photo. Second, by being influenced by a deepfake that is designed to create harm, e.g. a scam, fake news, or even political disinformation.
But the good news is that protecting yourself from deepfake technology is not dissimilar to protecting yourself from general online threats. Here are my top tips:
Be Careful What You Share
The best way to protect yourself from becoming a victim is to avoid sharing anything online at all. I appreciate that this isn’t totally realistic, so instead, be mindful of what you share and where you share it. Always set your privacy settings to the highest level, and consider sharing your pics and videos with a select group instead of with all your online followers. Not only does this reduce the chances of your pictures making their way into the hands of deepfake scammers, but because only a small group had access to the originals, it also increases the chance of identifying the attacker if someone does in fact create a deepfake of you.
Consider Watermarking Photos
If you feel like you need to share pics and videos online, consider adding a digital watermark to them. Removing or working around a watermark is an extra, more complicated step for deepfake creators, and one that could possibly be traced back to them, so it makes your images a less attractive target.
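If you’re comfortable with a little Python, here’s one simple, hypothetical way you could add a visible watermark to a photo before sharing it, using the free Pillow image library (the file names and watermark wording below are just placeholders):

```python
from PIL import Image, ImageDraw

# Open the photo and convert it so we can layer a semi-transparent watermark on top.
photo = Image.open("holiday.jpg").convert("RGBA")

# Draw the watermark text on its own transparent layer so we can control the opacity.
overlay = Image.new("RGBA", photo.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)
draw.text((20, photo.height - 40), "Shared by the Smith family", fill=(255, 255, 255, 128))

# Merge the layers and save a flattened copy for sharing.
watermarked = Image.alpha_composite(photo, overlay).convert("RGB")
watermarked.save("holiday_watermarked.jpg")
```

A visible mark like this won’t stop a determined scammer, but it makes your photos less convenient raw material and easier to recognise if they are misused.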
Be Cautiously Suspicious Always
Teach your kids never to assume that everything they see online is true or real. If you always operate with a sceptical mindset, there is less chance you will be caught up in a deepfake scam. If you find a video or photo that you aren’t sure about, do a reverse image search. If it’s a news video, check whether it’s being covered by trusted news websites. Remember, if what the person in the video is saying or doing is important, the mainstream news media will cover it. You can always fact-check what the ‘person’ in the video is claiming as well.
Use Multi-Factor Authentication
Adding another layer of security to all your online accounts makes it that much harder for a deepfake creator to access your accounts and use your photos and videos. Multi-factor authentication (or 2-factor authentication) simply means adding an extra step to your login process. It could be a facial scan, a code sent to your smartphone, or a code generated by an authenticator app like Google Authenticator. This is a complete no-brainer and probably adds no more than 30 seconds to the login process.
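For the curious, here’s a rough, illustrative Python sketch of how those authenticator-app codes work behind the scenes, using the open-source pyotp library. You definitely don’t need to write any code to use MFA – this just shows there’s no magic involved: the app and the website share a secret key at setup, and both can then calculate the same short-lived code from that secret plus the current time.

```python
import pyotp  # third-party library: pip install pyotp

# In real life the website generates this secret once, when you first enable MFA,
# and your authenticator app stores it (usually by scanning a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current one-time code:", totp.now())            # a 6-digit code that changes every 30 seconds
print("Accepted right now?", totp.verify(totp.now()))  # the website runs the same check on its side
```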
Keep Your Software Updated
Yes, this can make a huge difference. Software updates commonly include ‘patches’ or fixes for security vulnerabilities. So, if your software is out of date, it’s a little like having a broken window and then wondering why people can still get in! I recommend turning on automatic updates, so you don’t have to think about it.
Passwords Are Key
A weak password is also like having a broken window – it makes it so much easier for deepfake scammers to access your accounts and your pics and videos. I know it seems like a lot of work, but if every one of your online accounts has its own complex, individual password, then you have a much greater chance of keeping the deepfake scammers away!
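A good password manager will generate strong passwords for you, but just to show there’s nothing mysterious about it, here’s a short Python sketch that builds a random password using the standard library’s secrets module (the length and character set are simply sensible defaults, not official guidance):

```python
import secrets
import string

# Letters, digits and punctuation give a large pool of possible characters.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a random password built from a cryptographically secure source of randomness."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # a different 20-character password every time you run it
```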
So, be vigilant, always think critically, and remember you can report deepfake content to the relevant authority. In the US, that’s the FBI, and in Australia, it’s the eSafety Commissioner.
Stay safe all!
Alex