6 Cybersecurity Predictions for 2024 – Staying Ahead of the Latest Hacks and Attacks | McAfee Blog
AI and major elections, deepfakes and the Olympics — they all feature prominently in our cybersecurity predictions for 2024.
That’s quite the mix, and it reflects the nature of cybersecurity. Just as changing technology shapes cybersecurity, so does the changing world we live in. Bad actors exploit new and emerging technologies, just as they exploit events and trends. It’s a potent formula they turn to again and again, and with it they concoct a mix of ever-evolving attacks.
For a pointed example of the interplay between technology and culture, look no further than Barbie. More specifically, the scams that cropped up around the release of the “Barbie” movie. Using AI tools, scammers generated videos that promoted bogus ticket giveaways. They combined the new technology of AI with the hype surrounding the film and duped thousands of victims as a result.
We expect to see more of the same in 2024, and we have several other predictions as well. With that, let’s look ahead so you can stay ahead of the hacks and attacks we expect to see in 2024.
1) Election cycles will see further disruption with AI tools.
2024 has plenty on the slate in terms of pivotal elections. Across the globe, we have the United States presidential election, general elections in India, and the European Union parliamentary elections, to name a few. While every election comes with its fair share of disinformation, the continued evolution of generative AI tools such as ChatGPT, DALL-E, and Stable Diffusion adds an extra level of complication.
So, if a picture is worth a thousand words, what’s an AI-generated photo, video, or voice clone worth? For disinformation, plenty.
Already, many voters raise a skeptical brow when politicians sling statements aimed at discrediting their opponents. Yet when those words are backed by visual evidence, such as a photo or video, it lends them the appearance of credibility. With AI tools, a few keywords can give a false statement or accusation life in the form of a (bogus) photo or video, which now go by the common name of “deepfakes.”
Certainly, 2024 won’t be the first election where bad actors or unscrupulous individuals try to shape public opinion through the manipulation of photos and videos. However, it will be the first election where generative AI tools are significantly more accessible and easier to use than ever. As a result, voters can expect to see a glut of deepfakes and disinformation as the election cycle gears up.
Likewise, the advent of AI voice-cloning tools complicates matters further. Consider what that means for the pre-recorded “robocalls” that campaigns use to reach voters en masse. Now, with only a small sample of a candidate’s voice, bad actors can create AI voice clones with striking fidelity. These clones read from any script a bad actor bangs out, effectively putting words in someone else’s mouth and potentially damaging the reputation and credibility of candidates.
As we reported earlier this year, AI voice cloning is easier and more accessible than ever. It stands to reason that bad actors will turn it to political ends in 2024.
How to spot disinformation.
Disinformation has several goals, depending on who’s serving it up. Most broadly, it involves gain for one group at the expense of others. It aims to confuse, misdirect, and manipulate its audience — often by needling strong emotional triggers. That calls on us to carefully consider the media and messages we see, particularly in the heat of the moment.
That can present challenges at a time when massive amounts of content scroll by our eyes in our subscriptions and feeds. Bad actors count on people taking content at immediate face value. Yet asking a few questions can help you spot disinformation when you see it.
The International Federation of Library Associations and Institutions offers this checklist:
- Consider the Source – Click away from the story to investigate the site, its mission, and its contact info.
- Read Beyond – Headlines can be outrageous to get clicks. What’s the whole story?
- Check the Author – Do a quick search on the author. Are they credible? Are they real?
- Supporting Sources? – Determine if the info given supports the story.
- Check the Date – Reposting old news stories doesn’t mean they’re relevant to current events.
- Is it a Joke? – If it is too outlandish, it might be satire. Research the site and author to be sure.
- Check your Biases – Consider if your own beliefs could affect your judgment.
- Ask the Experts – Ask a librarian or consult a fact-checking site.
That last piece of advice is particularly strong. Debunking disinformation takes time and effort. Professional fact-checkers at news and media organizations do this work daily. Their findings are posted for all to see and provide a quick way to get answers. Some fact-checking groups include:
- Politifact.com
- Snopes.com
- FactCheck.org
- Reuters.com/fact-check
Put plainly, bad actors use disinformation to sow discord and divide people. While not every controversial or upsetting piece of content is disinformation, those are surefire signs to follow up on what you’ve seen with several credible sources. Also, keep in mind that those bad actors out there want you to do their dirty work for them. They want you to share their content without a second thought. By taking a moment to check the facts before you react, you can curb the discord they want to see spread.
2) AI scams will be the new sneaky stars of social media.
In the ever-evolving landscape of cybercrime, the emergence of AI has introduced a new level of sophistication and danger. With the help of AI, cybercriminals now possess the ability to manipulate social media platforms and shape public opinion in ways that were previously unimaginable.
One of the most concerning aspects of this development is the power of AI tools to fabricate photos, videos, and audio. These tools enable bad actors to create highly convincing and realistic content, making it increasingly difficult for users to discern between what is real and what is manipulated. This opens up a whole new realm of possibilities for cybercriminals to exploit unsuspecting individuals and organizations.
One alarming consequence of this is the potential for celebrity and influencer names and images to be misused by cybercrooks. With the ability to generate highly convincing content, these bad actors can create fake endorsements that appear to come from well-known personalities. This can lead to an increase in scams and fraudulent activities, as unsuspecting consumers may be more likely to trust and engage with content that appears to be endorsed by their favorite celebrities or influencers.
Local online marketplaces are also at risk of being targeted by cybercriminals utilizing AI. By leveraging fabricated content, these bad actors can create fake listings and advertisements that appear legitimate. This can deceive consumers into making purchases or engaging in transactions that ultimately result in financial loss or other negative consequences.
How to avoid AI social media scams.
As AI continues to advance, it is crucial for consumers to be aware of the potential risks and take necessary precautions. This includes being vigilant and skeptical of content encountered on social media platforms, verifying the authenticity of endorsements or advertisements, and utilizing secure online marketplaces with robust verification processes.
3) Cyberbullying among kids will soar.
One of the most troubling trends on the horizon for 2024 is the alarming rise of cyberbullying, which is expected to be further exacerbated by the increasing use of deepfake technology. This advanced technology has become readily available to young people, enabling them to create exceptionally realistic fake content with ease.
In the past, cyberbullies primarily relied on spreading rumors and engaging in online harassment. However, with the emergence of deepfake technology, the scope and impact of cyberbullying have reached new heights. Cyberbullies can now manipulate images that are readily available in the public domain, altering them to create fabricated and explicit versions. These manipulated images are then reposted online, intensifying the harm inflicted on their victims.
The consequences of this escalating trend are far-reaching and deeply concerning. The false images and accompanying words can have significant and lasting effects on the targeted individuals and their families. Privacy becomes compromised as personal images are distorted and shared without consent, leaving victims feeling violated and exposed. Moreover, the fabricated content can tarnish one’s identity, leading to confusion, mistrust, and damage to personal and professional relationships.
The psychological and emotional well-being of those affected by deepfake cyberbullying is also at stake. The relentless onslaught of false and explicit content can cause severe distress, anxiety, and depression. Victims may experience a loss of self-esteem, as they struggle to differentiate between reality and the manipulated content that is being circulated online. The impact on their mental health can be long-lasting, requiring extensive support and intervention.
The ripple effects of deepfake cyberbullying extend beyond the immediate victims. Families are also deeply affected, as they witness the distress and suffering of their loved ones. Parents may feel helpless and overwhelmed, struggling to protect their children from the relentless onslaught of cyberbullying. The emotional toll on families can be immense, as they navigate the challenges of supporting their children through such traumatic experiences.
How to prevent online cyberbullying.
- Education and Awareness: Promote digital literacy and educate individuals about the consequences and impact of cyberbullying. Teach them how to recognize and respond to cyberbullying incidents, and encourage them to report any instances they encounter.
- Strong Policies and Regulations: Implement and enforce strict policies and regulations against cyberbullying on online platforms. Collaborate with social media companies, schools, and organizations to establish guidelines and procedures for handling cyberbullying cases promptly and effectively.
- Support and Empowerment: Provide support systems and resources for victims of cyberbullying. Encourage open communication and create safe spaces where individuals can seek help and share their experiences. Empower bystanders to intervene and support victims, fostering a culture of empathy and kindness online.
4) Conflicts across the globe will ramp up charity fraud.
Scammers exploit emotions, such as the excitement of the Olympics. More darkly, they also tap into fear and grief.
A particularly heartless method of doing this is through charity fraud. While this takes many forms, it usually involves a criminal setting up a fake charity site or page to trick well-meaning contributors into thinking they are supporting legitimate causes or contributing money to help fight real issues.
2024 will see this continue. We further see potential for this to increase given the conflicts in Ukraine and the Middle East. Scammers might also increase the emotional pull of the messaging by tapping into the same AI technology we predict will be used in the 2024 election cycle. Overall, expect their attacks to look and feel far more sophisticated than in years past.
How to donate safely online.
- As with so many scams out there, any time an email, text, direct message, or site urges you into immediate action — take pause. Research the charity. See how long they’ve been in operation, how they put their funds to work, and who truly benefits from them.
- Likewise, note that there are some charities that pass along more money to their beneficiaries than others. Generally, the most reputable organizations only keep 25% or less of their funds for operations. Some less-than-reputable organizations keep up to 95% of funds, leaving only 5% for advancing the cause they advocate.
- In the U.S., the Federal Trade Commission (FTC) has a site full of resources so that you can make your donation truly count. Resources like Charity Watch and Charity Navigator, along with the BBB’s Wise Giving Alliance can also help you identify the best charities.
5) New strains of malware, voice and visual cloning, and QR code scams will accelerate.
Aside from its ability to write love poems, answer homework questions, and create art with a few keyword prompts, AI can do something else. It can code. In the hands of hackers, that means AI can churn out new strains of malware and even spin up entire malicious websites. And quickly at that.
Already, we’ve seen hackers use AI tools to create malware. This will continue apace, and we can expect them to create smarter malware too. AI can spawn malware that analyzes and adapts to a device’s defenses, which helps particularly malicious attacks like spyware and ransomware slip by undetected as they infect a device. It also makes the creation and dissemination of convincing phishing emails faster and easier. The same goes for deepfake video, photo, and audio content aimed at deceiving unsuspecting targets and scamming them out of money.
The rise of QR code scams, also known as “quishing,” is an additional concern. Scammers use AI to generate malicious QR codes that, when scanned, lead to phishing websites or trigger malware downloads. As the barrier to entry for these threats lowers, expect these scams to spread across all platforms, with an increased focus on mobile devices.
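To see why links hidden behind QR codes deserve a second look before you tap them, here’s a minimal sketch of the kind of red-flag checks you can apply to a decoded URL. The domain lists and thresholds below are purely illustrative assumptions for this example, not any product’s actual detection logic, and real scam detection goes far beyond simple heuristics like these.

```python
from urllib.parse import urlparse

# Illustrative examples only; real blocklists are far larger and updated constantly.
SUSPICIOUS_TLDS = {".zip", ".top", ".xyz"}
SHORTENERS = {"bit.ly", "tinyurl.com", "t.co"}

def red_flags(url: str) -> list[str]:
    """Return a list of red flags found in a URL (empty list means none found)."""
    flags = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if parsed.scheme != "https":
        flags.append("not using HTTPS")
    if host in SHORTENERS:
        flags.append("link shortener hides the real destination")
    if any(host.endswith(tld) for tld in SUSPICIOUS_TLDS):
        flags.append("uncommon top-level domain")
    if host.count(".") >= 3:
        # Scam sites often stack familiar brand names into long subdomain chains.
        flags.append("deeply nested subdomains")
    if "@" in parsed.netloc:
        flags.append("credentials embedded in the URL")
    return flags

# A URL like this one, decoded from a QR code, trips several checks at once.
print(red_flags("http://secure-login.bank.example.verify.xyz/account"))
```

No single flag proves a link is a scam, but a URL that trips several of these checks is worth verifying through the organization’s official site before you enter any information.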
However, like any technology, AI is a tool, and it works both ways. AI is on your side as well. In fact, it has kept you safer online for some time now. At McAfee, we’ve used AI as a core component of our protection for years. It has sniffed out viruses, malicious websites, and sketchy content online, and it has helped steer you clear of malicious sites.
As such, you can expect an increasing number of AI-powered tools that combat AI-powered threats.
How to stay safe from AI-powered threats.
- Use AI-powered online protection software. Use good AI to stop bad AI. This year, we made improvements to our AI-powered security, making it faster and stronger. It scans 3x faster than before and offers 100% protection against entirely new threats, like the ones generated by AI. It also offers 100% protection against threats released in the past month (AV-TEST results, June 2023). You’ll find it across all our products that include antivirus.
- Protect yourself from scams with AI. Our McAfee Scam Protection uses patented, powerful AI technology that helps you stay safer amid the rise in phishing scams, including phishing scams generated by AI. It detects suspicious URLs in texts before they’re opened or clicked on. No more guessing if that text you just got is real or fake. And if you accidentally click or tap on a suspicious link in a text, email, social media post, or browser search, it blocks the scam site from loading. You’ll find McAfee Scam Protection across our McAfee+ plans.
6) Olympic-sized scams will kick into high stride.
With big events come big scams. Look for plenty of them with the 2024 Summer Olympics.
An event with this level of global appeal attracts scammers looking to capitalize on the excitement. They promise tickets, merch, and exclusive streams to events, among other things. Yet they take a chunk out of your wallet and steal personal info instead.
You can expect to see a glut of email-based phishing and message-based smishing attacks. Now, with the introduction of generative AI, these scams are getting harder and harder to identify. AI writes cleaner emails and messages, so fewer scams feature the traditional hallmarks of misspelled words and poor grammar. Combine that with the excitement around the Olympic games, and we can easily see how people might be tempted by bogus sweepstakes and offers for the Olympics trip of a lifetime, if only they click or tap that link, which of course leads to a scam website.
You can expect these messages to crop up across a variety of channels: email, text messages, and messaging apps like WhatsApp and Telegram. They might slide into social media DMs as well.
If you’re planning to catch the Olympic action in person, scammers have a plan in mind for you — ticket fraud. As we’ve seen at the FIFA World Cup and several other major sporting events over the years, scammers spin up scam ticket sites with tickets to all kinds of matches and events. Again, these sites don’t deliver. These sites can look rather professional, yet if the site only accepts cryptocurrency or wire transfers, you can be certain it’s fraud. Neither form of payment offers a way to challenge charges or recoup losses.
How to enjoy the 2024 Olympics safely.
- Phishing and smishing attacks can take a little effort to spot. As we’ve seen, the scammers behind them have grown far more sophisticated in their approach. However, know that if a deal or offer seems a little too good to be true, avoid it. For more on how to spot these scams, check out our blog dedicated to phishing and similar attacks.
- As for tickets, they’re only available through the official Paris 2024 ticketing website. Anyone else online is either a broker or an outright scammer. Stick with the official website for the best protection.
- The same holds true for watching the Olympics at home or on the go. A quick search online will show you the official broadcasters and streamers in your region. Stick with them. Unofficial streams can hit your devices with malware or bombard you with sketchy ads.
- Overall, use comprehensive online protection software like ours when you go online, which can help steer you clear of phishing, smishing, and other attacks.