AI Is Revolutionizing Phishing for Both Sides. What Will Make the Difference?
Thanks to AI, phishing attacks are better than ever. So is our ability to stop them.
By Antonio Sanchez, Principal Cybersecurity Evangelist at Fortra
AI has always been a lurking threat in the context of cybercrime. Since it burst onto the scene in late 2022, ChatGPT has been wielded by black hats of varying skill levels to make phishing attacks more convincing, easier to pull off, and more widespread. Not only has there been a rise in quantity, but – and here’s the aggravating part – in quality as well.
Luckily, the tool can be used by both sides. The only question is: who will use it better?
The Impact of AI on Phishing
“If it ain’t broke, don’t fix it.” Phishing as a cybercrime model has always been successful. Thus, we see phishers using generative AI to recreate the same old techniques, only better. The result? An unheard-of 1,235% increase in phishing emails since the launch of ChatGPT.
Phishing emails are now sent with perfect grammar and spelling in a multiplicity of languages, thanks to generative AI and large language models (LLMs). Need a phishing attack in perfect Japanese? Now you can get that. The ability of AI to function flawlessly in any language has opened up new regions for enterprising black hats. AI’s ability to scour social media, and the internet at large, for personal details has also made large-scale spear phishing possible. What used to take humans days now takes seconds or less.
But wait – it gets worse. New AI techniques also make these attacks harder to detect. Detection evasion tactics ensure that an attack only presents itself to the intended target and otherwise ‘plays dead’ for detection processes. These tactics range from altering word choice and sentence structure to generating polymorphic malware on the fly.
And let’s not forget the most powerful aspect of phishing – the social engineering craft. Thanks to generative AI’s new methods of identity falsification, it’s harder than ever to tell what’s real from what’s not. This is evidenced in deepfake videos, voice phishing, and even QR code phishing (quishing).
Using AI to Fight AI
The good news is that AI takes no sides, at least in a security context. Whoever wields it can bend it to their will, and defenders haven’t been slow to make use of it.
In the fight against AI-based phishing, defenders are using AI to trawl the web and identify new phishing infrastructure, something it can do far faster than humans can. AI is also being leveraged for its ability to spot divergent patterns across petabytes of data, which makes it useful for surfacing stealthy attacks. Operationally, AI-based detection and response tools are helping overstretched teams level up without staffing up, vetting alerts to reduce false positives and keep analysts from burning out.
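To make the “divergent patterns” idea concrete, here is a minimal sketch of anomaly-based email triage. It is illustrative only, not any particular vendor’s method: it assumes a Python environment with scikit-learn, and the features (link count, sender domain age, and so on) are hypothetical stand-ins for the far richer signals a real pipeline would use.

# Illustrative sketch: flag incoming mail whose metadata diverges from an
# organization's normal traffic so the oddest messages get human review first.
# Feature choices and figures are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one email: [links in body, attachment count, sender domain age (days),
# hour sent (0-23), reply-chain depth].
baseline_traffic = np.array([
    [1, 0, 3650, 10, 2],
    [0, 1, 4200, 14, 1],
    [2, 0, 2900, 9, 3],
    [1, 0, 5000, 16, 0],
    [0, 0, 3100, 11, 2],
] * 40)  # repeated rows stand in for a larger history of benign mail

incoming = np.array([
    [1, 0, 3500, 13, 1],   # looks like normal traffic
    [6, 1, 4, 3, 0],       # many links, brand-new sender domain, 3 a.m. send
])

# Train on historical traffic, then score new mail against that baseline.
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline_traffic)
scores = model.decision_function(incoming)   # lower score = more anomalous
flags = model.predict(incoming)              # -1 = anomaly, 1 = normal

for email, score, flag in zip(incoming, scores, flags):
    verdict = "escalate for review" if flag == -1 else "deliver"
    print(f"features={email.tolist()} score={score:.3f} -> {verdict}")

The point of the sketch is the division of labor: the model ranks messages by how far they stray from the baseline, and a human analyst decides what the unusual ones actually mean.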
The only thing to remember is that AI is still the student, not the teacher. A human eye and mind are still required to make the hard calls, act on what the data analysis surfaces, and, for now, deploy the systems in the first place.
The Necessity of the Human Element
It’s clear that AI can only do so much on the defensive side. All the AI-gleaned data in the world is no good without the expertise to know what to do with it. Someone needs to create the workflows, someone needs to confirm and vet incident response, and someone needs to tell the other humans on the team when something is amiss.
And that someone doesn’t always have to be Steve the IT Guy. All employees need to be aware of the latest cybercrime trends, especially those in non-technical roles, if there is any such thing these days. The head of HR needs to know the latest AI-driven phishing tactics as much as your system administrator, if not more. They need to know to be on the lookout for deepfakes, which emails sound “phishy,” and why they should always check with IT when “Microsoft” sends them an unsolicited request to update their Teams login – again.
That’s why security awareness training (SAT) is vital. Phishing simulation campaigns can educate your employee base about cutting-edge techniques and test their ability to recognize the real-world tactics of a modern-day phishing scam. The first results might be illuminating, but with practice, failure rates do decline. One global manufacturer saw phishing click-through rates drop from nearly 40% to under 15% after a SAT program.
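For teams that run their own simulations, the metric behind numbers like that is straightforward to track. A minimal sketch in plain Python, with made-up campaign figures, assuming click and send counts are exported from whatever simulation tool is in use:

# Illustrative only: track how simulated-phishing click-through rates change
# across SAT campaigns. Campaign names and counts are invented for the example.
campaigns = [
    {"name": "Q1 baseline",  "emails_sent": 500, "clicks": 195},
    {"name": "Q2 follow-up", "emails_sent": 500, "clicks": 120},
    {"name": "Q3 follow-up", "emails_sent": 500, "clicks": 70},
]

for c in campaigns:
    rate = c["clicks"] / c["emails_sent"] * 100
    print(f'{c["name"]}: {rate:.1f}% click-through')

first = campaigns[0]["clicks"] / campaigns[0]["emails_sent"]
last = campaigns[-1]["clicks"] / campaigns[-1]["emails_sent"]
print(f"Improvement: {(first - last) * 100:.1f} percentage points")

Tracking the trend per department, rather than only the company-wide average, tends to show where follow-up training is still needed.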
Conclusion
Is AI changing the game for phishing? Yes, but the change is going both ways. In a way, we’re back to square one as we resume the cat-and-mouse game that is cybersecurity, but with race cars, if you will. The important thing is that the race hasn’t been won yet.
As we continue to explore the varied uses of artificial intelligence, we can combine those capabilities with everybody’s secret weapon – yes, humans. The human element is not to be underestimated, not on security teams and not among everyday employees. Two of the biggest weapons attackers have are ignorance and complacency; by leveling up the security-mindedness of the average workforce, we can drastically reduce both and cut down on human error. Given that AI cybersecurity tools can already stand toe-to-toe with AI-based phishing attacks, that difference may be enough to tip the scale.
About the Author
Antonio Sanchez is Principal Cybersecurity Evangelist at Fortra. As a subject matter expert for Fortra’s security portfolio, Antonio helps drive market recognition for the Fortra brand. He joined Fortra from Alert Logic in 2023; at Alert Logic, he developed the messaging, positioning, and technical content for the Managed Detection and Response (MDR) business. Alert Logic was acquired by Fortra in 2022.
Antonio has over 20 years of experience in the IT industry, focusing on cybersecurity, information management, and disaster recovery solutions that help organizations of all sizes manage threats and improve their security posture. He is a Certified Information Systems Security Professional (CISSP).
Antonio has held various product management, technical sales, and strategic marketing roles with Dell, Forcepoint, and Symantec. At Symantec, he was responsible for developing and leading the Competitive Intelligence Program for the core security unit. www.fortra.com