GenAI Can Save Phishers Two Days of Work
Generative AI tools can save phishing actors 16 hours of work designing a scam email, but they still can’t match the human knack for crafting more convincing missives, according to new IBM research.
Social engineering expert Stephanie Carruthers revealed details of a new research project today, in which her team sought to understand whether generative AI models have the same deceptive powers as the human mind.
“With only five simple prompts we were able to trick a generative AI model to develop highly convincing phishing emails in just 5 minutes – the same time it takes me to brew a cup of coffee,” she explained.
“It generally takes my team about 16 hours to build a phishing email, that’s without factoring in the infrastructure set-up. So, attackers can potentially save nearly two days of work by using generative AI models.”
Among the prompts were: the top areas of concern for employees in specific industries; the social engineering and marketing techniques that should be used; and the person or company that should be impersonated.
“I have nearly a decade of social engineering experience, crafted hundreds of phishing emails, and I even found the AI-generated phishing emails to be fairly persuasive,” said Carruthers.
“In fact, there were three organizations who originally agreed to participate in this research project, and two backed out completely after reviewing both phishing emails because they expected a high success rate.”
However, the IBM X-Force Red social engineering team was marginally more successful, with efforts that tapped “creativity and a dash of psychology” to resonate more deeply with targets and add an air of authenticity that Carruthers claimed is hard for AI to replicate.
A round of A/B testing revealed that the click rate for the human-written phishing email (14%) was slightly higher than that of the AI-generated email (11%). The human-written email was also reported as suspicious less frequently (52%) than the AI version (59%).
However, AI is likely to become an increasingly disruptive force in the phishing industry, especially when harnessed through malicious tools like WormGPT.
“Humans may have narrowly won this match, but AI is constantly improving. As technology advances, we can only expect AI to become more sophisticated and potentially even outperform humans one day,” Carruthers concluded.