Defenders Outpace Attackers in AI Adoption

Cybercriminals’ use of AI is more limited than is generally reported or demonstrated by security researchers. Meanwhile, the cybersecurity sector’s investment in AI is set to give defenders the edge over threat actors, according to Robert McArdle, Director of Forward-Looking Threat Research (Cybercrime) at Trend Micro.
Speaking at IRISSCON 2024 in Dublin, McArdle argued that the sheer scale of defensive investment and emphasis on AI will hand defenders the advantage over attackers.
How Cybercriminals Really Use AI
McArdle set out four main ways in which cybercriminals are leveraging AI today.
Improving Coding
Just as developers use GenAI tools like ChatGPT to produce less buggy code, cybercriminals are using the same technologies to write better malware code.
However, there have been no observed cases of malware code created entirely by AI, with the technology not yet reliable enough for this.
Building AI into Criminal Software
Cybercriminals, including APT groups, often use GenAI tools to provide a template for phishing emails.
This lets them write phishing emails in any language, with correct spelling and grammar, making such scams harder to detect.
Jailbreak-as-a-Service
These are criminal services that specialize in disabling AI tools’ security policies. McArdle noted that security is not built into GenAI models themselves; instead, policies are enforced at their interfaces.
This gives malicious actors an opening to revert a tool to its unrestricted base behavior, stripping away those security protections.
Deepfakes
McArdle highlighted a recent trend of deepfakes being advertised for cybercrime services. For example, a Russian deepfake tool, dubbed ‘Melvin’, can be used for user impersonation.
Other common deepfake schemes utilized by cybercriminals include business email compromise (BEC), virtual kidnapping and sextortion.
AI Set to Give Defenders the Edge
Despite the common uses of AI in cybercrime, McArdle said that no novel or unique AI attacks, as simulated by researchers, have been observed to date.
“It could be a lot worse than what we’re seeing right now,” he acknowledged.
McArdle set out three ‘rules’ of cybercrime which explain attackers’ limited use of AI to date:
- Criminals want an easy life
- Return on investment has to be better than other options on the market
- Cybercrime is an evolution, not a revolution
“Criminals are particularly slow to change when they don’t have to,” he added.
In contrast, the last two years have seen huge investment in AI on the defenders’ side, which McArdle believes will ultimately give them the edge over attackers.
“Our industry is investing billions in terms of their AI tooling and capabilities,” noted McArdle.
For example, AI will enhance defenders’ capabilities by acting as a digital assistant – moving beyond chatbots. This includes rapidly generating internal reports and performing log analysis and forensics, saving security teams enormous time and resources.