Researchers Uncover New “Conversation Overflow” Tactics
Threat researchers have revealed a new cyber-attack that uses cloaked emails to deceive machine learning (ML) systems and infiltrate enterprise networks.
An advisory published by SlashNext today called the tactic a “Conversation Overflow” attack, a method that circumvents advanced security measures to deliver phishing messages directly into victims’ inboxes.
The malicious emails consist of two distinct components. The visible portion prompts the recipient to take action, such as entering credentials or clicking a link. Below it, numerous blank lines separate the visible portion from a hidden section containing benign text that resembles ordinary email conversation.
This hidden text is crafted to deceive machine learning algorithms into categorizing the email as legitimate, thereby allowing it to bypass security checks and reach the target’s inbox.
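To make the layout concrete, the sketch below shows a simple heuristic a defender might run alongside ML scoring: it flags messages whose visible call-to-action is separated from a long tail of benign-looking text by an unusually large run of blank lines. The function name, regex and thresholds are illustrative assumptions for this sketch, not detection logic from SlashNext's advisory.

```python
import re

# Illustrative thresholds -- assumptions for this sketch, not values from the advisory.
MAX_BLANK_RUN = 20       # longest run of blank lines treated as "normal"
MIN_HIDDEN_CHARS = 200   # hidden tail long enough to sway a text classifier

ACTION_PATTERNS = re.compile(
    r"(verify your account|re-?authenticate|enter your credentials|click (the|this) link)",
    re.IGNORECASE,
)

def flags_conversation_overflow(body: str) -> bool:
    """Heuristic check for the padded 'Conversation Overflow' layout.

    Returns True when a message contains a call-to-action, followed by an
    unusually long run of blank lines, followed by a sizeable block of
    additional text (the part meant to look like benign conversation).
    """
    lines = body.splitlines()

    # Find the longest run of consecutive blank lines and where it ends.
    longest_run, run, run_end = 0, 0, -1
    for i, line in enumerate(lines):
        if line.strip() == "":
            run += 1
            if run > longest_run:
                longest_run, run_end = run, i
        else:
            run = 0

    if longest_run < MAX_BLANK_RUN:
        return False

    visible = "\n".join(lines[: run_end - longest_run + 1])
    hidden = "\n".join(lines[run_end + 1 :])

    # Suspicious only if the visible part asks for action and the hidden
    # tail is large enough to influence a "known good" comparison.
    return bool(ACTION_PATTERNS.search(visible)) and len(hidden) >= MIN_HIDDEN_CHARS
```

A rule like this would only supplement, not replace, the ML scoring the attack is designed to fool.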
This technique has been observed repeatedly by SlashNext researchers, suggesting that threat actors are beta testing ways to evade artificial intelligence (AI) and ML security platforms.
Read more on AI-driven security: RSA eBook Details How AI will Transform Cybersecurity in 2024
Unlike traditional security measures that rely on detecting ‘known bad’ signatures, machine learning systems identify anomalies from ‘known good’ communication patterns. By mimicking benign communication, threat actors exploit this aspect of ML to disguise their malicious intent.
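The contrast can be sketched in a few lines: a signature engine looks for known-bad indicators, while an anomaly engine scores how closely a message resembles "known good" traffic, which is exactly the score the padded benign text is designed to inflate. The indicator list, vocabulary and scoring function below are toy assumptions for illustration, not a real ML model.

```python
# Hypothetical indicators of compromise (assumption for this toy example).
KNOWN_BAD_INDICATORS = {"evil-login.example.com", "invoice_2784.exe"}

# Words assumed, for illustration, to dominate an organisation's normal mail.
KNOWN_GOOD_VOCAB = {"meeting", "thanks", "attached", "schedule", "project", "regards"}

def signature_verdict(body: str) -> bool:
    """'Known bad' model: flag only if a listed indicator appears."""
    return any(ioc in body for ioc in KNOWN_BAD_INDICATORS)

def known_good_similarity(body: str) -> float:
    """'Known good' model: fraction of words that look like normal traffic.

    Padding a phishing email with benign conversation text raises this score,
    which is the behaviour the Conversation Overflow tactic abuses.
    """
    words = body.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,") in KNOWN_GOOD_VOCAB for w in words) / len(words)
```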
Once inside the inbox, attackers follow up with credential-theft messages disguised as legitimate re-authentication requests, mainly targeting top executives. The stolen credentials fetch high prices on dark web forums.
According to SlashNext, this sophisticated form of credential harvesting poses a significant challenge to advanced AI and ML engines, signaling a shift in cybercriminal tactics amid the evolving landscape of AI-driven security.
“From these findings, we should conclude that cyber crooks are morphing their attack techniques in this dawning age of AI security,” reads the advisory. “As a result, we are concerned that this development reveals an entirely new toolkit being refined by criminal hacker groups in real-time today.”
To defend against threats like this, security teams are advised to strengthen their AI and ML detection algorithms, conduct regular security awareness training and implement multi-layered authentication protocols.