Cisco researchers highlight emerging threats to AI models
Cisco security researchers this week detailed several threats they are seeing from bad actors attempting to infect or attack AI's most common component: the large language model.
Some techniques used to hide messages or attacks from anti-spam systems are familiar to security specialists: "Hiding the nature of the content displayed to the recipient from anti-spam systems is not a new technique. Spammers have included hidden text or used formatting rules to camouflage their actual message from anti-spam analysis for decades," wrote Martin Lee, a security engineer with Cisco Talos, in a blog post about current and future AI threats. "However, we have seen an increase in the use of such techniques during the second half of 2024."
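To make the hidden-text technique concrete, here is a minimal, purely illustrative Python sketch of how content in an HTML email can be made invisible to the recipient while remaining visible to a naive text-based scanner. The wording and CSS tricks are hypothetical examples chosen for illustration, not samples from the Talos post.

```python
# Illustrative only: hidden text in an HTML email makes the rendered
# message differ from the raw content a plain-text filter scans.
from email.mime.text import MIMEText

html_body = """
<html><body>
  <p>Click here to claim your prize!</p>
  <!-- The paragraphs below are invisible when rendered, but a keyword-based
       scanner still sees the benign filler text, diluting its spam score. -->
  <p style="display:none">quarterly report meeting agenda minutes budget</p>
  <p style="font-size:0px;color:#ffffff">project timeline stakeholder review</p>
</body></html>
"""

msg = MIMEText(html_body, "html")
msg["Subject"] = "Account notice"
print(msg.as_string())
```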
Being able to disguise and hide content from machine analysis or human oversight is likely to become a more important vector of attack against AI systems, according to Lee. “Fortunately, the techniques to detect this kind of obfuscation are well known and already integrated into spam detection systems such as Cisco Email Threat Defense. Conversely, the presence of attempts to obfuscate content in this manner makes it obvious that a message is malicious and can be classed as spam,” Lee wrote.
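The detection idea Lee describes can be sketched in a few lines: the act of hiding content is itself a strong spam signal. The toy checker below flags common CSS hiding tricks in an HTML body; the specific patterns are illustrative assumptions, and production systems such as Cisco Email Threat Defense rely on far richer heuristics than this.

```python
# A minimal sketch: treat CSS-based content hiding as evidence of spam.
import re
from html.parser import HTMLParser

# Hypothetical examples of hiding tricks a scanner might look for.
HIDING_PATTERNS = [
    re.compile(r"display\s*:\s*none", re.I),
    re.compile(r"font-size\s*:\s*0", re.I),
    re.compile(r"color\s*:\s*#?fff(fff)?\b", re.I),  # white-on-white text
]

class ObfuscationScanner(HTMLParser):
    """Collects tags whose inline style matches a known hiding pattern."""
    def __init__(self):
        super().__init__()
        self.hits = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        for pattern in HIDING_PATTERNS:
            if pattern.search(style):
                self.hits.append((tag, style))

scanner = ObfuscationScanner()
scanner.feed('<p style="display:none">hidden filler text</p>')
if scanner.hits:
    print("Likely obfuscation attempt:", scanner.hits)
```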