Cisco researchers highlight emerging threats to AI models
Cisco security researchers this week detailed several threats they are seeing from bad actors trying to infect or attack AI's most common component: the large language model (LLM).
Some of the techniques used to hide messages or attacks from anti-spam systems are familiar to security specialists: "Hiding the nature of the content displayed to the recipient from anti-spam systems is not a new technique. Spammers have included hidden text or used formatting rules to camouflage their actual message from anti-spam analysis for decades," wrote Martin Lee, a security engineer with Cisco Talos, in a blog post about current and future AI threats. "However, we have seen an increase in the use of such techniques during the second half of 2024."
Being able to disguise and hide content from machine analysis or human oversight is likely to become a more important vector of attack against AI systems, according to Lee. “Fortunately, the techniques to detect this kind of obfuscation are well known and already integrated into spam detection systems such as Cisco Email Threat Defense. Conversely, the presence of attempts to obfuscate content in this manner makes it obvious that a message is malicious and can be classed as spam,” Lee wrote.
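The obfuscation tricks Lee describes, such as hidden text and formatting rules that conceal a message's real content, tend to leave telltale markers in an email body. The sketch below is a hypothetical illustration of that idea, not Cisco's actual detection logic: it flags two common signals, zero-width Unicode characters and CSS styles that render text invisible. The function name and heuristics are assumptions for demonstration.

```python
import re

# Hypothetical heuristics (not Cisco Email Threat Defense internals):
# flag common text-obfuscation tricks used to hide content from filters.

# Zero-width characters often interleaved into words to evade keyword matching.
ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")

# Inline CSS that hides text from the human reader but not from a parser.
HIDDEN_CSS = re.compile(
    r"display\s*:\s*none|visibility\s*:\s*hidden|font-size\s*:\s*0",
    re.IGNORECASE,
)

def obfuscation_signals(html_body: str) -> list[str]:
    """Return the obfuscation indicators found in an email body."""
    signals = []
    if ZERO_WIDTH.search(html_body):
        signals.append("zero-width characters")
    if HIDDEN_CSS.search(html_body):
        signals.append("CSS-hidden text")
    return signals

# A message padding its visible pitch with an invisible block of filler text.
msg = '<p>Act now!</p><span style="display:none">benign filler words</span>'
print(obfuscation_signals(msg))  # -> ['CSS-hidden text']
```

As the quote above notes, the presence of such markers is itself a strong signal: legitimate mail rarely needs invisible text, so any hit can push a message toward a spam classification.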