Cisco researchers highlight emerging threats to AI models

Cisco security researchers this week detailed several threats they are seeing from bad actors trying to infect or attack AI's most common component: the large language model (LLM).
Some of the techniques used to hide messages or attacks from anti-spam systems are familiar to security specialists: “Hiding the nature of the content displayed to the recipient from anti-spam systems is not a new technique. Spammers have included hidden text or used formatting rules to camouflage their actual message from anti-spam analysis for decades,” wrote Martin Lee, a security engineer with Cisco Talos, in a blog post about current and future AI threats. “However, we have seen an increase in the use of such techniques during the second half of 2024.”
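The Talos post does not include code, but a short sketch can make the tactic concrete. The snippet below is a hypothetical illustration (not taken from Cisco's research) of two long-standing tricks of the kind Lee describes: breaking up trigger words with invisible Unicode characters, and burying text in HTML that the recipient never sees.

```python
# Illustrative only: two well-known obfuscation tricks of the kind
# spammers use to hide content from naive keyword-based filters.
# Nothing here comes from the Talos post itself.

ZERO_WIDTH_SPACE = "\u200b"

def obfuscate_with_zero_width(word: str) -> str:
    """Insert zero-width spaces between letters. The word still renders
    as 'invoice' to a human, but no longer matches a filter searching
    for the literal string."""
    return ZERO_WIDTH_SPACE.join(word)

# CSS-hidden text: invisible to the recipient, but present in the raw
# HTML body that a machine analyzes, diluting content-based spam scores.
hidden_html = (
    '<p>Please review the attached file.</p>'
    '<p style="display:none">harmless filler text to skew analysis</p>'
)

print(obfuscate_with_zero_width("invoice"))  # renders as 'invoice'
```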
The ability to disguise and hide content from machine analysis or human oversight is likely to become an increasingly important attack vector against AI systems, according to Lee. “Fortunately, the techniques to detect this kind of obfuscation are well known and already integrated into spam detection systems such as Cisco Email Threat Defense. Conversely, the presence of attempts to obfuscate content in this manner makes it obvious that a message is malicious and can be classed as spam,” Lee wrote.
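Detection, as Lee notes, is well understood. A minimal sketch of the idea follows, assuming a filter has access to a message's raw subject and HTML body; the function names, the 5% threshold, and the specific signals are illustrative assumptions, not Cisco Email Threat Defense internals.

```python
import re
import unicodedata

def invisible_char_ratio(text: str) -> float:
    """Fraction of characters that are invisible Unicode format
    characters (general category Cf, e.g. zero-width spaces). A high
    ratio in a short string is a strong obfuscation signal."""
    if not text:
        return 0.0
    invisible = sum(1 for ch in text if unicodedata.category(ch) == "Cf")
    return invisible / len(text)

# Inline styles commonly used to hide text from the recipient.
HIDDEN_STYLE = re.compile(
    r'style\s*=\s*"[^"]*(display\s*:\s*none|font-size\s*:\s*0)', re.I
)

def looks_obfuscated(subject: str, html_body: str) -> bool:
    """Flag a message when either signal fires. As Lee points out, the
    mere presence of such obfuscation is itself evidence the message
    is malicious, so it can be classed as spam directly."""
    return (
        invisible_char_ratio(subject) > 0.05
        or HIDDEN_STYLE.search(html_body) is not None
    )

print(looks_obfuscated("inv\u200boice due", "<p>hi</p>"))  # True
```

This mirrors the point in the quote above: the detector does not need to recover the hidden message, because the act of hiding is enough to classify the message.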