Cisco researchers highlight emerging threats to AI models

Cisco security researchers this week detailed a number of threats they are seeing from bad actors trying to infect or attack AI's most common component, the large language model.
Some techniques used to hide messages or attacks from anti-spam systems are familiar to security specialists: “Hiding the nature of the content displayed to the recipient from anti-spam systems is not a new technique. Spammers have included hidden text or used formatting rules to camouflage their actual message from anti-spam analysis for decades,” wrote Martin Lee, a security engineer with Cisco Talos, in a blog post about current and future AI threats. “However, we have seen an increase in the use of such techniques during the second half of 2024.”
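To make the kind of obfuscation Lee describes concrete, the sketch below shows two classic hiding tricks in email content: zero-width characters inserted into a keyword so a naive substring filter misses it, and text hidden from the recipient with inline styles. The message text and variable names are hypothetical, used here only for illustration.

```python
# Illustrative sketch (hypothetical message content): two common ways spammers
# hide their actual message from text analysis while the recipient still reads
# the intended content.

# 1) Zero-width characters split a keyword so a plain substring match fails,
#    yet the text renders identically to the reader.
ZWSP = "\u200b"  # zero-width space: invisible when displayed
visible_to_reader = "urgent invoice"
seen_by_filter = ZWSP.join("urgent invoice")  # "u\u200br\u200bg..." and so on

# 2) CSS-hidden text pads the message with benign words the reader never sees,
#    skewing what content-based filters score.
html_body = """
<p>Please review the attached invoice.</p>
<p style="display:none">weather gardening recipes holiday photos</p>
"""

print(repr(seen_by_filter))  # shows the invisible characters a filter must handle
```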
Being able to disguise and hide content from machine analysis or human oversight is likely to become a more important vector of attack against AI systems, according to Lee. “Fortunately, the techniques to detect this kind of obfuscation are well known and already integrated into spam detection systems such as Cisco Email Threat Defense. Conversely, the presence of attempts to obfuscate content in this manner makes it obvious that a message is malicious and can be classed as spam,” Lee wrote.
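Lee's point that the obfuscation itself betrays a message suggests a simple detection approach. The following is a minimal heuristic sketch, not the logic of Cisco Email Threat Defense or any other product: it scans a message for zero-width characters and style attributes that hide text, and treats their mere presence as a spam signal.

```python
import re

# Minimal heuristic sketch (assumed approach, not a vendor implementation):
# the very markers used to hide content double as a detection signal.

ZERO_WIDTH = re.compile(r"[\u200b\u200c\u200d\u2060\ufeff]")  # invisible characters
HIDDEN_STYLE = re.compile(
    r'style\s*=\s*"[^"]*(display\s*:\s*none'
    r'|visibility\s*:\s*hidden'
    r'|font-size\s*:\s*0)',
    re.IGNORECASE,
)

def looks_obfuscated(message: str) -> bool:
    """Flag messages that try to hide content from analysis or from the reader."""
    return bool(ZERO_WIDTH.search(message) or HIDDEN_STYLE.search(message))

# Usage: a message carrying either trick is classed as suspicious.
sample = '<p style="display:none">benign filler</p>ur\u200bgent'
print(looks_obfuscated(sample))  # True
```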