- AI 추론 강화 향한 "급선회”··· 엔비디아 에이전트 AI 구축용 라마 모델 공개
- 구글 딥마인드, 로봇용 AI 모델 ‘제미나이 로보틱스’ 공개
- Amazon undercuts Nvidia pricing by 25%, leveling market for simpler inferencing tasks
- The best Amazon Fire TV Stick VPNs of 2025: Expert tested and reviewed
- Nvidia details its GPU, CPU, and system roadmap for the next three years
How a researcher with no malware-coding skills tricked AI into creating Chrome infostealers

Generative AI has stirred up as many conflicts as it has innovations — especially when it comes to security infrastructure.
Enterprise security provider Cato Networks says it has discovered a new way to manipulate AI chatbots. On Tuesday, the company published its 2025 Cato CTRL Threat Report, which showed how a researcher — who Cato clarifies had “no prior malware coding experience” — was able to trick models, including DeepSeek R1 and V3, Microsoft Copilot, and OpenAI’s GPT-4o, into creating “fully functional” Chrome infostealers, or malware that steals saved login information from Chrome. This can include passwords, financial information, and other sensitive details.
Also: This new tool lets you see how much of your data is exposed online – and it’s free
“The researcher created a detailed fictional world where each gen AI tool played roles — with assigned tasks and challenges,” Cato’s accompanying release explains. “Through this narrative engineering, the researcher bypassed the security controls and effectively normalized restricted operations.”
Step 1 of Cato’s Immersive World jailbreaking approach.
Cato Networks
Immersive World technique
The new jailbreak technique, which Cato calls “Immersive World,” is especially alarming given how widely used the chatbots that run these models are. DeepSeek models are already known to lack several guardrails and have been easily jailbroken, but Copilot and GPT-4o are run by companies with full safety teams. While more direct forms of jailbreaking may not work as easily, the Immersive World technique reveals just how porous indirect routes still are.
Also: Why AI-powered security tools are your secret weapon against tomorrow’s attacks
“Our new LLM jailbreak technique […] should have been blocked by gen AI guardrails. It wasn’t,” said Etay Maor, Cato’s chief security strategist.
Cato notes in its report that it notified the relevant companies of its findings. While DeepSeek did not respond, OpenAI and Microsoft acknowledged receipt. Google also acknowledged receipt, but declined to review Cato’s code when the company offered.
An alarm bell
Cato flags the technique as an alarm bell for security professionals, as it shows how any individual can become a zero-knowledge threat actor to an enterprise. Because there are increasingly few barriers to entry when creating with chatbots, attackers require less expertise up front to be successful.
Also: How AI will transform cybersecurity – and supercharge cybercrime
The solution? AI-based security strategies, according to Cato. By focusing security training around the next phase of the cybersecurity landscape, teams can stay ahead of AI-powered threats as they continue to evolve. Check out this expert’s tips for better preparing enterprises.
Stay ahead of security news with Tech Today, delivered to your inbox every morning.