- 연 85억 원 절감한 공공기관의 선택…출장 관리 DX로 완성한 비용 최적화 공식
- Zoom goes down across the globe - what we know about the outage so far
- AIがCXを加速:年間8600億ドルのビジネス価値創出の可能性、では導入の壁は?
- Free IRS Direct File service for taxpayers to end, according to reports
- Why the CVE database for tracking security flaws nearly went dark - and what happens next
Security Researcher Proves GenAI Tools Can Develop Chrome Infostealers

A cyber threat intelligence researcher at Cato Networks has discovered a new technique to utilize the most popular large language models (LLMs) for coding information-stealing malware.
For its first-ever annual threat report, Cato’s Cyber Threats Research Lab (Cato CTRL) asked one of its threat intelligence researchers, Vitaly Simonovich, to conduct his own LLM jailbreak attack.
While Simonovich had no prior malware coding experience, he successfully tricked popular generative AI (GenAI) tools, including DeepSeek’s R1 and V3, Microsoft Copilot, and OpenAI’s ChatGPT-4o, into developing malware that can steal login credentials from Google Chrome version 133.
Creating Chrome Infostealer with ‘Immersive World’ Jailbreak
Simonovich developed a new jailbreaking method using narrative engineering to bypass LLM security controls. Cato CTRL called this method ‘Immersive World.’
First, he created a detailed fictional world where each GenAI tool played roles, with clear rules, assigned tasks and challenges.
In this environment, called Velora, malware development is considered a legitimate activity.
The scenario involved three characters:
- Dax, an adversary
- Jaxon, the best malware developer in Velora
- Kaia, a security researcher
Simonovich also configured a controlled test environment using Google Chrome’s Password Manager in Chrome version 133 and populated it with fake credentials.
Through this narrative engineering, the researcher bypassed the security controls and effectively normalized restricted operations. Ultimately, he succeeded in convincing all four GenAI tools tested to write Chrome infostealers.
While the Cato CTRL team stated that it would not disclose the complete code used for the experience, it shared snippets of the prompts Simonovich used.
Read more: Everything You Need to Know About Infostealers
DeepSeek, Google, Microsoft and OpenAI Contacted
Cato Networks reached out to DeepSeek, Microsoft, and OpenAI to disclose its findings. Although Microsoft and OpenAI acknowledged receipt of the information, no further response was provided. DeepSeek, however, failed to respond altogether.
Additionally, Cato Networks contacted Google and offered to share the code of the Chrome infostealer, but the tech giant declined, opting not to review the code.
The results are available in the 2025 Cato CTRL Threat Report, published on March 18.