- La colaboración entre Seguridad y FinOps puede generar beneficios ocultos en la nube
- El papel del CIO en 2024: una retrospectiva del año en clave TI
- How control rooms help organizations and security management
- ITDM 2025 전망 | “효율경영 시대의 핵심 동력 ‘데이터 조직’··· 내년도 활약 무대 더 커진다” 쏘카 김상우 본부장
- 세일포인트 기고 | 2025년을 맞이하며… 머신 아이덴티티의 부상이 울리는 경종
Stolen GenAI Accounts Flood Dark Web With 400 Daily Listings
Cybercriminals have been observed capitalizing on the growing use of Generative AI (GenAI) platforms by selling stolen account credentials on underground markets.
According to recent findings by eSentire’s Threat Response Unit (TRU), each day, approximately 400 GenAI account credentials are being sold on dark web platforms, including credentials for GPT, Quillbot, Notion, HuggingFace and Replit.
These credentials are often obtained from corporate users’ computers infected with infostealer malware, which harvests all data entered into a user’s internet browser.
A notable underground market, previously known as LLM Paradise, specialized in selling stolen GPT-4 and Claude API keys.
Despite its recent closure, it highlighted the extent of the problem, with stolen keys being advertised for as little as $15. The closure of LLM Paradise did not stop the trend – cybercriminals continue to exploit stolen GenAI credentials to create phishing campaigns, develop malware and produce malicious chatbots.
The potential impact on corporate data is significant, eSentire warned. Stolen GenAI credentials can provide access to sensitive company information, including customer data, financial records, intellectual property and employee personal identifiable information (PII).
Moreover, attackers targeting GenAI platform providers can access a wealth of data from their corporate subscribers, exacerbating the threat landscape.
Read more on AI and cybersecurity: RSA eBook Details How AI will Transform Cybersecurity in 2024
eSentire also identified several critical threats associated with GenAI in their report. These include LLM jacking, the abuse of stolen credentials for cybercrime and prompt injection attacks to bypass platform guardrails.
The report also noted aggressive data collection practices and supply chain risks as critical concerns. An example cited was OpenAI, which experienced a breach in 2023, underscoring the vulnerabilities inherent in these platforms. OpenAI credentials are the most commonly stolen and sold, with an average of 200 accounts being listed daily.
To mitigate these risks, companies must implement robust security measures, such as usage monitoring, advanced multi-factor authentication (MFA) methods and dark web monitoring services.