- Trump taps Sriram Krishnan for AI advisor role amid strategic shift in tech policy
- 5 network automation startups to watch
- 4 Security Controls Keeping Up with the Evolution of IT Environments
- ICO Warns of Festive Mobile Phone Privacy Snafu
- La colaboración entre Seguridad y FinOps puede generar beneficios ocultos en la nube
Stolen GenAI Accounts Flood Dark Web With 400 Daily Listings
Cybercriminals have been observed capitalizing on the growing use of Generative AI (GenAI) platforms by selling stolen account credentials on underground markets.
According to recent findings by eSentire’s Threat Response Unit (TRU), each day, approximately 400 GenAI account credentials are being sold on dark web platforms, including credentials for GPT, Quillbot, Notion, HuggingFace and Replit.
These credentials are often obtained from corporate users’ computers infected with infostealer malware, which harvests all data entered into a user’s internet browser.
A notable underground market, previously known as LLM Paradise, specialized in selling stolen GPT-4 and Claude API keys.
Despite its recent closure, it highlighted the extent of the problem, with stolen keys being advertised for as little as $15. The closure of LLM Paradise did not stop the trend – cybercriminals continue to exploit stolen GenAI credentials to create phishing campaigns, develop malware and produce malicious chatbots.
The potential impact on corporate data is significant, eSentire warned. Stolen GenAI credentials can provide access to sensitive company information, including customer data, financial records, intellectual property and employee personal identifiable information (PII).
Moreover, attackers targeting GenAI platform providers can access a wealth of data from their corporate subscribers, exacerbating the threat landscape.
Read more on AI and cybersecurity: RSA eBook Details How AI will Transform Cybersecurity in 2024
eSentire also identified several critical threats associated with GenAI in their report. These include LLM jacking, the abuse of stolen credentials for cybercrime and prompt injection attacks to bypass platform guardrails.
The report also noted aggressive data collection practices and supply chain risks as critical concerns. An example cited was OpenAI, which experienced a breach in 2023, underscoring the vulnerabilities inherent in these platforms. OpenAI credentials are the most commonly stolen and sold, with an average of 200 accounts being listed daily.
To mitigate these risks, companies must implement robust security measures, such as usage monitoring, advanced multi-factor authentication (MFA) methods and dark web monitoring services.