Uncensored AI Tool Raises Cybersecurity Alarms

A new AI chatbot called Venice.ai has gained popularity in underground hacking forums due to its lack of content restrictions.
According to a recent investigation by security firm Certo, the platform offers subscribers uncensored access to advanced language models for just $18 a month, significantly undercutting dark web AI tools such as WormGPT and FraudGPT, which typically sell for hundreds or even thousands of dollars.
What sets Venice.ai apart is its minimal oversight. The platform stores chat histories only in users’ browsers, not on external servers, and markets itself as “private and permissionless.”
This privacy-focused design, combined with the ability to disable remaining safety filters, is reportedly proving especially attractive to cybercriminals.
Unlike mainstream tools such as ChatGPT, Venice.ai can reportedly generate phishing emails, malware and spyware code on demand.
In testing, Certo said it successfully prompted the chatbot to create realistic scam messages and fully functional ransomware. It even generated an Android spyware app capable of recording audio without user knowledge – behavior that most AI platforms would reject outright.
Advanced Threat Capabilities with Minimal Effort
Certo’s findings suggest that Venice.ai goes further than simply ignoring harmful queries. It appears to have been configured to override ethical constraints altogether.
In one example, the chatbot reasoned through an illegal prompt, explicitly acknowledged its malicious nature, and proceeded anyway. The generated output included:
- C# keyloggers designed for stealth
- Python-based ransomware with file encryption and ransom notes
- Android spyware complete with boot-time activation and audio uploads
To address the threat, experts are advocating a multi-pronged approach:
- Embedding stronger safeguards into AI models to prevent misuse
- Developing detection tools capable of identifying AI-generated threats
- Implementing regulatory frameworks to hold providers accountable
- Expanding public education to help individuals recognize and respond to AI-enabled fraud
Certo’s report highlights a growing challenge: as AI tools become more powerful and easier to access, so does their potential for misuse.
Venice.ai is the latest reminder that without robust checks, the same technology that fuels innovation can also fuel cybercrime.