Russian Hackers Try to Bypass ChatGPT’s Restrictions For Malicious Purposes
Russian cyber-criminals have been observed on dark web forums trying to bypass OpenAI’s API restrictions to gain access to the ChatGPT chatbot for nefarious purposes.
Various individuals have been observed, for instance, discussing how to use stolen payment cards to pay for upgraded OpenAI accounts (thus circumventing the limitations of free accounts).
Others have created blog posts on how to bypass OpenAI's geo-restrictions, while others still have published tutorials explaining how to use semi-legal online SMS services to register for ChatGPT.
“Generally, there are a lot of tutorials in Russian semi-legal online SMS services on how to use it to register to ChatGPT, and we have examples that it is already being used,” wrote Check Point Research (CPR), which shared the findings with Infosecurity ahead of publication.
“It is not extremely difficult to bypass OpenAI’s restricting measures for specific countries to access ChatGPT,” said Sergey Shykevich, threat intelligence group manager at Check Point Software Technologies.
“Right now, we are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes.”
The executive added that CPR believes these hackers are most likely trying to implement and test ChatGPT in their day-to-day criminal operations.
“Cyber-criminals are growing more and more interested in ChatGPT because the AI technology behind it can make a hacker more cost-efficient,” Shykevich explained.
Case in point, just last week, Check Point Research published a separate advisory highlighting how threat actors had already created malicious tools using ChatGPT. These included infostealers, multi-layer encryption tools and dark web marketplace scripts.
More generally, the cybersecurity firm is not alone in believing ChatGPT could democratize cybercrime: various experts have warned that the AI bot could be used by would-be cyber-criminals to learn how to craft attacks and even write ransomware.