Cybercriminals Hesitant About Using Generative AI
Cybercriminals are so far reluctant to use generative AI to launch attacks, according to new research by Sophos.
Examining four prominent dark-web forums for discussions related to large language models (LLMs), the firm found that threat actors showed little interest in using these tools, and even expressed concerns about the wider risks they pose.
In two of the forums included in the research, just 100 posts on AI were found, compared with 1,000 posts related to cryptocurrency over the same period.
The researchers revealed that the majority of LLM-related posts concerned compromised ChatGPT accounts for sale and ways to circumvent the protections built into LLMs, known as ‘jailbreaks.’
Additionally, they observed ten ChatGPT derivatives whose creators claimed they could be used to launch cyber-attacks and develop malware. However, Sophos X-Ops said that cybercriminals had mixed reactions to these derivatives, with many expressing concern that the creators of the ChatGPT imitators were trying to scam them.
The researchers added that many of the attempts to create malware or attack tools using LLMs were “rudimentary” and often met with skepticism from other users. For example, one threat actor inadvertently revealed information about their real identity while showcasing ChatGPT’s potential. Many users also had cybercrime-specific concerns about LLM-generated code, including operational security worries and the risk of detection by antivirus and endpoint detection and response (AV/EDR) tools.
There were even numerous ‘thought pieces’ posted on the forums about the negative effects of AI on society.
Christopher Budd, director of X-Ops research at Sophos, noted: “At least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us.”
He added: “While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more skeptical than enthused.”
Preparing for the Proliferation of AI-Based Threats
Despite cybercriminals’ current reluctance to use AI tools, Sophos published separate research demonstrating that LLMs can be used to conduct fraud on a “massive scale” with minimal technical skills.
Using LLM tools like GPT-4, the team built a fully functioning e-commerce website with AI-generated images, audio and product descriptions. It also contained a fake Facebook login page and a fake checkout page designed to steal users’ login credentials and credit card details.
Sophos X-Ops said it was able to create hundreds of similar websites in seconds at the push of a single button.
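To put the “minimal technical skills” claim in concrete terms, the sketch below shows how little code it takes to batch-generate storefront copy with an off-the-shelf LLM API. This is an illustrative assumption on our part, not the Sophos tooling: the model name, prompt wording and product list are invented for the example, and it deliberately generates only ordinary product descriptions.

```python
# Minimal sketch (not the Sophos tooling): batch-generating e-commerce copy
# with the OpenAI Python SDK. The model name, prompts and product list are
# illustrative assumptions invented for this example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

products = ["wireless earbuds", "smart watch", "portable charger"]

for product in products:
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model; the research used GPT-4
        messages=[
            {"role": "system",
             "content": "You write short e-commerce product descriptions."},
            {"role": "user",
             "content": f"Write a two-sentence description for: {product}"},
        ],
    )
    # One API call per product yields ready-to-publish copy; looping over a
    # whole catalogue (or many sites) is what makes the scale trivial.
    print(product, "->", response.choices[0].message.content)
```

A few dozen lines along these lines, wrapped in a site-generation loop, are broadly what makes the “hundreds of websites in seconds” claim plausible.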
The firm explained the research was conducted to help prepare for AI-based threats of this nature before they proliferate.
“If an AI technology exists that can create complete, automated threats, people will eventually use it. We have already seen the integration of generative AI elements in classic scams, such as AI-generated text or photographs to lure victims,” explained Ben Gelman, senior data scientist at Sophos.