New ChatGPT Attack Technique Spreads Malicious Packages
A new cyber-attack technique using the OpenAI language model ChatGPT has emerged, allowing attackers to spread malicious packages in developers’ environments.
Vulcan Cyber’s Voyager18 research team described the discovery in an advisory published today.
“We’ve seen ChatGPT generate URLs, references and even code libraries and functions that do not actually exist. These large language model (LLM) hallucinations have been reported before and may be the result of old training data,” explains the technical write-up by researcher Bar Lanyado and contributors Ortal Keizman and Yair Divinsky.
By leveraging ChatGPT’s code generation capabilities, attackers can exploit these fabricated library names to distribute malicious packages, sidestepping conventional techniques such as typosquatting or masquerading.
In particular, Lanyado said the team identified a new malicious package spreading technique they called “AI package hallucination.”
The technique involves posing a question to ChatGPT, requesting a package to solve a coding problem, and receiving multiple package recommendations, including some not published in legitimate repositories.
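The query step looks roughly like the sketch below, which uses the official openai npm client (v4 API); the prompt text and function name are illustrative, not taken from Vulcan Cyber’s PoC.

```typescript
// Minimal sketch: ask ChatGPT for package recommendations via the
// official openai npm client (v4). The prompt and helper name are
// illustrative assumptions, not the researchers' exact query.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function askForPackages(problem: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [
      { role: "user", content: `Which npm packages can I use to ${problem}?` },
    ],
  });
  // The reply may name packages that exist in no registry at all --
  // the "hallucinations" the researchers describe.
  return response.choices[0].message.content ?? "";
}

askForPackages("query an ArangoDB database").then(console.log);
```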
By publishing malicious packages under these non-existent names, attackers can deceive future users who rely on ChatGPT’s recommendations. A proof of concept (PoC) using ChatGPT 3.5 illustrates the risks involved.
“In the PoC, we will see a conversation between an attacker and ChatGPT, using the API, where ChatGPT will suggest an unpublished npm package named arangodb,” the Vulcan Cyber team explained.
“Following this, the simulated attacker will publish a malicious package to the NPM repository to set a trap for an unsuspecting user.”
Next, the PoC shows a conversation in which a user asks ChatGPT the same question and the model again suggests the initially non-existent package. By this point, however, the attacker has published a malicious package under that name.
“Finally, the user installs the package, and the malicious code can execute.”
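The advisory does not spell out how the code executes at install time, but on npm the usual vector is a lifecycle script: npm runs hooks such as postinstall automatically when a dependency is installed (unless --ignore-scripts is set). A minimal, harmless manifest illustrating the mechanism, assuming the attacker squats the hallucinated name:

```json
{
  "name": "arangodb",
  "version": "1.0.0",
  "description": "Sketch only: npm runs the postinstall hook automatically on install",
  "scripts": {
    "postinstall": "node -e \"console.log('code runs at install time')\""
  }
}
```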
Detecting AI package hallucinations can be challenging as threat actors employ obfuscation techniques and create functional trojan packages, according to the advisory.
To mitigate the risks, developers should carefully vet libraries by checking factors such as creation date, download count, comments and attached notes. Remaining cautious and skeptical of suspicious packages is also crucial in maintaining software security.
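Several of these checks can be automated against npm’s public registry and downloads APIs. A rough sketch, assuming Node 18+ for the global fetch; the 1,000-download threshold is an arbitrary example, not a Vulcan Cyber recommendation:

```typescript
// Vetting sketch: pull a package's creation date from the public npm
// registry and its recent download count from npm's downloads API.
async function vetPackage(name: string): Promise<void> {
  const meta = await fetch(`https://registry.npmjs.org/${name}`);
  if (meta.status === 404) {
    console.log(`${name}: not in the registry -- likely a hallucination`);
    return;
  }
  const doc = await meta.json();
  const created: string = doc.time?.created ?? "unknown";

  const dl = await fetch(
    `https://api.npmjs.org/downloads/point/last-month/${name}`
  );
  const downloads = dl.ok ? (await dl.json()).downloads : 0;

  console.log(`${name}: created ${created}, ${downloads} downloads last month`);
  if (downloads < 1000) {
    // Arbitrary example threshold -- tune to your own risk tolerance.
    console.log(`${name}: low download count -- inspect before installing`);
  }
}

vetPackage("arangodb");
```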
The Vulcan Cyber advisory comes a few months after OpenAI revealed a ChatGPT vulnerability that may have exposed payment-related information of some customers.