Malicious Machine Learning Model Attack Discovered on PyPI

A new campaign exploiting machine learning (ML) models via the Python Package Index (PyPI) has been observed by cybersecurity researchers.
ReversingLabs said threat actors are using the Pickle file format to conceal malware inside seemingly legitimate AI-related software packages.
In this recent incident, attackers published three deceptive packages: aliyun-ai-labs-snippets-sdk, ai-labs-snippets-sdk and aliyun-ai-labs-sdk. All three claimed to offer a Python SDK for Alibaba’s AI services.
These packages, however, contained no functional code related to AI. Instead, they deployed an infostealer payload embedded within PyTorch models, which are essentially zipped Pickle files.
Upon installation, the payload was activated by the package’s initialization script.
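To see why a model file makes a convenient hiding place, here is a minimal sketch (the file name and saved dictionary are illustrative, not taken from the campaign) showing that a torch.save() checkpoint is an ordinary zip archive whose object graph lives in an embedded Pickle stream:

```python
import zipfile

import torch

# Save a trivial checkpoint; "model.pt" is a placeholder name.
torch.save({"weights": torch.zeros(2, 2)}, "model.pt")

# The .pt file is a plain zip archive; the Pickle stream typically appears
# as <archive_name>/data.pkl (exact entry names vary by PyTorch version).
with zipfile.ZipFile("model.pt") as archive:
    print(archive.namelist())
```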
The malware was designed to extract:
- User and network information
- The target machine’s organizational affiliation
- Contents of the .gitconfig file
Notably, the malicious models also attempted to identify developers associated with the Chinese video conferencing tool AliMeeting, suggesting a regional focus.
PyTorch and Pickle: A Dangerous Combination
According to ReversingLabs, this incident highlights the growing threat posed by the misuse of ML model formats.
Pickle serializes not just data but instructions for reconstructing Python objects, which means deserializing an untrusted Pickle stream can execute arbitrary code. As a result, the format has become a preferred vector for attackers aiming to bypass traditional security controls. Two of the three identified packages used this method to deliver fully functional malware.
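The mechanism is Pickle’s object reconstruction hook: a class can define __reduce__ so that unpickling calls an attacker-chosen function. Below is a minimal, harmless sketch of the general technique (an illustration of how Pickle abuse works, not the actual campaign payload):

```python
import os
import pickle


class Payload:
    # Illustrative only: any class can hijack deserialization via __reduce__.
    def __reduce__(self):
        # Tells pickle: "to rebuild this object, call os.system(...)".
        # A real infostealer would collect and exfiltrate data here instead.
        return (os.system, ("echo code executed during unpickling",))


blob = pickle.dumps(Payload())
pickle.loads(blob)  # the command runs as a side effect of loading the data
```

Because loading a PyTorch model ultimately unpickles the embedded stream (unless loading is restricted, as discussed below), the same trick fires when a victim loads a poisoned model.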
The researchers believe ML model formats appeal to attackers because many security tools do not yet support robust detection of malicious behavior embedded within such files.
“Security tools are at a primitive level when it comes to malicious ML model detection,” said Karlo Zanki, a reverse engineer at ReversingLabs.
“Legacy security tooling is currently lacking this required functionality.”
The malicious packages were available on PyPI only briefly, but were downloaded approximately 1,600 times before removal.
While the exact method used to lure users remains unclear, social engineering or phishing is suspected.
As AI and ML tools become central to software development, this attack underscores the need for stricter validation and zero-trust principles in handling ML artifacts.
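One concrete zero-trust measure is to refuse arbitrary object deserialization when loading model weights. A minimal sketch, assuming a reasonably recent PyTorch (the weights_only flag was added in 1.13 and is the default in current releases); the file path is a placeholder:

```python
import torch

# weights_only=True restricts unpickling to an allowlist of tensor-related
# types and raises on anything else, such as the os.system payload above.
state_dict = torch.load("model.pt", weights_only=True)
```

Pickle-free formats such as safetensors go further, storing raw tensor data with no executable deserialization step at all.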
Photo credits: sdx15/Shutterstock