New Research Highlights Vulnerabilities in MLOps Platforms

Security researchers have identified multiple attack scenarios targeting MLOps platforms like Azure Machine Learning (Azure ML), BigML and Google Cloud Vertex AI, among others.
According to a new research article by Security Intelligence, Azure ML can be compromised through device code phishing, where attackers steal access tokens and exfiltrate models stored in the platform. This attack vector exploits weaknesses in identity management, allowing unauthorized access to machine learning (ML) assets.
BigML users face threats from exposed API keys found in public repositories, which could grant unauthorized access to private datasets. API keys often lack expiration policies, making them a persistent risk if not rotated frequently.
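The exposed-key risk described above is one that teams can screen for themselves. Below is a minimal sketch of a repository secret scanner; the regex patterns are illustrative assumptions (the exact shape of a BigML API key is not specified in the research summary), and production use would favor a maintained tool such as gitleaks or truffleHog with curated rules:

```python
import re
from pathlib import Path

# Illustrative patterns only. The "bigml_api_key" rule assumes a long
# hex token assigned to an api_key parameter; real scanners ship
# vetted, provider-specific rules.
PATTERNS = {
    "bigml_api_key": re.compile(r"api_key\s*[=:]\s*['\"]?([0-9a-f]{40,})", re.I),
    "generic_secret": re.compile(r"(?:secret|token|password)\s*[=:]\s*['\"]([^'\"\s]{12,})['\"]", re.I),
}

def scan_text(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_value) pairs found in a blob of text."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(1)))
    return hits

def scan_repo(root: str) -> list[tuple[str, str, str]]:
    """Walk a checkout and report (path, rule, value) for every hit."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for rule, value in scan_text(text):
            findings.append((str(path), rule, value))
    return findings
```

Running a scan like this in CI before each push, combined with frequent rotation, limits how long an accidentally committed key stays usable.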
Google Cloud Vertex AI is vulnerable to attacks combining phishing with privilege escalation, allowing attackers to extract GCloud tokens and access sensitive ML assets. With compromised credentials, attackers can move laterally within an organization's cloud infrastructure.
Read more on machine learning security: New Research Exposes Security Risks in ChatGPT Plugins
Protective Measures
Security experts recommend several protective measures for each platform.
- For Azure ML, best practices include enabling multi-factor authentication (MFA), isolating networks, encrypting data and enforcing role-based access control (RBAC)
- For BigML, users should apply MFA, rotate credentials frequently and implement fine-grained access controls to restrict data exposure
- For Google Cloud Vertex AI, it is advised to follow the principle of least privilege, disable external IP addresses, enable detailed audit logs and minimize service account permissions
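The least-privilege recommendation for Vertex AI can be checked mechanically. The sketch below audits an exported GCP IAM policy (e.g. from `gcloud projects get-iam-policy PROJECT_ID --format=json`) for service accounts bound to overly permissive roles; the set of "broad" roles is an illustrative assumption, not an exhaustive policy:

```python
# Roles treated as overly broad for an ML service account.
# This list is an assumption for illustration; tailor it to your org.
BROAD_ROLES = {
    "roles/owner",
    "roles/editor",
    "roles/iam.serviceAccountTokenCreator",
}

def audit_policy(policy: dict) -> list[str]:
    """Flag service accounts bound to overly permissive roles."""
    findings = []
    for binding in policy.get("bindings", []):
        role = binding.get("role", "")
        if role not in BROAD_ROLES:
            continue
        for member in binding.get("members", []):
            # Only service accounts are flagged; human users are
            # governed by separate review processes.
            if member.startswith("serviceAccount:"):
                findings.append(f"{member} has broad role {role}")
    return findings
```

For example, a policy that grants `roles/editor` to a training service account would produce one finding, while viewer-only bindings pass cleanly.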
As businesses increasingly rely on AI technologies for critical operations, securing MLOps platforms against threats such as data theft, model extraction and dataset poisoning becomes essential. Implementing proactive security configurations can strengthen defenses and safeguard sensitive AI assets from evolving cyber-threats.
Broader Findings
The Security Intelligence research highlighted vulnerabilities impacting a broad range of MLOps platforms, including Amazon SageMaker, JFrog ML (formerly Qwak), Domino Enterprise AI and MLOps Platform, Databricks, DataRobot, W&B (Weights & Biases), Valohai and TrueFoundry.