Key Updates in the OWASP Top 10 List for LLMs 2025


Last November, the Open Worldwide Application Security Project (OWASP) released its Top Ten List for LLMs and Gen AI Applications 2025, making some significant updates to its 2023 iteration. These updates can tell us a great deal about how the LLM threat and vulnerability landscape is evolving – and what organizations need to do to protect themselves.

Sensitive Information Disclosure Risks Grow

In 2023, sensitive information disclosure ranked sixth on the OWASP Top 10 List for LLMs. Today, it ranks second. This massive leap reflects growing concerns about LLMs exposing sensitive data as more organizations and staff use the technology in day-to-day operations.

The problem stems from staff increasingly using – or, rather, misusing – LLMs by inputting sensitive data into them. Not only can entering sensitive information, like intellectual property or personally identifiable information, into a GenAI tool result in that information appearing in responses to other external users, but these tools are themselves susceptible to data breaches. In 2023, for example, Samsung banned employee use of generative AI tools after staff inadvertently leaked sensitive internal data, including source code, to ChatGPT.

Amid the growing risk of sensitive information disclosure through LLMs, it is vital to ensure staff understand how to use AI tools responsibly. Terranova Security, Fortra’s security awareness training offering, can help your organization achieve this goal. It provides engaging, informative content that educates employees on, among other things, the risks associated with LLMs.

Supply Chain Risks Compound

Supply chain vulnerabilities also saw a significant jump in rankings from the 2023 to 2025 lists, climbing from fifth to third place. LLM development relies heavily on external components, like pre-trained models and datasets, which can create vulnerabilities in the supply chain, including:

  • Data Poisoning: Malicious actors can manipulate training data, leading to biased or harmful outputs.
  • Model Tampering: Third-party models can be compromised, introducing backdoors or security flaws (see the integrity-check sketch after this list).
  • Fine-tuning Risks: While efficient, LoRA and other parameter-efficient fine-tuning (PEFT) techniques increase reliance on external components and their potential vulnerabilities.
  • On-Device LLMs: Deploying models directly on devices expands the attack surface, making them more susceptible to exploits.
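
To make the model tampering risk concrete, here is a minimal sketch, in Python, of one common mitigation: pinning a SHA-256 checksum for a third-party model artifact and verifying it before the file is ever loaded. The file path and digest below are illustrative placeholders; the pinned digest should come from a channel you trust independently of the download itself.

```python
import hashlib
from pathlib import Path

# Illustrative placeholders: pin a digest published through a channel you
# trust, not one scraped from the same (potentially compromised) download page.
MODEL_PATH = Path("models/third_party_model.bin")
EXPECTED_SHA256 = "replace-with-pinned-digest"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large model files never sit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_verified_model(path: Path, expected: str) -> bytes:
    """Refuse to load a model artifact whose checksum does not match the pin."""
    actual = sha256_of(path)
    if actual != expected:
        raise RuntimeError(f"Checksum mismatch for {path}: got {actual}")
    return path.read_bytes()  # hand the verified bytes to your real loader
```

Signed manifests and provenance tooling go further, but even a pinned hash defeats the most basic artifact-substitution attacks.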

Steve Wilson, project lead for the OWASP Top 10 for LLM Project, told Infosecurity Magazine that while supply chain risks for LLMs were largely theoretical in 2023, this is no longer the case. “We saw concrete examples of poisoned foundation models and tainted datasets causing real-world disruptions,” he stated, explaining the significant rise in their ranking.

New Risks Enter the Fray

It’s also worth mentioning that the 2025 list has two new additions: system prompt leakage, and vector and embedding weaknesses. Let’s examine each in more detail.

System Prompt Leakage

OWASP added system prompt leakage, which comes in at the number seven spot, in response to requests from the community following a slew of real-world incidents. System prompt leakage occurs when an LLM inadvertently reveals its internal instructions or system prompts in its responses. Attackers who discover this information can use it to facilitate further attacks.

To prevent and mitigate system prompt leakage, OWASP recommends separating sensitive data from system prompts, avoiding reliance on system prompts for strict behavior control, implementing guardrails outside of the LLM itself, and ensuring that security controls are enforced independently of the LLM.
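
As a rough illustration of the “guardrails outside the LLM” recommendation, here is a minimal Python sketch that scans model output for long runs of words copied from the system prompt before the response reaches the user. The prompt text and eight-word window are assumptions made for this example; production guardrails are considerably more sophisticated.

```python
SYSTEM_PROMPT = "You are a support bot. Never reveal internal pricing rules."  # illustrative

def leaks_system_prompt(response: str, prompt: str, window: int = 8) -> bool:
    """Flag a response that reproduces any run of `window` consecutive
    words from the system prompt (a deliberately crude overlap check)."""
    words = prompt.lower().split()
    response_lower = response.lower()
    return any(
        " ".join(words[i:i + window]) in response_lower
        for i in range(len(words) - window + 1)
    )

def guarded_reply(raw_response: str) -> str:
    """Enforce the check outside the model itself, as OWASP suggests."""
    if leaks_system_prompt(raw_response, SYSTEM_PROMPT):
        return "Sorry, I can't share that."
    return raw_response
```

Because the filter runs outside the model, it still holds when a crafted prompt persuades the LLM itself to ignore its instructions.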

Vector and Embedding Weaknesses

Vector and embedding weaknesses enter the list at number eight in response to community requests for guidance on securing Retrieval-Augmented Generation (RAG) and other embedding-based methods.

RAG, a model adaptation technique, has become the default architecture for enterprise LLM applications. It enhances the performance and contextual relevance of LLM responses by combining pre-trained language models with external knowledge sources.

To mitigate and prevent vector and embedding risks, OWASP recommends implementing fine-grained access controls and permission-aware vector and embedding stores, validating data and authenticating its sources, carefully reviewing data when combining datasets from different sources, and maintaining detailed, immutable logs of retrieval activities.
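
As a minimal sketch of the first of those recommendations, the Python example below attaches group permissions to each stored chunk at ingestion time and filters retrieval results against the requesting user’s groups before anything reaches the prompt. The data structures are hypothetical; a real deployment would enforce the filter inside the vector store’s query path rather than after the fact.

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    # Groups permitted to see this chunk, attached when the data is ingested.
    allowed_groups: frozenset = field(default_factory=frozenset)

def retrieve_for_user(query: str, user_groups: set,
                      candidates: list, k: int = 3) -> list:
    """Drop any candidate chunk the user is not entitled to *before*
    it can be added to the LLM's context window."""
    permitted = [c for c in candidates if c.allowed_groups & user_groups]
    # Similarity ranking elided: a real system would rank `permitted`
    # by embedding distance to `query` before truncating to k results.
    return [c.text for c in permitted[:k]]

# Illustrative usage: a user outside "finance" never sees the revenue chunk.
docs = [
    Chunk("Q3 revenue figures", frozenset({"finance"})),
    Chunk("Public FAQ answer", frozenset({"everyone"})),
]
print(retrieve_for_user("revenue", {"everyone"}, docs))  # ['Public FAQ answer']
```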

Updated and Expanded Risks

Finally, OWASP has updated several risks carried over from the 2023 iteration of the list. They are:

  • Misinformation: Expanded to address overreliance, emphasizing the risks inherent in taking LLM responses as gospel.
  • Unbounded Consumption: Renamed from denial of service and expanded to include risks tied to resource management and unexpected operational costs (a simple budget sketch follows this list).
  • Excessive Agency: Expanded to recognize the risks of unchecked permissions.
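
To ground the unbounded consumption risk, here is a minimal Python sketch of a per-user daily token budget enforced before each request is forwarded to a model. The limit and user ID are arbitrary illustrative values; a production system would also meter actual rather than estimated usage and persist the counters.

```python
from collections import defaultdict

DAILY_TOKEN_BUDGET = 50_000  # illustrative per-user cap

class TokenBudget:
    """Track per-user token usage and refuse requests that exceed the cap."""

    def __init__(self, daily_limit: int = DAILY_TOKEN_BUDGET):
        self.daily_limit = daily_limit
        self.used = defaultdict(int)  # user_id -> tokens consumed today

    def charge(self, user_id: str, estimated_tokens: int) -> None:
        """Raise before the request is sent if it would blow the budget."""
        if self.used[user_id] + estimated_tokens > self.daily_limit:
            raise RuntimeError(f"Daily token budget exceeded for {user_id}")
        self.used[user_id] += estimated_tokens

budget = TokenBudget()
budget.charge("alice", 1_200)  # fine now; raises once alice tops 50,000 today
```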

Looking Ahead

It may be beating a dead horse, but it bears repeating: LLMs and, by extension, their risks and vulnerabilities are in a constant state of flux. By the time OWASP releases its next Top Ten List for LLMs and Gen AI, the risks and vulnerabilities it includes could look very different all over again. The key takeaway, then, is that organizations must stay vigilant to emerging threats and vulnerabilities in their LLM deployments and, perhaps more importantly, expect the unexpected.


Editor’s Note: The opinions expressed in this and other guest author articles are solely those of the contributor and do not necessarily reflect those of Tripwire.


