Did you know that OWASP (the Open Worldwide Application Security Project) has published its Top 10 security risks for LLMs and Generative AI systems? No? Thought not! In this blog, we'll take a look at the OWASP Top 10 LLM Security Risks and consider how prepared enterprises are for these evolving threats.
OWASP
Let's start with OWASP itself. OWASP was set up on December 1st, 2001, and was incorporated as a US non-profit charity on April 21st, 2004. Its remit is to be a global open community that powers secure software through education, tools, and collaboration. Its vision is "no more insecure software." To achieve this, it provides free, open-source information, tools, documentation, and standards for developers and security professionals.
The highest profile project within the overall OWASP umbrella is the OWASP Top 10 Web Application Security Vulnerabilities. The exact contents of the Top 10 have changed over the years as web technology has evolved and as the ways in which threat actors exploit web applications have shifted. The current Top 10 (published in 2021) includes Injection, Broken Access Control, and Cryptographic Failures, and can be found at https://owasp.org/Top10. Take a look at our OWASP Top Ten Web App Security course if you're interested in instructor-led training for your web developers.
OWASP is on track to publish a new 2025 Top 10 very soon - watch this space!
In addition to the Top 10 security issues, OWASP also makes available training resources such as the OWASP Juice Shop and the OWASP ZAP tool. The OWASP Juice Shop is an intentionally vulnerable web application used for security training and demonstrations. The OWASP ZAP tool is an open-source security scanner for finding vulnerabilities in web applications.
OWASP Gen AI Security Project
The OWASP Generative AI Security Project is a global open-source initiative dedicated to identifying, mitigating, and documenting security and safety risks associated with generative AI technologies, including Large Language Models (LLMs), Agentic AI systems, and AI-driven applications.
The project objectives include: risk identification and documentation, AI application security best practices, applied research and community collaboration, education and knowledge sharing, and the provision of guidance on Enterprise AI adoption.
The project was founded in May 2023 and now boasts over 15,000 members in 15+ countries with over 20 publications to its name. These publications include the OWASP Gen AI Solutions Reference Guide and the OWASP Gen AI Cheat Sheet – A Practical Guide for Securely Using Third-Party MCP. Here, MCP stands for Model Context Protocol, which is an open standard used to connect LLMs to external tools and data.

OWASP LLM Top Ten
For the 2023–2024 period, OWASP published its first Top 10 vulnerabilities for LLMs and Generative AI systems. This was revised in 2025 with an updated Top 10.
This revised list highlights the very different security risks and challenges posed by these systems. (This list identifies the ten most critical security risks, not the only ten risks.)
The current Top 10 LLM security risks are:
Prompt Injection: This relates to malicious inputs that trick the LLM into ignoring its original instructions and executing a different, often harmful, command.
Sensitive Information Disclosure: This risk involves the LLM revealing confidential or sensitive data that it was trained on or has access to, which could include personal, proprietary, or other 'secret' information.
Supply Chain: The LLM supply chain, which may be made up of plug-ins, third-party components, or datasets, can be susceptible to vulnerabilities that affect the integrity of the training data, models, and host platforms. This can result in biased outputs, security breaches, or system failures.
Data and Model Poisoning: Data poisoning occurs when pre-training, fine-tuning, or embedding data is manipulated to introduce vulnerabilities, backdoors, or biases.
Improper Output Handling: Failure to properly validate and sanitize the LLM's output can lead to security exploits, such as code execution or data exfiltration in downstream systems.
Excessive Agency: An LLM-based system is often granted a degree of agency by its developer - meaning it is allowed to call functions or interface with other systems via extensions to undertake actions in response to a prompt. Excessive Agency occurs when damaging actions are performed by an LLM in response to unexpected, ambiguous, or manipulated outputs from these external functions or systems.
System Prompt Leakage: In this risk, sensitive instructions that define the LLM's behaviour are exposed to the user through prompt injection or other means, allowing the user to manipulate or bypass security controls.
Vector and Embedding Weaknesses: This security risk relates to exploiting weaknesses in vector databases or embeddings (the numerical representations of data) to manipulate the LLM's understanding and responses.
Misinformation: In this risk, the LLM generates (for whatever reason) false or misleading information, which may be used by a user or another system, potentially leading to legal, ethical, or business implications.
Unbounded Consumption: An LLM can be manipulated to consume excessive resources, leading to a denial-of-service (DoS) style state.

How prepared are enterprises?
The question here is, "How prepared are enterprises for the introduction of LLMs and Generative AI?"
This is an interesting and very pertinent question at this time, not least because many enterprises are still in the throes of the "we need to use AI" stage without truly understanding what that means, how they should be using it, or what advantage it may give them. However, it's the current buzz technology, and everyone is scared of being left behind and missing out!
However, jumping on the Gen AI or LLM bandwagon is one thing; doing so in a way that won't harm reputation, negatively impact current business, or negatively affect profitability, while avoiding any legal repercussions, may be another.
For example, if your wonderful Generative AI system provides erroneous information or conclusions and an individual or an organization as a whole acts on it, who is liable? Racist AI systems, for instance, are a well-documented problem, with not all of them arising from intentional bias or overt racism. Rather, the data used to train such systems may reflect existing historical or societal biases, stereotypes, or inequalities. However, the effect of this may be to target or marginalize certain ethnic groups (depending on the application purpose). While this may be manageable if the system merely advises a human user, if that system automates a process, say a recruitment process, it may leave the organization open to accusations of racism.
More broadly, these so-called 'intelligent agent' systems are often deployed in a wide variety of different roles, often with public-facing interfaces; think of the classic chatbot providing help and advice on a website. Such a chatbot might be based on a system such as ChatGPT but have access to internal corporate systems and databases. This allows the chatbot to respond in an intelligent, and organization-specific, manner.
Herein lies the problem: the chatbot has access to these internal systems. As illustrated by the OWASP LLM Top Ten, this provides several attack vectors for hackers. If these systems were being accessed by human users, a plethora of different verification, validation, and other checks would be performed to ensure only those who should have access to these systems and the information they contain are granted access.
However, with the rapid deployment of this new and often poorly understood (by the host organization) technology, the same security measures are often not applied or are poorly organized or configured. This is due, in part, to a lack of understanding of the implications and security risks associated with these 'wonder' systems.
To ensure that an intelligent agent-based system is safe and secure involves explicit management of data, resources (including potentially numerous plug-ins), system access, and validation and verification of those systems and data. This is non-trivial; it takes a lot of investment in infrastructure, systems, planning, organization, and understanding. However, the pressure to get to the market 'now' with these systems is often so high that time isn't spent understanding their (security) requirements, leaving enterprises under the illusion that their systems are safe and secure because they have followed 'traditional' web application principles.
So, to go back to the question at the start of this section - "How prepared are enterprises for the introduction of LLMs and Generative AI?" - the short answer, in the main, is not enough! And worse, many do not realize this yet.
Summary
Generative AI and LLMs offer many enterprises new and exciting opportunities; however, these opportunities come with their own security risks and challenges.
Being aware of these potential risks is essential for any organization wishing to deploy such systems, and the OWASP LLM Top Ten is a perfect starting point.
However, careful consideration and planning are required to determine how and in what way to integrate these systems within an organization's existing infrastructure, data, and both in-house and external systems.
