About the course
AI-powered applications bring new security challenges. In this intensive course, participants will move beyond theory and gain practical experience with the most significant vulnerabilities affecting Large Language Models (LLMs).
Through a series of hands-on labs, you will explore and mitigate the Top Ten vulnerabilities identified by OWASP, focusing on the five that pose the greatest risks in real-world deployments. The remaining five will be covered through guided demonstrations and case studies, ensuring complete coverage without sacrificing lab depth.
You will leave with a practical toolkit for building more secure and resilient LLM-powered systems.
Instructor-led online and in-house face-to-face options are available - as part of a wider customised training programme, or as a standalone workshop, on-site at your offices or at one of many flexible meeting spaces in the UK and around the World.
-
- Understand all ten OWASP LLM vulnerabilities
- Gain practical experience exploiting and mitigating the five most critical risks
- Learn to assess your own LLM-driven systems for weaknesses
- Build a security toolkit and mitigation strategies to apply immediately
-
This course is designed for:
Developers incorporating LLMs into products
Security professionals responsible for testing or securing AI systems
Technical project managers overseeing LLM-driven applications
-
Delegates will benefit from this course most if they have
A foundational understanding of web development and programming concepts
Basic familiarity with API interactions (e.g. using curl or Python requests)
We can customise the training to match your team's experience and needs though - with more time and coverage of fundamentals for new developers, for instance.
-
This LLM OWASP top ten course is available for private / custom delivery for your team - as an in-house face-to-face workshop at your location of choice, or as online instructor-led training via MS Teams (or your own preferred platform).
Get in touch to find out how we can deliver tailored training which focuses on your project requirements and learning goals.
-
Each of the course modules includes:
overview
real-world examples
hands-on exploit
mitigation strategies
Prompt Injection (LLM01)
How attackers manipulate LLMs with malicious prompts
Lab: bypassing a secure chat application
Mitigation: input validation, sanitisation, user warnings
Insecure Output Handling (LLM02)
How unsafe outputs lead to command execution or denial of service
Lab: generating malicious outputs and observing effects
Mitigation: sandboxing, escaping, strict output validation
Training Data Poisoning (LLM03)
The risks of poisoned or biased datasets
Lab: identifying subtle data poisoning in a sample dataset
Mitigation: provenance checks, integrity validation
Model Denial of Service (LLM04)
Using resource-intensive prompts to degrade service or drive up costs
Lab: crafting recursive prompts to exhaust resources
Mitigation: rate limiting, cost monitoring, prompt complexity analysis
Supply Chain Vulnerabilities (LLM05)
Risks of third-party models, libraries, and datasets
Lab: auditing a manifest for insecure dependencies
Mitigation: SBOMs, vendor due diligence, integrity verification
Trusted by