About the course
Large Language Models (LLMs) are transforming how we work and interact with technology, but harnessing their full capabilities depends on our ability to communicate with them effectively. Prompt Engineering is the essential skill of designing and refining inputs to guide LLMs towards generating accurate, relevant, and desired outputs. This 2-day training course is designed to provide participants with the foundational knowledge and practical techniques needed to interact with LLMs effectively, whether using web-based interfaces or programmatically via APIs.
The course begins by exploring what LLMs are, their underlying concepts, and the landscape of available models and providers. You will gain a crucial understanding of key LLM parameters like tokens and temperature that control output, and critically, learn about the current limitations such as hallucination and bias that affect how we interact with these models. A core focus is placed on mastering practical prompt engineering techniques, including zero-shot, few-shot, Chain-of-Thought, and role prompting, along with learning how to structure prompts effectively for clarity and control. Hands-on labs throughout the course will provide practical experience applying these techniques to various real-world tasks.
Furthermore, the workshop introduces the basics of interacting with LLMs via APIs, covering setup considerations, making simple API calls, and handling responses, enabling participants to understand how LLMs can be integrated into applications. The course also covers practical use cases for prompt engineering in personal and code productivity, briefly introduces related concepts such as RAG and Function Calling, and concludes with a look at the rapidly evolving AI landscape, including vital ethical considerations and responsible AI interaction practices. By the end of this 2-day course, you will be equipped with practical prompt engineering skills and a solid understanding of effective and responsible LLM interaction.
Instructor-led online and in-house face-to-face options are available - as part of a wider customised training programme, or as a standalone workshop, on-site at your offices or at one of many flexible meeting spaces in the UK and around the world.
-
- Explain what Large Language Models (LLMs) are and why Prompt Engineering is essential for effective interaction.
- Understand key LLM parameters (tokens, temperature, etc.) and how they influence output, as well as common LLM limitations.
- Apply core Prompt Engineering techniques, including zero-shot, few-shot, Chain-of-Thought, and role prompting, to guide LLM responses.
- Structure prompts effectively using roles, delimiters, and clear instructions.
- Perform basic interactions with LLMs programmatically via APIs.
- Understand the concepts of Function Calling/Tool Use and Retrieval-Augmented Generation (RAG) in the context of LLM applications (overview).
- Apply Prompt Engineering techniques to practical personal and code productivity use cases.
- Identify common LLM limitations and ethical considerations related to prompting and output.
- Understand the rapid pace of development and future trends in the AI and LLM landscape.
-
This 2-day Prompt Engineering and LLM training course is designed for anyone who wants to leverage the power of Large Language Models effectively, whether through user interfaces or by integrating them into applications and workflows. It is ideal for:
- Developers, data scientists, and engineers looking to understand how to interact with LLMs via APIs and incorporate them into their work.
- Analysts, researchers, and knowledge workers seeking to improve their productivity using LLMs for tasks like summarisation, drafting, and ideation.
- Content creators, marketers, and communicators interested in using LLMs for generating and refining text outputs.
- Project managers, team leads, and decision-makers who need to understand the capabilities, limitations, and practical applications of LLMs.
- Anyone interested in gaining practical skills to communicate effectively with AI models.
No prior experience with Large Language Models or Prompt Engineering is required, though basic computer literacy and the ability to use web interfaces are assumed.
-
Participants should have:
- Basic computer literacy and experience using web browsers and online tools.
- While not strictly required, a basic understanding of programming concepts may be beneficial for the module covering LLM APIs and programmatic interaction examples.
- No prior experience with Large Language Models or Prompt Engineering is necessary.
-
This Prompt Engineering course is available for private / custom delivery for your team - as an in-house face-to-face workshop at your location of choice, or as online instructor-led training via MS Teams (or your own preferred platform).
Get in touch to find out how we can deliver tailored training which focuses on your project requirements and learning goals.
-
Introduction to LLMs & Prompt Engineering
What LLMs are: a brief description and their origins.
Interacting with LLMs (UIs, APIs, landscape overview).
Why Prompt Engineering Matters: Effective communication, unlocking capabilities.
Hands-On Lab: Exploring different LLM UIs and their capabilities.
Understanding LLM Parameters & Limitations
Tokens, context windows, and managing input/output length.
Temperature and other parameters (top-p, penalties) for controlling output style.
Understanding LLM Limitations: Hallucinations, bias, knowledge cut-off, factual accuracy.
Hands-On Lab: Experimenting with parameters to influence output.
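To give a feel for what the temperature parameter does under the hood, the following sketch applies temperature-scaled softmax to a few hypothetical token scores (logits). This is an illustration of the general sampling principle, not any particular provider's implementation; the logit values are made up.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into probabilities, scaled by temperature."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate next tokens

low = softmax_with_temperature(logits, 0.5)   # sharper: strongly favours the top token
high = softmax_with_temperature(logits, 2.0)  # flatter: spreads probability more evenly
```

Lower temperatures concentrate probability on the highest-scoring token (more deterministic output), while higher temperatures flatten the distribution (more varied, creative output) - the effect you will observe directly in the lab.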
Core Prompt Engineering Techniques
Effective Communication Principles: Clarity, specificity, constraints.
Prompt Structure: Roles (System, User), delimiters, clear instructions.
Zero-shot, One-shot, and Few-shot Prompting with examples.
Chain-of-Thought (CoT) Prompting for improved reasoning.
Role Prompting for persona control.
Negative Constraints / Anti-Prompts.
Iterative Prompt Refinement techniques.
Hands-On Lab: Practicing various core prompting techniques with diverse tasks.
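The structuring ideas above can be sketched in code. The snippet below assembles a few-shot prompt using the role-based "chat messages" format (system/user/assistant dicts) accepted by most chat-style LLM APIs; the sentiment-classification examples are illustrative placeholders.

```python
def build_few_shot_messages(system_prompt, examples, query):
    """Assemble a system role, worked examples, then the real question."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_text, assistant_text in examples:
        # Each example is a user turn followed by the desired assistant answer
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages

examples = [
    ("Classify the sentiment: 'I loved this film.'", "positive"),
    ("Classify the sentiment: 'What a waste of time.'", "negative"),
]
messages = build_few_shot_messages(
    "You are a sentiment classifier. Answer with one word.",
    examples,
    "Classify the sentiment: 'Not bad at all.'",
)
```

The system message sets the role and constraints, the example pairs demonstrate the expected input/output pattern (few-shot), and the final user message carries the actual task - the same structure you will practise in the lab.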
Developing with LLM APIs
API Setup and Budgeting (overview).
Making Basic API Calls (using a simple script/tool).
Handling API Responses (parsing output).
Examples of current LLM integrations.
Introduction to Function Calling / Tool Use (conceptual overview).
Hands-On Lab: Making basic API calls to an LLM model.
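As a preview of the lab, the sketch below shows the two halves of a basic API interaction: building a request payload and extracting text from the response. The shapes are modelled on the widely used chat-completions style; exact field names, model identifiers, and endpoints vary by provider, and the sample response here is a hard-coded stand-in for a real network call.

```python
import json

def build_request(model, prompt, temperature=0.7, max_tokens=256):
    """Construct a chat-completions-style request body (field names vary by provider)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def extract_text(response):
    """Pull the generated text out of a chat-completions-style response."""
    return response["choices"][0]["message"]["content"]

payload = build_request("example-model", "Summarise this paragraph in one sentence.")
body = json.dumps(payload)  # this JSON is what would be POSTed to the API endpoint

# A stand-in for what the API would return (no real call is made here)
sample_response = {
    "choices": [{"message": {"role": "assistant", "content": "A one-sentence summary."}}]
}
text = extract_text(sample_response)
```

In the lab you will substitute a real endpoint, API key, and model name, and see how the same request/parse cycle underpins any LLM integration.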
Prompt Engineering Use Cases & Advanced Concepts
Prompting for Personal Productivity (summarisation, drafting, ideas).
Prompting for Code Productivity (documentation, explanation, testing assistance).
Introduction to Retrieval-Augmented Generation (RAG) - conceptual overview.
Prompting for Creative Tasks.
Hands-On Lab: Applying techniques to productivity and creative tasks.
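The RAG idea introduced above can be illustrated with a deliberately tiny toy: retrieve the most relevant document by word overlap, then paste it into the prompt as grounding context. Production RAG systems use vector embeddings and a vector store rather than word matching; the documents and query here are invented for illustration.

```python
def retrieve(query, documents):
    """Return the document sharing the most words with the query (toy retriever)."""
    query_words = set(query.lower().split())
    return max(documents, key=lambda d: len(query_words & set(d.lower().split())))

def build_rag_prompt(query, documents):
    """Augment the user's question with retrieved context for grounded answers."""
    context = retrieve(query, documents)
    return (
        "Answer using only the context below.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )

docs = [
    "The refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am to 5pm, Monday to Friday.",
]
prompt = build_rag_prompt("What is the refund policy for returns?", docs)
```

The key pattern - retrieve first, then instruct the model to answer only from the supplied context - is what lets RAG ground responses in up-to-date or private data and reduce hallucination.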
The Future of AI, Ethics, and Next Steps
Recent developments and what is coming next.
Threats and Opportunities.
Ethical Considerations in Prompting: Bias, fairness, safety, responsible AI.
Your Next Steps for continued learning.
Hands-On Lab: Discussion of ethical considerations and analysis of biased outputs.
-
Google AI Documentation: Resources and documentation for Google's AI models, including the Gemini series, relevant for understanding capabilities and API interaction. https://ai.google.dev/
OpenAI Documentation: Official documentation for OpenAI's models, including the GPT series, covering API usage, prompting guidelines, and best practices. https://platform.openai.com/docs/
Anthropic Documentation: Resources for Anthropic's Claude models, providing insights into prompting and model behaviour. https://docs.anthropic.com/
Hugging Face: A central hub for open-source AI models, datasets, and tools, offering valuable resources for exploring the wider LLM landscape. https://huggingface.co/