About the course
This three-day, applied software engineering course is designed to move developers beyond simple API calls to building reliable, cost-effective, and secure applications powered by the Claude model family. The core focus is on production readiness: mastering advanced prompting for predictable output, implementing robust error handling, and applying security measures to mitigate risks like prompt injection.
Participants will gain hands-on experience integrating the Anthropic SDK, utilizing modern models (like Claude 3 Opus and Sonnet), mastering structured data generation, and designing systems that intelligently manage costs and latency. You will leave with a clear methodology for versioning, testing, and deploying high-performance LLM-powered features.
Instructor-led online and in-house face-to-face options are available - as part of a wider customised training programme or as a standalone workshop, on-site at your offices or at one of many flexible meeting spaces in the UK and around the world.
-
- Optimize API Integration: Manage request latency, implement effective rate limiting strategies, and handle streaming patterns efficiently.
- Control Output: Master the use of System Instructions and techniques to reliably enforce structured output (JSON, XML) and deterministic behavior.
- Implement Cost Control: Accurately estimate token usage and employ caching and early-exit strategies to manage production costs.
- Design for Tool Use: Build robust, reliable systems that utilize function calling to connect Claude with internal APIs and databases.
- Mitigate Risk: Implement guardrails and input validation to defend against common security vulnerabilities like prompt injection.
- Test and Monitor: Establish a repeatable methodology for testing prompt changes, logging execution, and monitoring prompt performance in production.
-
This workshop is aimed at Software Engineers, ML Engineers, Technical Leads, and Solution Architects who are actively designing, building, or maintaining applications that use the Claude API for core business logic, code generation, or content processing.
-
Attendees must be comfortable with Python or JavaScript and have foundational experience integrating with external APIs.
-
This Claude API course is available for private / custom delivery for your team - as an in-house face-to-face workshop at your location of choice, or as online instructor-led training via MS Teams (or your own preferred platform).
Get in touch to find out how we can deliver tailored training which focuses on your project requirements and learning goals.
-
The Claude API Ecosystem
Deep dive into the Anthropic SDK (Python/Node.js): Authentication and core client configuration.
Model Selection Strategy: Comparing the use cases for the Claude 3 family: Opus (reasoning), Sonnet (speed/scale), and Haiku (latency/cost).
Tokenization and Cost: Understanding the token model, calculating estimated costs, and utilizing the tokenizer endpoint.
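Cost estimation of the kind covered above can be reduced to simple arithmetic once you know a model's per-token rates. The sketch below illustrates the calculation; the prices are placeholders, not current rates - always take real figures from Anthropic's pricing page, since they vary by model and change over time.

```python
# Rough request-cost estimator. The per-million-token prices below are
# ILLUSTRATIVE PLACEHOLDERS, not current Anthropic rates.
ILLUSTRATIVE_PRICES = {
    # model: (input $/1M tokens, output $/1M tokens)
    "claude-3-opus": (15.00, 75.00),
    "claude-3-sonnet": (3.00, 15.00),
    "claude-3-haiku": (0.25, 1.25),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of a single API call."""
    in_price, out_price = ILLUSTRATIVE_PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A 10k-token prompt with a 1k-token answer costs very different amounts
# depending on model choice:
haiku_cost = estimate_cost("claude-3-haiku", 10_000, 1_000)
opus_cost = estimate_cost("claude-3-opus", 10_000, 1_000)
```

Running the same workload through the estimator for each model in the family is a quick way to make the Opus/Sonnet/Haiku trade-off concrete before committing to one.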
High-Performance Integration Patterns
Asynchronous Execution: Implementing concurrent calls to maximize throughput without hitting rate limits.
Streaming vs. Batching: Architecting for user-facing streaming applications versus backend batch processing.
Error Handling & Resilience: Implementing exponential backoff, circuit breakers, and tailored error messages for API failures.
Hands-on Lab: Setting up an integrated environment and implementing robust error and rate-limit handling for a batch task.
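The retry pattern from this module can be sketched in a few lines. This is an illustrative outline, not the course lab solution: `RetryableError` is a hypothetical stand-in for the SDK's retryable exceptions (in practice you would catch the Anthropic client's rate-limit and server-error types), and the delay parameters are arbitrary.

```python
import random
import time

class RetryableError(Exception):
    """Stand-in for 429/529-style failures that are safe to retry."""

def backoff_delays(attempts: int, base: float = 1.0, cap: float = 30.0) -> list:
    """Exponential backoff schedule: base * 2**n seconds, capped."""
    return [min(cap, base * (2 ** n)) for n in range(attempts)]

def call_with_retries(make_request, max_attempts: int = 5,
                      base: float = 1.0, cap: float = 30.0):
    """Retry a callable on RetryableError, sleeping with full jitter
    (a random delay in [0, backoff]) so concurrent clients spread out."""
    for attempt, delay in enumerate(backoff_delays(max_attempts, base, cap)):
        try:
            return make_request()
        except RetryableError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            time.sleep(random.uniform(0, delay))
```

Full jitter (rather than sleeping exactly `delay`) matters under load: it prevents every failed worker from retrying in lockstep and re-triggering the same rate limit.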
Proactive Cost Management
The Context Window: Strategies for condensing long documents and managing conversation history to minimize token usage.
Caching Layer Design: Implementing an effective cache (e.g., Redis) for prompt-output pairs to avoid repeated calls for static queries.
Early-Exit Logic: Coding business logic to stop the API call early if the desired output is identified mid-stream.
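The caching idea above can be sketched with an in-memory store; the keying logic is the important part and carries over unchanged to a real Redis-backed cache with a TTL. This is an illustrative outline, not the course's reference implementation.

```python
import hashlib
import json

class PromptCache:
    """In-memory cache for prompt-output pairs, keyed on everything that
    affects the response: model, system prompt, and message history."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def key(model: str, system: str, messages: list) -> str:
        # Canonical JSON (sorted keys) so equivalent requests hash identically.
        blob = json.dumps(
            {"model": model, "system": system, "messages": messages},
            sort_keys=True,
        )
        return hashlib.sha256(blob.encode()).hexdigest()

    def get_or_call(self, model: str, system: str, messages: list, call):
        k = self.key(model, system, messages)
        if k not in self._store:
            self._store[k] = call()  # only pay for the API on a cache miss
        return self._store[k]
```

Note that the key must include the system prompt and model name, not just the user text: changing a prompt version without invalidating the cache would silently serve stale outputs.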
System Instructions and Persona
Mastering the System Prompt: Using system instructions to define Claude's persona, constraints, and operational rules, separating them from the user query.
The "One-Shot" Principle: Structuring prompts to include all necessary context, examples, and rules in a single, clear block.
Version Control for Prompts: Storing and managing prompts as code to enable testing and historical review.
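The separation this module teaches - system instructions defined in code and version control, untrusted user text confined to the user message - can be sketched as below. The prompt name, version string, and wording are illustrative, not course materials.

```python
# Versioned prompt registry: system instructions live in source control,
# never concatenated with user input. Names/versions here are illustrative.
PROMPTS = {
    ("support-triage", "1.1.0"):
        "You are a support-ticket triage assistant. "
        "Classify each ticket as 'bug', 'billing', or 'question'. "
        "Reply with the label only.",
}

def build_request(name: str, version: str, user_text: str, model: str) -> dict:
    """Assemble Messages API keyword arguments, keeping the system prompt
    strictly separate from the untrusted user message."""
    return {
        "model": model,
        "max_tokens": 10,
        "system": PROMPTS[(name, version)],   # trusted, versioned
        "messages": [{"role": "user", "content": user_text}],  # untrusted
    }
```

Because prompts are addressed by (name, version), a regression test suite can pin a version, and a rollback is a one-line change rather than an archaeology exercise.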
Structured Data Generation
Enforcing JSON/XML Output: Utilizing specialized Claude prompting techniques and parameters to ensure the output conforms to a strict schema.
Pydantic/Zod Integration: Using developer tooling to quickly validate and parse LLM output schemas in code.
Recovery Strategies: Implementing client-side logic to attempt repairs or re-prompting when the model outputs malformed data.
Hands-on Lab: Building a Python application that uses a system prompt to consistently generate and parse a user feedback object as strict JSON.
Advanced Tool Use (Function Calling)
Tool Design: Principles for designing effective functions (tools) that are clear, atomic, and useful for the model.
Reasoning and Execution: Architecting the loop where Claude decides to call a tool, the tool is executed client-side, and the result is fed back.
Handling Ambiguity in Tool Use: Strategies for guiding Claude when it attempts to use the wrong tool or asks for missing parameters.
Hands-on Lab: Integrating a simple internal API (e.g., inventory lookup) as a tool for Claude to use in a simulated customer service scenario.
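The client-side half of the tool-use loop can be sketched as follows. The tool definition mirrors the shape the Messages API expects (name, description, JSON Schema input); the inventory "database" and the specific field names are illustrative stand-ins, not part of the course materials.

```python
import json

# A minimal tool definition in the shape the Messages API expects.
INVENTORY_TOOL = {
    "name": "get_inventory",
    "description": "Look up current stock for a product SKU.",
    "input_schema": {
        "type": "object",
        "properties": {"sku": {"type": "string"}},
        "required": ["sku"],
    },
}

FAKE_DB = {"SKU-42": 17}  # stand-in for a real inventory service

def get_inventory(sku: str) -> dict:
    return {"sku": sku, "in_stock": FAKE_DB.get(sku, 0)}

HANDLERS = {"get_inventory": get_inventory}

def dispatch_tool_call(tool_use: dict) -> dict:
    """Execute one tool_use block from Claude's response client-side and
    build the tool_result block to send back in the next user turn."""
    result = HANDLERS[tool_use["name"]](**tool_use["input"])
    return {
        "type": "tool_result",
        "tool_use_id": tool_use["id"],
        "content": json.dumps(result),
    }
```

The key architectural point: the model never executes anything. It emits a structured request; your code runs the function, and the serialized result goes back so the model can compose its final answer.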
LLM Testing and Evaluation
Creating a Golden Test Set: Building a static corpus of inputs and expected correct outputs for prompt regression testing.
Fidelity and Regression Testing: Automating tests to measure prompt degradation over time or after model updates.
Metrics for LLM Performance: Moving beyond unit tests to track metrics like relevance, toxicity, and adherence to system rules.
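The golden-set approach above can be sketched as a small harness: a static corpus of inputs and expected outputs, run against whatever classifier function wraps your prompt. The example cases below are illustrative; in CI you would fail the build when the pass rate drops below a threshold.

```python
# A tiny golden test set: fixed inputs with known-correct labels.
GOLDEN_SET = [
    {"input": "The app crashes on login", "expected": "bug"},
    {"input": "Why was I charged twice?", "expected": "billing"},
    {"input": "How do I export my data?", "expected": "question"},
]

def regression_report(classify) -> dict:
    """Run every golden case through `classify` (any str -> str callable,
    e.g. a wrapper around an LLM call) and report the pass rate."""
    failures = [
        case for case in GOLDEN_SET
        if classify(case["input"]).strip().lower() != case["expected"]
    ]
    return {
        "total": len(GOLDEN_SET),
        "failed": len(failures),
        "pass_rate": 1 - len(failures) / len(GOLDEN_SET),
        "failures": failures,  # kept for debugging which cases regressed
    }
```

Because the harness only depends on a callable, the same golden set can be replayed after every prompt edit or model upgrade to catch silent regressions.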
Security and Guardrails
Prompt Injection Defense: Techniques for separating user input from the system prompt context to prevent malicious instructions.
Input Sanitization: Stripping potentially harmful code or injection attempts before feeding user data to the model.
Output Guardrails: Using post-processing filters (e.g., simple Regex or secondary LLM calls) to review and reject inappropriate or non-compliant output.
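A minimal output guardrail of the regex kind mentioned above might look like this. The patterns are illustrative examples only; real deployments layer PII detectors or a secondary moderation model on top of simple rules.

```python
import re

# Patterns the post-processing filter rejects (ILLUSTRATIVE examples).
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-shaped strings
    re.compile(r"(?i)ignore (all |the )?previous instructions"),
]

def passes_guardrails(output: str) -> bool:
    """Return False if the model output matches any blocked pattern,
    signalling the caller to suppress or regenerate the response."""
    return not any(p.search(output) for p in BLOCKED_PATTERNS)
```

Cheap rules like these run on every response; the more expensive secondary-LLM review can then be reserved for outputs the rules flag as borderline.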
Deployment and Operations
Prompt Management Systems: Best practices for storing, deploying, and versioning prompts alongside application code.
Observability: Setting up logging and monitoring dashboards to track key production metrics: tokens consumed, latency (p95), error rates, and prompt version usage.
A/B Testing Prompts: Implementing a system to safely test the impact of new system instructions or prompt changes on production metrics before full deployment.
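The assignment step of prompt A/B testing can be sketched with deterministic hash bucketing, so each user sees a stable variant for the life of the experiment. The experiment name and rollout fraction below are illustrative.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, rollout: float = 0.5) -> str:
    """Deterministically bucket a user into 'control' or 'treatment'.
    Hashing (experiment, user) is stable across requests, so the same
    user always gets the same prompt version during the experiment."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform-ish in [0, 1]
    return "treatment" if bucket < rollout else "control"
```

Logging the variant alongside the prompt version and the production metrics from the observability module is what makes the comparison meaningful: without stable assignment, a user could bounce between prompt versions mid-conversation and contaminate both arms.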
-
Core Anthropic/Claude API Documentation
Anthropic API Documentation Home: The central hub for all technical documentation, covering authentication, rate limits, and service status. https://docs.anthropic.com/en/
The Messages API Reference: The primary reference for the messages endpoint, detailing request/response structure, parameters, and error codes. This is crucial for high-performance applications. https://docs.anthropic.com/en/api/messages
Model Overview and Pricing: Essential for understanding the capabilities of the Claude 3 family (Opus, Sonnet, Haiku) and accurately estimating token costs (covered in Modules 1 and 3). https://docs.anthropic.com/en/docs/models-overview
SDKs and Tooling
Anthropic Python SDK Documentation: The official documentation and installation guide for the recommended Python client. https://docs.anthropic.com/en/api/python
Anthropic TypeScript/Node.js SDK Documentation: The official documentation for JavaScript/TypeScript developers using the SDK. https://docs.anthropic.com/en/api/typescript
Pydantic (for Structured Output): A popular external library (Python) used by developers to define and validate strict data schemas, aiding in the parsing of LLM-generated JSON (Module 5). https://docs.pydantic.dev/
Advanced Techniques and Security
Tool Use (Function Calling) Guide: A specific, detailed guide on how to define tools, pass them to Claude, and manage the reasoning/execution loop (Module 6). https://docs.anthropic.com/en/docs/tool-use
Prompt Engineering Best Practices: Anthropic's guide to writing effective prompts, with a focus on System Instructions for deterministic output (Module 4). https://docs.anthropic.com/en/docs/prompt-engineering
Prompt Injection and Security: Resources detailing the risks of prompt injection and best practices for creating defensive programming layers around user input (Module 8). https://docs.anthropic.com/en/docs/prompt-security