About the course
Transitioning machine learning models from development environments to reliable, scalable production systems is a critical challenge in the ML lifecycle. MLOps (Machine Learning Operations) addresses this challenge by applying engineering principles and automation to the ML workflow. This 3-day workshop provides a practical introduction to MLOps principles and deployment techniques specifically using key tools from the Python ecosystem. It is designed for individuals who have experience building ML models and now need to understand how to effectively deploy, manage, and monitor them in production environments.
The course covers fundamental MLOps concepts, the challenges specific to putting ML into production, and best practices for writing production-ready Python code. You will learn how to manage dependencies for reproducible environments using modern tools like poetry or uv, and how to properly package and version your trained models for deployment. Experiment tracking with MLflow is covered as a crucial tool for managing your ML development process and ensuring reproducibility. A significant focus is placed on practical deployment patterns, including containerising models with Docker and building custom inference endpoints using Python web frameworks like FastAPI, as well as exploring the benefits and options of managed cloud deployment services.
Key MLOps practices for post-deployment are introduced, such as monitoring model performance and detecting data drift in production, along with basic logging and alerting concepts. The course also provides an introduction to automating parts of the ML workflow and the principles of CI/CD for ML, setting the stage for building automated pipelines. Through extensive hands-on labs using Python tools like Docker, FastAPI, and MLflow, you will gain the practical skills needed to package, deploy, and begin managing your machine learning models in a production context.
Instructor-led online and in-house face-to-face options are available - as part of a wider customised training programme, or as a standalone workshop, on-site at your offices or at one of many flexible meeting spaces in the UK and around the world.
-
- Explain the principles and challenges of MLOps and its role in the ML lifecycle.
- Prepare ML code for production and manage dependencies for reproducible environments using Python tools (venv, poetry/uv).
- Package trained ML models for deployment.
- Understand and apply concepts of model versioning.
- Use MLflow for experiment tracking and model management.
- Containerise ML models using Docker.
- Build custom model inference endpoints using Python web frameworks like FastAPI.
- Understand different model deployment patterns (real-time vs. batch) and the role of cloud endpoints.
- Implement basic monitoring and logging for production models.
- Recognise the concepts of model performance degradation and data drift in production.
- Understand concepts of MLOps automation and Continuous Integration/Continuous Deployment (CI/CD) for ML.
-
This 3-day workshop is designed for individuals who have experience building machine learning models (equivalent to completing the prior courses in this programme) and need to learn how to deploy and manage them effectively in production environments using Python tools. It is ideal for:
Data Scientists looking to operationalise their models and bridge the gap to production.
Machine Learning Engineers seeking practical skills in MLOps tools and deployment patterns.
Software Developers involved in integrating or deploying ML models.
Anyone looking to understand the MLOps lifecycle and gain practical experience with production practices for ML using Python.
-
Participants should have attended our "Introduction to Machine Learning and Classic Algorithms with Python" and "Deep Learning Fundamentals with PyTorch" training courses, or have equivalent experience:
Solid working knowledge of the Python programming language, including writing functions, working with standard libraries, and ideally some familiarity with object-oriented concepts.
Experience building Machine Learning models using Python libraries (e.g., Scikit-learn, PyTorch, TensorFlow), equivalent to completing the "Introduction to Machine Learning and Classic Algorithms with Python" and "Deep Learning Fundamentals with PyTorch" courses.
Familiarity with the command line/terminal interface.
Basic understanding of Git for version control is beneficial.
No prior experience with MLOps concepts, Docker, or specific deployment tools like FastAPI or MLflow is required, as these will be introduced in the course.
-
This MLOps course is available for private / custom delivery for your team - as an in-house face-to-face workshop at your location of choice, or as online instructor-led training via MS Teams (or your own preferred platform).
Get in touch to find out how we can deliver tailored training which focuses on your project requirements and learning goals.
-
Introduction to MLOps and Production Challenges
What is MLOps? Understanding its definition, goals, and importance in the ML lifecycle.
Challenges of taking ML models from research and development environments (like notebooks) to reliable, scalable production systems.
The MLOps Lifecycle (high-level overview): Stages and feedback loops.
Focus of this course: Exploring MLOps principles and practical tools using the Python ecosystem.
Production-Ready ML Code and Environments
Transitioning from exploratory code (e.g., in notebooks) to modular, testable, and production-ready Python scripts and packages.
Importance of Code Version Control (using Git) for collaboration, tracking changes, and reproducibility (assumes basic Git knowledge).
Dependency Management for Reproducibility:
The necessity of isolated environments.
Using standard Virtual Environments (venv, virtualenv).
Using Modern Dependency Managers (poetry, uv): Concepts, benefits (locked dependencies, reproducible builds), and basic practical usage for managing project dependencies.
Structuring ML Projects for production readiness (basic project layout principles).
(Optional/Brief Introduction) Writing basic Unit Tests for ML code components (e.g., data preprocessing functions).
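As a concrete illustration of the isolated-environment workflow above, the commands below sketch both the standard-library venv approach and the equivalent uv workflow (the package name, version pin, and project name are placeholders for illustration):

```shell
# Standard-library approach: create and activate an isolated environment
python -m venv .venv
source .venv/bin/activate

# Install pinned dependencies and record them for reproducibility
pip install "scikit-learn==1.4.2"
pip freeze > requirements.txt

# Equivalent workflow with uv (assumes uv is already installed):
#   uv init my-project && cd my-project
#   uv add scikit-learn   # records the dependency in pyproject.toml + uv.lock
#   uv sync               # recreates the exact locked environment anywhere
```

The lock file produced by poetry or uv pins every transitive dependency, which is what makes the environment reproducible on another machine.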
Model Packaging and Versioning
Review: Essential methods for Saving and Loading Trained Models in Python (using Pickle, Joblib, or framework-specific methods like torch.save).
Model Packaging: Creating a deployable artifact by bundling the trained model file(s) with the necessary code for inference and any required preprocessing/postprocessing steps.
Common model serialization formats.
Model Versioning Concepts: Why versioning models is important for tracking, reproducibility, and rollbacks in production.
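A minimal sketch of the packaging-plus-versioning idea, assuming scikit-learn and joblib are installed; the artifact layout and version string are illustrative conventions, not a fixed standard:

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a small example model
X, y = make_classification(n_samples=100, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Bundle the model with simple version metadata into one deployable artifact
artifact = {"model": model, "version": "1.0.0", "n_features": X.shape[1]}
joblib.dump(artifact, "model-v1.0.0.joblib")

# At inference time, load the artifact and use the bundled model
loaded = joblib.load("model-v1.0.0.joblib")
preds = loaded["model"].predict(X[:5])
```

Embedding the version in both the filename and the artifact itself makes it straightforward to trace which model produced which predictions, and to roll back to a previous file.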
Experiment Tracking and Model Management
Experiment Tracking: Understanding why tracking experiments (parameters, metrics, artifacts, code versions) is crucial for reproducibility, comparison, and collaboration in MLOps.
Using MLflow for experiment tracking: Setting up MLflow tracking, logging parameters, metrics, and artifacts from your Python training runs, and viewing results in the MLflow UI.
Introduction to Model Registries: Concepts and their role in managing model versions, stages (e.g., staging, production), and approval workflows (using the MLflow Model Registry or discussing similar concepts in cloud platforms).
Model Deployment Patterns (Python and Containers)
Overview of Model Deployment Challenges in production (scalability, latency, reliability).
Understanding Real-time Inference vs. Batch Inference scenarios.
Containerisation with Docker:
Introduction to Docker concepts (Images, Containers, Dockerfile).
Why Docker is essential for reproducible deployments.
Building a Docker image for a simple Python-based ML model inference service.
Running the inference container locally.
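A minimal Dockerfile for such a Python inference service might look like the sketch below; the file names, port, and uvicorn entry point are assumptions for illustration rather than a fixed recipe:

```dockerfile
# Small Python base image
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the inference code and packaged model into the image
COPY . .

EXPOSE 8000
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
```

Built with `docker build -t my-model .` and run with `docker run -p 8000:8000 my-model`, the same image behaves identically on a laptop and in production, which is the reproducibility benefit noted above.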
Building Custom Model Endpoints with Python:
Using a lightweight Python web framework like FastAPI to build a simple REST API for serving model predictions.
Integrating the packaged model and inference code into the FastAPI application.
Deploying the FastAPI application using Docker.
Introduction to Cloud Deployment Endpoints:
Brief overview of managed ML model hosting services offered by cloud providers (e.g., AWS SageMaker Endpoints, GCP Vertex AI Endpoints, Azure ML Endpoints).
Understanding the benefits of using managed services for scaling, monitoring, and reliability.
(Optional: Brief mention of serverless deployment options like AWS Lambda).
Hands-On Labs: Building a Docker image for a simple model, creating a basic FastAPI inference service, integrating the model, running the service in Docker, exploring options for pushing/pulling images from a registry.
Monitoring and Alerting
Why monitor ML models specifically in production?
Monitoring Model Performance: Tracking key ML metrics (from evaluation module) over time in production.
Detecting Data Drift: Identifying changes in the distribution of input data fed to the model.
Setting up basic application Logging within the Python inference service.
Introduction to Alerting: Setting up simple alerts based on monitoring metrics or logs.
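One simple way to flag drift on a numeric feature, assuming scipy is available, is a two-sample Kolmogorov-Smirnov test comparing the training distribution against recent production inputs; the synthetic data and significance threshold below are illustrative:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)  # training distribution
live_feature = rng.normal(loc=0.8, scale=1.0, size=1000)   # shifted production data

# Small p-value -> the two samples likely come from different distributions
statistic, p_value = ks_2samp(train_feature, live_feature)
drift_detected = p_value < 0.01
```

A check like this, run per feature on a schedule, is a common starting point before adopting dedicated drift-monitoring tooling; a drift flag can then feed the alerting mechanisms introduced above.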
Introduction to MLOps Automation and CI/CD
Why automate the ML workflow?
Introduction to Continuous Integration/Continuous Deployment (CI/CD) concepts applied to Machine Learning.
Building simple automated workflows (e.g., using Makefiles, simple shell scripts, or Python scripts) to orchestrate steps like data validation, training, model evaluation, packaging, and deployment.
(Optional/Brief Overview) Introduction to MLOps Orchestration Tools and Platforms (e.g., Kubeflow Pipelines, MLflow Pipelines, AWS SageMaker Pipelines, GCP Vertex AI Pipelines - focus on what these tools do at a high level).
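As one lightweight example of the automation discussed above, a Makefile can chain the pipeline steps so each stage runs only after its prerequisite succeeds; the script names and image tag here are placeholders:

```makefile
.PHONY: all validate train evaluate package

all: package

validate:
	python scripts/validate_data.py

train: validate
	python scripts/train.py

evaluate: train
	python scripts/evaluate.py

package: evaluate
	docker build -t my-model:latest .
```

Running `make` executes the full chain; the same targets can later be invoked from a CI/CD system, which is the stepping stone to the orchestration platforms mentioned above.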
Course summary
Review of key MLOps principles, practices, and Python tooling covered.
Connecting MLOps concepts to the full ML lifecycle (referencing the prior courses).
Discussing next steps in the MLOps journey and exploring more advanced topics.
Q&A
-
MLflow Documentation - https://mlflow.org/docs/latest/index.html
Docker Documentation - https://docs.docker.com/
FastAPI Documentation - https://fastapi.tiangolo.com/
Poetry Documentation - https://python-poetry.org/docs/
uv Documentation - https://docs.astral.sh/uv/