Public Sector

We've had the pleasure of working with UK and overseas central and local government departments, including healthcare (NHS and Foundation Trusts), defence, education (universities and colleges), many of the main Civil Service departments, and the emergency services, as well as publicly owned corporations including the BBC, the Bank of England and Ordnance Survey, and regulatory bodies such as Ofgem.

We are registered on the Crown Commercial Service (CCS) Dynamic Purchasing System (RM6219 Training and Learning), and with numerous tender portals such as Ariba, Coupa and Delta E-Sourcing.

Read more...

Graduate Training Schemes

Framework Training has a strong track record of providing a solid introduction to the working world for technical graduates across a wide range of industries. We provide the opportunity to learn and gain valuable hands-on experience in a supportive, friendly and sociable training environment.

Attract & retain the brightest new starters

We know it is vital for our clients to invest in the future of their talented grads; not only to provide them with the high-quality, professional training essential for their roles, but also to embed them within the organisation’s culture and guide them on the right path to a successful career.

After all, your new hires could well be the next leaders and their creative ideas and unique insights are invaluable to your business.

Read more...

Learning & Development

Our unique portfolio of high-quality technical courses and training programmes is industry-respected. They’re carefully designed so that delegates can seamlessly apply what they’ve learnt back in the workplace. Our team of domain experts, trainers, and support teams know our field — and all things tech — inside out, and we work hard to keep ourselves up to speed with the latest innovations.

We’re proud to develop and deliver innovative learning solutions that genuinely work and make a tangible difference to your people and your business, driving lasting, positive change. Our training courses and programmes are human-centred, and everything we do is underpinned by our commitment to continuous improvement and learning.

Read more...

Corporate & Volume Pricing

Whether you are looking to book multiple places on public scheduled courses (attended remotely or in our London training centres) or planning private courses for a team within your organisation, we will be happy to discuss preferential pricing that makes the most of your staff education budget.

Enquire today about:

  • Training programme pricing models  

  • Multi-course voucher schemes

Read more...

Custom Learning Paths

We understand that your team's training needs don't always fit a "one size fits all" mould, and we're very happy to explore ways to tailor a bespoke learning path to your requirements.

Find out how we can customise everything from short overviews and intensive workshops to wider training programmes, covering the topics your staff most need to excel in their roles.

Read more...

MLOps and Deployment with Python

From Machine Learning development to deployment: your MLOps path with Python.

About the course

Transitioning machine learning models from development environments to reliable, scalable production systems is a critical challenge in the ML lifecycle. MLOps (Machine Learning Operations) addresses this challenge by applying engineering principles and automation to the ML workflow. This 3-day workshop provides a practical introduction to MLOps principles and deployment techniques specifically using key tools from the Python ecosystem. It is designed for individuals who have experience building ML models and now need to understand how to effectively deploy, manage, and monitor them in production environments.

The course covers fundamental MLOps concepts, the challenges specific to putting ML into production, and best practices for writing production-ready Python code. You will learn how to manage dependencies for reproducible environments using modern tools like poetry or uv, and understand how to properly package and version your trained models for deployment. Experiment tracking with MLFlow is covered as a crucial tool for managing your ML development process and ensuring reproducibility. A significant focus is placed on practical deployment patterns, including containerising models with Docker and building custom inference endpoints using Python web frameworks like FastAPI, as well as exploring the benefits and options of managed cloud deployment services.

Key MLOps practices for post-deployment are introduced, such as monitoring model performance and detecting data drift in production, along with basic logging and alerting concepts. The course also provides an introduction to automating parts of the ML workflow and the principles of CI/CD for ML, setting the stage for building automated pipelines. Through extensive hands-on labs using Python tools like Docker, FastAPI, and MLFlow, you will gain the practical skills needed to package, deploy, and begin managing your machine learning models in a production context.

Instructor-led online and in-house face-to-face options are available - as part of a wider customised training programme or as a standalone workshop - on-site at your offices or at one of many flexible meeting spaces in the UK and around the world.

By the end of this workshop, you will be able to:

    • Explain the principles and challenges of MLOps and its role in the ML lifecycle.
    • Prepare ML code for production and manage dependencies for reproducible environments using Python tools (venv, poetry/uv).
    • Package trained ML models for deployment.
    • Understand and apply concepts of model versioning.
    • Use MLFlow for experiment tracking and model management.
    • Containerise ML models using Docker.
    • Build custom model inference endpoints using Python web frameworks like FastAPI.
    • Understand different model deployment patterns (real-time vs. batch) and the role of cloud endpoints.
    • Implement basic monitoring and logging for production models.
    • Identify model performance degradation and data drift concepts.
    • Understand concepts of MLOps automation and Continuous Integration/Continuous Deployment (CI/CD) for ML.
  • This 3-day workshop is designed for individuals who have experience building machine learning models (equivalent to completing the prior courses in this programme) and need to learn how to deploy and manage them effectively in production environments using Python tools. It is ideal for:

    • Data Scientists looking to operationalise their models and bridge the gap to production.

    • Machine Learning Engineers seeking practical skills in MLOps tools and deployment patterns.

    • Software Developers involved in integrating or deploying ML models.

    • Anyone looking to understand the MLOps lifecycle and gain practical experience with production practices for ML using Python.

  • Participants should have attended our Intro to Machine Learning and Deep Learning with PyTorch training courses, or have equivalent experience:

    • Solid working knowledge of the Python programming language, including writing functions, working with standard libraries, and ideally some familiarity with object-oriented concepts.

    • Experience building Machine Learning models using Python libraries (e.g., Scikit-learn, PyTorch, TensorFlow), equivalent to completing the "Introduction to Machine Learning and Classic Algorithms with Python" and "Deep Learning Fundamentals with PyTorch" courses.

    • Familiarity with the command line/terminal interface.

    • Basic understanding of Git for version control is beneficial.

    No prior experience with MLOps concepts, Docker, or specific deployment tools like FastAPI or MLFlow is required, as these will be introduced in the course.

  • This MLOps course is available for private / custom delivery for your team - as an in-house face-to-face workshop at your location of choice, or as online instructor-led training via MS Teams (or your own preferred platform).

    Get in touch to find out how we can deliver tailored training which focuses on your project requirements and learning goals.

  • Introduction to MLOps and Production Challenges

    • What is MLOps? Understanding its definition, goals, and importance in the ML lifecycle.

    • Challenges of taking ML models from research and development environments (like notebooks) to reliable, scalable production systems.

    • The MLOps Lifecycle (high-level overview): Stages and feedback loops.

    • Focus of this course: Exploring MLOps principles and practical tools using the Python ecosystem.

    Production-Ready ML Code and Environments

    • Transitioning from exploratory code (e.g., in notebooks) to modular, testable, and production-ready Python scripts and packages.

    • Importance of Code Version Control (using Git) for collaboration, tracking changes, and reproducibility (assumes basic Git knowledge).

    • Dependency Management for Reproducibility:

      • The necessity of isolated environments.

      • Using standard Virtual Environments (venv, virtualenv).

      • Using Modern Dependency Managers (poetry, uv): Concepts, benefits (locked dependencies, reproducible builds), and basic practical usage for managing project dependencies.

    • Structuring ML Projects for production readiness (basic project layout principles).

    • (Optional/Brief Introduction) Writing basic Unit Tests for ML code components (e.g., data preprocessing functions).
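    To make the optional unit-testing point concrete, a minimal test for a preprocessing component might look like the sketch below. The `scale_minmax` helper and its behaviour are assumptions invented for the example, not part of the course materials:

    ```python
    # Minimal example of unit-testing an ML preprocessing component.
    # scale_minmax is a hypothetical helper; a real project would test
    # its own data-preparation functions in the same style.

    def scale_minmax(values):
        """Scale a list of numbers into the range [0, 1]."""
        lo, hi = min(values), max(values)
        if hi == lo:  # avoid division by zero on constant input
            return [0.0 for _ in values]
        return [(v - lo) / (hi - lo) for v in values]

    def test_scale_minmax_range():
        scaled = scale_minmax([10, 20, 30])
        assert scaled == [0.0, 0.5, 1.0]

    def test_scale_minmax_constant_input():
        assert scale_minmax([5, 5, 5]) == [0.0, 0.0, 0.0]

    if __name__ == "__main__":
        test_scale_minmax_range()
        test_scale_minmax_constant_input()
        print("all preprocessing tests passed")
    ```

    Tests like these can be run with pytest or plain Python, and catch edge cases (such as constant input) before the code reaches production.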

    Model Packaging and Versioning

    • Review: Essential methods for Saving and Loading Trained Models in Python (using Pickle, Joblib, or framework-specific methods like torch.save).

    • Model Packaging: Creating a deployable artifact by bundling the trained model file(s) with the necessary code for inference and any required preprocessing/postprocessing steps.

    • Common model serialisation formats.

    • Model Versioning Concepts: Why versioning models is important for tracking, reproducibility, and rollbacks in production.
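    As a hedged sketch of the packaging and versioning ideas above, the snippet below saves and loads a model with a version tag in the file name, using the standard-library pickle module (the course also covers joblib and framework-specific methods such as torch.save). The directory layout and the dummy model are assumptions for the example:

    ```python
    # Sketch: saving and loading a trained model with a version tag.
    # The "models/" layout and the dummy model are illustrative only.
    import pickle
    from pathlib import Path

    MODEL_DIR = Path("models")

    def save_model(model, name, version):
        """Serialise a model to models/<name>-v<version>.pkl."""
        MODEL_DIR.mkdir(exist_ok=True)
        path = MODEL_DIR / f"{name}-v{version}.pkl"
        with open(path, "wb") as f:
            pickle.dump(model, f)
        return path

    def load_model(name, version):
        """Load a specific model version, enabling easy rollbacks."""
        path = MODEL_DIR / f"{name}-v{version}.pkl"
        with open(path, "rb") as f:
            return pickle.load(f)

    if __name__ == "__main__":
        dummy_model = {"weights": [0.1, 0.2], "threshold": 0.5}
        save_model(dummy_model, "churn-classifier", 3)
        restored = load_model("churn-classifier", 3)
        print(restored == dummy_model)  # round-trip check
    ```

    Keeping the version in the artifact name is the simplest scheme; a model registry (covered next) formalises the same idea.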

    Experiment Tracking and Model Management

    • Experiment Tracking: Understanding why tracking experiments (parameters, metrics, artifacts, code versions) is crucial for reproducibility, comparison, and collaboration in MLOps.

    • Using MLFlow for experiment tracking: Setting up MLFlow tracking, logging parameters, metrics, and artifacts from your Python training runs, viewing results in the MLFlow UI.

    • Introduction to Model Registries: Concepts and their role in managing model versions, stages (e.g., staging, production), and approval workflows (using MLFlow Model Registry or discussing similar concepts in cloud platforms).
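    To make concrete what an experiment tracker records, here is a toy, file-based stand-in. MLFlow provides all of this (plus a UI, artifact storage, and the Model Registry) through calls such as mlflow.log_param and mlflow.log_metric; the sketch below only illustrates the information a tracker captures per run:

    ```python
    # Toy illustration of experiment tracking: each run records its
    # parameters and metrics so results can be compared and reproduced.
    # This is a stand-in for what MLFlow automates, not a replacement.
    import json
    import time
    from pathlib import Path

    RUNS_DIR = Path("runs")  # assumed location for the example

    def log_run(params, metrics):
        """Write one run's params and metrics to a timestamped JSON file."""
        RUNS_DIR.mkdir(exist_ok=True)
        run = {"timestamp": time.time(), "params": params, "metrics": metrics}
        run_file = RUNS_DIR / f"run-{int(run['timestamp'] * 1000)}.json"
        run_file.write_text(json.dumps(run, indent=2))
        return run_file

    if __name__ == "__main__":
        path = log_run({"lr": 0.01, "epochs": 10}, {"accuracy": 0.91})
        saved = json.loads(path.read_text())
        print(saved["metrics"]["accuracy"])
    ```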

    Model Deployment Patterns (Python and Containers)

    • Overview of Model Deployment Challenges in production (scalability, latency, reliability).

    • Understanding Real-time Inference vs. Batch Inference scenarios.

    • Containerization with Docker:

      • Introduction to Docker concepts (Images, Containers, Dockerfile).

      • Why Docker is essential for reproducible deployments.

      • Building a Docker image for a simple Python-based ML model inference service.

      • Running the inference container locally.
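      As an illustration of the steps above, a minimal Dockerfile for a Python inference service might look like the following sketch. File names such as requirements.txt, serve.py and model.pkl are assumptions for the example:

      ```dockerfile
      # Minimal image for a Python inference service (illustrative sketch).
      FROM python:3.12-slim
      WORKDIR /app
      # Install pinned dependencies first so this layer is cached.
      COPY requirements.txt .
      RUN pip install --no-cache-dir -r requirements.txt
      # Copy the inference code and the packaged model artifact.
      COPY serve.py model.pkl ./
      EXPOSE 8000
      CMD ["python", "serve.py"]
      ```

      Built with `docker build -t inference .` and run locally with `docker run -p 8000:8000 inference`, this produces the same environment on every machine.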

    • Building Custom Model Endpoints with Python:

      • Using a lightweight Python web framework like FastAPI to build a simple REST API for serving model predictions.

      • Integrating the packaged model and inference code into the FastAPI application.

      • Deploying the FastAPI application using Docker.
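      A minimal sketch of such a FastAPI inference service is shown below, with stand-in scoring logic in place of a real model; the endpoint path, feature schema, and file names are assumptions for the example:

      ```python
      # Sketch of a FastAPI inference endpoint. The scoring logic is a
      # stand-in; a real service would load the packaged model once at
      # startup (e.g. with pickle) and call model.predict() instead.
      from fastapi import FastAPI
      from pydantic import BaseModel

      app = FastAPI(title="Inference service")

      class Features(BaseModel):
          values: list[float]

      @app.post("/predict")
      def predict(features: Features):
          # Placeholder "model": mean of the inputs.
          score = sum(features.values) / max(len(features.values), 1)
          return {"prediction": score}
      ```

      Saved as serve.py, this could be run with `uvicorn serve:app --port 8000`, or copied into the Docker image so the whole service ships as one container.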

    • Introduction to Cloud Deployment Endpoints:

      • Brief overview of managed ML model hosting services offered by cloud providers (e.g., AWS SageMaker Endpoints, GCP Vertex AI Endpoints, Azure ML Endpoints).

      • Understanding the benefits of using managed services for scaling, monitoring, and reliability.

      • (Optional: Brief mention of serverless deployment options like AWS Lambda).

    • Hands-On Labs: Building a Docker image for a simple model, creating a basic FastAPI inference service, integrating the model, running the service in Docker, exploring options for pushing/pulling images from a registry.

    Monitoring and Alerting

    • Why monitor ML models specifically in production?

    • Monitoring Model Performance: Tracking key ML metrics (from evaluation module) over time in production.

    • Detecting Data Drift: Identifying changes in the distribution of input data fed to the model.

    • Setting up basic application Logging within the Python inference service.

    • Introduction to Alerting: Setting up simple alerts based on monitoring metrics or logs.
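    The monitoring ideas above can be sketched with a toy drift check: flag drift when the mean of recent production inputs moves too far from the training-data mean, and emit a log warning that an alerting system could pick up. Real systems use proper statistical tests (e.g. Kolmogorov-Smirnov) or dedicated monitoring tools; the threshold here is an assumption for the example:

    ```python
    # Toy data-drift check: flag drift when the mean of recent inputs
    # moves more than `tolerance` training-set standard deviations away
    # from the training mean, and log a warning for alerting.
    import logging
    import statistics

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("inference")

    def drift_detected(train_values, live_values, tolerance=3.0):
        train_mean = statistics.mean(train_values)
        train_std = statistics.stdev(train_values)
        live_mean = statistics.mean(live_values)
        shift = abs(live_mean - train_mean) / train_std
        if shift > tolerance:
            logger.warning("possible data drift: shift=%.2f sd", shift)
            return True
        return False

    if __name__ == "__main__":
        train = [10, 11, 9, 10, 12, 10, 11]
        print(drift_detected(train, [10, 10, 11]))  # stable inputs
        print(drift_detected(train, [25, 26, 24]))  # shifted inputs
    ```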

    Introduction to MLOps Automation and CI/CD

    • Why automate the ML workflow?

    • Introduction to Continuous Integration/Continuous Deployment (CI/CD) concepts applied to Machine Learning.

    • Building simple automated workflows (e.g., using Makefiles, simple shell scripts, or Python scripts) to orchestrate steps like data validation, training, model evaluation, packaging, and deployment.

    • (Optional/Brief Overview) Introduction to MLOps Orchestration Tools and Platforms (e.g., Kubeflow Pipelines, MLFlow Pipelines, AWS SageMaker Pipelines, GCP Vertex AI Pipelines - focus on what these tools do at a high level).
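    As a hedged illustration of the simple automation approach mentioned above, a Makefile can chain the workflow steps so each stage runs only after its predecessor succeeds; the target and script names are assumptions for the sketch:

    ```makefile
    # Illustrative Makefile wiring ML workflow steps together
    # (script names are assumptions for the sketch).
    .PHONY: all validate train evaluate package deploy

    all: deploy

    validate:
    	python scripts/validate_data.py

    train: validate
    	python scripts/train.py

    evaluate: train
    	python scripts/evaluate.py

    package: evaluate
    	docker build -t my-model:latest .

    deploy: package
    	./scripts/deploy.sh
    ```

    Running `make deploy` then executes the whole chain in order; dedicated orchestration tools generalise this idea with scheduling, retries, and distributed execution.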

    Course summary

    • Review of key MLOps principles, practices, and Python tooling covered.

    • Connecting MLOps concepts to the full ML lifecycle (referencing the prior courses).

    • Discussing next steps in the MLOps journey and exploring more advanced topics.

    • Q&A

Trusted by

IBM · University of Oxford · CERN

Public Courses Dates and Rates

Please get in touch for pricing and availability.

Related courses