Public Sector

We've had the pleasure of working with UK and overseas central and local government departments, including Healthcare (NHS and Foundation Trusts), Defence, Education (universities and colleges), many of the main Civil Service departments, and the Emergency Services. We have also worked with publicly owned corporations including the BBC, the Bank of England and Ordnance Survey, and with regulatory bodies such as Ofgem.

We are registered on Crown Commercial Service’s (CCS) Dynamic Purchasing System (RM6219 Training and Learning) and also with numerous tender portals such as Ariba, Coupa and Delta E-Sourcing.


Graduate Training Schemes

Framework Training has a strong track record of providing a solid introduction to the working world for technical graduates across a wide range of industries. We provide the opportunity to learn and gain valuable hands-on experience in a supportive, friendly and sociable training environment.

Attract & retain the brightest new starters

We know it is vital for our clients to invest in the future of their talented grads; not only to provide them with high-quality, professional training essential for their roles, but to embed them within the organisation’s culture and guide them on the right path to a successful career.

After all, your new hires could well be the next leaders and their creative ideas and unique insights are invaluable to your business.


Learning & Development

Our unique portfolio of high-quality technical courses and training programmes is industry-respected. The courses are carefully designed so that delegates can seamlessly apply what they've learnt back in the workplace. Our team of domain experts, trainers and support staff know our field, and all things tech, inside out, and we work hard to keep up to speed with the latest innovations.

We’re proud to develop and deliver innovative learning solutions that work, making a tangible difference to your people and your business and driving lasting, positive change. Our training courses and programmes are human-centred, and everything we do is underpinned by our commitment to continuous improvement and learning.


Corporate & Volume Pricing

Whether you are looking to book multiple places on public scheduled courses (attended remotely or in our training centres in London) or planning private courses for a team within your organisation, we will be happy to discuss preferential pricing that maximises your staff education budget.

Enquire today about:

  • Training programme pricing models  

  • Multi-course voucher schemes


Custom Learning Paths

We understand that your team's training needs don't always fit a "one size fits all" mould, and we're very happy to explore ways in which we can tailor a bespoke learning path to your requirements.

Find out how we can customise everything from short overviews and intensive workshops to wider training programmes, giving you coverage of the most relevant topics based on what your staff need to excel in their roles.


Applied Natural Language Processing (NLP) with Python

Learn to create NLP solutions with Python: make sense of your unstructured text data.

About the course

Natural Language Processing (NLP) is a rapidly evolving field focused on enabling computers to understand, interpret, and generate human language. With the explosion of text data available from sources like social media, emails, and documents, NLP skills are increasingly vital across industries for tasks such as sentiment analysis, information extraction, topic discovery, and building conversational agents. This course provides a thorough introduction to Natural Language Processing using the powerful Python ecosystem, covering fundamental concepts, essential techniques, and an outlook on modern advancements driven by deep learning and transformer models.

The course begins with the foundations of NLP, exploring common applications and the rich Python ecosystem of libraries like NLTK, spaCy, Gensim, and the cutting-edge Hugging Face Transformers library. You will master essential text processing techniques necessary to prepare text data for analysis, including tokenisation, stemming, lemmatisation, and using Regular Expressions for pattern matching and cleaning. A significant focus is placed on converting unstructured text into numerical formats that machine learning algorithms can understand, covering traditional methods like Bag-of-Words and TF-IDF, and introducing modern word and document embeddings, including the conceptual role of contextual embeddings derived from Transformer models.

You will gain practical skills in core NLP tasks, including Text Classification (categorising documents using both classical ML approaches and modern techniques leveraging pre-trained Transformer models), Topic Modelling (discovering underlying themes using algorithms like LDA), and Information Extraction (specifically Named Entity Recognition using powerful libraries like spaCy and Hugging Face).

The course also provides an outlook on advanced NLP applications such as Text Summarisation and Natural Language Generation, demonstrating capabilities using pre-trained models and setting realistic expectations for developing solutions in these complex areas. Through dedicated hands-on labs integrated throughout, you will gain practical experience applying these techniques and libraries to real-world text data in Python.

Instructor-led online and in-house face-to-face options are available - as part of a wider customised training programme or as a standalone workshop, on-site at your offices or at one of many flexible meeting spaces in the UK and around the world.

    • Understand core NLP concepts, common applications, and the landscape of the Python NLP ecosystem (NLTK, spaCy, Hugging Face Transformers...).
    • Perform essential text processing and cleaning techniques (tokenisation, stemming, lemmatisation, regex) using Python libraries.
    • Create numerical representations of text using methods like Bag-of-Words, TF-IDF, and understand different types of word, document, and contextual embeddings.
    • Implement Text Classification using both classical ML models (e.g., Naive Bayes, SVMs with Scikit-learn) and by leveraging pre-trained Transformer models (e.g., with Hugging Face).
    • Perform Topic Modelling using algorithms like Latent Dirichlet Allocation (LDA) in Python.
    • Perform Named Entity Recognition (NER) using libraries like spaCy and Hugging Face Transformers.
    • Understand the concepts and applications of advanced NLP tasks such as Text Summarisation and Natural Language Generation.
    • Use key Python NLP libraries including NLTK, spaCy, scikit-learn, Gensim, and Hugging Face Transformers for various NLP tasks.
    • Apply NLP techniques to process, analyse, and build models for real-world text data through hands-on labs.
  • This course is designed for developers, data scientists, data analysts, and researchers who want to gain a practical introduction to Natural Language Processing using the Python programming language and its rich ecosystem of libraries. It is ideal for:

    • Data Professionals who work with or anticipate working with text data.

    • Developers interested in building applications that involve processing or understanding human language.

    • Individuals looking to add foundational and modern NLP skills to their repertoire.

  • Participants should have attended our Python Programming and Machine Learning courses, or have equivalent experience:

    • Working knowledge of the Python programming language is required; experience with libraries like NumPy and Pandas is beneficial.

    • Basic familiarity with machine learning concepts (e.g., features, training, evaluation metrics) is beneficial but not strictly required, as relevant concepts will be introduced in context.

    No prior experience with Natural Language Processing is required.

  • This NLP course is available for private / custom delivery for your team - as an in-house face-to-face workshop at your location of choice, or as online instructor-led training via MS Teams (or your own preferred platform).

    Get in touch to find out how we can deliver tailored training which focuses on your project requirements and learning goals.

  • Foundations - Text Processing and the NLP Ecosystem

    • Understanding Natural Language Processing: What is NLP? Key challenges in understanding and processing human language. Overview of diverse real-world NLP applications (e.g., sentiment analysis, chatbots, search).

    • The Python NLP Ecosystem: Exploring the landscape of popular Python libraries for NLP and their strengths (NLTK, spaCy, Gensim, scikit-learn, Hugging Face Transformers). Guidance on choosing the right tool for different tasks.

    • Working with Text: Essential techniques for preparing text data.

      • Tokenisation: Breaking text into meaningful units (words, sub-word tokens).

      • Text Pre-processing: Techniques like lowercasing, punctuation removal, handling noise, stemming (NLTK) and lemmatisation (spaCy).

      • Regular Expressions: Using RegEx for pattern matching, searching, and cleaning text.

    • Basic Text Analysis:

      • Word Frequencies and Distributions: Identifying common and rare words, understanding the concept of Stop-words.

      • Introduction to Zipf's Law (for context on word distribution patterns).

      • Mining simple word co-occurrences to identify basic relationships between words.

    • Hands-On Lab: Setting up the Python NLP environment, loading and cleaning text data, performing tokenisation and pre-processing steps using NLTK and spaCy, calculating word frequencies and identifying stop words.
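
The lab's pre-processing steps can be sketched with the standard library alone. The hand-rolled regex tokeniser and tiny stop-word list below are illustrative stand-ins for what NLTK and spaCy provide out of the box, not a substitute for them:

```python
import re
from collections import Counter

# Illustrative stop-word list; NLTK and spaCy ship much fuller ones.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "is", "it"}

def tokenise(text):
    """Lowercase and split into word-like units (a simple regex tokeniser)."""
    return re.findall(r"[a-z']+", text.lower())

def preprocess(text):
    """Tokenise, then drop stop-words and single-character tokens."""
    return [t for t in tokenise(text) if t not in STOP_WORDS and len(t) > 1]

text = ("The cat sat on the mat. The cat chased a mouse, "
        "and the mouse hid in the mat.")
freqs = Counter(preprocess(text))
print(freqs.most_common(3))  # [('cat', 2), ('mat', 2), ('mouse', 2)]
```

Library tokenisers handle contractions, punctuation and sub-word units far more robustly than a single regex can.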

  • Text Representation - Converting Text to Data

    • The Need for Numerical Representation: Why text needs to be converted into numerical formats (vectors, matrices) for machine learning algorithms.

    • Traditional Methods:

      • N-grams: Representing sequences of words or characters.

      • Bag-of-Words (BoW): Simple frequency-based representation of documents.

      • TF-IDF (Term Frequency-Inverse Document Frequency): Weighing word importance in a document relative to a corpus.

    • Introduction to Word Embeddings: Dense vector representations that capture semantic relationships between words.

      • Concepts of classic word embeddings (Word2Vec, GloVe, FastText).

      • Loading and using pre-trained word embeddings.

    • Introduction to Document Embeddings: Representing entire documents as vectors (e.g., averaging word embeddings, or simple Doc2Vec concepts).

    • Modern Contextual Embeddings: Introduction to the concept of embeddings derived from Transformer models (e.g., BERT), where the vector representation of a word changes based on its surrounding context.

    • Hands-On Lab: Implementing BoW and TF-IDF representations using Scikit-learn, working with pre-trained word embeddings, conceptual discussion and brief demonstration of contextual embeddings.
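
What scikit-learn's CountVectorizer and TfidfVectorizer compute can be illustrated from scratch. This sketch, on invented example documents, uses the classic tf × log(N/df) weighting; note that scikit-learn adds smoothing and L2-normalises rows, so its numbers differ:

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat".split(),
    "the dog sat on the log".split(),
    "the cats and the dogs".split(),
]

# Bag-of-Words: one raw term-frequency vector per document.
vocab = sorted({w for d in docs for w in d})
bow = [[Counter(d)[w] for w in vocab] for d in docs]

# Document frequency: number of documents containing each term.
df = {w: sum(w in d for d in docs) for w in vocab}
N = len(docs)

# Classic TF-IDF weighting: tf * log(N / df).
tfidf = [[Counter(d)[w] * math.log(N / df[w]) for w in vocab] for d in docs]

# A term appearing in every document carries no weight: log(3/3) == 0.
print(tfidf[0][vocab.index("the")])  # 0.0
```

The zero weight for "the" is exactly why TF-IDF downweights ubiquitous words while raw Bag-of-Words counts do not.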

  • Text Classification

    • The Text Classification Problem: Training models to automatically assign predefined categories or labels to documents.

    • Common Applications: Topic Classification, Sentiment Analysis, Spam Detection, Intent Recognition.

    • Classical ML Approaches: Using Scikit-learn for Text Classification pipelines with traditional representations (BoW, TF-IDF).

      • Algorithms: Naive Bayes, Support Vector Machines (SVMs), Logistic Regression.

    • Introduction to Deep Learning Approaches: Concepts of using simple feed-forward neural networks with word embeddings for classification.

    • Modern Approaches with Transformers: Using pre-trained Transformer models (e.g., from the Hugging Face transformers library) for Text Classification. Understanding the fine-tuning paradigm (briefly).

    • Model Evaluation: Assessing Classification Quality using appropriate metrics (Accuracy, Precision, Recall, F1-score, Confusion Matrix) and cross-validation.

    • Model Introspection: Basic techniques for understanding why a model made a specific classification decision.

    • Hands-On Lab: Implementing text classification using classical ML (Scikit-learn pipeline), implementing text classification using a pre-trained Transformer model (Hugging Face transformers), evaluating and comparing model performance.
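
The classical approach can be sketched without any libraries. Below is a toy multinomial Naive Bayes with Laplace smoothing on invented sentiment data; in the lab you would use scikit-learn's MultinomialNB inside a pipeline, on a real corpus:

```python
import math
from collections import Counter, defaultdict

# Toy labelled data; purely illustrative.
train = [
    ("great film loved it", "pos"),
    ("wonderful acting great plot", "pos"),
    ("terrible film hated it", "neg"),
    ("awful plot boring acting", "neg"),
]

# Count words per class.
class_docs = Counter(label for _, label in train)
word_counts = defaultdict(Counter)
for text, label in train:
    word_counts[label].update(text.split())
vocab = {w for counts in word_counts.values() for w in counts}

def predict(text):
    """Pick the class with the highest log prior + log likelihoods,
    using add-one (Laplace) smoothing for unseen words."""
    scores = {}
    for label in class_docs:
        score = math.log(class_docs[label] / len(train))
        total = sum(word_counts[label].values())
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("loved the acting"))     # pos
print(predict("boring and terrible"))  # neg
```

The same interface idea, fit on counts and predict a label, carries over directly to the scikit-learn and Transformer pipelines used in the lab.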

  • Topic Modelling

    • Topic Modelling: Discovering abstract "topics" or themes that occur in a collection of documents.

    • Understanding Probabilistic Topic Models.

    • Algorithm: Latent Dirichlet Allocation (LDA) - Concepts and practical implementation using Gensim or Scikit-learn.

    • Interpreting the results of Topic Models: Identifying key words for each topic and assigning documents to topics.

    • Evaluating Topic Models (basic concepts).

    • Hands-On Lab: Implementing LDA on a collection of documents, exploring the resulting topics and dominant themes in documents.
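
The mechanics behind LDA can be illustrated with a toy collapsed Gibbs sampler. This is a pedagogical sketch on invented documents, resampling each token's topic from counts; Gensim or scikit-learn are what you would use in practice:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, k, iters=100, alpha=0.1, beta=0.01, seed=42):
    """Toy collapsed Gibbs sampler for LDA over tokenised documents."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})
    z = [[rng.randrange(k) for _ in d] for d in docs]   # topic assignments
    ndk = [[0] * k for _ in docs]                       # doc-topic counts
    nkw = [defaultdict(int) for _ in range(k)]          # topic-word counts
    nk = [0] * k                                        # topic totals
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            t = z[di][wi]
            ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]
                ndk[di][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # P(topic j | everything else), up to a constant
                weights = [(ndk[di][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                           for j in range(k)]
                r = rng.random() * sum(weights)
                t = 0
                while t < k - 1 and r > weights[t]:
                    r -= weights[t]
                    t += 1
                z[di][wi] = t
                ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return ndk, nkw

docs = [
    "cat dog pet vet cat".split(),
    "dog pet cat dog vet".split(),
    "stock market trade price stock".split(),
    "market price trade stock market".split(),
]
doc_topic, topic_word = lda_gibbs(docs, k=2)
print(doc_topic)  # each row sums to the document's token count
```

Real implementations add convergence checks, hyperparameter tuning and far more efficient sampling or variational inference.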

  • Information Extraction

    • Introduction to Information Extraction: Automatically extracting structured information from unstructured text.

    • Named Entity Recognition (NER): Identifying and classifying named entities in text (e.g., persons, organisations, locations, dates).

      • Techniques: Rule-based approaches (briefly) vs. Machine Learning/Deep Learning approaches.

      • Practical NER using libraries like spaCy and pre-trained models from Hugging Face Transformers.

    • (Optional/Brief): Introduction to other IE tasks like Relation Extraction (identifying relationships between entities) or Event Extraction (identifying mentions of events).

    • Hands-On Lab: Performing Named Entity Recognition on sample text using spaCy and/or Hugging Face Transformers, exploring the output and evaluating results.
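
The rule-based approach mentioned above can be sketched with a small hand-built gazetteer. The entries and example sentence are invented for illustration; statistical NER (spaCy, Hugging Face) learns entities from data rather than from fixed lists like this:

```python
import re

# Tiny illustrative gazetteer mapping surface forms to entity labels.
GAZETTEER = {
    "London": "LOC",
    "Ada Lovelace": "PERSON",
    "Bank of England": "ORG",
}

def rule_based_ner(text):
    """Match gazetteer entries (longest names first) and report their spans."""
    entities = []
    for name in sorted(GAZETTEER, key=len, reverse=True):
        for m in re.finditer(re.escape(name), text):
            entities.append((m.group(), GAZETTEER[name], m.start()))
    return sorted(entities, key=lambda e: e[2])

text = "Ada Lovelace visited the Bank of England in London."
for ent, label, _ in rule_based_ner(text):
    print(ent, label)
# Ada Lovelace PERSON
# Bank of England ORG
# London LOC
```

The obvious weakness, unknown names are invisible to a gazetteer, is precisely what motivates the learned approaches used in the lab.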

  • Introduction to Advanced NLP Applications - An Outlook

    • Overview of Advanced NLP Problems: Introduction to complex tasks that often build on deep learning and large models.

    Text Summarisation:

    • Understanding the goal: Condensing a document or collection into a shorter version.

    • Approaches: Extractive Summarisation (selecting key sentences, e.g., TextRank concepts) vs. Abstractive Summarisation (generating new sentences).

    • Conceptual Overview: How sequence-to-sequence models and Transformers are used for summarisation.

    • Demonstration: Using a pre-trained summarisation model from Hugging Face Transformers.

    Natural Language Generation (NLG):

    • Understanding the goal: Creating human-like text from structured data or as a continuation of existing text.

    • Techniques Overview: Simple methods (e.g., template-based, N-gram generation) vs. Deep Learning methods (RNNs, LSTMs, and especially Transformer-based Language Models).

    • Setting Realistic Expectations: Highlighting the complexity and current state of the art in generating coherent, relevant, and fluent text, and the significant role of large pre-trained language models (LLMs).

    • Demonstration: Using a pre-trained language model from Hugging Face Transformers for text generation (e.g., prompting the model, exploring different decoding strategies).

    • (Optional/Brief): Introduction to Machine Translation concepts, Conversational AI / Chatbot basics.

    • Hands-On Demo/Brief Lab: Using libraries to perform summarisation and text generation with pre-trained models on sample data.
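
A minimal extractive summariser can be sketched by scoring sentences on summed word frequencies, a much-simplified cousin of the TextRank idea mentioned above; the example text and scoring are illustrative only, and the demo instead uses pre-trained abstractive models:

```python
import re
from collections import Counter

def extractive_summary(text, n=1):
    """Score each sentence by the summed corpus frequency of its words,
    keep the top n, and return them in their original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freqs = Counter(re.findall(r"[a-z]+", text.lower()))
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: sum(freqs[w] for w in re.findall(r"[a-z]+", sentences[i].lower())),
        reverse=True,
    )
    keep = sorted(ranked[:n])
    return " ".join(sentences[i] for i in keep)

text = ("NLP lets computers process language. "
        "Summarisation condenses long language documents. "
        "Extractive summarisation selects key sentences from language documents.")
print(extractive_summary(text, n=1))
```

Frequency scoring naturally favours longer, keyword-dense sentences; abstractive models avoid that bias by generating new sentences rather than selecting existing ones.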

  • Summary and next steps

    • Review of key NLP concepts, techniques, and the Python ecosystem covered throughout the course.

    • Connecting the learned concepts and tools to real-world applications and participant interests.

    • Discussing next steps in your NLP journey: exploring specific libraries in more depth, diving deeper into Deep Learning for NLP theory, working with advanced architectures, fine-tuning large pre-trained models, and deploying NLP models (MLOps for NLP).

    • Q&A

Trusted by

Amadeus Services · CERN · University of Oxford

Public Courses Dates and Rates

Please get in touch for pricing and availability.

Related courses