About the course:
Our Apache Kafka training course takes you through the steps necessary to obtain, configure and deploy Kafka so you can benefit from scalable high-speed data stream management, message queuing, storage and clustering.
Kafka isn't tied to any one platform, so you can interface with applications and services written in Java, Python or C#, or built with frameworks such as Spring Boot and Spark.
The Kafka broker plays nicely with Docker too, so whether you're looking to deploy your message queue to the cloud or to an internal cluster alongside other containers and microservices, we can help you get to grips with the concepts and techniques you'll need.
By the end of the course, you will have learnt about:
- What is a distributed streaming platform?
- Brokers, Consumers, Producers, and ZooKeeper
- Topics, partitions, and the Kafka log
- Message Delivery Semantics
- Connectors and Kafka Connect
- Streams Processors
- Basic administration and configuration of a Kafka / ZooKeeper cluster
Who should attend
This course is aimed at system administrators and software developers who need to set up Kafka for the first time.
Prerequisites
Some experience of working with the command line and Linux would be useful, and any exposure to programming in a language such as Java, Python or C# would be highly beneficial.
Live, instructor-led online and on-site training
We appreciate that you need flexibility to fit in with new working situations - whether you're an individual, part of a distributed team, or simply have projects and deadlines to meet.
Our remote training can take place online in a virtual classroom, with content split into modules to accommodate your scheduling challenges and meet your learning goals. Get in touch today to find out how we can help design a cost-effective, flexible training solution.
As soon as it's safe to do so, we'll also return to offering the on-site custom training courses and programmes on which we've built our reputation.
Introduction to Kafka
- Overview of Kafka Architecture
- Key terms
- Obtaining and installing Kafka
What is a distributed streaming platform?
- Publishing and subscribing to Streams
- Storing Streams
- Processing
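To give a flavour of what publishing to a stream looks like in code, here's a minimal sketch using the Java producer client; the broker address and the "events" topic name are placeholders for illustration, not anything the course mandates.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class QuickPublish {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; point this at your own cluster.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Append one record to the "events" topic (hypothetical topic name).
            producer.send(new ProducerRecord<>("events", "sensor-1", "temperature=21.5"));
        }
    }
}
```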
Brokers, Consumers, Producers, and ZooKeeper
- Understanding Brokers, Consumers and Producers
- When do I use ZooKeeper?
- Tracking consumed offsets
- Load Balancing
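A minimal sketch of the consumer side, assuming a hypothetical "demo-group" group id and "events" topic: consumers sharing a group id split the topic's partitions between them (load balancing), and committing offsets is how each group's consumed position is tracked.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class QuickConsume {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker
        props.put("group.id", "demo-group");                 // consumers with the same group id share the partitions
        props.put("enable.auto.commit", "false");            // we commit offsets explicitly below
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
                consumer.commitSync(); // record the consumed offsets for this group
            }
        }
    }
}
```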
Topics, partitions, and the Kafka log
- Fault tolerance - Replication, Failover and Parallel Processing
- Ordering and Cardinality
- Leaders & Followers
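As an illustrative sketch of how partition counts and replication factors come together, the Java AdminClient can create a topic programmatically; the topic name and the sizing below are assumptions, not recommendations.

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import java.util.Collections;
import java.util.Properties;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions for parallel processing, replication factor 3 so each
            // partition has one leader and two followers available for failover.
            NewTopic orders = new NewTopic("orders", 6, (short) 3);
            admin.createTopics(Collections.singletonList(orders)).all().get();
        }
    }
}
```

Each of the six partitions gets one leader and two follower replicas, so a broker failure can be tolerated while ordering within each partition is preserved.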
Message Delivery Semantics
- At most once
- At least once
- Exactly once
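These guarantees map onto producer configuration. Below is a hedged sketch: acks=all gives at-least-once behaviour, while enabling idempotence and a transactional id (the id and topic names shown are hypothetical) moves towards exactly-once.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class ExactlyOnceProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all");                              // wait for all in-sync replicas: at-least-once
        props.put("enable.idempotence", "true");               // broker de-duplicates retried sends
        props.put("transactional.id", "payments-producer-1");  // hypothetical id enabling transactions

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("payments", "order-42", "charged"));
            producer.commitTransaction();                      // records become visible atomically
        }
    }
}
```

For end-to-end exactly-once, downstream consumers also need isolation.level=read_committed so they only see committed transactions.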
Connectors and Kafka Connect
- Connector model
- Worker model
- Data model
- Data pipeline management
- Log and metric collection, processing, and aggregation
- ETL for data warehousing
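Connectors are usually configured through the Connect worker's REST interface rather than in application code. As a sketch only - the worker URL, file path and topic are assumptions - the following registers a FileStreamSource connector that reads lines from a local file into a topic.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterConnector {
    public static void main(String[] args) throws Exception {
        // Connector definition: stream lines from a local file into the "connect-demo" topic.
        String json = """
                {
                  "name": "demo-file-source",
                  "config": {
                    "connector.class": "org.apache.kafka.connect.file.FileStreamSourceConnector",
                    "tasks.max": "1",
                    "file": "/tmp/demo.txt",
                    "topic": "connect-demo"
                  }
                }
                """;

        // Post the definition to a Connect worker assumed to be listening on localhost:8083.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

The same JSON could equally be sent with curl; the point is that Connect, not your application, manages the connector and its worker tasks.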
Streams Processors
- Processor API
- Defining a Stream Processor
- State Stores
- Defining and creating a State Store
- Fault-tolerant State Stores
- Managing fault tolerance of State Stores (store changelogs)
- Custom State Stores
- Connecting Processors and State Stores
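To show how these pieces fit together, here is a minimal sketch using the Kafka Streams Processor API: a custom processor backed by a persistent key-value State Store, wired into a topology. The topic names and store name are assumptions for illustration.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

// Counts occurrences of each key, keeping the running totals in a state store.
public class CountProcessor implements Processor<String, String, String, Long> {
    private ProcessorContext<String, Long> context;
    private KeyValueStore<String, Long> store;

    @Override
    public void init(ProcessorContext<String, Long> context) {
        this.context = context;
        this.store = context.getStateStore("counts-store"); // must match the store name added to the topology
    }

    @Override
    public void process(Record<String, String> record) {
        Long current = store.get(record.key());
        long updated = (current == null ? 0L : current) + 1;
        store.put(record.key(), updated);
        context.forward(new Record<>(record.key(), updated, record.timestamp()));
    }

    // Wiring: source topic -> processor (with its state store) -> sink topic.
    public static Topology build() {
        Topology topology = new Topology();
        topology.addSource("Source", "input-events");
        topology.addProcessor("Count", CountProcessor::new, "Source");
        topology.addStateStore(
                Stores.keyValueStoreBuilder(
                        Stores.persistentKeyValueStore("counts-store"),
                        Serdes.String(), Serdes.Long()),
                "Count");
        topology.addSink("Sink", "counts",
                Serdes.String().serializer(), Serdes.Long().serializer(), "Count");
        return topology;
    }
}
```

Because the store is created through a standard store builder, Kafka Streams backs it with a changelog topic by default, which is what makes the State Store fault-tolerant.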
Basic administration and configuration of a Kafka / ZooKeeper cluster
- Setting up Kafka and ZooKeeper on AWS
- Starting up and shutting down brokers
- Performance Optimisations
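Much of this day-to-day administration is done with the shell scripts in Kafka's bin/ directory, but a running cluster can also be inspected programmatically with the Java AdminClient. A minimal sketch, assuming a broker reachable on localhost:9092:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.common.Node;
import java.util.Properties;

public class ClusterCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            // Which brokers are currently part of the cluster?
            for (Node node : admin.describeCluster().nodes().get()) {
                System.out.println("Broker " + node.id() + " at " + node.host() + ":" + node.port());
            }
            // Which topics do they host?
            System.out.println("Topics: " + admin.listTopics().names().get());
        }
    }
}
```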