Every enterprise application creates data, whether it’s log messages, metrics, user activity, outgoing messages, or something else. Moving all of this data becomes nearly as important as the data itself. If you’re an application architect, developer, or production engineer new to Apache Kafka, this practical guide shows you how to use this open source streaming platform to handle real-time data feeds.
Engineers from Confluent and LinkedIn who are responsible for developing Kafka explain how to deploy production Kafka clusters, write reliable event-driven microservices, and build scalable stream-processing applications with this platform. Through detailed examples, you’ll learn Kafka’s design principles, reliability guarantees, key APIs, and architecture details, including the replication protocol, the controller, and the storage layer.
- Understand publish-subscribe messaging and how it fits in the big data ecosystem
- Explore Kafka producers and consumers for writing and reading messages (see the sketch after this list)
- Understand Kafka patterns and use-case requirements to ensure reliable data delivery
- Get best practices for building data pipelines and applications with Kafka
- Manage Kafka in production, and learn to perform monitoring, tuning, and maintenance tasks
- Learn the most critical metrics among Kafka’s operational measurements
- Explore how Kafka’s stream delivery capabilities make it a perfect source for stream processing systems
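To give a sense of the ground the producer and consumer chapters cover, here is a minimal sketch using the standard Java client (kafka-clients). The broker address, topic name, and group id are placeholder values chosen for illustration, not taken from the book:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class QuickstartExample {
    public static void main(String[] args) {
        // Placeholder broker address and topic name for illustration only.
        String bootstrap = "localhost:9092";
        String topic = "example-events";

        // Produce a single string message to the topic.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", bootstrap);
        producerProps.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (Producer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>(topic, "user-42", "page_view"));
        }

        // Read messages back from the same topic as part of a consumer group.
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", bootstrap);
        consumerProps.put("group.id", "example-group");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList(topic));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("key=%s value=%s partition=%d offset=%d%n",
                        record.key(), record.value(), record.partition(), record.offset());
            }
        }
    }
}
```

A production application would add error handling, asynchronous send callbacks, a continuous polling loop, and careful offset management, which is exactly the territory the book explores in depth.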