09:00 - 17:00
Apache Kafka is the de facto standard streaming data platform: widely deployed as a messaging system, with a robust data integration framework (Kafka Connect) and a stream processing API (Kafka Streams) to meet the needs that commonly attend real-time message processing. But there’s more! Kafka now offers KSQL, a declarative, SQL-like stream processing language that lets you define powerful stream-processing applications easily. What once took moderately sophisticated Java code can now be done at the command line with a familiar and eminently approachable syntax.
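To give a flavor of that declarative style, here is a hedged sketch of the kind of one-liner meant above. The stream and column names (`orders`, `order_total`) are illustrative, not from the workshop materials, and the statement assumes a running KSQL server with an `orders` stream already registered:

```sql
-- Hypothetical example: filter a stream continuously, no Java required.
-- 'orders' and 'order_total' are assumed names for illustration only.
CREATE STREAM high_value_orders AS
  SELECT *
  FROM orders
  WHERE order_total > 1000;
```

Issued once from the KSQL CLI, this runs as a persistent stream-processing application until you drop it.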
Content & Process
In this workshop, we’ll cover the following:
• How to produce to and consume from a Kafka topic
• Using Kafka Connect to stream data from a relational database into Kafka
• How to model real-world enterprise problems in a streaming data platform
• How to enrich and aggregate streaming data using KSQL
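As a preview of the last two items, enrichment and aggregation in KSQL might look like the sketch below. This is a hedged illustration, not workshop code: the `orders` stream, `customers` table, and their columns are assumed names, and the statements presume those sources are already registered with KSQL:

```sql
-- Enrich: join an event stream to a lookup table (illustrative names).
CREATE STREAM orders_enriched AS
  SELECT o.order_id, o.order_total, c.name, c.region
  FROM orders o
  LEFT JOIN customers c ON o.customer_id = c.customer_id;

-- Aggregate: a windowed count and sum per region.
CREATE TABLE orders_by_region AS
  SELECT region,
         COUNT(*) AS order_count,
         SUM(order_total) AS region_total
  FROM orders_enriched
  WINDOW TUMBLING (SIZE 1 HOUR)
  GROUP BY region;
```

Note that a grouped aggregation yields a `TABLE` (a continuously updated view) rather than a `STREAM`, which is why the second statement uses `CREATE TABLE`.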
Audience & Requirements
Before attending, please clone this repo from GitHub [https://github.com/confluentinc/kafka-workshop], then complete all the steps in Exercise 0. This involves installing Docker on your machine and running a docker-compose pull to download the workshop images. There will be little time on workshop day to troubleshoot Docker-related problems, so the day will be much more successful if you get everything installed and running before you show up.
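The pre-workshop steps above amount to roughly the following. This is a sketch only; the authoritative instructions are in Exercise 0 of the repo, and it assumes Docker and Docker Compose are already installed:

```shell
# Clone the workshop repo and fetch the Docker images ahead of time.
git clone https://github.com/confluentinc/kafka-workshop.git
cd kafka-workshop
docker-compose pull

# Optionally verify that the stack comes up before workshop day:
docker-compose up -d
docker-compose ps
docker-compose down
```

Pulling the images in advance matters because they are large; conference Wi-Fi is rarely up to downloading them on the day.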