The journey concludes! I'm excited to share the final installment, Part 5 of my "Getting Started with Real-Time Streaming in Kotlin" series:
"๐
๐ฅ๐ข๐ง๐ค ๐๐๐๐ฅ๐ ๐๐๐ - ๐๐๐๐ฅ๐๐ซ๐๐ญ๐ข๐ฏ๐ ๐๐ง๐๐ฅ๐ฒ๐ญ๐ข๐๐ฌ ๐๐จ๐ซ ๐๐ฎ๐ฉ๐ฉ๐ฅ๐ข๐๐ซ ๐๐ญ๐๐ญ๐ฌ ๐ข๐ง ๐๐๐๐ฅ ๐๐ข๐ฆ๐"!
After mastering the fine-grained control of the DataStream API, we now shift to a higher level of abstraction with the Flink Table API. This is where stream processing meets the simplicity and power of SQL! We'll solve the same supplier statistics problem, this time with a concise, declarative approach.
This final post covers:
• Defining a Table over a streaming DataStream to run queries (see the sketch after this list).
• Writing declarative, SQL-like queries for windowed aggregations.
• Seamlessly bridging between the Table and DataStream APIs to handle complex logic like late-data routing.
• Using Flink's built-in Kafka connector with the avro-confluent format for declarative sinking.
• Comparing the declarative approach with the imperative DataStream API to achieve the same business goal.
• Demonstrating the practical setup using Factor House Local and Kpow for a seamless Kafka development experience.
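For a taste of how declarative this gets, here is a minimal Kotlin sketch of the core pattern: register a DataStream as a table, run a windowed SQL aggregation, and bridge the result back to a DataStream. The Order class, field names, and the five-second window are illustrative placeholders, not the article's exact code, which reads Avro records from Kafka instead of in-memory elements.

```kotlin
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.apache.flink.table.api.Schema
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment

// Hypothetical order event; default values give it a no-arg constructor so Flink treats it as a POJO.
class Order(var supplier: String = "", var price: Double = 0.0, var bidTimeMillis: Long = 0L)

fun main() {
    val env = StreamExecutionEnvironment.getExecutionEnvironment()
    val tEnv = StreamTableEnvironment.create(env)

    // Stand-in source; the article sources these events from Kafka with Avro.
    val orders = env.fromElements(
        Order("acme", 12.5, 1_700_000_000_000L),
        Order("globex", 7.0, 1_700_000_003_000L),
    )

    // Expose the DataStream as a table, deriving an event-time column and watermark from it.
    tEnv.createTemporaryView(
        "orders",
        orders,
        Schema.newBuilder()
            .columnByExpression("bid_time", "TO_TIMESTAMP_LTZ(bidTimeMillis, 3)")
            .watermark("bid_time", "bid_time - INTERVAL '5' SECOND")
            .build(),
    )

    // Declarative tumbling-window aggregation of supplier stats.
    val stats = tEnv.sqlQuery(
        """
        SELECT window_start, window_end, supplier,
               COUNT(*)   AS total_count,
               SUM(price) AS total_price
        FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(bid_time), INTERVAL '5' SECOND))
        GROUP BY window_start, window_end, supplier
        """.trimIndent()
    )

    // Bridge back to the DataStream API when lower-level control (e.g. late-data routing) is needed.
    tEnv.toDataStream(stats).print()

    env.execute("SupplierStatsTableApiSketch")
}
```

The logic that required explicit window assigners and aggregate functions in the DataStream version largely collapses into a single SQL statement here; the article then sinks the results back to Kafka declaratively using the built-in connector with the avro-confluent format.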
This is the final post of the series, bringing our journey from Kafka clients to advanced Flink applications full circle. It's perfect for anyone who wants to perform powerful real-time analytics without getting lost in low-level details.
Read the article:
https://jaehyeon.me/blog/2025-06-17-kotlin-getting-started-flink-table/
Thank you for following along on this journey! I hope this series has been a valuable resource for building real-time apps with Kotlin.
See the full series here:
1. Kafka Clients with JSON
2. Kafka Clients with Avro
3. Kafka Streams for Supplier Stats
4. Flink DataStream API for Supplier Stats