Apache Kafka +
Hazelcast for Developers
You could be doing more with your Apache Kafka applications.
With Hazelcast, they can be faster and more connected than ever before.
Kafka Unleashed
As countless people have asserted over the years, data is the new oil. Like oil that sits underground, undiscovered and untapped, data's full potential goes unfulfilled until it is brought to the surface. The power released when oil is pumped out of the ground is the same power released when data is transformed from data-at-rest to data-in-motion.
Developers can see this potential in their current Apache Kafka solutions, which typically implement the standard producer/consumer paradigm. The data pipelines in these solutions are left largely untapped. Pairing Apache Kafka with the Hazelcast Platform unleashes the untapped value of these data reserves.
Apache Kafka and Hazelcast together create a powerful stream processing architecture. Apache Kafka, the streaming engine, manages the events. Hazelcast, the stream processor, acts as the consumer, catching events and processing them. By combining Apache Kafka with Hazelcast, you get all the scalability, fault-tolerance, and data consistency benefits of both technologies, plus the elasticity that Hazelcast provides.
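For a concrete sense of that division of labor, here is a minimal sketch of a Hazelcast pipeline consuming a Kafka topic. The broker address, topic name, and mapping step are illustrative assumptions, not part of any particular Hazelcast sample:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.kafka.KafkaSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;

import java.util.Properties;

public class KafkaConsumerPipeline {
    public static void main(String[] args) {
        // Kafka consumer properties; broker address and topic are placeholders
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("auto.offset.reset", "earliest");

        // Kafka is the streaming engine; Hazelcast is the stream processor (consumer)
        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(KafkaSources.<String, String>kafka(props, "events"))
                .withoutTimestamps()
                .map(record -> record.getKey() + " -> " + record.getValue())
                .writeTo(Sinks.logger());

        // Submit the pipeline as a continuously running job on a Hazelcast member
        HazelcastInstance hz = Hazelcast.bootstrappedInstance();
        hz.getJet().newJob(pipeline);
    }
}

Kafka keeps the durable, ordered event log; Hazelcast pulls those events into a continuously running job where they can be transformed, enriched, and routed to any sink.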
See why Hazelcast is recognized as an exceptional streaming platform vendor
The GigaOm Radar for Streaming Data Platforms is an essential source for data leaders to understand the market landscape of streaming data platform vendors. In the 2023 edition, GigaOm lists Hazelcast as a forward mover in the Leadership circle of the Streaming Data Platforms Radar.
Learn how Hazelcast outperformed Apache Flink and Spark
The ESPBench paper from the Hasso Plattner Institute compares the performance of popular stream processors. This independent report benchmarked Hazelcast against Apache Flink and Apache Spark. It states, "Overall, the latency results are diverse with Hazelcast [Jet] often performing best with respect to the 90%tile and mean values."
Champion Innovation
Change is not easy, and unfortunately, real change rarely happens without a catalyst, like the company that doesn't take cybersecurity seriously until after a major security breach. The savvy Developer looks for opportunities to make incremental improvements to applications before a catalyst causes massive disruption to the business.
A simple three-step action plan will help you champion innovation in your business: streamline, automate, and enhance.
Streamline
Complexity is inherent in all businesses and processes. The problem isn't that it occurs; it's that it rarely gets reviewed, analyzed, and simplified. As an architect or developer, be prepared to ask the hard questions about existing processes and data: Do we need all these steps? Do we need all these data points? Does anyone remember what we were going to use this for? Chances are you will discover that the initiative was canceled or that the person who wanted it is no longer with the company.
Automate
Having reexamined your data and process requirements, you can now look for opportunities to add connections between solutions to accelerate business outcomes. The savvy Developer identifies the points where data is typically "stored for later" and converts it from data-at-rest into data-in-motion, as in the sketch below.
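Here is a hedged illustration of that conversion in Hazelcast; the map name and the event-journal requirement are assumptions for the example. The same data can be read once as a batch (data-at-rest) or streamed continuously from the map's event journal (data-in-motion):

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.pipeline.JournalInitialPosition;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

public class DataInMotion {
    public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Data-at-rest: a one-shot batch read of whatever sits in the "orders" map today
        // pipeline.readFrom(Sources.map("orders")).writeTo(Sinks.logger());

        // Data-in-motion: stream every change to "orders" the moment it happens
        // (requires the event journal to be enabled for the map in the Hazelcast config)
        pipeline.readFrom(Sources.mapJournal("orders", JournalInitialPosition.START_FROM_CURRENT))
                .withIngestionTimestamps()
                .writeTo(Sinks.logger());

        HazelcastInstance hz = Hazelcast.bootstrappedInstance();
        hz.getJet().newJob(pipeline);
    }
}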
Enhance
Of course, as soon as these new capabilities come online, new requirements will surface. While keeping a finger on the pulse of the business, keep an eye to the future, noting how you can take advantage of improvements in cloud technologies to manage costs and increase performance.
Kafka + Hazelcast in Action
Here’s a quick example that shows Kafka + Hazelcast in action. This code snippet comes from the Patient Monitoring Solution and demonstrates how easy it is to turn data-at-rest into data-in-motion.
In the Patient Monitoring Solution, the Pipeline Implementation class creates a Hazelcast streaming pipeline and job for each of the devices monitoring the patient. A Pipeline Factory class creates each Pipeline object. In the createPipeline() function below, the topicMapNameIn parameter is associated with the device name; for example, the "heart" device has a "heart" topicMapNameIn value. The topicMapNameIn also corresponds to the ebTopicName used by the producer's produceRecords() function, so each pipeline object is tied to a single device. After the pipeline reads a record from Kafka, it creates an entry in the Patient map, where the EntryProcessorImpl class matches subsequent device records with the Patient to build a complete Patient record for this instance. The device record is also written to the corresponding device map, the Patient record is sent on to the results sink for scoring, and all output is written to the log.
public static Pipeline createPipeline(
        String hazelcastIpAddressIn,
        String patientMapIn,
        String topicMapNameIn,
        String processorNameIn,
        String processorPortIn) {

    // Initialize an empty pipeline
    Pipeline pipeline = Pipeline.create();
    try {
        // Read from the Message Bus (Apache Kafka)
        StreamStage<Entry<String, String>> streamStage =
                pipeline.readFrom(KafkaSources.<String, String>kafka(kafkaProps(), topicMapNameIn))
                        .withoutTimestamps()
                        .map(r -> entry(r.getKey(), r.getValue()));

        // Write to Patient Map
        streamStage.writeTo(Sinks.mapWithEntryProcessor(patientMapIn, entryKey(),
                e -> new EntryProcessorImpl(topicMapNameIn, e.getValue(), new Patient(e.getKey()))));

        // Write to Device Map
        streamStage.writeTo(Sinks.map(topicMapNameIn));

        // Sink is designated to send results
        streamStage.writeTo(Result.buildGraphiteSink(hazelcastIpAddressIn, topicMapNameIn,
                processorNameIn, processorPortIn));

        // Write to log
        streamStage.writeTo(Sinks.logger());
    } catch (Exception e) {
        e.printStackTrace();
    }
    return pipeline;
}
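The snippet relies on a kafkaProps() helper and on each pipeline being submitted as a Hazelcast job. The sketch below shows one plausible shape for those pieces; the broker address, deserializers, job name, and argument values are assumptions for illustration and are not taken from the Patient Monitoring Solution source:

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.config.JobConfig;
import com.hazelcast.jet.pipeline.Pipeline;

import java.util.Properties;

public class PipelineFactory {

    // Kafka consumer settings used by createPipeline(); broker address is a placeholder
    static Properties kafkaProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.setProperty("auto.offset.reset", "earliest");
        return props;
    }

    // createPipeline(...) as shown above would live here

    public static void main(String[] args) {
        // One pipeline and one named job per monitored device; argument values are placeholders
        Pipeline heartPipeline = createPipeline("127.0.0.1", "patient", "heart", "processor-1", "2004");

        HazelcastInstance hz = Hazelcast.bootstrappedInstance();
        hz.getJet().newJob(heartPipeline, new JobConfig().setName("heart-monitoring-job"));
    }
}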
Accelerate Your Learning
Get started with Hazelcast today. We have resources to help you learn. Check these out!
Webinar
Streaming SQL on Apache Kafka for Real-Time Processing
White Paper
The Hazelcast and Apache Kafka Transaction Processing Reference Architecture
Blog
Enriching Kafka Applications with Contextual Data
Use case
Why Apache Kafka and Hazelcast