The streaming benchmark is intended to measure the latency overhead for a streaming system under different conditions such as message rate and window size. It compares Hazelcast Jet, Apache Flink, and Apache Spark Streaming.
The streaming benchmark is based on a stock exchange aggregation. Each message represents a trade and is published to Kafka; each data processing framework then performs a simple windowed aggregation that counts the number of trades per ticker symbol.
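To make the workload concrete, here is a rough sketch of such a job written with Hazelcast Jet's Pipeline API. It is not the benchmark code itself: the Kafka message layout assumed here (ticker symbol as the key, event timestamp as the value), the topic name, and the connection properties are illustrative placeholders. Flink and Spark Streaming express the same windowed count through their own APIs.

import java.util.Map.Entry;
import java.util.Properties;

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.aggregate.AggregateOperations;
import com.hazelcast.jet.kafka.KafkaSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.WindowDefinition;

public class TradeCountSketch {
    public static void main(String[] args) {
        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        kafkaProps.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaProps.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.LongDeserializer");

        Pipeline p = Pipeline.create();
        p.readFrom(KafkaSources.<String, Long>kafka(kafkaProps, "trades"))
         .withTimestamps(Entry::getValue, 1_000)      // 1-second allowed out-of-orderness
         .groupingKey(Entry::getKey)                  // group by ticker symbol
         .window(WindowDefinition.tumbling(1_000))    // 1-second tumbling window
         .aggregate(AggregateOperations.counting())   // number of trades per ticker per window
         .writeTo(Sinks.logger());

        JetInstance jet = Jet.newJetInstance();
        jet.newJob(p).join();
    }
}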
The latency is measured as the delay between the earliest time at which the result for a given window could have been received and the time at which the result was actually received.
For example, if we want the result for events happening between 12:00:00.000 and 12:00:01.000, the earliest time we could theoretically get an aggregated result is 12:00:01.000. However, this does not take into account that events happening at 12:00:00.999 might not reach the system immediately, and that there can also be some out-of-orderness due to partitioning (Kafka only guarantees ordering within a partition).
To account for this, we need to allow some delay for all the events to reach the system. If the delay is configured as one second, we should wait until receiving an event with timestamp 12:00:02 before we compute the window for events that happened between 12:00:00 and 12:00:01. Based on this, the earliest time we can get a result is the time when we received an event with timestamp 12:00:02.000. If the system's actual output happens at 12:00:02.100, then we define the latency as 100 ms. This latency includes all of the following:
Each framework is expected to output tuples to one or more files in the following format:
(WINDOW_TIME, TICKER, COUNT, CALCULATION_TIME, LATENCY)
WINDOW_TIME is defined as the end time of a window. For example, for a window covering events between 12:00:00 and 12:00:01, WINDOW_TIME would be 12:00:01. Latency can then be calculated as the difference between CALCULATION_TIME and WINDOW_TIME.
In each output tuple, the first value is the window close timestamp, which indicates which time period the value is for (WINDOW_TIME). The second value is the stock ticker, and the third is the count for that ticker within that window. The next value is the time when processing for that window completed (CALCULATION_TIME), and the last value (LATENCY) is simply CALCULATION_TIME – WINDOW_TIME.
If the allowed out-of-orderness was 1,000 ms, that amount should also be subtracted from LATENCY to find the real latency of the processing framework.
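To make this arithmetic concrete, here is a minimal Java sketch of the calculation described above; the class and variable names are illustrative and not taken from the benchmark code.

import java.time.Duration;
import java.time.LocalTime;

public class LatencyArithmetic {
    public static void main(String[] args) {
        LocalTime windowTime      = LocalTime.parse("12:00:01.000"); // WINDOW_TIME: end of the window
        LocalTime calculationTime = LocalTime.parse("12:00:02.100"); // CALCULATION_TIME: when the result was emitted
        Duration  allowedLag      = Duration.ofSeconds(1);           // allowed out-of-orderness

        // LATENCY as written to the output files: CALCULATION_TIME - WINDOW_TIME = 1100 ms
        Duration latency = Duration.between(windowTime, calculationTime);

        // Real latency of the framework: subtract the allowed lag -> 100 ms,
        // matching the 12:00:02.000 vs. 12:00:02.100 example above.
        Duration realLatency = latency.minus(allowedLag);

        System.out.println(latency.toMillis() + " ms raw, " + realLatency.toMillis() + " ms real");
    }
}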
The following windowing combinations are tested: a 1-second tumbling window, a 60-second window sliding by 1 second, and a 10-second window sliding by 0.1 seconds.
The allowed out-of-orderness is 1 second.
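For reference, here is a rough sketch of how these three window definitions can be expressed in Hazelcast Jet's Pipeline API; the class and variable names are illustrative, and each of the other frameworks defines the same windows through its own API.

import com.hazelcast.jet.pipeline.WindowDefinition;

public class BenchmarkWindows {
    public static void main(String[] args) {
        WindowDefinition oneSecTumbling  = WindowDefinition.tumbling(1_000);         // 1-second tumbling window
        WindowDefinition sixtySecSliding = WindowDefinition.sliding(60_000, 1_000);  // 60-second window sliding by 1 second
        WindowDefinition tenSecSliding   = WindowDefinition.sliding(10_000, 100);    // 10-second window sliding by 0.1 seconds
    }
}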
The output files described above are parsed by a simple log parser written in Python to calculate the average latencies.
All source is available here: big-data-benchmark
Version 2.0.0 – Scala 2.11
72 Partitions (16 per node)
retention.ms=60000 (1 minute)
Number of Network Threads
2 nodes (type c5.9xlarge)
Snapshot interval was 10 seconds; Jet JVM heap was 32G (60G available on the machine)
Kafka source vertex local parallelism was increased from the default of 1 to 36 to handle the load (see the sketch after these configuration notes)
32G heap for TaskManagers, 36 task slots
JobManager running on one of the TaskManager instances
Measured latencies were very stable; snapshotting is very fast and does not affect latency
Used a 10-second snapshot interval, snapshotting to the local file system
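As noted in the Jet configuration above, the Kafka source's local parallelism was raised to 36. The following is an illustrative sketch of how such a setting can be expressed with Jet's Pipeline API; the topic name, message layout, and properties are placeholders, and the benchmark itself may configure the source vertex differently (for example through the Core DAG API).

import java.util.Map.Entry;
import java.util.Properties;

import com.hazelcast.jet.kafka.KafkaSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.StreamStage;

public class SourceParallelismSketch {
    public static void main(String[] args) {
        Properties kafkaProps = new Properties();  // broker and deserializer settings omitted for brevity
        Pipeline p = Pipeline.create();
        StreamStage<Entry<String, Long>> trades =
                p.readFrom(KafkaSources.<String, Long>kafka(kafkaProps, "trades"))
                 .withTimestamps(Entry::getValue, 1_000);
        // 36 local consumers per node x 2 nodes = 72, i.e. one per Kafka partition.
        trades.setLocalParallelism(36);
    }
}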
All latency results are given in milliseconds, to three significant digits.
Due to clock skew between machines, there is a ±20 ms uncertainty in the results. This is especially relevant to Jet's results, where we measured about a 22 ms minimum latency. The real minimum latency may be closer to 0-2 ms, and the average latency about 20 ms.
Duration of a benchmark run: 140 seconds.
The tests used 2 million messages per second and 10,000 distinct keys.
min: 23, max: 223
min: 128, max: 476
min: 243, max: 5620
min: 22, max: 612
min: 234, max: 95100
min: 5760, max: 22700
min: 22, max: 281
min: 234, max: 36900
min: 226, max: 5220
The 1-second tumbling window is the only benchmark for which it makes sense to compare all three of Jet, Flink and Spark on the same chart.
One-second tumbling window:
60-second window sliding by 1 second:
10-second window sliding by 0.1 seconds: