Companies need a data-processing solution that accelerates business agility rather than one complicated by too many technology requirements. This calls for a system that delivers continuous, real-time data-processing capabilities for the new business reality.
Stream processing is a hot topic right now, especially for organizations looking to deliver insights faster. But what does it mean for users of Java applications, microservices, and in-memory computing?
In this webinar, we will cover how stream processing and in-memory computing have evolved alongside big data technologies, and why stream processing is the logical next step for in-memory processing projects.
Setting up servers and configuring software can get in the way of the problems you are trying to solve. Hazelcast Cloud takes those pain points away.
Watch this webinar to learn how you can instantly fire up Hazelcast Cloud and work with it from anywhere in the world. With our auto-generated client stubs for Java, Go, Node.js, Python, and .NET, we can have you connected and coding in less than a minute!
Overall RadarGun configuration
Zing reduces full GC pauses.
We would therefore expect this to show up in average latencies and, because large full GCs are avoided,
even more so in the maximum latencies. This is what we found.
The average was reduced by 40%. The 99.99th percentile max latency was only 8 ms for Zing
but 55 ms for HotSpot.
We conclude that Hazelcast® running on the Azul Zing JVM has lower latency and much lower variability,
giving more predictable latencies. This in turn allows Hazelcast to fit into tighter response-time SLAs.
These results apply to small heap sizes, where Hazelcast stores small amounts of data, 500 MB to 1.5 GB
per node, and are relevant to both Hazelcast and Hazelcast Enterprise.
Note that Hazelcast Enterprise HD stores data off-heap and runs with only a small heap,
so a benefit similar to the one seen in these tests with 1 to 2 GB heaps would be realized there as well.
See below for summaries of the results for the 1 GB and 2 GB heap cases; the extensive test results appear further down.
- Average response time for Get: 0.8 ms for HotSpot versus 0.5 ms for Zing
- Max response at the 99th percentile for Get: 2 ms for HotSpot versus 2 ms for Zing
- Max response at the 99.9th percentile for Get: 40 ms for HotSpot versus 8 ms for Zing
- Max response at the 99.99th percentile for Get: 55 ms for HotSpot versus 8 ms for Zing
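To make the percentile figures above concrete, here is a minimal sketch of how a max-latency percentile (such as the 99.99th) can be derived from recorded per-operation latency samples using the nearest-rank method. The class and method names are illustrative assumptions, not the benchmark's actual implementation; RadarGun and production tools typically use histogram-based recorders instead of sorting raw samples.

```java
import java.util.Arrays;

public class LatencyPercentiles {
    // Nearest-rank percentile: the smallest sample such that at least
    // p percent of all samples are less than or equal to it.
    // The input array must be sorted in ascending order.
    public static long percentile(long[] sortedMicros, double p) {
        int rank = (int) Math.ceil(p / 100.0 * sortedMicros.length);
        return sortedMicros[Math.max(rank - 1, 0)];
    }

    public static void main(String[] args) {
        // Hypothetical per-Get latencies in microseconds.
        long[] latencies = {400, 500, 500, 600, 800, 2000, 8000, 55000};
        Arrays.sort(latencies);
        System.out.println("99th percentile:    " + percentile(latencies, 99.0) + " us");
        System.out.println("99.99th percentile: " + percentile(latencies, 99.99) + " us");
    }
}
```

A single outlier, such as one long full-GC pause, dominates the highest percentiles while barely moving the average, which is why the 99.99th percentile separates the two JVMs far more sharply than the mean does.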