The purpose of this document is to compare Redis Open Source (ROSS) 4.0.11 against Redis Labs Enterprise (REE) 5.2.0. The comparison should answer whether we can use Redis Open Source as a proxy for testing Redis Labs Enterprise when re-running the tests published in a Redis Labs blog post. We cannot use Redis Labs Enterprise directly because we do not have a license that allows the desired number of shards, despite repeatedly asking Redis Labs to provide one.
Redis Open Source and Redis Labs Enterprise show the same performance in all tested scenarios, give or take a few percent of run-to-run variability. We therefore conclude that, according to our tests, Redis Open Source is a valid proxy for estimating the performance of Redis Labs Enterprise.
Number of server machines: 1 – 2 (see “Shards configuration”)
Number of client machines: 3 (client1, client2, client3; see “Command line used” below)
Redis Open Source version: 4.0.11
Redis Labs Enterprise version: 5.2.0
The scenarios were scaled down to be comparable, because we do not have more than 4 shards for REE. To investigate behavior with replication turned on and off, we needed to set up the same number of master shards in both scenarios. This is a consequence of how Redis works: by default, all operations go through the master.
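Because all operations go to the master that owns a key's hash slot, the number of master shards determines how load is spread. Purely as an illustration (this is not part of the RadarGun test code, and REE's proxy layer may route differently), here is a minimal Python sketch of the standard open-source Redis Cluster key-to-slot mapping: CRC16 (XModem variant) modulo 16384, including hash-tag handling.

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16-CCITT (XModem) as used by Redis Cluster: poly 0x1021, init 0,
    # no reflection, no final XOR.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc


def key_slot(key: bytes) -> int:
    # Redis Cluster maps every key to one of 16384 hash slots.
    # If the key contains a non-empty {hash tag}, only the tag is hashed,
    # so related keys can be forced onto the same shard.
    start = key.find(b"{")
    if start != -1:
        end = key.find(b"}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key) % 16384
```

For example, `{user1000}.following` and `{user1000}.followers` hash to the same slot (only `user1000` is hashed), so they always live on the same master.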
2 (1 master + 1 slave)
As noted above, the scenarios were scaled down to be comparable, because we do not have more than 4 shards for REE. Given that the original test scenario had 96 shards and we have at most 4, we scaled the number of objects by a factor of 1/24.
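The scale-down arithmetic can be sketched as follows; the object count below is a hypothetical placeholder, since the original counts are given in the benchmark configuration, not here.

```python
from fractions import Fraction

original_shards = 96  # shards in the original Redis Labs test scenario
our_shards = 4        # maximum shards available for REE in our setup

# Scale factor applied to the number of objects: 4/96 = 1/24.
factor = Fraction(our_shards, original_shards)
print(factor)  # prints 1/24

# Any per-test object count is scaled by the same factor, e.g. a
# hypothetical 24,000,000 objects becomes 1,000,000.
original_objects = 24_000_000
scaled_objects = int(original_objects * factor)
print(scaled_objects)  # prints 1000000
```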
RadarGun 3.0.0 with modifications to support Redis.
Number of threads per
Number of objects (Redis Open Source)
Number of objects (Redis Labs Enterprise)
Command line used
dist.sh -c <benchmark_xml_above> -t -m <master_ip>:2103 client1 client2 client3
In this scenario, we had 4 shards of REE/ROSS on one physical box. Hover over the chart to see a description of the scenario.
From the above charts, we see that Redis Open Source and Redis Labs Enterprise perform approximately the same.
In this scenario, we had 4 shards in total, spread across two machines, so each machine contained 1 master + 1 slave. Hover over the chart to see a description of the scenario.