Companies need a data-processing solution that accelerates business agility rather than one complicated by too many technology requirements. This calls for a system that delivers continuous, real-time data-processing capabilities for the new business reality.
Stream processing is a hot topic right now, especially for organizations looking to deliver insights faster. But what does it mean for users of Java applications, microservices, and in-memory computing?
In this webinar, we will cover the evolution of stream processing and in-memory computing in relation to big data technologies, and why stream processing is the logical next step for in-memory processing projects.
Setting up servers and configuring software can get in the way of the problems you are trying to solve. With Hazelcast Cloud, we take all of those pain points away.
Watch this webinar to learn how you can instantly fire up Hazelcast Cloud and work with it from anywhere in the world. With our auto-generated client stubs for Java, Go, Node.js, Python, and .NET, we can have you connected and coding in less than a minute!
The purpose of this document is to compare Redis Open Source (ROSS) 4.0.11 against Redis Labs Enterprise (REE) 5.2.0. The comparison should answer the question of whether we can use Redis Open Source as a proxy for testing Redis Labs Enterprise when trying to re-run the tests published in the Redis Labs blog post. The reason we cannot use Redis Labs Enterprise directly is that we don't have a license that allows the desired number of shards, despite the fact that we repeatedly asked Redis Labs to provide one.
Redis Open Source and Redis Labs Enterprise have the same performance, give or take a few percent of variability on each test run, in all tested scenarios. In conclusion, according to our tests, Redis Open Source is a valid proxy for getting an idea of the performance of Redis Labs Enterprise.
Number of server machines: 1 – 2 (see “Shards configuration”)
Number of client machines
Redis Open Source
Redis Labs Enterprise
The scenarios were scaled down in order to remain comparable, because we don’t have more than 4 shards for REE. To investigate behavior with replication turned on and off, we needed to set up the same number of master shards in both scenarios. That’s because of the way Redis works: by default, all operations go to the master.
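For reference, pointing a Redis Open Source replica at its master takes a single directive in the replica’s configuration file; the ports below are illustrative, not the ones used in our tests:

```
# redis.conf on the replica (illustrative ports)
port 6380
slaveof 127.0.0.1 6379   # Redis 4.x directive; renamed to "replicaof" in Redis 5
```

Since clients talk to the master by default, adding replicas this way changes durability characteristics without changing which node serves the operations being measured.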
2 (1 master + 1 slave)
The scenarios were scaled down in order to remain comparable, because we don’t have more than 4 shards for REE. Given that the original test scenario used 96 shards and we have no more than 4, we scaled the number of objects by a factor of 1/24.
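The scale-down arithmetic can be sketched as follows; the example object count is hypothetical, chosen only to illustrate the 1/24 factor:

```python
# Scale-down of the original benchmark to our 4-shard setup.
ORIGINAL_SHARDS = 96
OUR_SHARDS = 4
SCALE = OUR_SHARDS / ORIGINAL_SHARDS  # 4/96 = 1/24

def scaled_objects(original_count: int) -> int:
    """Map an object count from the original scenario to our setup."""
    return int(original_count * SCALE)

# Hypothetical example: 9,600,000 objects in the original scenario
# would become 400,000 objects in the scaled-down run.
print(scaled_objects(9_600_000))
```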
RadarGun 3.0.0 with modifications to support Redis.
Number of threads per
Number of objects
(Redis Open Source)
(Redis Labs Enterprise)
Command line used
dist.sh -c <benchmark_xml_above> -t -m <master_ip>:2103 client1 client2 client3
In this scenario, we ran all 4 shards of REE/ROSS on one physical box. Hover over the chart to see a description of the scenario.
From the above charts, we see that Redis Open Source and Redis Labs Enterprise perform approximately the same.
In this scenario, we had 4 shards in total, spread across two machines, so each machine contained 1 master + 1 slave. Hover over the chart to see a description of the scenario.