Redis 3.0.7 vs Hazelcast 3.6 Benchmark
This is a comparison between a four-server Redis 3.0.7 cluster and a four-server Hazelcast 3.6 cluster, run with the caching benchmark framework RadarGun.
Hazelcast was more than five times faster than Redis with a near cache enabled (sized at 10% of the key space). With no near cache, meaning that all gets go across the network, Hazelcast was 32% faster.
Hazelcast was 5% faster on puts.
Why Hazelcast is Faster Than Redis
We think Hazelcast is faster because of the following design differences:
- Hazelcast uses highly optimized, multi-threaded clients and servers, and uses efficient asynchronous IO. Each thread owns its partitions, so there is never contention.
- Redis is single-threaded, meaning that single instances cannot efficiently utilize available CPU resources under heavy load. Hazelcast is fully multi-threaded, with partitions owned by partition threads.
- Jedis uses blocking sockets, while Hazelcast’s Java client uses asynchronous I/O.
- We also have a high-performance binary protocol. Redis uses RESP, a human-readable text-based protocol.
- Specifically, in the near cache tests, Hazelcast is more than 500% faster than Redis. This is expected because distributed Java caches have near caches and Redis does not.
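To make the near-cache point concrete, here is a minimal sketch of the idea in plain Java: a local map consulted before the remote cluster, so repeated reads never cross the network. The class and method names are illustrative, not Hazelcast's actual implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of a near cache: a local map in front of a (simulated) remote
// cluster call. Repeat reads of the same key are served locally, which
// is where the near-cache speedup comes from.
public class NearCacheSketch {
    private final Map<String, String> local = new ConcurrentHashMap<>();
    private final Function<String, String> remoteGet; // stands in for a network call
    private int remoteCalls = 0;

    public NearCacheSketch(Function<String, String> remoteGet) {
        this.remoteGet = remoteGet;
    }

    public String get(String key) {
        // First read goes remote; subsequent reads hit the local map.
        return local.computeIfAbsent(key, k -> {
            remoteCalls++;
            return remoteGet.apply(k);
        });
    }

    public int remoteCalls() { return remoteCalls; }

    public static void main(String[] args) {
        NearCacheSketch cache = new NearCacheSketch(k -> "value-for-" + k);
        cache.get("a");
        cache.get("a"); // second read is local, no remote call
        System.out.println(cache.remoteCalls()); // prints 1
    }
}
```

With a Gaussian key distribution like the one used in this benchmark, a small local cache of the hottest keys absorbs a large share of the gets, so the remote-call count stays low.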
The effect of very small numbers of clients/threads:
How do you reconcile the test results we got with Redis' reputation for sheer speed? Often a developer test will show Redis as blazing fast. A simple dev test will use a single client and a single thread in that client. Sure enough, Redis is faster in this scenario. However, when we explored the effect of adding clients and client threads on throughput, the crossover point, where Hazelcast became faster, occurred somewhere between 8 and 32 threads. Most production systems running Hazelcast or Redis are very busy, otherwise you wouldn't need us, and a very busy system has lots of clients and threads. In this scenario, Hazelcast beats Redis.
So, be wary of the results you get from simple dev performance comparisons. For more on the effect of client and thread counts, see Redis vs. Hazelcast – RadarGun Puts Them To A Challenge.
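The shape of such a multi-client test can be sketched in plain Java. This is an illustrative harness, not RadarGun: it drives the same 9:1 get/put workload with a configurable thread count against a local map standing in for the cache, which is exactly the variable a single-threaded dev test fails to explore.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

// Illustrative harness (not RadarGun): run the benchmark's 9:1 get/put
// workload over 100,000 keys with a variable number of client threads.
public class ThreadScalingSketch {
    static long run(int threads, int opsPerThread) throws Exception {
        ConcurrentHashMap<Integer, Integer> cache = new ConcurrentHashMap<>();
        LongAdder ops = new LongAdder();
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.submit(() -> {
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                for (int i = 0; i < opsPerThread; i++) {
                    int key = rnd.nextInt(100_000);
                    if (i % 10 == 0) cache.put(key, key); // 1 put per 9 gets
                    else cache.get(key);
                    ops.increment();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return ops.sum(); // total operations completed across all threads
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(1, 10_000));  // a single-threaded dev test
        System.out.println(run(32, 10_000)); // production-like concurrency
    }
}
```

Timing a local map proves nothing about either product; the point is that any fair comparison must sweep the thread count, because that is the axis on which the single-threaded and multi-threaded architectures diverge.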
See the chart below for a visual of when the crossover happens.
| Environment | Hazelcast Performance Lab |
| --- | --- |
| Server Heap | 4 GB |
| Client Heap | 4 GB |
| Java | Java 1.7.0_85 OpenJDK 64-bit |
| Network | 40 Gbps network with Solarflare NICs |
| Framework | RadarGun 2.1.0.Final https://github.com/GhostInAMachine/radargun/tree/redis |
| Topology | 4 clients / 4 servers |
| Number of Threads/Client | 40 |
| Test Duration (minutes) | 10 |
| Number of Unique Keys | 100,000 |
| Get/Put Ratio | 9:1 (caching use case) |
| Key Selection | Gaussian (bell curve) frequency |
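The Gaussian key selection in the table above means keys near the middle of the 100,000-key space are requested far more often than keys at the edges. A minimal sketch of such a generator in plain Java follows; the centre and the scaling factor are assumptions for illustration, not RadarGun's exact distribution.

```java
import java.util.concurrent.ThreadLocalRandom;

// Sketch of Gaussian (bell curve) key selection over a 100,000-key
// space: the bell is centred on the middle of the space, so a small
// set of "hot" keys receives a disproportionate share of requests.
// The standard deviation chosen here is an assumption for illustration.
public class GaussianKeys {
    static final int KEY_SPACE = 100_000;

    static int nextKey() {
        double g = ThreadLocalRandom.current().nextGaussian();
        // Centre on the middle; one standard deviation spans a sixth of the space.
        int key = (int) Math.round(KEY_SPACE / 2.0 + g * KEY_SPACE / 6.0);
        return Math.max(0, Math.min(KEY_SPACE - 1, key)); // clamp outliers into range
    }

    public static void main(String[] args) {
        int hot = 0, samples = 100_000;
        for (int i = 0; i < samples; i++) {
            int k = nextKey();
            if (k >= 45_000 && k < 55_000) hot++; // the 10,000 hottest keys
        }
        // Well over a fifth of requests land on 10% of the key space.
        System.out.println(hot > samples / 5);
    }
}
```

This skew is why a 10K-entry near cache is so effective here: the hottest 10% of keys absorb far more than 10% of the gets.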
- Jedis is used as the Redis client
- There are four servers. Redis is C-based, so there is no heap configuration
- No backups configured
- Redis persistence is disabled
- Default settings used for Jedis
- Redis server configuration (redis.conf): cluster-config-file nodes.conf
- The Hazelcast Java client is used. There are 4 clients, with 40 threads/client
- The cluster has four servers, each with a heap size of 4GB
- Hazelcast has a backup count of zero
- Hazelcast persistence is disabled
- Hazelcast has near-cache enabled with default setup (10K max size, LRU eviction)
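For reference, the near-cache setup described above would be expressed in the Hazelcast 3.6 client configuration roughly as follows. This is a hedged sketch of the XML fragment, based on the 3.x client-config schema, not a copy of the benchmark's actual configuration file.

```xml
<!-- Sketch of a hazelcast-client.xml near-cache entry matching the
     setup described above: 10K max size, LRU eviction. -->
<near-cache name="default">
  <max-size>10000</max-size>
  <eviction-policy>LRU</eviction-policy>
</near-cache>
```

With this in place, the client keeps up to 10,000 recently read entries locally and evicts the least recently used entry when the cache is full.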
The graphs represent the throughput of the entire system (all 160 threads).
Raw benchmark files: Redis-3.0.7_vs_Hazelcast-3.6_Benchmark.zip
Hazelcast Get – Near Cache 10K
Hazelcast Get – Without Near Cache