This is a comparison between a Redis 3.2.8 cluster and a Hazelcast IMDG® 3.8 cluster.
Read our previous benchmark here.
Hazelcast IMDG was up to 56% faster than Redis.
Note that near cache was disabled for Hazelcast®.
As our previous benchmark showed, enabling Near Cache makes Hazelcast up to 5 times faster.
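For readers who want to try the Near Cache themselves, a minimal declarative configuration looks like the sketch below. The map name is an assumption for illustration; the elements themselves are standard Hazelcast map configuration.

```xml
<map name="benchmark-map">
  <near-cache>
    <!-- Keep a local copy of entries close to the caller;
         OBJECT format avoids repeated deserialization on each get -->
    <in-memory-format>OBJECT</in-memory-format>
    <!-- Invalidate the local copy when the owning member's entry changes -->
    <invalidate-on-change>true</invalidate-on-change>
  </near-cache>
</map>
```

With this in place, repeated reads of the same key are served from local memory instead of going over the network, which is where the 5x speedup comes from.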
Hazelcast IMDG was up to 44% faster on puts.
This is the second performance test in which Hazelcast IMDG has beaten Redis; see our earlier Redis 3.0.7 vs Hazelcast IMDG 3.6 benchmark. With Hazelcast IMDG 3.8, we have extended our performance lead over Redis.

Test setup: 3 physical boxes were dedicated to cluster members, and 5 physical boxes to clients.

We think Hazelcast IMDG is faster because of the following design differences:

- Hazelcast IMDG uses a map configured with the HD (High-Density) in-memory format and async backups; by default, reads from backups are disabled.
- Redis master-slave replication is async; by default, Redis allows reads from slaves.
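The Hazelcast map settings described above can be expressed declaratively as in the sketch below. The map name is an assumption for illustration; the elements are standard Hazelcast map configuration.

```xml
<map name="benchmark-map">
  <!-- Store entries off-heap in HD (High-Density) native memory -->
  <in-memory-format>NATIVE</in-memory-format>
  <!-- One asynchronous backup instead of a synchronous one -->
  <backup-count>0</backup-count>
  <async-backup-count>1</async-backup-count>
  <!-- Reads from backup replicas stay disabled (the default) -->
  <read-backup-data>false</read-backup-data>
</map>
```

Async backups mean a put returns as soon as the owning member has applied it, without waiting for the replica, which matches the replication behavior Redis was tested with.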
Hazelcast members were configured with 100 GB of pooled native memory:

<native-memory allocator-type="POOLED" enabled="true">
  <size unit="GIGABYTES" value="100"/>
</native-memory>