Hazelcast 3.5 – It’s time to make the move to Hazelcast – Our finest release ever…

Hazelcast has just released Hazelcast 3.5. From my point of view this is the best version we've ever created. Not only did we tighten our development process and increase our QA efforts, it's also the first version tested in our new test lab, which Peter Veentjer talked about a couple of weeks ago:


Testing Hazelcast on these new beasts gives me the feeling we did everything we could to make it stable, fast, and efficient.

But that's not all: our main focus for Hazelcast 3.5 was stability and performance, and the new test lab was just one part of that. Our engineering team did an amazing job, digging deep into the different subsystems and optimizing multiple areas with glorious results.

Performance Improvements

JCache and Map operations have been improved by 10% to 50%. The exact gain depends on the setup tested; the biggest improvement was measured with two Hazelcast members. This is still a common setup for a simple fail-over and recovery solution, but even bigger clusters show a 10% improvement. In addition, general distributed query performance improved by an order of magnitude compared with the latest 3.4 releases, and we changed our MapLoader implementation to load keys and values in a more efficient, distributed way: each key is loaded only once, and the loading work is distributed across the cluster.
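The distributed loading still builds on the familiar MapLoader SPI: the member owning a partition loads the keys belonging to that partition. A minimal sketch, assuming a hypothetical in-memory backing store (in a real application this would be a database or service call):

```java
import com.hazelcast.core.MapLoader;

import java.util.Collection;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal MapLoader sketch: Hazelcast calls loadAllKeys() once and then
// distributes the load(...) / loadAll(...) calls across the cluster, so
// each key is loaded by the member that owns its partition.
public class ProductLoader implements MapLoader<Long, String> {

    // Hypothetical backing store; stands in for a database or remote service.
    private final Map<Long, String> backingStore = new HashMap<Long, String>();

    @Override
    public String load(Long key) {
        return backingStore.get(key);
    }

    @Override
    public Map<Long, String> loadAll(Collection<Long> keys) {
        Map<Long, String> result = new HashMap<Long, String>();
        for (Long key : keys) {
            String value = load(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }

    @Override
    public Set<Long> loadAllKeys() {
        // Returning the key set here enables eager, distributed pre-loading.
        return new HashSet<Long>(backingStore.keySet());
    }
}
```

The loader is registered on the map via its MapStoreConfig, either programmatically or in hazelcast.xml.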

MapReduce was also sped up. The biggest improvement was achieved when using a KeyPredicate, by removing the internal usage of Java HashMaps, which tend to re-hash too often. This gave a nice speedup of up to 75%, depending on the data size of the test scenario. Several other setups were also optimized, with runtime improvements between 4% and about 30%.
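A KeyPredicate filters keys before the mapping phase even starts, so non-matching entries never enter the job at all. A sketch against the 3.x MapReduce API (the class names UserKeysOnly and CountMapper are illustrative, not from Hazelcast):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.mapreduce.Context;
import com.hazelcast.mapreduce.Job;
import com.hazelcast.mapreduce.JobTracker;
import com.hazelcast.mapreduce.KeyPredicate;
import com.hazelcast.mapreduce.KeyValueSource;
import com.hazelcast.mapreduce.Mapper;

import java.util.List;
import java.util.Map;

public class KeyPredicateExample {

    // Only keys starting with "user:" are handed to the mapper at all.
    static class UserKeysOnly implements KeyPredicate<String> {
        @Override
        public boolean evaluate(String key) {
            return key.startsWith("user:");
        }
    }

    static class CountMapper implements Mapper<String, String, String, Integer> {
        @Override
        public void map(String key, String value, Context<String, Integer> context) {
            context.emit("count", 1);
        }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, String> map = hz.getMap("entries");

        JobTracker tracker = hz.getJobTracker("default");
        Job<String, String> job = tracker.newJob(KeyValueSource.fromMap(map));

        // Without a reducer, emitted values are collected into lists per key.
        Map<String, List<Integer>> result = job
                .keyPredicate(new UserKeysOnly())
                .mapper(new CountMapper())
                .submit()
                .get();

        System.out.println(result);
    }
}
```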

Another very important change is the new balanced IO subsystem. It automatically rebalances itself based on long-running send operations to make the best use of the underlying IO channels. With this new subsystem, latency became more predictable and overall throughput went up by about 4%.

I loved seeing people test the early access version and recognize the performance improvements in their own tests: https://groups.google.com/d/msg/hazelcast/NSZ4C9tQiJ0/8tXw4EwNQh8J

Last but not least, the concurrency code inside our C# client was rewritten to optimize the internal locking scheme. This gives a performance improvement between 50% and almost 100% (with 16 or 128 threads, respectively) and lifts the C# client to almost the same speed as our current Java reference implementation. We have also brought C++ performance up to the level of our Java client.

We will further improve speed and concurrency in later versions, but I'm more than proud of these achievements and excited to talk about them.

Provable Stability

Fortunately we have not only worked on performance but also on stability. We recently released Hazelcast Simulator, which we use internally to simulate production use of Hazelcast. We stress our members and clients as much as possible to guarantee smooth operation in production environments. With the release of Hazelcast Simulator we open this power to our users, so they can test their own applications and systems built on Hazelcast.

Additionally, we introduced a back-pressure system for asynchronous operations to prevent the system from being overloaded by a massive rate of asynchronous operations on Hazelcast data structures.
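Back pressure is disabled by default and is switched on via system properties. A sketch of the relevant hazelcast.xml fragment, assuming the 3.5 property names (please verify against the reference manual for your exact version):

```xml
<hazelcast>
  <properties>
    <!-- Enable back pressure (off by default) -->
    <property name="hazelcast.backpressure.enabled">true</property>
    <!-- Roughly every N-th async backup operation is made synchronous,
         which throttles callers that produce async operations too fast -->
    <property name="hazelcast.backpressure.syncwindow">100</property>
  </properties>
</hazelcast>
```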

New Features

That alone would be enough to justify a blog post, but that's not all. Our engineering team also worked on a few very nice new features.

Hazelcast now offers a way to configure the topic implementation, from fire-and-forget to reliable transmission, and we also exposed the new underlying data structure, the Ringbuffer, to our users. So with one feature users get two benefits 🙂
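Both are available directly from the HazelcastInstance. A short sketch, assuming the 3.5 accessor names getReliableTopic and getRingbuffer (the topic and ringbuffer names are arbitrary):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;
import com.hazelcast.ringbuffer.Ringbuffer;

public class ReliableTopicExample {
    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Reliable topic: same ITopic API as before, but backed by a ringbuffer
        // instead of fire-and-forget delivery.
        ITopic<String> topic = hz.getReliableTopic("events");
        topic.addMessageListener(new MessageListener<String>() {
            @Override
            public void onMessage(Message<String> message) {
                System.out.println("received: " + message.getMessageObject());
            }
        });
        topic.publish("hello");

        // The ringbuffer itself is also exposed as a first-class data structure.
        Ringbuffer<String> rb = hz.getRingbuffer("my-ringbuffer");
        long sequence = rb.add("first-item"); // returns the item's sequence number
        String item = rb.readOne(sequence);   // read the item back at that sequence
        System.out.println(item);

        hz.shutdown();
    }
}
```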

We added Continuous Query Caching, which allows continuous population and updating, in near real time, of an IMap based on a given query. This speeds up queries massively, since they are not executed on retrieval but at insertion time.
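A continuous query cache is declared per map with a name and a predicate. A hedged configuration sketch following the QueryCacheConfig naming (the map name, cache name, and predicate are made up; consult the reference manual for the exact XML schema):

```xml
<map name="employees">
  <query-caches>
    <query-cache name="seniors">
      <!-- The cache is kept up to date with all entries matching this predicate -->
      <predicate type="sql">age &gt; 50</predicate>
      <include-value>true</include-value>
    </query-cache>
  </query-caches>
</map>
```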

The last new feature is the High-Density Memory Store-based client near-cache. For people not yet familiar with a near-cache: a near-cache stores data inside the requesting client for even faster access on repeated retrievals. Values in near-caches are invalidated automatically whenever a corresponding key or value changes. Until Hazelcast 3.5 this feature was available for on-heap near-caches only; now near-cache entries can also be stored in native memory.
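On the client, switching a near-cache to native memory is a configuration change. A sketch of the client-config fragment, assuming the NATIVE in-memory format (which requires the High-Density Memory Store; map name is illustrative):

```xml
<near-cache name="orders">
  <!-- NATIVE stores entries off-heap in the High-Density Memory Store
       instead of on the JVM heap -->
  <in-memory-format>NATIVE</in-memory-format>
  <!-- Evict cached entries when the owning member reports a change -->
  <invalidate-on-change>true</invalidate-on-change>
</near-cache>
```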

Last but not least, my personal favorite. Hazelcast 3.5 is delivered with a preview version of our new client protocol. This new protocol is the first step towards full rolling-update support and offers the option of using Hazelcast clients with higher versions of a Hazelcast cluster. Using this feature, the application landscape can be upgraded gradually without a full shutdown. I invite users to test this new protocol heavily and give us feedback; we want to make it the standard protocol in the upcoming Hazelcast 3.6 release, but we really need your help! If you have the chance to test it, do so!

Following the client protocol, we will look into further support for rolling updates on members, moving us further along the way to 24/7 operation.

Again, I am excited about the 3.5 release and want to encourage all users to upgrade to our new "go-to" Hazelcast version. Download it at www.hazelcast.org/download or via Maven (coordinates: com.hazelcast:hazelcast:3.5) and give it a try!