Hazelcast IMDG 3.9 Is Out

We just released Hazelcast IMDG 3.9. It’s a release packed with new features, improvements and optimizations. Let me introduce you to a few of them:

User Code Deployment from Clients

In Hazelcast IMDG 3.8, we introduced an option for automatic distribution of your domain classes across cluster members. This greatly simplifies the deployment process as you don’t need to copy JARs across your cluster members. It also allows you to add new classes without a cluster restart. We received a lot of feedback on this feature. Probably the most common feature request was to allow clients to push new classes into a running cluster. Well, here is the good news – it’s part of the Hazelcast IMDG 3.9 release! It’s disabled by default for security reasons, but it only takes one switch to enable it.
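
Here is a minimal sketch of the client side, assuming the ClientUserCodeDeploymentConfig introduced for this feature; the class name is a made-up placeholder, and remember that the cluster members must have user code deployment enabled as well:

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.ClientUserCodeDeploymentConfig;

ClientConfig clientConfig = new ClientConfig();
ClientUserCodeDeploymentConfig codeDeployment = clientConfig.getUserCodeDeploymentConfig();
// Disabled by default; flip the switch and list the classes to push to the cluster.
codeDeployment.setEnabled(true);
codeDeployment.addClass("com.example.MyEntryProcessor"); // hypothetical domain class
HazelcastClient.newHazelcastClient(clientConfig);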

Dynamic Data Structure Configs

Dynamic configuration of data structures is another popular feature request that we have implemented. In older Hazelcast IMDG versions, you had to provide all configurations upfront; there was no supported way to submit, for example, a new IMap configuration into a running cluster. Some people applied wildcards in configuration names as a workaround, but we have always felt this was an unnecessary limitation.

In Hazelcast IMDG 3.9, this is no longer necessary. You can now submit new configurations into a running cluster. It’s as easy as:

MapConfig mapConfig = new MapConfig("myDynamicNewMap");
mapConfig.setBackupCount(2);          // illustrative settings; any MapConfig option works here
mapConfig.setTimeToLiveSeconds(300);
hazelcastInstance.getConfig().addMapConfig(mapConfig);

And Hazelcast will automatically distribute this configuration to all cluster members.

Store-by-Reference in Near Cache

Enhanced application speed is the key benefit of using a near cache. A near cache is often used to serve a small subset of very hot items with minimal latency: when an entry is found in the near cache, it is returned to the caller immediately, with no network communication involved. In older versions of Hazelcast IMDG, near cache keys were stored as serialized blobs, so keys had to be serialized on every get request. This proved to affect application performance, which is why Hazelcast IMDG 3.9 allows de-serialized objects to be used as near cache keys. Along with other improvements, near cache lookups now drop down to high-nanosecond times, making the near cache as fast as dedicated in-process caches.
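
A minimal sketch of keeping keys de-serialized, assuming the serialize-keys switch exposed on NearCacheConfig in 3.9; the map name and settings are made up for illustration:

import com.hazelcast.config.Config;
import com.hazelcast.config.NearCacheConfig;

Config config = new Config();
NearCacheConfig nearCacheConfig = new NearCacheConfig("hotItems");
// Keep keys as plain objects instead of serialized blobs, so get() calls
// no longer pay the key-serialization cost on every lookup.
nearCacheConfig.setSerializeKeys(false);
config.getMapConfig("hotItems").setNearCacheConfig(nearCacheConfig);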

Finer-Grained Anti-Entropy

We have invested a lot of engineering resources into making Hazelcast IMDG 3.9 more stable. We have had an anti-entropy mechanism for automatic correction of out-of-sync backups for a long time. When the mechanism detects an inconsistency between a primary replica and its backups, it starts a reconciliation process.

This is invisible to users as it is considered an internal concern. However, it can still have negative side effects: when partitions are bulky, reconciliation can put quite a burden on the system. In older versions of Hazelcast IMDG, a single partition was the minimal reconciliation unit, so even when a single entry was missing from a single map, the anti-entropy mechanism would copy the whole partition.

In Hazelcast IMDG 3.9, we have taken a huge step towards finer-grained reconciliation, migrating only the maps and caches with missing records. This reduces the latency spikes caused by the process and contributes to a smoother operational experience.

Lite Member Promotion

Lite cluster members act as regular members, but they do not own any data. You can use them as pure computation nodes or as an alternative to clients. In Hazelcast IMDG 3.9 we implemented a feature request we received via our Gitter chat: it is now possible to start a Lite member and promote it later to a full member if needed. This is useful when your application is starting and you want to delay data migrations until the application is fully initialized.
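
A minimal sketch of that flow, assuming the promotion is exposed as Cluster.promoteLocalLiteMember() in the 3.9 API:

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

// Start as a lite member: it joins the cluster but owns no data partitions.
Config config = new Config();
config.setLiteMember(true);
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);

// ... let the application finish its own initialization ...

// Promote the local lite member to a full, data-owning member.
instance.getCluster().promoteLocalLiteMember();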

Phi Accrual Failure Detector

Hazelcast uses failure detectors to check whether a remote member is still reachable and healthy. When a detector is too aggressive, it can remove a healthy member from the cluster, which may lead to unnecessary data rebalancing and other issues. On the other hand, when a detector is too lenient, an unresponsive member can be kept in the cluster for too long, which can impact availability.

Hazelcast IMDG 3.9 includes an implementation of the Phi Accrual Failure Detector described by Hayashibara et al. It adapts to network and environment conditions, allowing faster failure detection while minimizing the risk of removing a healthy member.
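
As a sketch, the detector can be selected with a member property; the property name below is how I recall it from the 3.9 configuration properties, so verify it against the reference manual before relying on it:

import com.hazelcast.config.Config;

Config config = new Config();
// Switch from the default deadline-based detector to the Phi Accrual one.
config.setProperty("hazelcast.heartbeat.failuredetector.type", "phi-accrual");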

Manual Migration Control

When a member joins or leaves a cluster, Hazelcast IMDG automatically triggers data re-balancing. This makes the cluster more elastic, but re-balancing can be an expensive process, as it usually involves sending big chunks of data over the network. We are now introducing a new cluster state, NO_MIGRATION, in which re-balancing is not triggered when members join or leave. This has a few use cases; for instance, you can add several new nodes and postpone re-balancing until all of them have joined.
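
A minimal sketch of that use case, assuming the new state is switched via the existing Cluster.changeClusterState() API (hazelcastInstance is a running member, as in the earlier snippet):

import com.hazelcast.cluster.ClusterState;

// Freeze re-balancing before adding a batch of new members.
hazelcastInstance.getCluster().changeClusterState(ClusterState.NO_MIGRATION);

// ... start the new members and wait for them to join ...

// Re-enable migrations; re-balancing now happens once, for the final topology.
hazelcastInstance.getCluster().changeClusterState(ClusterState.ACTIVE);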

Client Connection Strategies

You can now configure how a client behaves when it has no connection to the cluster. Three different behaviors are supported (see the configuration sketch after the list):

  1. Silent reconnect: block all operations until the connection is re-established.
  2. Abort all in-flight operations and reconnect.
  3. Do not reconnect at all.
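
A sketch of how these map onto the client configuration, assuming the 3.9 ClientConnectionStrategyConfig and its ReconnectMode values (ON, ASYNC, OFF); check the client reference for the exact names:

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.client.config.ClientConnectionStrategyConfig.ReconnectMode;

ClientConfig clientConfig = new ClientConfig();
// ON blocks operations until the client reconnects, ASYNC fails in-flight
// operations and reconnects in the background, OFF never reconnects.
clientConfig.getConnectionStrategyConfig().setReconnectMode(ReconnectMode.ASYNC);
HazelcastClient.newHazelcastClient(clientConfig);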

High-Density Memory Store Indexes (Enterprise only)

A High-Density (HD) IMap is a great way to achieve consistent performance regardless of your data size. It massively reduces the impact of JVM Garbage Collection (GC) on your system. In older versions, search indexes could be a problem: they were stored on the regular Java heap and therefore contributed to GC pauses. In Hazelcast IMDG 3.9, an HD IMap always stores its indexes in HD memory, meaning you get predictable performance and can still use indexes to speed up queries.

Rolling Upgrade from 3.8 (Enterprise only)

Hazelcast IMDG 3.8 included the infrastructure to support Rolling Upgrade to a new minor release. Hazelcast IMDG 3.9 is the first version to use this infrastructure, meaning customers can upgrade their 3.8 clusters to 3.9 without any outage.

WAN with Discovery Service Provider Interface (Enterprise only)

We’ve had the Discovery Service Provider Interface (SPI) for a number of Hazelcast IMDG releases. There are providers for Kubernetes, Consul, Eureka, and many others, and it’s a simple and popular way to discover cluster members. In Hazelcast IMDG 3.9, we use the same SPI for WAN discovery. This means you can use the same providers in your WAN configuration, and different clusters will discover each other automatically!

Closing Words

Hazelcast IMDG 3.9 was a massive engineering effort and I am proud to be part of such a great team. We are already working on features for 3.10 and I can tell you, it’s gonna rock! You can learn more about our plans if you stop by our Gitter channel.