Hazelcast IMDG 3.8 EA is Out
I am excited to introduce Hazelcast IMDG 3.8 Early Access. Since it is “Early Access”, we do not recommend using it in production; we have not yet tested it intensively. You can, however, use it during development to try out the new features. It is available in the Maven repositories and for download.
Let’s start talking about new features:
Scheduled Executor Service
Scheduled Executor Service is one of the features most requested by the community (see https://github.com/hazelcast/hazelcast/issues/115). You will be able to schedule tasks to run at a given moment in time, or repeatedly at fixed intervals, in your cluster.
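As a sketch of how this might look with the 3.8 EA API (the scheduler and task names here are illustrative):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.scheduledexecutor.IScheduledExecutorService;

import java.io.Serializable;
import java.util.concurrent.TimeUnit;

public class SchedulerSketch {
    // Tasks may be serialized and sent to other members, so they must be Serializable.
    static class CleanupTask implements Runnable, Serializable {
        @Override
        public void run() {
            System.out.println("cleanup executed on " + Thread.currentThread().getName());
        }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IScheduledExecutorService scheduler = hz.getScheduledExecutorService("myScheduler");

        // Run once, 10 seconds from now, somewhere in the cluster.
        scheduler.schedule(new CleanupTask(), 10, TimeUnit.SECONDS);

        // Run repeatedly: first after 5 seconds, then every 60 seconds.
        scheduler.scheduleAtFixedRate(new CleanupTask(), 5, 60, TimeUnit.SECONDS);
    }
}
```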
Open Sourcing Continuous Query Cache
Continuous Query Cache has been supported since Hazelcast Enterprise 3.5. We decided to make it open source in 3.8. Continuous Query Cache combines the cached query results with a stream of events to keep the cache up to date. This is especially beneficial when you need to query distributed IMap data very frequently and quickly: with a Continuous Query Cache, the result of the query is always ready and local to the application. Going forward, we will be referring to this feature simply as Continuous Query. See here: http://docs.hazelcast.org/docs/3.8-EA/manual/html-single/index.html#continuous-query-cache
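A minimal sketch of obtaining a query cache on an IMap (map and cache names are illustrative; "this" in the predicate refers to the entry value):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.map.QueryCache;
import com.hazelcast.query.Predicates;

public class QueryCacheSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> scores = hz.getMap("scores");

        // The query cache holds only the entries matching the predicate and is
        // kept up to date by the event stream; true = cache values, not just keys.
        QueryCache<String, Integer> highScores =
                scores.getQueryCache("highScores", Predicates.greaterThan("this", 100), true);

        scores.put("alice", 150);
        // Reads of matching entries are served locally from highScores.
        System.out.println("cached entries: " + highScores.size());
    }
}
```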
Projection for Queries
In 3.8, queries will be able to return specific fields of an entry. Currently, a query behaves like
select * from table where .... With projections, it will effectively become
select t.foo, t.bar from table t where .... This cuts the network and serialization overhead and increases throughput whenever only a subset of attributes is needed. The following IMap methods will enable projections:
<R> Collection<R> project(Projection<Map.Entry<K, V>, R> projection);
<R> Collection<R> project(Projection<Map.Entry<K, V>, R> projection, Predicate<K, V> predicate);
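For example, using the built-in Projections helper (the Person class and attribute names are illustrative):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.projection.Projections;
import com.hazelcast.query.Predicates;

import java.io.Serializable;
import java.util.Collection;

public class ProjectionSketch {
    public static class Person implements Serializable {
        public String name;
        public int age;
        public Person() { }
        public Person(String name, int age) { this.name = name; this.age = age; }
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Person> people = hz.getMap("people");
        people.put("1", new Person("alice", 35));
        people.put("2", new Person("bob", 28));

        // Only the "name" attribute travels over the network, not whole Person objects.
        Collection<String> names = people.project(
                Projections.singleAttribute("name"),
                Predicates.greaterThan("age", 30));
        System.out.println(names);  // e.g. [alice]
    }
}
```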
Aggregation API Revisited
Before Hazelcast 3.8, aggregations were based on our Map-Reduce engine. Since we were not happy with the performance of the Map-Reduce module, in 3.8 aggregations are implemented on top of the query engine, which is much faster and also has a simpler API. You pass an Aggregator to the aggregate() method of IMap:
<R> R aggregate(Aggregator<Map.Entry<K, V>, R> aggregator);
See below for API and more details: http://docs.hazelcast.org/docs/3.8-EA/manual/html-single/index.html#fast-aggregation
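For instance, using the built-in Aggregators helper (the map contents here are illustrative):

```java
import com.hazelcast.aggregation.Aggregators;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class AggregationSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IMap<String, Integer> ages = hz.getMap("ages");
        ages.put("alice", 35);
        ages.put("bob", 28);
        ages.put("carol", 45);

        // The aggregation runs in parallel on the members holding the data.
        long count = ages.aggregate(Aggregators.count());
        double avg = ages.aggregate(Aggregators.integerAvg());
        System.out.println("count=" + count + ", avg=" + avg);
    }
}
```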
Improvements on Near Cache
We made two major improvements to Near Cache:
- It is now eventually consistent. This is a big step forward from the weakly consistent Near Cache of previous versions.
- Client Near Cache can persist keys on a filesystem and reload them on restart. This means you can have your Near Cache hot right after application start!
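For the persistence part, a client-side declarative configuration might look like the following sketch (element and attribute names follow the 3.8 EA manual; the map name, directory, and intervals are illustrative):

```xml
<near-cache name="mostlyReadMap">
    <in-memory-format>BINARY</in-memory-format>
    <!-- Periodically store the Near Cache keys on disk and reload them on
         restart, so the cache is warm right after application start. -->
    <preloader enabled="true"
               directory="nearcache-store"
               store-initial-delay-seconds="600"
               store-interval-seconds="600"/>
</near-cache>
```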
User Code Deployment
User Code Deployment enables you to load new classes into Hazelcast IMDG nodes dynamically, without restarting all servers. You can think of it as “distributed classloading”: when enabled in your cluster, a new class is copied to and loaded by the other servers transparently. Client support will be added in the next release, so copying classes from clients to nodes will become possible. Note that we are releasing this feature as a beta in 3.8, so you can expect API and behavior changes in upcoming releases.
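On the member side, enabling it might look like this in the declarative configuration (a sketch based on the 3.8 EA documentation):

```xml
<user-code-deployment enabled="true">
    <!-- ETERNAL: keep dynamically loaded classes cached for the member's lifetime -->
    <class-cache-mode>ETERNAL</class-cache-mode>
    <!-- Serve both locally available and cached classes to other members -->
    <provider-mode>LOCAL_AND_CACHED_CLASSES</provider-mode>
</user-code-deployment>
```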
HyperLogLog
HyperLogLog is a new data structure that we are introducing in 3.8. It is a probabilistic data structure used to “estimate” the cardinality of unique elements in huge sets. Some common use cases include:
- Calculating unique site visitor metrics in real time (daily, weekly, monthly, yearly, or all-time), based on IP address or user.
- Measuring how a campaign performs (impressions, clicks, etc.) in advertising.
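The feature is exposed through the CardinalityEstimator primitive. A small sketch for the unique-visitor case (the estimator name and IP addresses are illustrative):

```java
import com.hazelcast.cardinality.CardinalityEstimator;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class VisitorsSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        CardinalityEstimator visitors = hz.getCardinalityEstimator("daily-visitors");

        // Adding the same element twice does not change the cardinality.
        visitors.add("10.0.0.1");
        visitors.add("10.0.0.2");
        visitors.add("10.0.0.1");

        // An estimate, not an exact count: memory use stays tiny even for huge sets.
        System.out.println("unique visitors ~ " + visitors.estimate());
    }
}
```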
Ring Buffer Store
This is a storage mechanism for the Ring Buffer, analogous to the existing queue store. When enabled, it allows reading items which are no longer in the Ring Buffer. Also, when a new Ring Buffer is created, it can ask the Ring Buffer store for the largest sequence in the store and continue from there. This lets the Ring Buffer continue with new sequence IDs without overwriting existing sequence IDs that are in the Ring Buffer store. Besides persisting Ring Buffer items, this feature allows users of the ReliableTopic to replay messages older than those in the underlying Ring Buffer.
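Declaratively, this might look like the following sketch (the ring buffer name is illustrative, and the class name stands for your own implementation of the RingbufferStore interface):

```xml
<ringbuffer name="events">
    <ringbuffer-store enabled="true">
        <!-- Your implementation of the RingbufferStore interface -->
        <class-name>com.example.EventRingbufferStore</class-name>
    </ringbuffer-store>
</ringbuffer>
```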
Split Brain Protection for Queue and Lock
Cluster quorum is an effort to make data structures more consistent in the face of failures and network partitioning scenarios. When the number of members in a cluster drops below a predefined threshold, Queue and Lock stop working. The current implementation does not guarantee strict correctness; rather, it is a best effort. Note that quorum for IMap and ICache has been available since 3.5. See: http://docs.hazelcast.org/docs/3.8-EA/manual/html-single/index.html#split-brain-protection
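Following the split-brain-protection section of the manual, a declarative sketch might look like this (quorum and structure names are illustrative):

```xml
<quorum name="atLeastThreeMembers" enabled="true">
    <quorum-size>3</quorum-size>
</quorum>

<queue name="orders">
    <!-- Queue operations fail once fewer than 3 members are present -->
    <quorum-ref>atLeastThreeMembers</quorum-ref>
</queue>

<lock name="orders-lock">
    <quorum-ref>atLeastThreeMembers</quorum-ref>
</lock>
```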
Rolling Upgrade (Enterprise)
Rolling Upgrade is the ability to upgrade cluster nodes’ versions without service interruption. Currently this is only supported between patch versions, e.g., from 3.7.1 to 3.7.3. Starting from Hazelcast IMDG 3.8, we will support Rolling Upgrade between minor versions too: you will be able to upgrade your cluster nodes from 3.8 to 3.9 without any service interruption. You have to restart one node at a time in order not to lose any data. Rolling Upgrade will be supported in Hazelcast Enterprise. See: http://docs.hazelcast.org/docs/3.8-EA/manual/html-single/index.html#rolling-member-upgrades
Dynamic WAN Sync (Enterprise)
Using WAN Replication, you can copy one cluster’s data to another without any service interruption. You can start the sync process from the WAN Sync interface of the Management Center. Also, in Hazelcast IMDG 3.8 you can add a new WAN Replication endpoint to a running cluster using the Management Center. So at any time, you can create a new WAN Replication destination and take a snapshot of your current cluster using the sync capability.
Hot Restart Store with Incomplete Members (Enterprise)
Before 3.8, if you enabled the Hot Restart Store capability, a successful restart required all cluster members to be up and running. This was a very strict requirement, since a single permanent node failure would lead to the loss of all the data in the cluster on restart. This enhancement makes the procedure more flexible by allowing users to do a partial start, which means the cluster can be restarted even with some members missing.
More Flexible Hot Restart Store Deployments (Enterprise)
Before 3.8, servers were required to use the same IP addresses before and after a restart. This restricted many use cases, such as moving data to other servers, replacing a server, or using different cloud instances. In 3.8, Hot Restart Store allows different servers (IP addresses) to take part in the restart in phases.
Hot Backup Cluster (Enterprise)
This capability depends on the Hot Restart Store, so if you have it enabled in your cluster, you can take a backup of your running cluster via the Java API, the REST API, or the Management Center. This is useful when you want to create a snapshot of your cluster.
Personally, I believe the Hazelcast IMDG 3.8 implementation phase has been one of our most productive periods. We very much enjoyed building it for the Hazelcast community, and I hope you will like it too. We look forward to your contributions, bug reports, feedback and questions on our mailing list and GitHub.