
Hazelcast 2.0 is coming! What is new?

February 20, 2012

Hazelcast 2.0 is a huge step forward in building the best IMDG and making the Hazelcast experience even more pleasant.

As always, this release contains many fixes, enhancements and improvements. But there are other reasons that make 2.0 very special: we have made many big changes in the internals of Hazelcast, and many of them prepare Hazelcast for the era of “BigData In-Memory”!

Distributed Backups

Do you remember the “the next member is your backup” thing? Now it is time to forget it. With 2.0, backups are spread evenly across the cluster; we call this distributed backup. Data owned by a member is backed up evenly by all the other members. In other words, every member takes equal responsibility for backing up every other node. This leads to better memory usage and less disruption to the cluster when you add or remove nodes. The new backup system also makes it possible to form backup groups, so that owners and their backups fall into different groups. We will have another blog entry explaining this in detail.

Parallel IO

Prior to 2.0, regardless of the cluster size, each member had one -In- and one -Out- thread handling communication with the other members over NIO channels. With the new Parallel IO implementation, we have combined the In and Out threads into a single IO thread, and it is now possible to have many IO threads, each handling communication with a different set of members. In a 100-node cluster, for example, each member may have 10 IO threads, each dealing with roughly 10 members. This is great if you have many cores on the node. The motivation behind Parallel IO is to utilize more CPU and more network bandwidth, and to achieve lower latency. The number of IO threads is configurable, so you can have one or many depending on your environment.
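As a sketch, the IO thread count can be set through Hazelcast's usual system-property mechanism; the exact property name below is an assumption, not confirmed by this post:

```xml
<!-- hazelcast.xml sketch: property name is an assumption -->
<hazelcast>
    <properties>
        <!-- run 10 IO threads instead of the default -->
        <property name="hazelcast.io.thread.count">10</property>
    </properties>
</hazelcast>
```

The same property could also be passed on the command line, e.g. `-Dhazelcast.io.thread.count=10`.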

Connection Management: Hazelcast 2.0 is more tolerant of connection failures. On a connection failure it tries to repair the connection before declaring the member dead. So short socket disconnections are now OK… No problem if your virtual server migrates to a new host.

Adding Listeners and Indexes via Configuration:

Previously, you could add Map/Queue/Topic/Membership/Migration listeners only after the Hazelcast instance was started, so you missed all events that happened before your listener(s) were added. With 2.0, you can declare your listeners in the configuration. You can also add indexes for your maps via configuration, so that:

1. You don’t have to call IMap.addIndex in your code anymore.

2. If you have a MapLoader to pre-populate your map, all these pre-populated entries can be indexed.
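A declarative setup might look like the sketch below; the element names are assumptions based on the 2.0 RC schema, and `com.example.MyEntryListener` is a hypothetical listener class:

```xml
<!-- hazelcast.xml sketch; element names are assumptions -->
<map name="employees">
    <indexes>
        <!-- ordered index for range queries; unordered for equality -->
        <index ordered="true">age</index>
        <index ordered="false">name</index>
    </indexes>
    <entry-listeners>
        <entry-listener include-value="true">com.example.MyEntryListener</entry-listener>
    </entry-listeners>
</map>
```

With indexes declared here, entries loaded through a MapLoader are indexed as they are populated, which is not possible when the index is added in code after startup.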

New Event Objects: Event listeners for Queue/List/Set/Topic used to deliver the item itself to the event methods. That is why the items had to be deserialized by Hazelcast threads before the listeners were invoked; sometimes this caused class loader problems too. With 2.0, we have introduced new event containers for Queue/List/Set and Topic, just like Map has EntryEvent. The new listeners receive ItemEvent and Message objects respectively, and the actual items are deserialized only when you call the appropriate get method on the event object. This is where we break compatibility with older versions of Hazelcast. Sorry for any inconvenience this may cause, but in the long term we all win.
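The new listener shapes can be sketched roughly as below; the class and method names follow the `com.hazelcast.core` API described above, though the exact 2.0 signatures are an assumption on my part:

```java
// Sketch of the new 2.0-style listeners (signatures assumed).
import com.hazelcast.core.ItemEvent;
import com.hazelcast.core.ItemListener;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;

public class OrderListeners {

    // Queue/List/Set listeners now receive an ItemEvent wrapper;
    // the item is deserialized only when getItem() is called.
    static class QueueListener implements ItemListener<String> {
        public void itemAdded(ItemEvent<String> event) {
            System.out.println("added: " + event.getItem());
        }
        public void itemRemoved(ItemEvent<String> event) {
            System.out.println("removed: " + event.getItem());
        }
    }

    // Topic listeners now receive a Message wrapper instead of the raw object.
    static class TopicListener implements MessageListener<String> {
        public void onMessage(Message<String> message) {
            System.out.println("received: " + message.getMessageObject());
        }
    }
}
```

Because deserialization is deferred to the get methods, a listener that only counts events, for instance, never pays the deserialization cost at all.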

Client Config: We had tons of factory methods for instantiating a HazelcastClient. 2.0 RC2 introduces a ClientConfig API to get rid of all those factory methods: HazelcastClient.newHazelcastClient(ClientConfig). Nice and clean.
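A minimal bootstrap with the new API might look like this; the `addAddress` and group-config calls are assumptions about the 2.0 RC ClientConfig surface, and the address is a placeholder:

```java
// Sketch of the ClientConfig-based client bootstrap (API details assumed).
import com.hazelcast.client.ClientConfig;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;

public class ClientBootstrap {
    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.addAddress("127.0.0.1:5701");     // a cluster member to connect to
        HazelcastInstance client =
                HazelcastClient.newHazelcastClient(clientConfig);

        System.out.println(client.getMap("default").size());
        client.getLifecycleService().shutdown();
    }
}
```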

SSL Support: Hazelcast could already encrypt socket communication with both symmetric and asymmetric keys. It now supports SSL communication as well.

Other updates: 

  • Distributed MultiMap value collections can be either List or Set.
  • SuperClient has been renamed to LiteMember to avoid confusion. Be careful! It is a member, not a client.
  • New IMap.set(key, value, ttl, TimeUnit) implementation, an optimized put(key, value), as set doesn’t return the old value.
  • HazelcastInstance.getLifecycleService().kill() will forcefully kill the node. Useful for testing.
  • forceUnlock, to unlock a locked entry from any node and any thread, regardless of the owner.
  • Enum type query support: new SqlPredicate(“level = Level.WARNING”), for example.
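A few of the items above can be sketched together; `LogEntry` and `Level` are hypothetical domain types invented for the example, and the snippet assumes a running Hazelcast member:

```java
// Sketch combining IMap.set, an enum SqlPredicate, and forceUnlock.
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.IMap;
import com.hazelcast.query.SqlPredicate;

import java.io.Serializable;
import java.util.Collection;
import java.util.concurrent.TimeUnit;

public class NewApiTour {

    // Hypothetical domain types for the example.
    enum Level { INFO, WARNING }

    static class LogEntry implements Serializable {
        private final Level level;
        private final String text;
        LogEntry(Level level, String text) { this.level = level; this.text = text; }
        public Level getLevel() { return level; }
        public String getText() { return text; }
    }

    public static void main(String[] args) {
        IMap<String, LogEntry> map = Hazelcast.getMap("logs");

        // set() skips returning the old value, avoiding the deserialization
        // that put() would pay for; a ttl of 0 means the entry never expires.
        map.set("e1", new LogEntry(Level.WARNING, "disk almost full"), 0, TimeUnit.SECONDS);

        // Enum values can now be used directly in SQL-style predicates.
        Collection<LogEntry> warnings =
                map.values(new SqlPredicate("level = Level.WARNING"));
        System.out.println(warnings.size());

        // Unlock an entry from any node and any thread, regardless of the owner.
        map.forceUnlock("e1");
    }
}
```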

Issues: The following issues have been fixed so far, and we are trying to fix more before the final release: 430, 459, 471, 567, 574, 629, 632, 646, 666, 686, 669, 690, 692, 693, 695, 698, 705, 710, 711, 712, 713, 714, 715, 719, 721, 722, 724, 727, 728, 729, 730, 731, 732, 733, 738, 740, 747, 751, 756, 758, 759, 761, 765, 767, 770, 773, 779, 781, 783, 790

The documentation is not yet up to date but will be completed by the final release. The ETA for 2.0 final is the end of February. Go grab a fresh RC and start Hazelcasting with 2.0!
