The Hazelcast Platform: Rocketing to the Next Level

To boldly go where no stream processing platform has gone before!

If you’re already familiar with the Hazelcast Platform, see our ‘Show me the code’ blog post, which jumps right into 5.2 features and shows you how to use them.

Activate the countdown clock and prepare for launch!

T-6 hours and counting…

Akin to loading the rocket’s external tanks with liquid hydrogen and liquid oxygen propellants, bounded (batch) data and unbounded (streaming) data can be loaded into the Hazelcast Platform using Hazelcast Connectors. These out-of-the-box connectors provide an easy and efficient way to access data sources (wherever they are, including files, messaging systems, databases, data structures, etc.) and support multiple data formats (text, CSV, JSON, and Avro).
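For instance, here is a minimal sketch of a batch pipeline built with the Java Pipeline API that reads lines from a local directory of files and logs them on the cluster. The directory path is illustrative; swap in your own source and sink.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

public class FileIngest {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Batch pipeline: read every line from the files in a local
        // directory (path is illustrative) and log them on the cluster.
        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(Sources.files("/data/incoming"))
                .writeTo(Sinks.logger());

        // Submit the job to the cluster and wait for it to finish.
        hz.getJet().newJob(pipeline).join();
    }
}
```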

Now with Hazelcast Platform 5.2, the Hazelcast JDBC Connector provides a configuration-driven way to connect to any JDBC-compliant external data store. New data stores can also be added dynamically at runtime. SQL over JDBC can map tables in external data stores to the relevant data structures in the Hazelcast Platform and perform CRUD operations on them. The zero-code connector kicks this up a notch: it enables read-through, write-through, and write-behind data access to a relational database, with no need to programmatically load data using MapStore. This connector is in beta and currently supports AWS RDS for MySQL and PostgreSQL. This release also improves the Debezium-based Change Data Capture (CDC) connectors that turn databases into streaming data sources.
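As a rough sketch of what SQL over JDBC looks like, the snippet below maps an external MySQL table into Hazelcast SQL and queries it. The data store reference, table name, and mapping options are illustrative; check the 5.2 documentation for the exact JDBC mapping syntax.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.sql.SqlResult;
import com.hazelcast.sql.SqlRow;

public class JdbcMappingSketch {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Map a table from an external MySQL database into Hazelcast SQL.
        // The mapping name and the OPTIONS key below are illustrative and
        // assume a data store named 'my-mysql-store' is already configured.
        hz.getSql().execute(
                "CREATE MAPPING orders "
              + "TYPE JDBC "
              + "OPTIONS ('externalDataStoreRef' = 'my-mysql-store')");

        // Query the external table as if it were a local data structure.
        try (SqlResult result = hz.getSql().execute(
                "SELECT * FROM orders WHERE total > 100")) {
            for (SqlRow row : result) {
                System.out.println(row.getObject(0).toString());
            }
        }
    }
}
```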

T-9 minutes and counting… 

Main engines start… The Hazelcast Platform, powered by a proven low-latency data store and a real-time stream processing engine, provides a unified platform for real-time applications. The platform does the heavy lifting of ingesting, processing, and storing data, and making it available in one distributed, easy-to-scale, highly available cluster, so that developers can focus on the business logic.

The low-latency data store supports multiple serialization schemes (Serializable, Portable, etc.), and we’re happy to announce GA status for Compact serialization. Compact serialization is now the recommended, de facto standard object serialization mechanism. It is highly performant because it separates the schema from the data, uses less memory and bandwidth, and supports partial deserialization of fields, so the whole object doesn’t need to be deserialized during queries or indexing.
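Here is a minimal sketch of a Compact serializer for a hypothetical Employee class, plus its registration. The registration call on CompactSerializationConfig changed between the beta and GA APIs, so double-check the 5.2 docs for the exact method names.

```java
import com.hazelcast.config.Config;
import com.hazelcast.nio.serialization.compact.CompactReader;
import com.hazelcast.nio.serialization.compact.CompactSerializer;
import com.hazelcast.nio.serialization.compact.CompactWriter;

// A hypothetical domain class used only for illustration.
class Employee {
    final String name;
    final int age;

    Employee(String name, int age) {
        this.name = name;
        this.age = age;
    }
}

class EmployeeSerializer implements CompactSerializer<Employee> {
    @Override
    public Employee read(CompactReader reader) {
        // Read back the fields written in write(); the schema travels
        // separately from the data, which keeps the payload small.
        return new Employee(reader.readString("name"), reader.readInt32("age"));
    }

    @Override
    public void write(CompactWriter writer, Employee employee) {
        writer.writeString("name", employee.name);
        writer.writeInt32("age", employee.age);
    }

    @Override
    public String getTypeName() {
        return "employee";
    }

    @Override
    public Class<Employee> getCompactClass() {
        return Employee.class;
    }
}

class CompactSetup {
    public static void main(String[] args) {
        Config config = new Config();
        // Register the serializer; the exact config method may differ
        // slightly between releases, so consult the docs.
        config.getSerializationConfig()
              .getCompactSerializationConfig()
              .addSerializer(new EmployeeSerializer());
    }
}
```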

In case of engine failure, we have you covered, as we’ve added more self-healing capabilities! Automated cluster state management for persistence on Kubernetes has been enhanced to support cluster-wide shutdown, rolling restart, and partial member recovery from failures.

Hazelcast Platform 5.2 significantly enhances the SQL capabilities of the Calcite-based SQL engine that sits atop the underlying stream processing engine. SQL-based data pipelines can be constructed to ingest and transform data, and ANSI-compliant SQL can filter, merge, enrich, and aggregate it. Streaming SQL with temporal filters is supported for both tumbling and sliding windows; for example, the average price of a stock can be computed for the past hour, or for every hour since the market opened, simply by defining the window. SQL can also combine multiple streams and handle late-arriving records using watermarks; combining streams of orders and shipments, for instance, can help generate more accurate fulfillment data in real time. SQL SELECT statements together with JSONPath syntax can be used to query JSON objects and compute object and array aggregations, and SQL support has been extended to query nested Java objects.
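As a sketch of the windowing syntax, the snippet below computes a per-ticker hourly average price. It assumes a streaming mapping named trades (for example, over Kafka) with ticker, price, and trade_ts columns already exists; the names and the watermark lag are illustrative.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.sql.SqlResult;
import com.hazelcast.sql.SqlRow;

public class HourlyAveragePrice {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Impose ordering on the assumed `trades` stream so that events
        // arriving up to 1 second late are still assigned to their window.
        hz.getSql().execute(
                "CREATE VIEW trades_ordered AS "
              + "SELECT * FROM TABLE(IMPOSE_ORDER("
              + "  TABLE trades, DESCRIPTOR(trade_ts), INTERVAL '1' SECOND))");

        // Average price per ticker for each one-hour tumbling window.
        // This is a streaming query: results keep arriving as windows close.
        try (SqlResult result = hz.getSql().execute(
                "SELECT window_start, ticker, AVG(price) AS avg_price "
              + "FROM TABLE(TUMBLE(TABLE trades_ordered, "
              + "  DESCRIPTOR(trade_ts), INTERVAL '1' HOUR)) "
              + "GROUP BY window_start, ticker")) {
            for (SqlRow row : result) {
                System.out.println(row.getObject("ticker") + ": "
                        + row.getObject("avg_price"));
            }
        }
    }
}
```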

The latest release delivers on the Hazelcast Platform’s promise to store much larger volumes of data while maintaining low-latency access to it. Tiered Storage, an enterprise feature, provides spill-to-disk functionality that removes the constraint of storage being limited by the cluster’s memory size. Storing data across tiers (memory and disk) allows larger volumes of data to be kept in the cluster, while intelligent migration of data between memory and disk preserves low-latency access.

So how do you access data across the tiers? You simply use SQL to transparently query these larger datasets spread across memory and disk. Tiered Storage lowers TCO by reducing the reliance on separate storage technologies, and it does so while delivering low-latency storage in a reliable, consistent manner. We expect to scale even further in a cost-effective manner as we introduce additional tiers such as Amazon S3.
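A rough sketch of a Tiered Storage map configuration is shown below. The device name, directory, and capacity are illustrative, and the enterprise license and native-memory setup that Tiered Storage relies on are omitted, so treat this as a starting point and consult the Tiered Storage docs for the full configuration.

```java
import java.io.File;

import com.hazelcast.config.Config;
import com.hazelcast.config.LocalDeviceConfig;
import com.hazelcast.config.MapConfig;
import com.hazelcast.memory.Capacity;
import com.hazelcast.memory.MemoryUnit;

public class TieredStorageSketch {
    public static void main(String[] args) {
        Config config = new Config();

        // A local disk device to hold the overflow tier
        // (device name and path are illustrative).
        config.addDeviceConfig(new LocalDeviceConfig()
                .setName("local-ssd")
                .setBaseDir(new File("/mnt/hazelcast/tiered-store")));

        // Keep the hot 256 MB of the map in memory and let the rest
        // spill to the disk device configured above.
        MapConfig mapConfig = new MapConfig("trades");
        mapConfig.getTieredStoreConfig()
                 .setEnabled(true)
                 .getMemoryTierConfig()
                 .setCapacity(Capacity.of(256, MemoryUnit.MEGABYTES));
        mapConfig.getTieredStoreConfig()
                 .getDiskTierConfig()
                 .setEnabled(true)
                 .setDeviceName("local-ssd");
        config.addMapConfig(mapConfig);
    }
}
```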

T-0: ignition & liftoff!

The data has now been ingested, processed, and stored; it is ready for consumption! The familiar Management Center, which provides cluster management and monitoring tools, has an enhanced SQL Browser for executing streaming SQL queries. This release also introduces the Hazelcast Command-Line Client (CLC) in beta, a command-line tool to connect to and operate on the Hazelcast Platform. Popular SQL tools such as DBeaver can also be used to connect to the Hazelcast Platform.

And off it goes…

We’re ready to hand over the launch codes, I mean the license keys, to you. You can take the Hazelcast Platform spaceship on an interplanetary test drive with Hazelcast Viridian Serverless, or just explore the new features in the comfort of your enterprise data center!

Wrapping Up

You can read more about what’s new and what’s been fixed in our release notes. What’s more, on GitHub we’ve closed 250 issues and merged 540 PRs for Platform 5.2.

If you’d like to become more involved in our community or just ask some questions about Hazelcast, please join us on our Slack channel, and check out our new Developers homepage.

Again, don’t forget to try this new version and let us know what you think. Our engineers love hearing your feedback, and we’re always looking for ways to improve. You can automatically receive a 30-day Enterprise license key by completing the form at https://hazelcast.com/trial-request/

I’d like to thank James Gardiner and Nandan Kidambi for their contributions and review of this blog post.

All trademarks are the property of their respective owners.