Bigger, Better, True Real-Time with Hazelcast

Leading businesses have already benefited from reduced costs, better agility, and increased revenue in the short time they have been delivering real-time services to their ever-demanding consumers. The key to their success was embracing the challenges that real-time processing presented in exchange for class-leading customer experiences and the benefits that came with them.

But true real-time, meaning the ability to act on information in the moment, during or even before an event rather than after it, is hard to achieve. Many providers still claim to deliver real-time while waiting on databases or other services, which causes lag, means data is processed after the event, and forfeits the real-time benefits. True real-time involves continuously collecting, analysing, canonicalizing and performing functions (such as evaluating rules or machine learning models) on the data “in-flow”, outside of a database or any other service. In other words, the entire process should run on very low-latency storage, with results delivered to users or downstream processes long before the data touches a database.

With Hazelcast Platform 5.1, we delivered true real-time capabilities that would help drive greater innovation. Our platform hosted a high-performance stream processing engine tightly coupled with low-latency storage and machine learning inference capabilities. This enabled developers to quickly build high-performance, intelligent business applications that could ingest huge volumes of event-based data and deliver valuable real-time insights for any use case.


We also enabled businesses to deploy wherever they wanted: in any cloud, on-premises, or hybrid. We delivered a fully managed service for those not wanting to run infrastructure operations, and provided connectivity to any data source and destination, giving omnichannel partnerships a seamless experience across the divide.

Fortune favours the curious!

True real-time processing, then, sets leading businesses apart from the rest. But in an ever more competitive market, they are still seeking new ways to discover value and drive innovation.

Ever vaster streams of data, in various formats and from many distinct sources, are being generated by social media, smart devices, mechanical sensors and more, and the volume is growing at an alarming rate. Given the proven benefits of real-time so far, businesses are curious whether hidden patterns exist within untapped, disparate data that can offer undiscovered insights. Processing more high-quality data can yield more accurate, context-rich insights, removing the need for guesswork and assumptions when acting on data. To do this, you need the ability to ingest and process even larger volumes of data while still delivering value within the business SLA.

In addition, there is a need to grow the data-analysis user base. Historically, only highly qualified specialists were capable of analysing data. As more modern, intuitive digital applications emerge, the case for opening access to business-savvy users grows, reversing the historical trend of data analysis being a niche, specialised activity. Imagine larger numbers of users acting on enriched and scored data while it is still “in flow”. Rather than a funnel approach to discovering insights, broadening the scope to more users with varied skills and expertise drastically increases the chances of finding insights across different business areas. Analysing data with SQL makes adoption far more likely, since many organisations already use SQL to create and manipulate data.
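As a rough sketch of what that SQL-first analysis can look like: a business user could run a continuous query against a stream of events, with results arriving as the events do. The mapping name, columns and connection settings below are illustrative assumptions, not a prescribed schema.

-- Declare a SQL mapping over a hypothetical Kafka topic of payment
-- events; column names and broker address are for illustration only.
CREATE MAPPING payments (
    payment_id  BIGINT,
    customer_id INT,
    amount      DECIMAL,
    region      VARCHAR,
    event_time  TIMESTAMP WITH TIME ZONE
)
TYPE Kafka
OPTIONS (
    'keyFormat' = 'bigint',
    'valueFormat' = 'json-flat',
    'bootstrap.servers' = 'broker:9092'
);

-- A continuous query: rows keep arriving as new events flow in,
-- so large payments per region are visible while still “in flow”.
SELECT region, payment_id, amount
FROM payments
WHERE amount > 10000;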

Innovation creates opportunities. With vast quantities of data arriving from various sources, there is an appetite to determine whether insights can be realised by correlating multiple streams of changing event data on conditions common to those streams.

For example, imagine an inventory operations application that tracks when orders have been shipped. You would have two streams of event-based data, one for orders and one for shipments, and you would need to join them on the order ID to find out which orders were successfully shipped. Or consider a railway service that provides real-time updates on train arrival and departure times and tells passengers whether they will miss their train based on their current location and travel speed. This involves processing two streams of information in real time, the passenger’s GPS position and the train’s GPS position, and joining them on a common condition such as the train’s identifier, captured when the passenger purchased their ticket.

In both of these use cases, static reference information would be of limited value, because both sides of the join are changing in real time. Correlating constantly changing data from different sources therefore adds another dimension to data enrichment, one based on moving data rather than static data.

Reinforcing the Real-Time Economy with Hazelcast 

Hazelcast Platform 5.2 builds on what was already a proven solution to the true real-time problem, with three core tenets in mind:

Acting on vast, continuous streams of data, immediately

Extending low-latency storage removes limitations on the amount of data that can realistically be processed. Tiered storage lets data spill to high-performance SSDs where use cases require vast volumes, growing to tens or hundreds of terabytes. This also brings cost optimisation and deployment practicality (rightsizing the number of clusters, for example). In fraud detection, more historical reference information (say, 10 years rather than 2) creates a more comprehensive picture and reduces the need for assumptions and guesswork; more reference data can lead to greater analytical accuracy.

Providing the easiest access to data for everyone, everywhere, at any time

Zero-code connectors make it practical for non-technical business users to create enterprise applications and customise the functionality to their specific needs through a much simpler development approach. A familiar declarative command structure makes it straightforward for a broader range of users to join reference data from relational databases to data streams. Streaming SQL provides flexibility for managing streamed-data exceptions, and JSON support simplifies integration with the vast majority of JSON-based technologies.
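For illustration, and reusing the hypothetical “payments” stream sketched earlier plus an assumed “customers” map that does not appear in this post, an enrichment join with a JSON lookup could read as follows; the names and fields are assumptions, not a prescribed model.

-- Reference data held in a Hazelcast map whose values are JSON
-- documents; the mapping and its fields are hypothetical.
CREATE MAPPING customers
TYPE IMap
OPTIONS ('keyFormat' = 'int', 'valueFormat' = 'json');

-- Enrich the payments stream with customer reference data, using
-- JSON_VALUE to pull individual fields out of the JSON value.
SELECT p.payment_id,
       p.amount,
       JSON_VALUE(c.this, '$.name') AS customer_name,
       JSON_VALUE(c.this, '$.tier') AS customer_tier
FROM payments AS p
JOIN customers AS c ON c.__key = p.customer_id;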

Uncovering priceless revelations from seemingly distinct, de-coupled but concomitant data moments 

Stream-to-stream joins enable the matching of continuous streams of related data to determine correlations “in-flow”, as the data is being created. Answering more complex questions as concurrent events unfold allows businesses to explore “what if” scenarios across data previously deemed inaccessible or unfeasible to correlate.
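Returning to the earlier order-and-shipment example, a minimal stream-to-stream join sketch might look like the following, assuming hypothetical “orders” and “shipments” stream mappings with event-time columns. Each side is ordered with IMPOSE_ORDER, and the join condition bounds how far apart matching events may be, which keeps the state the engine must hold finite.

-- Both streams carry watermarks: IMPOSE_ORDER declares the
-- event-time column and the maximum allowed lag on each side.
SELECT o.order_id,
       o.order_ts,
       s.ship_ts
FROM TABLE(IMPOSE_ORDER(TABLE orders, DESCRIPTOR(order_ts), INTERVAL '1' MINUTE)) AS o
JOIN TABLE(IMPOSE_ORDER(TABLE shipments, DESCRIPTOR(ship_ts), INTERVAL '1' MINUTE)) AS s
  ON s.order_id = o.order_id
     -- only shipments within 7 days of the order count as a match
     AND s.ship_ts BETWEEN o.order_ts AND o.order_ts + INTERVAL '7' DAY;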

Other capabilities broaden the range of options for better management, lower administration overhead, and greater productivity.

Get Started with Hazelcast

For additional information on our latest release, check out our press release, Hazelcast Alleviates ‘Wait-and-See’ Paradigm Limiting Real-Time Applications, and our technical blog, The Hazelcast Platform Rocketing to the Next Level.

You can get started by downloading the software from here. Also, check out our Hazelcast Viridian Serverless offering, which helps you get started with real-time applications in the cloud; a sign-up form is on the aforementioned download page.