This short video explains why companies use Hazelcast for business-critical applications built on ultra-fast in-memory and stream processing technologies.
Stream processing is a hot topic right now, especially for any organization looking to provide insights faster. But what does it mean for users of Java applications, microservices, and in-memory computing?
In this webinar, we will cover the evolution of stream processing and in-memory computing in the context of big data technologies, and why stream processing is the logical next step for in-memory processing projects.
Deploying Hazelcast-powered applications in a cloud-native way is now even easier with the introduction of Hazelcast Cloud Enterprise, a fully managed service built on the Enterprise edition of Hazelcast IMDG. Can't attend live? You should still register! We'll send the recording to all registrants after the webinar.
Future Grid works with several Australian utility companies to automate the processing of sensor and smart meter data flowing across their energy networks. Their customers collect approximately 3 billion data points per day. In daily post-processing, this equates to 20 billion records, as each record has multiple individual data points: a massive scaling challenge. To make the most of this information, utility organizations need a real-time data aggregation and processing solution that enables them to make complex decisions in real time.
When Future Grid first tried to solve this problem, it used traditional relational databases. However, it soon became apparent that traditional databases couldn't cope with such huge volumes of data in real time; the main issue was that they can't execute algorithms against incoming data fast enough. Future Grid then decided to build its own solution, combining Hazelcast IMDG® with Apache Cassandra as the persistent data store.
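The case study does not describe Future Grid's implementation, but a common way to pair Hazelcast IMDG with Cassandra for persistence is Hazelcast's `MapStore` interface, which gives a map write-through and read-through behavior against an external store. The sketch below assumes the DataStax Java driver and an illustrative `metering.readings` table; the class, keyspace, and column names are hypothetical, not Future Grid's.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;
import com.hazelcast.map.MapStore;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Illustrative MapStore: Hazelcast calls these methods so that map writes are
// persisted to Cassandra and cache misses are loaded back from it.
public class ReadingMapStore implements MapStore<String, Double> {

    private final CqlSession session = CqlSession.builder().build();

    @Override
    public void store(String meterId, Double reading) {
        // Write-through: every map put is also written to Cassandra.
        session.execute(
            "INSERT INTO metering.readings (meter_id, value) VALUES (?, ?)",
            meterId, reading);
    }

    @Override
    public void storeAll(Map<String, Double> entries) {
        entries.forEach(this::store);
    }

    @Override
    public void delete(String meterId) {
        session.execute(
            "DELETE FROM metering.readings WHERE meter_id = ?", meterId);
    }

    @Override
    public void deleteAll(Collection<String> meterIds) {
        meterIds.forEach(this::delete);
    }

    @Override
    public Double load(String meterId) {
        // Read-through: a map miss falls back to Cassandra.
        Row row = session.execute(
            "SELECT value FROM metering.readings WHERE meter_id = ?",
            meterId).one();
        return row == null ? null : row.getDouble("value");
    }

    @Override
    public Map<String, Double> loadAll(Collection<String> meterIds) {
        Map<String, Double> result = new HashMap<>();
        for (String id : meterIds) {
            Double value = load(id);
            if (value != null) {
                result.put(id, value);
            }
        }
        return result;
    }

    @Override
    public Iterable<String> loadAllKeys() {
        return null; // Returning null disables eager key pre-loading.
    }
}
```

With a store like this registered in the map's configuration, application code keeps working against the fast in-memory map while Cassandra quietly provides durability behind it.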
This case study tells the story of how Future Grid built its data platform and covers its customers' primary use cases, including: