Hazelcast Jet is an event stream processing engine built into the Hazelcast In-Memory Computing Platform, which also includes an in-memory data grid (Hazelcast IMDG), enabling extremely fast processing of both batch and streaming data sources. The platform is typically deployed as a cluster of hardware nodes, either on-premises or in a cloud environment. This scale-out architecture lets you incrementally and efficiently add nodes to handle more capacity as your load grows.
One exercise in deploying Jet is producing a good estimate of the computing resources you need to run your Jet application(s) optimally. Since Jet handles many different use cases, there is no one-size-fits-all strategy for estimating cluster size. Sizing is inherently complicated and uncertain, but the characteristics of an example deployment are a good starting point.
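To make the idea of a starting estimate concrete, here is a minimal back-of-envelope sketch of one common sizing calculation: how many nodes are needed to hold windowed streaming state in memory. All figures (event rate, event size, window length, headroom factor, usable heap per node) are illustrative assumptions for this sketch, not Hazelcast recommendations; your own workload measurements should replace them.

```python
import math

def estimate_node_count(events_per_sec, avg_event_bytes, window_seconds,
                        headroom_factor=2.0, usable_heap_per_node_gb=8.0):
    """Rough node count needed to hold in-flight windowed state in memory.

    headroom_factor leaves room for traffic spikes, GC, and replicas;
    usable_heap_per_node_gb is the heap you can actually dedicate to data.
    All parameters are hypothetical knobs for this sketch.
    """
    # State held at any moment: arrival rate x event size x window length.
    state_bytes = events_per_sec * avg_event_bytes * window_seconds
    required_gb = state_bytes * headroom_factor / (1024 ** 3)
    # Round up to whole nodes; always at least one node.
    return max(math.ceil(required_gb / usable_heap_per_node_gb), 1)

# Illustrative example: 100k events/s, 1 KB each, 60 s window,
# 2x headroom, 8 GB usable heap per node.
print(estimate_node_count(100_000, 1024, 60))  # -> 2
```

A calculation like this only bounds memory; CPU for the processing stages and network bandwidth between nodes need their own estimates, which is why the worked examples that follow are more useful than any single formula.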
This guide walks you through specific example environments that you can extrapolate to your own workloads. The examples are based on deployments set up by Hazelcast, with workloads that use published source code, so you can compare them directly against your actual workloads.