This short video explains why companies use Hazelcast for business-critical applications built on ultra-fast in-memory computing and stream processing technologies.
Stream processing is a hot topic right now, especially for any organization looking to provide insights faster. But what does it mean for users of Java applications, microservices, and in-memory computing?
In this webinar, we will cover the evolution of stream processing and in-memory computing in relation to big data technologies, and why stream processing is the logical next step for in-memory processing projects.
Deploying Hazelcast-powered applications in a cloud-native way becomes even easier with the introduction of Hazelcast Cloud Enterprise, a fully managed service built on the Enterprise edition of Hazelcast IMDG. Can't attend the live sessions? You should still register! We'll send the recording to all registrants after the webinar.
With the recent release of Hazelcast IMDG 4.0, we invite you to watch this video, in which we discuss the release's new features at a high level and how you can take advantage of them.
Hazelcast Cloud Enterprise is the new cloud-native managed service that allows you to quickly set up Hazelcast IMDG in a public cloud, fully managed for you by Hazelcast. This tutorial walks through a deployment of Hazelcast Cloud Enterprise on Amazon Web Services (AWS).
Join us as we discuss the 5 biggest pain points in retail banking, technology considerations to address those pain points, and how in-memory and streaming technologies have solved those problems.
BNP Paribas Bank Polska increases revenue through real-time offers driven by specific customer needs.
Kubernetes brings new ideas on how to improve the performance of your microservices. You can use a cache or a distributed in-memory store and set it up with several different topologies: embedded, embedded distributed, client-server, cloud, sidecar, reverse proxy, and reverse-proxy sidecar. In this session you'll see a walk-through of all topologies for in-memory storage […]
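To make the sidecar topology above concrete, here is a minimal sketch of a Kubernetes Pod spec that co-locates a Hazelcast member with an application container. It is an illustration only, not material from the session; the image names, container names, and versions are hypothetical placeholders.

```yaml
# Sidecar topology: the application talks to a Hazelcast member in the
# same Pod over localhost, while members in other Pods join the same
# cluster, so data stays shared cluster-wide.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-hazelcast-sidecar
spec:
  containers:
    - name: app                        # hypothetical application container
      image: example.com/my-app:latest # placeholder image
    - name: hazelcast                  # Hazelcast member running as a sidecar
      image: hazelcast/hazelcast:4.0
      ports:
        - containerPort: 5701          # default Hazelcast member port
```

With this layout the application connects to `127.0.0.1:5701`, which keeps client latency low while the sidecar members handle data distribution across Pods.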
This whitepaper discusses how an in-memory computing platform is used in the healthcare industry to help improve patient care.
Learn how the cloud-native architecture of Hazelcast works with Kubernetes when deploying fast cloud applications.
This paper covers implementation details behind the Hazelcast credit value adjustment risk calculation solution.
This paper describes a modern architecture for calculating risk so banks can rethink their legacy systems and aim for greater efficiency.
This video by Hazelcast senior solutions architect Sharath Sahadevan walks through a setup of WAN Replication on Google Cloud Platform.
Machine learning (ML) brings exciting new opportunities, but applying the technology in production workloads has been cumbersome, time-consuming, and error-prone. In parallel, data generation patterns have evolved to produce streams of discrete events that require high-speed processing at extremely low response latencies. Enabling these capabilities requires scalable, high-performance stream processing, distributed application of ML technology, and dynamically scalable hardware resources.