This short video explains why companies use Hazelcast for business-critical applications built on its ultra-fast in-memory and stream processing technologies.
Stream processing is a hot topic right now, especially for any organization looking to provide insights faster. But what does it mean for users of Java applications, microservices, and in-memory computing?
In this webinar, we will cover the evolution of stream processing and in-memory computing in relation to big data technologies, and why stream processing is the logical next step for in-memory processing projects.
Now, deploying Hazelcast-powered applications in a cloud-native way becomes even easier with the introduction of Hazelcast Cloud Enterprise, a fully managed service built on the Enterprise edition of Hazelcast IMDG. Can't attend live? You should still register! We'll send the recording to all registrants after the webinar.
In-memory data grids have historically been the exclusive domain of large investment banks and proprietary solutions such as Oracle Coherence, Pivotal GemFire, and Software AG Terracotta. Hazelcast provides an open source alternative that is easy to develop against, elastically scalable, and fault tolerant. Implemented in Java, it offers some key advantages over the competition: you can't send logic execution to a Memcached node, and Redis's Lua interpreter is no match for a state-of-the-art JVM. It also fits easily into Java microservices architectures. The first part of this presentation covers a simple use case, a fictional stock brokerage system, that demonstrates the basic distributed structures and their behavior. The second part shows some advanced features of Hazelcast, such as event listeners and data affinity. It closes with a comparison of Hazelcast, Redis, and Memcached.
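To make the "send logic execution to the data" point concrete, here is a minimal, self-contained sketch of the idea. It is not Hazelcast itself but a hypothetical in-process stand-in (`MiniGrid` and its `executeOnKey` are illustrative names): the grid hashes each key to an owning partition and runs the caller's function where the entry lives, so only the small result travels back. Hazelcast exposes the real version of this pattern through `IMap.executeOnKey` with an `EntryProcessor`, where partitions live on different cluster members rather than in local maps.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical in-process stand-in for a partitioned in-memory data grid.
// In Hazelcast, each partition lives on a cluster member; here each
// "partition" is just a local HashMap, which is enough to show the pattern.
public class MiniGrid {
    private final Map<String, Long>[] partitions;

    @SuppressWarnings("unchecked")
    public MiniGrid(int partitionCount) {
        partitions = new Map[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            partitions[i] = new HashMap<>();
        }
    }

    // Hazelcast hashes the serialized key to choose an owning partition;
    // we mimic that with the key's hashCode.
    private Map<String, Long> ownerOf(String key) {
        return partitions[Math.abs(key.hashCode() % partitions.length)];
    }

    public void put(String key, long value) {
        ownerOf(key).put(key, value);
    }

    // "Send the logic to the data": the processor runs against the entry
    // where it is stored, and only the updated value is returned to the
    // caller -- the entry itself never crosses the network.
    public long executeOnKey(String key, Function<Long, Long> processor) {
        Map<String, Long> owner = ownerOf(key);
        long updated = processor.apply(owner.get(key));
        owner.put(key, updated);
        return updated;
    }

    public static void main(String[] args) {
        MiniGrid grid = new MiniGrid(4);
        grid.put("ACME", 100L);
        // Increment the stored price in place, where the data lives.
        System.out.println(grid.executeOnKey("ACME", v -> v + 1)); // prints 101
    }
}
```

This is what Memcached cannot do at all (it only gets and sets opaque blobs) and what Redis approximates with server-side Lua scripts, whereas Hazelcast lets you ship plain JIT-compiled Java to the member that owns the key.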