Hazelcast can be used as a fast and lightweight messaging system that lets applications communicate in real time at high speed. You can write to Hazelcast in a publish-subscribe paradigm, which means your applications can write (“publish”) to specific channels, called “topics,” and then one or more subscribers can read the messages from that topic.
Hazelcast processes all data in memory, so you are never speed-constrained by slow disk accesses. Also, Hazelcast is lightweight, as it’s delivered in a compact JAR file, allowing you to embed it in any of your applications.
Messaging in Hazelcast is inspired by JMS topics, but is scalable and extremely fast. This is important for large-scale applications that need the highest throughput and lowest latency.
When an event occurs, your applications immediately publish messages to topics, which can then be read by any number of subscribers. Hazelcast is elastic, horizontally scalable, and fault-tolerant, so organizations can publish any number of messages with low latency. You can publish messages to Hazelcast, and subscribe to a Hazelcast topic, with just a few lines of code.
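As a sketch of what those few lines look like, here is a minimal embedded publish/subscribe example. It assumes the Hazelcast 4.x/5.x Java API (where `ITopic` lives in `com.hazelcast.topic`); the topic name "orders" and the message content are illustrative, not prescribed by Hazelcast.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.topic.ITopic;

public class TopicExample {
    public static void main(String[] args) {
        // Start an embedded Hazelcast member (a cluster of one in this sketch;
        // additional members on the network would join it automatically).
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Obtain a topic by name; it is created on first use.
        ITopic<String> topic = hz.getTopic("orders");

        // Subscribe: the listener is invoked for every message
        // published after registration.
        topic.addMessageListener(message ->
                System.out.println("Received: " + message.getMessageObject()));

        // Publish: every registered subscriber receives the message.
        topic.publish("order-123 created");

        hz.shutdown();
    }
}
```

Note that delivery to listeners is asynchronous, so a real application would keep the member running rather than shutting down immediately after publishing.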
This type of messaging implementation is good for preserving the order and timing of messages because the ordering logic is built into the messaging layer itself. You can have multiple consumers for any given message channel, so you can process messages in parallel to increase throughput.
Messaging plays a key role in microservices architectures, where compact, purpose-built applications are almost constantly sending events and small chunks of data to one another. With Hazelcast as the foundation for messaging, developers have an easy-to-use platform to manage the logic of inter-service communications, and they benefit from simplified coding. A set of microservices can look like a data pipeline, where each microservice receives a message from the previous microservice in the workflow, does some processing, then passes the data to the next microservice. The message passing is done via Hazelcast, and with the use of separate topics, each microservice knows exactly where to get input data, and where to send output data.
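The pipeline pattern described above can be sketched as a single stage: a microservice that subscribes to its input topic, does its processing step, and publishes the result to its output topic for the next service in the chain. This is an illustrative sketch assuming the Hazelcast 4.x/5.x embedded API; the topic names `raw-orders` and `validated-orders`, and the trivial validation logic, are hypothetical.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.topic.ITopic;

public class ValidationStage {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Each stage knows exactly where its input comes from
        // and where its output goes: two named topics.
        ITopic<String> rawOrders = hz.getTopic("raw-orders");
        ITopic<String> validatedOrders = hz.getTopic("validated-orders");

        // Consume from the input topic, process, and forward downstream.
        rawOrders.addMessageListener(message -> {
            String order = message.getMessageObject();
            if (!order.isBlank()) {                  // stand-in for real validation
                validatedOrders.publish(order.trim());
            }
        });
        // The member stays up, listening, until the service is stopped.
    }
}
```

The next microservice in the workflow would subscribe to `validated-orders` in the same way, so the pipeline is composed purely by agreeing on topic names.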
High Speed, In-Memory Pub/Sub Messaging
Developers find it easy to build messaging pipelines with Hazelcast, even at scale and with low latency. Applications can subscribe to multiple topics, and multiple applications may publish to the same topic. Hazelcast preserves message order, so subscribers process each publisher's messages in the order they were published.
If you are familiar with JMS topics, it will be extremely simple to come up to speed with Hazelcast for real-time scalable messaging.