Cloud-native data and compute platform
Clustering and discovery are built in. Simply start a new member and it will join the cluster, which will automatically rebalance data. Each member handles a portion of the primary and backup data. Hazelcast clusters can run anywhere that supports JVMs.
Hazelcast can scale horizontally as you add members or vertically by utilizing all of the available memory.
Hazelcast clusters can add capacity by starting more member processes. You can add members while the cluster is running, resulting in zero downtime. The cluster automatically rebalances data to ensure there is even use of the memory in each member.
Hazelcast ensures data safety by storing replicas on other members in the cluster. Every member of the cluster is responsible for a portion of primary and replica entries. There is no concept of master or replica processes. Hazelcast can intelligently place replicas on the safest member, on another physical machine, or even another rack.
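As a minimal sketch of how replica placement is controlled in code (assuming an embedded Hazelcast 4.x/5.x member on the classpath; the map name `orders` and the class name are illustrative), a map's backup count sets how many copies of each entry are kept on other members:

```java
import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class BackupExample {
    // Build a config that keeps one synchronous backup of every "orders" entry
    // on a different member, so the loss of a single member loses no data.
    static Config configWithBackup() {
        Config config = new Config();
        config.getMapConfig("orders").setBackupCount(1);
        return config;
    }

    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(configWithBackup());
        hz.getMap("orders").put("order-1", "pending");
        hz.shutdown();
    }
}
```

With `setBackupCount(1)`, every partition owner holds the primary copy and one other member holds the replica; higher counts trade memory for more failure tolerance.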
Increase the speed of legacy data stores from seconds to microseconds.
Hazelcast stores frequently accessed data in a near cache inside the cluster client itself.
Scale by adding more members; Hazelcast automatically rebalances data and backups. All the while, the cluster continues to serve reads and writes.
Much more than an in-memory data store
Hazelcast can be used as an in-line database cache, so developers can continue to work with familiar data structure APIs in their own languages without having to resort to SQL or a NoSQL API. Hazelcast takes care of read-through and write-through to the database on the back end. Read performance improves to microseconds for cached data, and writes to slow databases can be offloaded from the caller and saved to the database asynchronously. Hazelcast does not require the cache-aside pattern. Cache-aside is required when using NoSQL solutions such as Redis, which generally means developers must code for cache concerns themselves, such as read-through on a cache miss and write-through on saves. Cache-aside patterns are also slower, as they require more network hops.
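Read-through and write-through are wired in by implementing the `MapStore` interface (package `com.hazelcast.map` in Hazelcast 4.x/5.x; it was `com.hazelcast.core` in 3.x). The sketch below is illustrative: an in-memory map stands in for the real database, where a production implementation would use JDBC or a driver for the backing store.

```java
import com.hazelcast.map.MapStore;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ProductMapStore implements MapStore<String, String> {
    // Stand-in "database table" for the sketch only.
    private static final Map<String, String> DB = new ConcurrentHashMap<>();

    // Write-through: Hazelcast calls these when the map changes.
    @Override public void store(String key, String value) { DB.put(key, value); }
    @Override public void storeAll(Map<String, String> entries) { DB.putAll(entries); }
    @Override public void delete(String key) { DB.remove(key); }
    @Override public void deleteAll(Collection<String> keys) { keys.forEach(DB::remove); }

    // Read-through: Hazelcast calls these on a cache miss.
    @Override public String load(String key) { return DB.get(key); }
    @Override public Map<String, String> loadAll(Collection<String> keys) {
        Map<String, String> result = new HashMap<>();
        keys.forEach(k -> result.put(k, DB.get(k)));
        return result;
    }
    @Override public Iterable<String> loadAllKeys() { return DB.keySet(); }
}
```

Registering the store via `MapStoreConfig` (and setting a non-zero `setWriteDelaySeconds`) turns write-through into asynchronous write-behind, which is how slow database writes are offloaded from the caller.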
Hazelcast near caches can be used to store frequently read data within the application process itself. This means that any Java, .NET, Node.js, Python, C++, or Go program can reduce its data lookup times from seconds over the network to microseconds. Hazelcast near caches provide their own eviction and memory management facilities, so you can be sure your application is safe. Let Hazelcast take care of the caching flow; don't write this logic into your application.
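The eviction and expiry facilities mentioned above are declared once in configuration rather than coded into the application. A minimal Java client sketch (assuming Hazelcast 4.x/5.x; the map name `hot-data` and the limits are illustrative):

```java
import com.hazelcast.client.config.ClientConfig;
import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.NearCacheConfig;

public class NearCacheExample {
    static NearCacheConfig hotDataNearCache() {
        NearCacheConfig near = new NearCacheConfig("hot-data")
                .setInMemoryFormat(InMemoryFormat.OBJECT)
                .setTimeToLiveSeconds(300);   // expire entries after 5 minutes
        near.getEvictionConfig()
                .setEvictionPolicy(EvictionPolicy.LRU)
                .setSize(10_000);             // cap the in-process cache at 10k entries
        return near;
    }

    public static void main(String[] args) {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.addNearCacheConfig(hotDataNearCache());
        // HazelcastClient.newHazelcastClient(clientConfig) would then serve
        // repeated reads of "hot-data" entries from process-local memory.
    }
}
```

The size cap and LRU policy are what keep the application's heap safe; Hazelcast invalidates near-cache entries when the clustered copy changes.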
Microservices Caching and Coordination
Hazelcast clusters make excellent platforms for microservice messaging, coordination, and distribution. Hazelcast can facilitate many microservice patterns such as saga, datastore-per-service, shared datastore, and CQRS. Hazelcast's concurrency primitives provide useful APIs to coordinate access to services and to build highly available services. Hazelcast topics and queues can be used to share events across a cluster of Hazelcast-enabled microservices. Hazelcast solves many of the platform challenges of microservices that would usually require separate database, messaging, and coordination software. Robust coordination primitives that can survive network partitions are crucial for microservices; Hazelcast's concurrency packages are built on a CP subsystem that uses Raft under the covers.
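One of those CP-subsystem primitives is the `FencedLock`, which can coordinate which service instance performs a piece of work. A minimal sketch (assuming Hazelcast 4.x/5.x; a single embedded member is used here for brevity, whereas a production CP group needs at least three members, and the lock name is illustrative):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.lock.FencedLock;

public class LeaderWorkExample {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        FencedLock lock = hz.getCPSubsystem().getLock("invoice-job");
        lock.lock();
        try {
            // Critical section: with a real CP group, only one service instance
            // runs this at a time, even across JVMs and network partitions.
            System.out.println("processing invoices as the lock holder");
        } finally {
            lock.unlock();
        }
        hz.shutdown();
    }
}
```

The "fenced" part matters: each acquisition returns a monotonically increasing fencing token that downstream systems can use to reject stale lock holders.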
Hazelcast can deliver a Cache-as-a-Service (CaaS), typically used where various applications need to share similar business data. Even when data is not shared, a CaaS makes sense for organizations that want to centralize and rationalize the number of data stores to which applications connect. A CaaS can simultaneously sit over many legacy data stores such as mainframes, relational databases, and NoSQL databases, so many different business areas and application teams can make use of the same central service. Because Hazelcast is schema-less and uses sensible defaults, teams can quickly save data to the CaaS without lengthy onboarding processes.
Web Session Clustering
A popular use case for Hazelcast is the caching of web sessions. Web and application servers can scale out to handle huge loads by adding devices such as a load balancer, which has the side effect of providing redundancy. However, for applications that use web sessions, this introduces a new problem: if a server goes down and the load balancer moves the user to a new server, the session is lost. Hazelcast stores the web session in its distributed cache and provides easy-to-use native adaptors for some of the most popular web servers, such as Tomcat and Jetty, or any other Java-based server via our generic filter-based adaptor.
NoSQL (Redis/MongoDB) Replacement
NoSQL solutions such as Redis and MongoDB are hugely popular and provide a good fit for many use cases. Consider using Hazelcast when you find NoSQL software is becoming hard to live with, especially when trying to scale and manage. Hazelcast is designed as a clustered and highly available system. Redis, for example, is not. To scale Hazelcast, you simply add a process to the cluster. There is no operational downtime or manual intervention (re-sharding) required. Also, consider Hazelcast when you require improved latency. Don’t take our word for it. Run your own benchmarks against Redis or MongoDB and find out for yourself. Also, when running the benchmarks, consider using features such as Hazelcast Near Cache.
Massively Parallel Processing
Hazelcast IMDG can run Java code within the cluster in the form of callables or runnables. These programs can be directed to the member of the cluster holding a certain key, or they can carry their own data with them, as true functions. Because processing can be split by key, all members of the cluster can actively work on jobs in parallel. Along with Hazelcast IMDG, consider Hazelcast Jet for more advanced batch processing features in addition to its first-class support for stream processing.
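Directing a callable to the member that owns a key is done through `IExecutorService`. A minimal sketch (assuming Hazelcast 4.x/5.x; the executor name, task, and key are illustrative):

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IExecutorService;
import java.io.Serializable;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

public class KeyOwnerExample {
    // A serializable task that Hazelcast ships to the member owning the key,
    // so the computation runs next to the data instead of pulling data over the network.
    static class PriceLookup implements Callable<String>, Serializable {
        private final String sku;
        PriceLookup(String sku) { this.sku = sku; }
        @Override public String call() { return "price-for-" + sku; }
    }

    public static void main(String[] args) throws Exception {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        IExecutorService exec = hz.getExecutorService("lookups");
        Future<String> result = exec.submitToKeyOwner(new PriceLookup("sku-42"), "sku-42");
        System.out.println(result.get()); // prints "price-for-sku-42"
        hz.shutdown();
    }
}
```

Submitting many such tasks, one per key, is what lets every member work on its own partitions in parallel.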
One powerful capability offered by Hazelcast is broadcast messaging. Inspired by JMS topics, it offers a lighter-weight alternative: you can publish events on a bus that delivers to an arbitrary number of listeners. Hazelcast topics are not a full JMS provider, but they are in use delivering millions of messages scalably at organizations such as Ericsson and NTT.
Applications can publish a message onto a topic to be distributed to all instances of the application that have subscribed to that topic. One of the primary benefits of this style of messaging is that it can reliably scale to many nodes, thus increasing message throughput. Hazelcast has no single point of failure, which is not easy to achieve with pure JMS solutions.
Hazelcast provides a distribution mechanism for publishing messages that are delivered to multiple subscribers, also known as the publish/subscribe (pub/sub) messaging model. Publishers and subscriptions are cluster-wide.
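The pub/sub model maps onto the `ITopic` API (package `com.hazelcast.topic` in Hazelcast 4.x/5.x). A minimal single-member sketch, with an illustrative topic name; in a real deployment, publishers and subscribers would run in separate processes:

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.topic.ITopic;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TopicExample {
    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        ITopic<String> topic = hz.getTopic("order-events");

        CountDownLatch received = new CountDownLatch(1);
        // Every subscriber on every member receives each published message.
        topic.addMessageListener(msg -> {
            System.out.println("received: " + msg.getMessageObject());
            received.countDown();
        });

        topic.publish("order-created");
        received.await(10, TimeUnit.SECONDS); // delivery is asynchronous
        hz.shutdown();
    }
}
```

Because subscriptions are cluster-wide, a listener registered on any member (or client) hears messages published from anywhere in the cluster.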
The community has built up a large collection of resources that will allow newcomers to quickly harness the power of Hazelcast IMDG.
Hazelcast IMDG Overview
A high-level overview of Hazelcast IMDG technology and operations
Demo: Getting Started with Hazelcast IMDG
Start using Hazelcast IMDG right away with this quick demo.
Hazelcast IMDG Configuration
How to configure Hazelcast IMDG
Supercharging Open Source Hazelcast: Answering the Build vs Buy Question
In this webinar, we’ll discuss why the commercial version of the Hazelcast Platform makes sense for your real-time deployment.
RTSP Unconf: Real-Time Stream Processing Roundtable Featuring Industry Practitioners
Fawaz Ghali, Principal Data Science Architect and Head of Developer Relations, hosted a roundtable discussion among real-time experts from partners, community members, academia, and open-source and paid users, covering current trends in real-time stream processing, its challenges, and benchmarking real-time stream processing.
RTSP Unconf: The Future of Real-Time Stream Processing
Chief Product Officer Manish Devgan shares where data is processed and delivered in real time, and how you can process what's happening right now. This session highlights the critical capabilities you need in a data platform to build a scalable real-time solution.