Advanced CP Subsystem
Build applications that depend on strongly consistent data with in-memory speeds and mission-critical resilience.
Strongly consistent data in an ultra-fast and resilient platform
Ensure accurate data for your applications in a fast, fault-tolerant, distributed system by using the Hazelcast implementation of the Raft Consensus Algorithm.
- Get the correct up-to-date values even upon hardware failures that might disrupt the proper sequence of data updates
- Reduce the consistency/performance trade-off with in-memory speeds
- Add an additional level of fault tolerance by writing consistent values to disk for fast recovery
- Fast, resilient, and consistent data means no errors or surprises caused by stale data
- The Raft Consensus Algorithm ensures data is consistent across all participating members (see the sketch after this list)
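As a rough illustration, here is what reading and writing a Raft-replicated value can look like from application code. It assumes a running Hazelcast cluster with the CP Subsystem enabled (at least three CP members); the structure name is illustrative.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.IAtomicLong;

public class ConsistentCounter {
    public static void main(String[] args) {
        // Start (or join) a Hazelcast member; in production the CP Subsystem
        // is enabled by configuring at least three CP members.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // An IAtomicLong from the CP Subsystem is replicated with Raft:
        // every successful update is committed on a majority of CP members.
        IAtomicLong counter = hz.getCPSubsystem().getAtomicLong("order-counter");

        long value = counter.incrementAndGet();
        // A subsequent read observes the committed value, even if a member
        // fails between the write and the read (linearizability).
        System.out.println("Committed counter value: " + value);

        hz.shutdown();
    }
}
```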
Hazelcast sets the industry standard for consistency and performance in data-intensive, mission-critical AI workloads.
Features
Strong Consistency
Depend on strongly consistent data as a more reliable alternative to eventually consistent data.
High Performance
Get high performance in your applications, so gaining the benefits of strong consistency doesn’t require a significant trade-off.
Mission-Critical Resilience
Ensure the resilience you need to run mission-critical applications that rely on consistent data, using replicas as well as durable on-disk storage (CP Subsystem Persistence), as sketched below.
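A minimal sketch of enabling CP Subsystem Persistence programmatically, assuming an Enterprise-licensed member; the member count and directory path below are placeholders.

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.cp.CPSubsystemConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.io.File;

public class CpPersistenceSetup {
    public static void main(String[] args) {
        Config config = new Config();

        CPSubsystemConfig cpConfig = config.getCPSubsystemConfig();
        cpConfig.setCPMemberCount(3);          // a Raft group of three CP members
        cpConfig.setPersistenceEnabled(true);  // write CP (Raft) state to disk
        cpConfig.setBaseDir(new File("/var/lib/hazelcast/cp-data")); // illustrative path

        // Members restarted after a crash recover their CP state from disk
        // instead of losing it, which shortens recovery of the Raft groups.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance(config);
    }
}
```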
Simple API
Write applications that leverage a wide variety of strongly consistent data types, including key/value stores (maps), through an easy-to-use API, as in the sketch below.
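A short sketch of two commonly used CP data structures, an atomic reference and a fenced lock, accessed through the same CPSubsystem entry point; key/value CP structures follow the same access pattern. The structure names are illustrative.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.cp.CPSubsystem;
import com.hazelcast.cp.IAtomicReference;
import com.hazelcast.cp.lock.FencedLock;

public class CpApiTour {
    public static void main(String[] args) {
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();
        CPSubsystem cp = hz.getCPSubsystem();

        // Linearizable reference: every update is agreed on by a Raft majority.
        IAtomicReference<String> leaderName = cp.getAtomicReference("current-leader");
        leaderName.set("node-1");
        System.out.println("Leader: " + leaderName.get());

        // Distributed lock with fencing tokens to guard shared resources.
        FencedLock lock = cp.getLock("inventory-lock");
        lock.lock();
        try {
            // critical section: only one holder cluster-wide at a time
        } finally {
            lock.unlock();
        }

        hz.shutdown();
    }
}
```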
Hazelcast Training: Strong Data Consistency. Explore how the Hazelcast Platform CP Subsystem guarantees data consistency.
Take the next step
See how strong consistency, performance, resilience, and scale drive an AI-centric future.