Caching

Operating in today’s always-on, high-volume, high-speed, high-expectation world requires a different level of processing enablement. When microseconds can mean the difference between success and failure, Hazelcast in-memory solutions can deliver blinding speed with scalable and flexible data caching.

Introduction

New levels of performance delivered to the world’s most demanding companies

Caching puts actively used data in memory, where it can be accessed significantly more quickly. While this sounds simple, it can become very complex, as real-world systems and web applications are wildly diverse and constantly changing. Through meticulous engineering, deep caching expertise, and a focused commitment to customer needs, Hazelcast handles that diversity with a robust in-memory computing platform, delivering the benefits of distributed in-memory caching where high-speed, high-volume systems need it most.

Speed

Hazelcast’s relentless pursuit of speed has made our in-memory data store the fastest distributed cache available. As a fully in-memory data store, Hazelcast can transform and ingest data at blinding speeds, often shrinking milliseconds into microseconds. Because Hazelcast is built from the ground up as a distributed technology, it leverages the power of distributed processing while effectively eliminating the impact of network latency.

Flexibility

Hazelcast offers a wealth of development, deployment, and operational choices. Customers can take advantage of the powerful and easy-to-use JCache standard API through client-server or cluster-only architectures. Our platform also integrates seamlessly with popular frameworks and libraries like Spring, Memcached, and Hibernate.
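As a sketch of the JCache (JSR-107) standard API mentioned above, the following uses only the standard javax.cache classes and assumes a JCache provider such as Hazelcast is on the classpath; the cache name "sessions" and the key/value strings are illustrative:

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

public class JCacheSketch {

    static String roundTrip() {
        // Resolves to the Hazelcast implementation when its JAR is on the classpath.
        CacheManager manager = Caching.getCachingProvider().getCacheManager();

        MutableConfiguration<String, String> config =
                new MutableConfiguration<String, String>()
                        .setTypes(String.class, String.class);

        // "sessions" is an illustrative cache name.
        Cache<String, String> sessions = manager.createCache("sessions", config);
        sessions.put("user-42", "session-payload");
        return sessions.get("user-42");
    }

    public static void main(String[] args) {
        System.out.println(roundTrip());
    }
}
```

Because the code targets only the JSR-107 interfaces, the same application can be pointed at any compliant caching provider without source changes.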

Manageability

The Hazelcast Platform puts the power of cache management within reach and keeps the total cost of ownership low by focusing on ease of development, implementation, and operation. The platform is surprisingly simple to set up and configure, offering remarkable resilience and high availability.

Security

A groundswell of new companies accessing your back-end systems means more opportunity for fraud. Hazelcast powers fraud detection algorithms that comfortably meet even the most stringent SLAs. This specific use case is one of our core competencies.

Features

Speed and Scalability

Speed in information processing is great, regardless of the industry or application. Speed at scale is a whole different level of opportunity. Speed at scale while maintaining stability? That’s what you need if you’re going to drive transformative change in your organization, and keep happy customers coming back.

The Hazelcast Platform delivers world-class, in-memory caching solutions, based on a distributed architecture that is wildly fast and seamlessly scalable.

Cache Access Patterns

Hazelcast enables caching when connected to a persistent data store such as a relational database. The most common access patterns are read-through, write-through, and write-behind.

Read-Through Cache

In a read-through pattern, applications request data directly from the caching system, and if the data exists in the cache, that data is returned to the application. If the data does not exist (a “cache miss”), the system retrieves the data from the underlying backend store, loads it into the cache, and returns it to the application.

Hazelcast handles the entire process so the application does not need to coordinate reads from the underlying store upon cache misses. To establish the link between Hazelcast and the backend store, application developers write pluggable query code that is executed by Hazelcast in the event of a cache miss.
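In Hazelcast, this pluggable code is supplied through the MapLoader interface (exact package names vary slightly across Hazelcast versions). A minimal sketch, with an in-memory HashMap standing in for the real backend database and "sku-1"/"widget" as illustrative seed data:

```java
import com.hazelcast.map.MapLoader;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

// Read-through loader sketch; the HashMap stands in for a real database.
public class ProductLoader implements MapLoader<String, String> {

    private final Map<String, String> backend = new HashMap<>();

    public ProductLoader() {
        backend.put("sku-1", "widget"); // illustrative seed data
    }

    @Override
    public String load(String key) {
        // Hazelcast calls this on a cache miss and caches the returned value.
        return backend.get(key);
    }

    @Override
    public Map<String, String> loadAll(Collection<String> keys) {
        Map<String, String> result = new HashMap<>();
        for (String key : keys) {
            String value = load(key);
            if (value != null) {
                result.put(key, value);
            }
        }
        return result;
    }

    @Override
    public Iterable<String> loadAllKeys() {
        // Used to pre-populate the cache on startup; may return null to skip.
        return backend.keySet();
    }
}
```

Once the loader is registered on a map, a cache miss on IMap.get() transparently falls through to load(), so application code never coordinates the backend read itself.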

Figure: Read-Through Cache. Applications request data directly from the caching system, and if the data exists in the cache, that data is returned to the application.

Write-Through Cache

In a write-through pattern, applications can directly update data in the cache, and whenever that is done, the updated data is synchronously and automatically persisted to the backend data store. This pattern is about ensuring the cache and backend store are synchronized and is not intended to address performance (performance is addressed with the write-behind cache pattern, described below) since the backend store is still the bottleneck in an update process.
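In Hazelcast, write-through persistence is implemented by supplying a MapStore; extending MapStoreAdapter (assumed available, with the exact package varying by version) keeps the sketch short by providing no-op defaults for the callbacks not needed here. A ConcurrentHashMap stands in for the backend database:

```java
import com.hazelcast.map.MapStoreAdapter;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Write-through store sketch; the ConcurrentHashMap stands in for a real database.
public class OrderStore extends MapStoreAdapter<String, String> {

    private final Map<String, String> backend = new ConcurrentHashMap<>();

    @Override
    public void store(String key, String value) {
        // With write-through enabled, Hazelcast calls this synchronously
        // on every cache update, keeping the backend store in sync.
        backend.put(key, value);
    }

    @Override
    public String load(String key) {
        return backend.get(key);
    }
}
```

Because store() runs synchronously in this mode, a cache update does not complete until the backend write has succeeded, which is exactly the consistency guarantee described above.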

Figure: Write-Through Cache. Applications can directly update data in the cache, and whenever that is done, the updated data is synchronously and automatically persisted to the backend data store.

Write-Behind Cache

In a write-behind pattern, applications can update data in the cache similarly to the write-through cache pattern, except the automatic updates to the backend store are asynchronous. This means this pattern offers a performance advantage since updates to the data do not have to wait for writes to the backend store. Updates to data in the cache are acknowledged quickly since only in-memory data is updated, and Hazelcast will later push the data updates to the backend store.
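A write-behind setup reuses the same MapStore implementation; in Hazelcast's programmatic Java configuration, setting a write delay greater than zero switches persistence from synchronous to asynchronous. A sketch, where `com.example.OrderStore` is a hypothetical MapStore implementation and the map name "orders" is illustrative:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.MapStoreConfig;

public class WriteBehindConfigSketch {

    public static Config build() {
        MapStoreConfig storeConfig = new MapStoreConfig()
                .setEnabled(true)
                .setClassName("com.example.OrderStore") // hypothetical MapStore implementation
                .setWriteDelaySeconds(5); // > 0 means write-behind; 0 means write-through

        Config config = new Config();
        config.getMapConfig("orders").setMapStoreConfig(storeConfig);
        return config;
    }
}
```

The delay value bounds how long an update can sit in memory before Hazelcast pushes it to the backend store, so it is the knob for trading durability lag against update latency.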

Figure: Write-Behind Cache. Applications can update data in the cache similarly to the write-through cache pattern, except the automatic updates to the backend store are asynchronous.

Near Cache

A near cache is highly recommended for data structures that are mostly read. When an object is fetched from a remote Hazelcast member, a copy is inserted into the local near cache, so subsequent reads are served locally without a network round trip.

Near cache gives Hazelcast users a significant advantage over NoSQL solutions such as Redis, Cassandra, and MongoDB, which do not have near caching capabilities. In benchmark comparisons, Hazelcast is already 56% faster than Redis without using near cache; enabling near cache makes Hazelcast 5 times faster than Redis. With near cache enabled in client-server deployments and the right serialization strategy, microsecond response times are achievable.
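Enabling a near cache is purely a configuration change. A sketch in Hazelcast's programmatic Java configuration, where the map name "products" is illustrative and the method names follow recent Hazelcast versions:

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.InMemoryFormat;
import com.hazelcast.config.NearCacheConfig;

public class NearCacheConfigSketch {

    public static Config build() {
        NearCacheConfig nearCache = new NearCacheConfig("products")
                .setInMemoryFormat(InMemoryFormat.OBJECT) // skip per-read deserialization
                .setInvalidateOnChange(true); // drop local copies when the remote entry changes

        Config config = new Config();
        config.getMapConfig("products").setNearCacheConfig(nearCache);
        return config;
    }
}
```

Keeping entries in OBJECT format avoids deserializing on every local read, which is part of how the microsecond response times mentioned above become reachable.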

Figure: Near Cache. When an object is fetched from a remote Hazelcast member, it is inserted into the local cache, so subsequent requests are served from the local cache.