This short video explains why companies use Hazelcast for business-critical applications based on ultra-fast in-memory and/or stream processing technologies.
Stream processing is a hot topic right now, especially for any organization looking to provide insights faster. But what does it mean for users of Java applications, microservices, and in-memory computing?
In this webinar, we will cover the evolution of stream processing and in-memory computing in relation to big data technologies, and why stream processing is the logical next step for in-memory processing projects.
Now, deploying Hazelcast-powered applications in a cloud-native way becomes even easier with the introduction of Hazelcast Cloud Enterprise, a fully-managed service built on the Enterprise edition of Hazelcast IMDG.
The In-Memory Platform of Choice for Businesses Worldwide
Intel and Hazelcast jointly optimize in-memory computing solutions; Intel hardware uniquely accelerates Hazelcast software, making Intel and Hazelcast truly better together. Hazelcast’s strategic co-engineering and co-innovation collaboration with Intel is designed to accelerate the performance of real-time applications, artificial intelligence (AI), and Internet of Things (IoT) solutions for enterprises.
At the center of this initiative is Project Veyron, which is focused on accelerating Hazelcast technologies on Intel platforms, including the 2nd Gen Intel® Xeon® Scalable processor and Intel® Optane™ persistent memory (PMem). The combination of Intel scalable processors with memory persistence and Hazelcast in-memory solutions offers unprecedented speed, reliability, and scalability at significantly better price/performance ratios.
Hazelcast ran internal benchmarks to compare the performance characteristics of Optane PMem against DRAM. The benchmarks were executed in multiple setups, using both a three-node cluster and a single node. The members ran on dual-socket servers equipped with Intel Xeon Scalable CPUs and 1.5TB of PMem each. Twelve of the 24 total DIMM slots (6 channels per socket, 2 slots per channel) were filled with 128GB PMem DIMMs, for a total of 1.5TB of PMem. The remaining slots contained DRAM DIMMs, since a proper PMem configuration requires a DRAM DIMM placed adjacent to each PMem DIMM, though the DRAM DIMM can have a much smaller capacity. Throughput and latency were measured with a load testing tool issuing an equal mix of reads and writes against a key-value store (“maps”), where the values were data objects whose size varied per test run.
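The read/write mix described above can be sketched in a few lines of Java. This is a minimal, hypothetical illustration of the load pattern only: it uses a local ConcurrentHashMap as a stand-in for a Hazelcast IMap, since the actual benchmark targeted a multi-node cluster with a dedicated load testing tool; the key-space size, value size, and operation count are assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;

public class ReadWriteMixSketch {
    public static void main(String[] args) {
        // Stand-in for a Hazelcast IMap; the real benchmark ran against a cluster.
        Map<Integer, byte[]> map = new ConcurrentHashMap<>();
        int keySpace = 1_000;            // assumed number of distinct keys
        int valueSize = 10 * 1024;       // value size varied per run; 10KB shown here
        long totalOps = 100_000;         // assumed operation count
        long reads = 0, writes = 0;

        ThreadLocalRandom rnd = ThreadLocalRandom.current();
        for (long i = 0; i < totalOps; i++) {
            int key = rnd.nextInt(keySpace);
            if (rnd.nextBoolean()) {     // equal combination of reads and writes
                map.put(key, new byte[valueSize]);
                writes++;
            } else {
                map.get(key);            // may be null before first write to this key
                reads++;
            }
        }
        System.out.println("reads=" + reads + " writes=" + writes);
    }
}
```

In the real benchmark, throughput (operations per second) and latency were recorded for this mix while the map was backed by either DRAM or Optane PMem.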
With the use of PMem App Direct Mode, Hazelcast was optimized to take advantage of the fast data access capabilities of PMem. Unlike other uses of PMem, App Direct Mode has a dedicated application programming interface (API) that Hazelcast incorporated in its software to achieve the fastest speeds on PMem.
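For readers who want to see what this looks like in practice, pointing Hazelcast's native (off-heap) memory store at a PMem device is a configuration change. The sketch below is an assumption-laden example, not the benchmark's actual configuration: the element names follow the Hazelcast IMDG XML schema for the High-Density Memory Store, and the mount path and size are hypothetical.

```xml
<hazelcast>
    <!-- Native (off-heap) memory backing Hazelcast maps; size is per member. -->
    <native-memory enabled="true" allocator-type="POOLED">
        <size unit="GIGABYTES" value="100"/>
        <!-- Assumed mount point of an App Direct Mode PMem namespace. -->
        <persistent-memory-directory>/mnt/pmem0</persistent-memory-directory>
    </native-memory>
</hazelcast>
```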
The benchmarks show that Intel Optane is capable of achieving DRAM-like speeds in a distributed environment. As shown in the table below, the throughput of Optane PMem was very similar to that of DRAM. Even with data object sizes of 10KB, throughput remained comparable: DRAM averaged 360,000 operations per second while Intel Optane exceeded 340,000 operations per second on a single node.
The total cost of ownership (TCO) advantage of using Intel Optane PMem instead of DRAM for storing in-memory data is significant. High-performance systems rely on in-memory data access as a much faster alternative to reads/writes on slower media, but performance gains are capped by the amount of memory available. More available memory leads to more data that is accessible at higher speeds. Plus, more memory per server in a clustered environment means fewer servers and thus less network traffic, further reducing latency.
Intel Optane DIMMs are approximately half the cost of DRAM, making Optane an attractive alternative to DRAM, particularly in servers configured for 1 TB of volatile memory or more. Along with that cost advantage, benchmark testing has shown that access times for data in Intel Optane are comparable to DRAM, enabling equal performance at much lower cost.
Consider the prices for various DIMMs (rough market prices as of early 2021):
The table below shows the total cost for 1536 GB DRAM in a dual-CPU server configured for the highest performance by allocating total RAM across all channels in each CPU socket/slot.
Now consider the optimal configuration for Intel Optane at a comparable memory level as the configuration above. Some DRAM is required in the server, so we choose the smallest DRAM DIMMs (16 GB), coupled with Optane PMem DIMMs (128 GB).
For a 1536 GB DRAM-only server, the memory cost is $54,360, compared to an Optane-enabled server with 1728 GB total memory, for $28,368. This represents 48% lower cost for the Optane configuration.
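The savings figure follows directly from the two totals above. A quick arithmetic check (using only the stated totals; the class name is just for illustration):

```java
public class TcoComparison {
    public static void main(String[] args) {
        // Totals from the comparison above.
        double dramOnlyCost = 54_360.0;  // 1536 GB DRAM-only server
        double optaneCost   = 28_368.0;  // 1728 GB Optane-enabled server
        double savings = 1.0 - optaneCost / dramOnlyCost;
        // prints "Optane configuration costs 48% less"
        System.out.printf("Optane configuration costs %.0f%% less%n", savings * 100);
    }
}
```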
Intel/Hazelcast announcement regarding Project Veyron
In this joint white paper by Intel and Hazelcast, we cover new developments in in-memory software enablement as well as in in-memory hardware, and how the two solutions will jointly enable the next generation of business and consumer applications.
In this webinar, you'll learn how to enable optimal real-time decisions with a low-latency, distributed memory architecture for data and computations.
Research commissioned by Hazelcast sought to understand how organizations are responding to greater unpredictability among online customers. The online, invitation-only survey polled 629 business and IT decision-makers in the U.S., Europe, and Asia Pacific to find out.