Intel

The Computer Hardware Technologies of Choice for Businesses Worldwide.

About Intel and Hazelcast

Intel and Hazelcast jointly optimize high-performance computing solutions; Intel hardware uniquely accelerates Hazelcast software, making Intel and Hazelcast truly better together. The strategic co-engineering and co-innovation collaboration between Intel and Hazelcast is designed to accelerate the performance of real-time applications, artificial intelligence (AI), and Internet of Things (IoT) solutions for enterprises.

At the center of this initiative is Project Veyron, which is focused on accelerating Hazelcast technologies on Intel platforms, including the 2nd Gen Intel® Xeon® Scalable processor. The combination of Intel Xeon Scalable processors and the Hazelcast real-time stream processing platform offers unprecedented speed, reliability, and scalability at significantly better price/performance ratios.

“The Hazelcast platform enables high-performance applications at extreme scale running best on Intel products. The Intel and Hazelcast collaboration will deliver platform innovations so that customers of Intel and Hazelcast will be positioned for greater success in the data-centric era.”

— Lisa Davis, Vice President and General Manager of Digital Transformation & Scale Solutions, Intel

Intel Optane PMem Performance

Hazelcast ran internal benchmarks to compare the performance characteristics of Intel Optane persistent memory (PMem) against DRAM. The benchmarks were executed in several setups, using both a single node and a three-node cluster. The members ran on dual-socket servers equipped with Intel Xeon Scalable CPUs and 1.5 TB of PMem each. Twelve of the 24 DIMM slots (6 channels per socket, 2 slots per channel) were filled with 128 GB PMem DIMMs, for a total of 1.5 TB of PMem. The remaining slots contained DRAM DIMMs, since a proper PMem configuration requires a DRAM DIMM adjacent to each PMem DIMM, though the DRAM DIMM can have a much smaller capacity. Throughput and latency were measured with a load-testing tool using an equal mix of reads and writes against a key-value store (“maps”), in which the values were data objects whose size varied per test run.
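For illustration only, the sketch below shows the general shape of such a 50/50 read/write workload against a Hazelcast map, using the standard IMap API. It is not the load-testing tool used in the benchmarks; the map name, key space, value size, and operation count are placeholder assumptions.

```java
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.map.IMap;

import java.util.concurrent.ThreadLocalRandom;

public class MapLoadSketch {
    public static void main(String[] args) {
        // Start (or join) a Hazelcast member; in the benchmarks this traffic
        // would target the single-node or three-node cluster.
        HazelcastInstance hz = Hazelcast.newHazelcastInstance();

        // Key-value store ("map") holding byte[] values; VALUE_SIZE stands in
        // for the per-run data object size (128 B, 1 KB, 4 KB, 10 KB, 100 KB).
        final int VALUE_SIZE = 1024;
        final int KEY_SPACE = 100_000;
        IMap<Integer, byte[]> map = hz.getMap("benchmark");

        byte[] value = new byte[VALUE_SIZE];
        ThreadLocalRandom rnd = ThreadLocalRandom.current();

        // Equal mix of reads and writes against random keys.
        for (long op = 0; op < 1_000_000; op++) {
            int key = rnd.nextInt(KEY_SPACE);
            if ((op & 1) == 0) {
                map.put(key, value);   // write
            } else {
                map.get(key);          // read
            }
        }

        hz.shutdown();
    }
}
```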

With the use of PMem App Direct Mode, Hazelcast was optimized to take advantage of the fast data access capabilities of PMem. Unlike other PMem operating modes, App Direct Mode exposes a dedicated application programming interface (API), which Hazelcast incorporated into its software to achieve the fastest speeds on PMem.
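As a rough illustration of what this looks like from a configuration standpoint, the sketch below points a Hazelcast Enterprise member (High-Density Memory Store) at App Direct Mode mounts through the native-memory configuration. Class and method names follow the Hazelcast 4.1-era Enterprise API; the mount paths (/mnt/pmem0, /mnt/pmem1), memory size, and NUMA node assignments are placeholder assumptions.

```java
import com.hazelcast.config.Config;
import com.hazelcast.config.NativeMemoryConfig;
import com.hazelcast.config.PersistentMemoryDirectoryConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.memory.MemorySize;
import com.hazelcast.memory.MemoryUnit;

public class PmemMemberSketch {
    public static void main(String[] args) {
        Config config = new Config();

        // High-Density (native) memory backed by Optane PMem in App Direct Mode.
        // Requires Hazelcast Enterprise; size and allocator are illustrative.
        NativeMemoryConfig nativeMemory = config.getNativeMemoryConfig();
        nativeMemory.setEnabled(true);
        nativeMemory.setAllocatorType(NativeMemoryConfig.MemoryAllocatorType.POOLED);
        nativeMemory.setSize(new MemorySize(512, MemoryUnit.GIGABYTES));

        // Point the allocator at the PMem mount points (one per NUMA node).
        nativeMemory.getPersistentMemoryConfig()
                .addDirectoryConfig(new PersistentMemoryDirectoryConfig("/mnt/pmem0", 0))
                .addDirectoryConfig(new PersistentMemoryDirectoryConfig("/mnt/pmem1", 1));

        Hazelcast.newHazelcastInstance(config);
    }
}
```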

The benchmarks show that Intel Optane PMem is capable of achieving DRAM-like speeds in a distributed environment. As shown in the table below, the throughput of Optane PMem was very similar to that of DRAM. Even with data objects of 10KB, the throughput remained close: DRAM averaged 360,000 operations per second while Intel Optane exceeded 340,000 operations per second on a single node.

Data Object Size | DRAM | Intel Optane PMem (Hazelcast 4.1)
128B | 1.7M operations per second (ops) | 1.8M ops
1KB | 1.5M ops | 1.5M ops
4KB | 800K ops | 800K ops
10KB | 360K ops | 340K ops
100KB | 35K ops | 34K ops

Intel Optane PMem Cost of Ownership

The total cost of ownership (TCO) advantage of using Intel Optane PMem instead of DRAM for storing in-memory data is significant. High-performance systems rely on in-memory data access as a much faster alternative to reads/writes on slower media, but performance gains are capped by the amount of memory available. More available memory leads to more data that is accessible at higher speeds. Plus, more memory per server in a clustered environment means fewer servers and thus less network traffic, further reducing latency.

Intel Optane DIMMs are approximately half the cost of DRAM, making Optane an attractive alternative to DRAM, particularly in servers configured for 1 TB of volatile memory or more. Along with that cost advantage, benchmark testing has shown that access times for data in Intel Optane are comparable to DRAM, enabling comparable performance at much lower cost.

Consider the prices for various DIMMs (rough market prices as of early 2021):

DIMM | Cost per DIMM
16 GB DRAM | $565
64 GB DRAM | $2,265
128 GB Optane | $1,799
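
Put on a rough per-gigabyte basis, those figures work out to about $35 per GB for 64 GB DRAM DIMMs ($2,265 / 64) versus about $14 per GB for 128 GB Optane DIMMs ($1,799 / 128).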

The following table shows the total cost for 1536 GB of DRAM in a dual-CPU server configured for the highest performance by spreading the RAM across all memory channels of both CPU sockets.

Channel | CPU 1, Slot 1 (DRAM) | CPU 1, Slot 2 (DRAM) | CPU 2, Slot 1 (DRAM) | CPU 2, Slot 2 (DRAM) | Totals
1 | 64 GB | 64 GB | 64 GB | 64 GB
2 | 64 GB | 64 GB | 64 GB | 64 GB
3 | 64 GB | 64 GB | 64 GB | 64 GB
4 | 64 GB | 64 GB | 64 GB | 64 GB
5 | 64 GB | 64 GB | 64 GB | 64 GB
6 | 64 GB | 64 GB | 64 GB | 64 GB
Total Capacity | 384 GB | 384 GB | 384 GB | 384 GB | 1536 GB
Total Cost | $13,590 | $13,590 | $13,590 | $13,590 | $54,360

Now consider the optimal configuration for Intel Optane at a memory level comparable to the configuration above. Some DRAM is still required in the server, so we choose the smallest DRAM DIMMs (16 GB), coupled with Optane PMem DIMMs (128 GB).

Channel | CPU 1, Slot 1 (DRAM) | CPU 1, Slot 2 (PMem) | CPU 2, Slot 1 (DRAM) | CPU 2, Slot 2 (PMem) | Totals
1 | 16 GB | 128 GB | 16 GB | 128 GB
2 | 16 GB | 128 GB | 16 GB | 128 GB
3 | 16 GB | 128 GB | 16 GB | 128 GB
4 | 16 GB | 128 GB | 16 GB | 128 GB
5 | 16 GB | 128 GB | 16 GB | 128 GB
6 | 16 GB | 128 GB | 16 GB | 128 GB
Total Capacity | 96 GB | 768 GB | 96 GB | 768 GB | 1728 GB
Total Cost | $3,390 | $10,794 | $3,390 | $10,794 | $28,368

For a 1536 GB DRAM-only server, the memory cost is $54,360, compared to $28,368 for an Optane-enabled server with 1728 GB of total memory.

That is a saving of $25,992, or roughly 48% lower memory cost for the Optane configuration, which also provides more total capacity.