Distributed Computing Simplified using Hazelcast

Distributed computing, also known as distributed processing, involves connecting numerous computer servers through a network to form a cluster. This cluster, known as a “distributed system,” enables the servers to share data and coordinate processing power. Distributed computing provides many benefits, such as:

  • Scalability utilizing a “scale-out architecture”
  • Enhanced performance leveraging parallelism
  • Increased resilience by employing redundancy
  • Cost-effectiveness by utilizing low-cost, commodity hardware

Distributed computing has two main advantages: 1) it harnesses the collective processing power of the clustered system, and 2) it minimizes network hops by running computations on the cluster members that already hold the data. In this blog post, we explore how Hazelcast simplifies distributed computing (both as a self-managed and a managed service). Hazelcast offers three solutions for distributed computing, depending on the use case:

Option #1: Entry Processor

An entry processor executes your code on a map entry atomically, so you can read, update, and remove map entries directly on the cluster members (servers) that own them. It is a good option for performing bulk processing on an IMap.

How to create an Entry Processor?

import java.util.Map;

import com.hazelcast.map.EntryProcessor;

public class IncrementingEntryProcessor implements EntryProcessor<Integer, Integer, Integer> {

    @Override
    public Integer process( Map.Entry<Integer, Integer> entry ) {
        // Runs atomically on the member that owns the entry.
        Integer value = entry.getValue();
        entry.setValue( value + 1 );
        return value + 1;
    }

    @Override
    public EntryProcessor<Integer, Integer, Integer> getBackupProcessor() {
        // Run the same increment on backup replicas to keep them consistent.
        return IncrementingEntryProcessor.this;
    }
}

How to use an Entry Processor?

IMap<Integer, Integer> map = hazelcastInstance.getMap( "myMap" );
for ( int i = 0; i < 100; i++ ) {
    map.put( i, i );
}
// Runs IncrementingEntryProcessor on every entry, on the members that own them.
Map<Integer, Integer> res = map.executeOnEntries( new IncrementingEntryProcessor() );

How to optimize Entry Processor performance?

  • Offloadable lets the entry processor move its execution off the partition thread onto an executor thread, so long-running processors do not block other operations on the same partition.
  • ReadOnly signals that the processor does not modify the entry, so no lock needs to be acquired on the key. Both are illustrated in the sketch below.
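
For a processor that only reads values, you can combine both interfaces. The sketch below is illustrative only; it assumes Hazelcast 5.x package names (com.hazelcast.map.EntryProcessor, com.hazelcast.core.Offloadable, com.hazelcast.core.ReadOnly), and the class name is made up for this example.

import java.util.Map;

import com.hazelcast.core.Offloadable;
import com.hazelcast.core.ReadOnly;
import com.hazelcast.map.EntryProcessor;

public class ReadingEntryProcessor
        implements EntryProcessor<Integer, Integer, Integer>, Offloadable, ReadOnly {

    @Override
    public Integer process( Map.Entry<Integer, Integer> entry ) {
        // Read-only work: no entry.setValue() call, so no lock is taken on the key.
        return entry.getValue();
    }

    @Override
    public String getExecutorName() {
        // Move execution off the partition thread onto the built-in offloadable executor.
        return Offloadable.OFFLOADABLE_EXECUTOR;
    }

    @Override
    public EntryProcessor<Integer, Integer, Integer> getBackupProcessor() {
        // Nothing is written, so backups do not need to run the processor.
        return null;
    }
}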

You can learn more about Entry Processor in our documentation.

Option #2: Java Executor Service

Simply put, you can run your Java code on cluster members and collect the results. Java’s Executor framework is a standout feature that enables asynchronous execution of tasks such as database queries, complex calculations, and image rendering. In the Java Executor framework, you implement a task as either a Callable or a Runnable, and Hazelcast’s distributed executor service follows the same model.

How to implement a Callable task?

import java.io.Serializable;
import java.util.concurrent.Callable;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.HazelcastInstanceAware;
import com.hazelcast.map.IMap;

public class SumTask
        implements Callable<Integer>, Serializable, HazelcastInstanceAware {

    private transient HazelcastInstance hazelcastInstance;

    @Override
    public void setHazelcastInstance( HazelcastInstance hazelcastInstance ) {
        this.hazelcastInstance = hazelcastInstance;
    }

    @Override
    public Integer call() throws Exception {
        // Sum only the entries whose keys are owned by this member.
        IMap<String, Integer> map = hazelcastInstance.getMap( "map" );
        int result = 0;
        for ( String key : map.localKeySet() ) {
            System.out.println( "Calculating for key: " + key );
            result += map.get( key );
        }
        System.out.println( "Local Result: " + result );
        return result;
    }
}
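
To run the task, submit it to Hazelcast’s distributed executor service. The snippet below is a minimal sketch: the executor name "sum-executor" is an arbitrary example, and it assumes the relevant imports (IExecutorService, Member, Future) plus surrounding code that handles InterruptedException and ExecutionException.

IExecutorService executorService = hazelcastInstance.getExecutorService( "sum-executor" );
Map<Member, Future<Integer>> futures = executorService.submitToAllMembers( new SumTask() );

int total = 0;
for ( Future<Integer> future : futures.values() ) {
    total += future.get();  // blocks until that member's local sum is available
}
System.out.println( "Total: " + total );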

How to implement a Runnable task?

import java.io.Serializable;

public class EchoTask implements Runnable, Serializable {

    private final String msg;

    public EchoTask( String msg ) {
        this.msg = msg;
    }

    @Override
    public void run() {
        try {
            Thread.sleep( 5000 );
        } catch ( InterruptedException e ) {
            // Restore the interrupt flag instead of swallowing the exception.
            Thread.currentThread().interrupt();
        }
        System.out.println( "echo:" + msg );
    }
}
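
The MasterMember class referenced in the next section is not reproduced in this post; as a rough, illustrative sketch (the executor name "default" and the loop are assumptions), a producer member could submit EchoTasks like this:

HazelcastInstance hz = Hazelcast.newHazelcastInstance();
IExecutorService executorService = hz.getExecutorService( "default" );
for ( int i = 1; i <= 10; i++ ) {
    // Each task is serialized, sent to a cluster member, and executed there.
    executorService.execute( new EchoTask( "message " + i ) );
}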

How to Scale the Executor Service?

To scale up, improve the processing capacity of the cluster member (JVM). You can do this by increasing the pool-size property described in Configuring Executor Service (i.e., increasing the thread count). However, be aware of your member’s capacity: if it cannot handle the additional load caused by a higher thread count, consider increasing the member’s resources (CPU, memory, etc.). For example, set the pool size to 5 and run the above MasterMember; you will see that each EchoTask runs as soon as it is produced.
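
As an illustrative sketch (the executor name "default" is an assumption; the same setting is also available as the pool-size element in the declarative executor-service configuration), the pool size can be raised programmatically:

Config config = new Config();
config.getExecutorConfig( "default" ).setPoolSize( 5 );  // 5 worker threads per member for this executor
HazelcastInstance hz = Hazelcast.newHazelcastInstance( config );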

To scale out, add more members instead of increasing a single member’s capacity; in other words, expand the cluster by adding more physical or virtual machines. For example, in the EchoTask scenario in the Runnable section, you can start another Hazelcast instance that automatically joins in the executions started by MasterMember and begins processing. You can read more about the Executor Service in our documentation.

Option #3: Pipeline

A pipeline lets you build a data flow that runs batch or streaming computations quickly across cluster members. It consists of three elements: one or more sources, processing stages, and at least one sink. Depending on the data source, pipelines fit different use cases: stream processing over a continuous stream of data (i.e., events) to deliver results in real time as the data is generated, or batch processing of a fixed amount of static data for routine tasks, such as generating daily reports. In Hazelcast, pipelines can be defined using either SQL or the Jet API.
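
To give a flavor of the Jet API, here is a minimal batch-pipeline sketch. It assumes Hazelcast 5.x and imports for com.hazelcast.jet.pipeline.Pipeline, Sources, and Sinks; it reads the "myMap" IMap used earlier and simply logs each entry.

Pipeline pipeline = Pipeline.create();
pipeline.readFrom( Sources.<Integer, Integer>map( "myMap" ) )
        .map( entry -> entry.getKey() + " -> " + entry.getValue() )
        .writeTo( Sinks.logger() );

// Submit the pipeline as a job and wait for the batch to complete.
hazelcastInstance.getJet().newJob( pipeline ).join();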

Below are a few more resources:

Finally, when should you choose Entry Processor, Executor Service, or Pipeline?

Opting for an entry processor is a suitable choice when you need to do bulk processing on a map. Typically, this replaces the pattern of iterating over a set of keys, retrieving each value with map.get(key), modifying it, and finally putting the entry back into the map with map.put(key, value), as illustrated below.
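
For illustration, the loop an entry processor replaces might look like the following (reusing the map and IncrementingEntryProcessor from Option #1); each get() and put() is a separate round trip to the owning member:

for ( Integer key : map.keySet() ) {
    Integer value = map.get( key );   // network hop to read the value
    map.put( key, value + 1 );        // second network hop to write it back
}

// With an entry processor, the update runs once, directly on the member that owns each entry:
map.executeOnEntries( new IncrementingEntryProcessor() );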

The executor service is well-suited for executing arbitrary Java code on your cluster members.

Pipelines are well-suited for scenarios where you need to process multiple entries together (e.g., aggregations or joins) or when multiple computing steps must be performed in parallel. With pipelines, you can also update maps based on the results of your computation by using an entry processor sink, as sketched below.
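
As a rough sketch of that last point, a pipeline stage producing Map.Entry<Integer, Integer> items (the stage variable here is hypothetical) could write its results back through IncrementingEntryProcessor using the Jet API’s Sinks.mapWithEntryProcessor sink:

stage.writeTo( Sinks.mapWithEntryProcessor(
        "myMap",                                        // sink map to update
        Map.Entry::getKey,                              // which key each item updates
        entry -> new IncrementingEntryProcessor() ) );  // entry processor applied to that key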

So here you have three options for distributed computing using Hazelcast. We look forward to your feedback and comments about this blog post! Share your experience with us in the Hazelcast community Slack and the Hazelcast GitHub repository. Hazelcast also runs a weekly live stream on Twitch, so give us a follow to get notified when we go live.