Microservices with Vert.x – a match made in heaven

Hazelcast Community | Aug 10, 2016

Saarth recently published a blog post entitled “Microservices with Vert.x – a match made in heaven”. In the post you’ll learn how to set up a microservices architecture using Vert.x.


Microservices offer a more modern and concrete interpretation of SOA (Service-Oriented Architecture). The underlying principle remains the same: independent processes that communicate with each other over a network in order to fulfil a goal.

The key characteristics of the microservices architecture:

  1. Independently developed, tested, and deployed, and therefore easy to refactor and replace.
  2. Technology agnostic (services can be built using different languages, databases, and protocols).
  3. Organized around business capabilities (billing, UI, business rules, etc.).
  4. Small services communicating over lightweight protocols (enhanced cohesion and reduced coupling).
  5. Modular structure that lends itself to continuous delivery.

By definition, there is no one right answer to building microservices in your application. There are several factors that go into deciding the right technology stack for a particular project/product. Ease of development, scalability, and performance are my top three categories, and today’s nomination in these categories is Vert.x.

Vert.x – What is it?

Vert.x is an event-driven, reactive framework that runs on the JVM (Java Virtual Machine). It is a general-purpose, unopinionated framework; however, we’ll soon see how perfectly suited it is to building microservices.

Key characteristics of Vert.x:

Polyglot programming

Vert.x is truly polyglot. Currently it supports Java, Scala, Groovy, Python (Jython), JavaScript, Ruby (JRuby), Ceylon, and Clojure. Developers versed in one or more of these languages can team up and develop cohesively.

Event-driven, non-blocking

At Vert.x’s core is the reactor loop, aka the event loop, similar to the one in node.js. However, while node.js runs a single event loop per process, Vert.x maintains multiple event loops (by default, two per CPU core). Thus on a multi-core machine (which is pretty much the norm today), a Vert.x application takes advantage of multiple event loops running in parallel, out of the box.
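As a concrete illustration (a minimal sketch assuming Vert.x 3.x `vertx-core` on the classpath; the verticle class here is invented for the example), deploying several instances of the same verticle spreads them across the available event loops:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;

public class MultiCore {
  // A trivial verticle; each deployed instance is bound to its own event loop context.
  public static class EchoVerticle extends AbstractVerticle {
    @Override
    public void start() {
      System.out.println("Started on " + Thread.currentThread().getName());
    }
  }

  public static void main(String[] args) {
    // Deploy one instance per core; Vert.x assigns each instance an event loop.
    int cores = Runtime.getRuntime().availableProcessors();
    Vertx.vertx().deployVerticle(EchoVerticle.class.getName(),
        new DeploymentOptions().setInstances(cores));
  }
}
```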

Lightweight and Fast

Vert.x is very lightweight and, as per benchmarks, very fast. The Vert.x project is modularized into multiple libraries that can be used as required (core, web, auth, etc.).

Monitoring and management, out of the box

Vert.x comes loaded with a lot of goodies such as HA (high availability), clustering (with Hazelcast), metrics (Dropwizard and Hawkular), and built-in health management. Vert.x will warn you if any operation takes too long and blocks the event loop (see Vert.x’s golden rule).

Golden Rule:

Do NOT block the event loop
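When you genuinely need to run blocking code (a JDBC call, slow file I/O), Vert.x provides `executeBlocking` to push the work onto a worker thread and deliver the result back on the event loop. A minimal sketch of the Vert.x 3.x API:

```java
import io.vertx.core.Vertx;

public class BlockingExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    // Offload blocking work to a worker thread so the event loop stays responsive.
    vertx.executeBlocking(future -> {
      // Simulate a slow, blocking operation (e.g. a JDBC query).
      try { Thread.sleep(1000); } catch (InterruptedException e) { /* ignore */ }
      future.complete("result");
    }, res -> {
      // This handler runs back on the event loop with the result.
      System.out.println("Got: " + res.result());
      vertx.close();
    });
  }
}
```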

Developing apps with Vert.x is easy and fun. The unit of work in Vert.x is called a “verticle”. A verticle provides an actor-like deployment and concurrency model and can be used for anything – be it setting up an HTTP server, initializing a database connection, or listening on a websocket bridge. Note that using verticles is optional, but recommended in Vert.x.
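As a minimal sketch (assuming Vert.x 3.x `vertx-core` on the classpath; the class name and port are illustrative), a verticle that serves HTTP looks like this:

```java
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

// A minimal verticle that starts an HTTP server on port 8080.
public class HelloVerticle extends AbstractVerticle {
  @Override
  public void start() {
    vertx.createHttpServer()
         .requestHandler(req -> req.response().end("Hello from Vert.x!"))
         .listen(8080);
  }

  public static void main(String[] args) {
    // Deploying the verticle hands its lifecycle over to Vert.x.
    Vertx.vertx().deployVerticle(new HelloVerticle());
  }
}
```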

Since Vert.x is un-opinionated, it can be used in multiple ways inside of an application – as a framework for a reactive application or as a toolkit embedded inside an existing application. It is purely a design choice.

There is a plethora of easy-to-understand Vert.x examples here that are a good starting point for developers. This GitHub project lists a bunch of useful extensions to the Vert.x framework – contributed by the active open source community.

Vert.x and Microservices

So Vert.x is really interesting, but why would I label it perfect for building microservices? Here’s why (for a more detailed dissection of the Vert.x microservices toolbox, check out this video by Clement Escoffier):

Independent processes

A Vert.x process can be deployed as a fatjar or can be run using the CLI (Command Line Interface). A combination of such deployments can be made to work together in a cluster.

Verticles – deploy, use and communicate

A verticle can be synonymous with one service (and often is), or can be made to work with other verticles to form a distributed service. From the official documentation –

“Verticles are chunks of code that get deployed and run by Vert.x. A Vert.x instance maintains N event loop threads (where N by default is core*2). Verticles can be written in any of the languages that Vert.x supports and a single application can include verticles written in multiple languages.”

Verticles primarily communicate with each other over multiple channels – HTTP, the distributed event bus (covered below), TCP, etc. Verticles can also share immutable data with each other using Vert.x’s shared-data feature.

Distributed Event Bus

The event bus is the nervous system of Vert.x (it comes out of the box). There is one event bus instance per Vert.x instance (JVM). In a distributed Vert.x application on clustered nodes, all nodes (and the verticles deployed on them) have access to the event bus and use it to communicate with each other. This makes publishing and listening to service endpoints super easy.

The event bus supports publish/subscribe, point-to-point, and request-response messaging.
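All three patterns can be sketched in a few lines (Vert.x 3.x API, assuming `vertx-core` on the classpath; the addresses `greetings` and `news` are made up for the example):

```java
import io.vertx.core.Vertx;

public class EventBusExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();

    // Point-to-point / request-response: one consumer receives the message and replies.
    vertx.eventBus().consumer("greetings", msg -> msg.reply("Hello, " + msg.body()));

    // Publish/subscribe: every consumer registered on the address gets a copy.
    vertx.eventBus().consumer("news", msg -> System.out.println("News: " + msg.body()));

    // send() targets a single consumer; the reply comes back asynchronously.
    vertx.eventBus().send("greetings", "Vert.x", reply -> {
      System.out.println(reply.result().body()); // "Hello, Vert.x"
      vertx.close();
    });

    // publish() broadcasts to all subscribers on the address.
    vertx.eventBus().publish("news", "Event bus is up");
  }
}
```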

Not only REST

The services supported are not limited to REST, although REST is a very popular architectural choice. HTTP endpoints, service proxies and message sources are some of the other types.

Service Discovery Mechanism

The Discovery service lets you find a service regardless of deployment environment (Dev, QA, UAT, etc.). The service could be a REST API, a proxy to a service, a data source or even a service not developed in Vert.x. The infrastructure of a discovery service essentially contains a service record and APIs to create and publish service records. The APIs in turn give out details about the service and the service object.

The discovery service supports bridging with external discovery mechanisms (Docker, Consul, etc.) and can use the event bus service notification.
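Publishing and looking up a record can be sketched as follows (assuming the `vertx-service-discovery` module on the classpath in addition to `vertx-core`; the service name, host, and port are illustrative):

```java
import io.vertx.core.Vertx;
import io.vertx.servicediscovery.Record;
import io.vertx.servicediscovery.ServiceDiscovery;
import io.vertx.servicediscovery.types.HttpEndpoint;

public class DiscoveryExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    ServiceDiscovery discovery = ServiceDiscovery.create(vertx);

    // Describe an HTTP endpoint as a service record and publish it.
    Record record = HttpEndpoint.createRecord("billing-service", "localhost", 8080, "/api");
    discovery.publish(record, ar -> {
      if (ar.succeeded()) {
        System.out.println("Service published: " + ar.result().getRegistration());
      }
    });

    // A consumer looks the service up by name, without knowing where it runs.
    discovery.getRecord(r -> r.getName().equals("billing-service"),
        found -> System.out.println("Found: " + found.result().getLocation()));
  }
}
```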

Failover Isolation and Reaction

The circuit-breaker pattern helps avoid cascading failures. It also lets the application react to and recover from failure states. This works for microservices calling other microservices as well. The circuit breaker can be configured with a timeout, a fallback-on-failure behavior, and the maximum number of failures that trip it. This ensures that if a service goes down, the failure is handled in a predefined manner.
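Wiring those options up is brief (assuming the `vertx-circuit-breaker` module on the classpath in addition to `vertx-core`; the breaker name and thresholds here are invented for the sketch):

```java
import io.vertx.circuitbreaker.CircuitBreaker;
import io.vertx.circuitbreaker.CircuitBreakerOptions;
import io.vertx.core.Vertx;

public class BreakerExample {
  public static void main(String[] args) {
    Vertx vertx = Vertx.vertx();
    CircuitBreaker breaker = CircuitBreaker.create("my-breaker", vertx,
        new CircuitBreakerOptions()
            .setMaxFailures(5)        // open the circuit after 5 failures
            .setTimeout(2000)         // an operation slower than 2s counts as a failure
            .setResetTimeout(10000)); // try again (half-open) after 10s

    breaker.<String>executeWithFallback(future -> {
      // The guarded operation, e.g. a call to another microservice.
      future.complete("downstream result");
    }, t -> "fallback result")      // returned when the breaker is open or the call fails
    .setHandler(ar -> System.out.println(ar.result()));
  }
}
```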


The Vert.x framework is very promising and has microservices in its DNA. It has already been adopted by several products and companies (VMware, Red Hat, Hulu). Vert.x’s microservices toolbox is evolving, and the roadmap includes even broader service support and developer-appealing features.

I have used Vert.x for developing microservices and never looked back. If you haven’t decided on the tech stack already or even if you have, give Vert.x a shot.
