How Streaming Microservices Can Advance Your Entire IT Environment

January 17, 2019

Streaming microservices. Sounds pretty cool, right? And like a lot of new technologies, it actually is, and it can have a huge effect on how your business operates. Before we get into the importance of streaming microservices, let’s make sure we’re clear on what we’re talking about.

Streaming refers to data entering a system at high speed, usually from multiple sources and in multiple formats. Tweets are a data stream. IoT sensors attached to robots in a factory stream performance data non-stop. Refineries that use sensors to report pipeline pressure generate a stream, and so on. Between billions of mobile users and IoT devices, there’s a whole lot of streaming going on.

Microservices are a way of building applications, much like assembling a pre-fab house instead of designing and building one from scratch: different components can be pieced together to create structures that look (or function) differently. Microservices are small, reusable pieces of code that can be assembled and embedded into existing processes quickly and easily to support a specific task or sequence and, because of their modularity, receive incremental upgrades.

Combining a microservices architecture with high-volume streaming enables companies working with real-time data streams to build, test, and iterate component-based software solutions that are designed with non-stop data sources in mind. This makes for a very fast development cycle while avoiding the overhead of large-scale software deployments, which can often take years.
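To make the component-based idea concrete, here is a minimal sketch (the stage names, event fields, and threshold are invented for illustration, not any specific framework's API) of how a streaming pipeline can be built from small, independently testable stages that each consume events and emit results:

```python
from typing import Iterable, Iterator

def deduplicate(events: Iterable[dict]) -> Iterator[dict]:
    """Drop events whose 'id' has already been seen (one small, testable stage)."""
    seen = set()
    for event in events:
        if event["id"] not in seen:
            seen.add(event["id"])
            yield event

def enrich(events: Iterable[dict]) -> Iterator[dict]:
    """Tag each event with a derived field (another independently upgradable stage)."""
    for event in events:
        yield {**event, "alert": event["value"] > 100}

# Stages compose like a pipeline; any one can be swapped out or upgraded alone.
stream = [{"id": 1, "value": 42}, {"id": 1, "value": 42}, {"id": 2, "value": 150}]
results = list(enrich(deduplicate(stream)))
```

Because each stage only depends on the shape of the events flowing through it, a stage can be rewritten, redeployed, or replaced without touching its neighbors, which is exactly the fast iteration cycle described above.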

A technical example of streaming microservices would be a manufacturing company with sensors used to track system status for maintenance purposes. This is essentially an IoT use case: components from a variety of manufacturers, not necessarily designed to work together, send in a range of data in a variety of formats. Assembling and embedding a series of lightweight software components that can process incoming data at high speed, and that can be swapped out as other elements of the processing supply chain change, is a great use case for streaming microservices.
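A sketch of that normalization step might look like the following (the vendor names, payload layouts, and field names here are hypothetical): each incoming format gets its own small parser, registered in a table, so a new vendor's format can be added, or an old parser swapped out, without touching the rest of the pipeline:

```python
import json
from typing import Callable, Dict

def parse_vendor_json(raw: str) -> dict:
    """Vendor A sends JSON, e.g. {"sensor": "pump-1", "psi": 87.2}."""
    data = json.loads(raw)
    return {"sensor_id": data["sensor"], "pressure_psi": data["psi"]}

def parse_vendor_csv(raw: str) -> dict:
    """Vendor B sends CSV, e.g. "pump-2,88.9"."""
    sensor_id, psi = raw.split(",")
    return {"sensor_id": sensor_id, "pressure_psi": float(psi)}

# Swapping or upgrading a component means replacing one entry in this registry.
PARSERS: Dict[str, Callable[[str], dict]] = {
    "vendor_a": parse_vendor_json,
    "vendor_b": parse_vendor_csv,
}

def normalize(source: str, raw: str) -> dict:
    """Route each raw payload to the parser for its source's format."""
    return PARSERS[source](raw)

a = normalize("vendor_a", '{"sensor": "pump-1", "psi": 87.2}')
b = normalize("vendor_b", "pump-2,88.9")
```

Downstream stages then see one consistent record shape regardless of which manufacturer's device produced the data.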

From a business perspective this could apply to nearly any vertical where complex data sources are feeding an information supply chain, for example:

  • Manufacturing: The aforementioned robotics case. Factories are loaded with sensors attached to nearly everything because they are low-cost and high-value. Creating an adaptable microservices architecture lets you adjust to continuous change in a complex process environment.
  • Healthcare: Healthcare informatics involves a vast number of variables. Patient and clinical systems alone generate data non-stop, and a great deal of it comes from specialized IoT devices, most of which are attached to (or inserted into) the patient; this data can be processed in-facility or remotely.
  • Utilities: Energy grids are vast and subject to environmental effects that normally don’t impact IT systems (when was the last time a tree fell on your PC?). Utilities have been deploying telemetry devices (one of the early iterations of IoT) for years, and, like the manufacturing example, these are a hodgepodge of different vendors’ technologies that should all work in concert. Of course they don’t, but a microservices architecture allows a utility to adapt its IT systems to constantly changing external data streams and to upgrade incrementally, providing a much more robust long-term solution for a critical part of our infrastructure.

So why this technology, and why now? Everywhere you go, everyone has their face glued to their phone, and every device in your home, office, car, coffee shop, or retailer is connected and streaming data non-stop. That’s a model with tons of variables and not a lot of constants. In this context, crafting a systems architecture that adapts to a constantly changing environment is pretty much the only way to go, and it creates a more flexible and effective way to deal with a global information ecosystem.


About the Author

Dan Ortega

Product Marketing

Dan has had more than 20 years of experience helping customers understand the business value of technologies. His domain expertise spans enterprise software, IoT, ITSM/ITOM, data analytics, mobility, business intelligence, SaaS, content management, predictive analytics, and information lifecycle management. Throughout his career, Dan has worked with companies ranging in size from start-up to Fortune 500 and enjoys sharing insights on business value creation through his contributions to the Hazelcast blog. Dan was born in New York, grew up in Mexico City, and returned to get his B.A. in Economics from the University of Michigan.
