A Business Level View of Kubernetes, Cloud Migration and In-Memory Technologies

April 04, 2019

There’s a trio of buzzwords spending more and more time in proximity to each other, and also beginning to surface from the technical world into the business world. Each by itself is critical, and when you combine all three, the business-level effect becomes very noticeable. These buzzwords, and their business-level descriptions, are:

Kubernetes: Greek for helmsman or pilot, Kubernetes is a Google-originated (now open source) system designed to provide a contained, integrated solution that automates the deployment, scaling, and management of applications in the cloud. The key words here are “contained” (that is, everything you need) and “automates” (which can take migration of apps to the cloud from weeks down to hours or even minutes). That means faster time to market and time to value, plus all the benefits associated with cloud deployments.

Cloud Migration: This has been going on for a while; it is essentially enterprises moving on-premises applications and processes to the cloud (of which there are multiple variants, e.g., public, private, hybrid, and multi-cloud). Once an app migrates to the cloud, it essentially becomes a service, and upgrades, expansion, and maintenance all become far easier and more cost-effective. Kubernetes is a very efficient way to migrate applications to the cloud.

In-Memory: This technology enables complex data processing entirely in RAM (which has become terabyte-scale and inexpensive), removing the need for applications to traverse a network to access a database. This leads to a massive increase in speed, at scale, which is pretty perfect for today’s high-speed, online, and increasingly cloud-native business.
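To make the latency argument concrete, here is a minimal, hypothetical Python sketch (not Hazelcast’s actual API) comparing lookups against a simulated networked database with lookups against the same data already held in local RAM:

```python
import time

# Simulated remote database: every lookup pays a network round-trip.
# The 2 ms latency figure is an assumption chosen purely for illustration.
NETWORK_LATENCY_S = 0.002

db = {f"user:{i}": i * 10 for i in range(1000)}

def remote_db_get(key):
    time.sleep(NETWORK_LATENCY_S)  # simulated network hop to the database
    return db[key]

# In-memory store: the same data held in local RAM, no network traversal.
cache = dict(db)

def in_memory_get(key):
    return cache[key]

def timed(fn, keys):
    """Run fn over keys and return (results, elapsed seconds)."""
    start = time.perf_counter()
    results = [fn(k) for k in keys]
    return results, time.perf_counter() - start

keys = [f"user:{i}" for i in range(200)]
_, remote_elapsed = timed(remote_db_get, keys)
_, local_elapsed = timed(in_memory_get, keys)

print(f"remote: {remote_elapsed:.3f}s, in-memory: {local_elapsed:.6f}s")
```

Even with a modest simulated round-trip, 200 remote lookups accumulate visible delay, while the in-memory path finishes in microseconds — the per-request gap is what compounds at production scale.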

Industry drivers such as Digital Transformation and Continuous Intelligence share a set of baseline technology requirements: speed, scalability, and stability. Blinding speed has always been the core value-add of in-memory technology (removing network latency and mitigating disk latency in the transaction flow), and this is just as effective and relevant in the cloud as it is on-premises. Scalability is not only the core value-add of the cloud – it’s the whole point of moving applications there – and Kubernetes is the crucial enabler for it. However, it doesn’t matter how fast or scalable your system is if it’s unstable, which is why a distributed architecture that is scalable via the cloud, accelerated by in-memory, and quickly migrated and deployed via Kubernetes provides the optimal approach.

Whether on-premises or in any cloud variant, the requirements are consistent because they affect the experience of the end user, who doesn’t care about Kubernetes or cloud migrations; they just want a fast, error-free user experience. Delivering in-memory technology in a Kubernetes wrapper significantly simplifies and accelerates the deployment of in-memory in the cloud. The business implications are more rapid prototyping, faster success or failure (the only thing worse than failing is failing slowly), and the ability to spin new offerings up or down quickly and efficiently. This combination of relatively mature, proven technologies provides precisely what is needed to drive both Digital Transformation and Continuous Intelligence, and like most horizontal technologies, it is broadly applicable – more specifically, it applies to anyone requiring speed, scalability, and ease of use (which would be pretty much everybody). Three examples include:

Financial services companies, which have already begun moving applications to the cloud, but still maintain a substantial legacy-system footprint that is unlikely to change (sometimes for compliance reasons, sometimes due to depreciation schedules). In this model, a hybrid approach is ideal; just because something can be moved to the cloud doesn’t mean it should be. Moving to the cloud is a genuine commitment on the part of IT, and having the option to containerize a proposed service via Kubernetes lets banks (in all their variants) try out new services while minimizing disruption to existing production systems. Folding in-memory into the equation means that any service deployed will operate at speeds that enable significantly higher application performance, allowing capabilities such as running multiple fraud detection algorithms in the time it takes to swipe a card.
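As an illustration of why in-memory speed matters for that last point, the Python sketch below runs several toy fraud checks (the rule names and thresholds are invented for this example, not any real bank’s logic) as plain in-process function calls over a transaction whose account profile is already in RAM:

```python
# Hypothetical fraud rules -- names and thresholds are illustrative only.
def velocity_check(txn):
    # Flag accounts transacting unusually often in the last minute.
    return 1.0 if txn["txns_last_minute"] > 5 else 0.0

def amount_check(txn):
    # Flag amounts far above the account's historical average.
    return 1.0 if txn["amount"] > 10 * txn["avg_amount"] else 0.0

def geo_check(txn):
    # Flag a country mismatch with the card's home country.
    return 1.0 if txn["country"] != txn["home_country"] else 0.0

RULES = [velocity_check, amount_check, geo_check]

def fraud_score(txn):
    # With the account profile already in RAM, every rule is a local
    # function call -- no database round-trip per check -- so running
    # many rules still fits inside a card-swipe time budget.
    return sum(rule(txn) for rule in RULES) / len(RULES)

txn = {
    "amount": 950.0, "avg_amount": 80.0,
    "txns_last_minute": 2,
    "country": "FR", "home_country": "US",
}
print(fraud_score(txn))  # two of three rules fire -> score ~0.67
```

The design point is that each additional rule costs microseconds, not a network hop, so the rule set can grow without pushing authorization past the swipe window.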

IoT applications, which are highly varied but generally require:

  • A light footprint – IoT deployments usually involve a large number of small devices working in unison (for example, real-time telemetry data gathered from refining or drilling operations at energy companies), so distributing them via a cloud infrastructure makes sense.
  • Fast performance – these devices need to operate in real time to be effective; think of applications such as healthcare diagnostics, or a set-top box that captures viewing habits and usage patterns to feed a real-time customer service app.
  • Ease of deployment – the whole point of an IoT device is that it’s small, cost-effective, and easy to set up. Since IoT is also inherently cloud-driven, it makes sense to use a Kubernetes framework to test out the software enablement of IoT deployments, since both are driven by the same underlying requirements.

Mobility has a massive wave of incredibly cool applications headed our way courtesy of the emerging 5G network standards. 5G is roughly 100x faster than what is currently in place, so when companies begin the move to a 5G-based cloud, it’s going to be not only a lot faster but a lot less crowded. These advancements are also a strong enabler for edge computing applications such as autonomous systems (driverless cars), remote haptic interfaces (full-body suits for video gaming), remote robotics (remote surgery), and more. These are all mobile, IoT, and cloud-based, which means upgrades need to be rolled out and tested quickly (an ideal Kubernetes use case) and require screaming-fast processing at the edge (which in-memory delivers). Combining in-memory with a 100x-faster network is going to change our lives in ways that are still hard to imagine today – but won’t be 3-5 years from now.

There is a lot to consume here, and Hazelcast has made it easy, with extensive resources, training, and consulting services, all of which are accessible through our website.

About the Author

Dan Ortega

Product Marketing

Dan has had more than 20 years of experience helping customers understand the business value of technologies. His domain expertise spans enterprise software, IoT, ITSM/ITOM, data analytics, mobility, business intelligence, SaaS, content management, predictive analytics, and information lifecycle management. Throughout his career, Dan has worked with companies ranging in size from start-up to Fortune 500 and enjoys sharing insights on business value creation through his contributions to the Hazelcast blog. Dan was born in New York, grew up in Mexico City, and returned to get his B.A. in Economics from the University of Michigan.
