Retail Success Requires Back-Office Performance and Agility
How a leading retailer leveraged open source in-memory technologies to accelerate financial applications
This leading retailer operates several types of stores in both brick-and-mortar and e-commerce models. It has a global presence with billions of dollars of revenue. The company has a massive, distributed IT team that helps it compete in today’s data-driven economy. It leverages a wide variety of technologies to keep operations running smoothly to serve its hundreds of millions of customers.
The platform team, consisting of nearly 150 technical professionals, is distributed between the United States and India. They are a complete agile shop, so the team includes developers, product owners, scrum masters, etc. The mission of the team is to create and maintain centralized middleware solutions that can be used by other IT groups for systems integration. These other groups work with HR systems, real estate data, financial data, and other back-office business information, and they build integration solutions on top of the platform. While the platform team is not directly responsible for the applications that drive revenue and profit, they provide the core component that makes those applications successful by ensuring data operations are fast, reliable, and easy to implement. All business data essentially flows through the platform team, which is the gatekeeper to all systems, including the ERP systems (SAP and Workday).
One of the team’s responsibilities is exploring new open source technologies. They test many open source products, attend conferences, and talk to other industry professionals to learn what could work well for them. Like many businesses today, they always look for technologies that are not only easy to get started with, but also easy to implement, deploy, and maintain.
Some of the most sophisticated IT teams belong to big-name retailers that sell millions, or even billions, of dollars’ worth of goods each year. These companies are in a highly competitive industry that requires them to be innovative and agile at every level. While the competitive advantage that these companies can gain is commonly related to customer-facing functions, that is not the only area where they can excel. Significant competitive advantage can also be gained by optimizing the back-office functions that represent the core business operations all companies share. By accelerating both data access and the ability to stand up new applications, they can continue improving internal processes that lead to greater efficiency and ultimately more revenue and profit.
A company of this size has technology challenges that can be unique on multiple levels. As a simple example, consider that billions of dollars in annual revenue translate to hundreds or even thousands of dollars every second: a year is roughly 31.5 million seconds, so $10 billion per year works out to more than $300 per second. This is a scenario where an instant of time is worth a significant amount of money. Tracking, maintaining, and analyzing all of the generated data is an enormous undertaking, and it must be handled with extremely efficient integrations across the many systems of record to enable business agility.
To successfully handle such a load when integrating disparate systems, the platform team set out to build a centralized cache-as-a-service as the integration point between the various data management stakeholders. Without this centralized service, each team would have to build its own platform, creating unnecessary redundancy and excessive overhead. Standing up new applications would take months, which was not acceptable in a high-volume business that processes millions of transactions per month.
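The cache-as-a-service pattern can be sketched in miniature. The sketch below is illustrative only, not the team’s actual implementation: a plain dictionary stands in for the distributed Hazelcast map, and `load_from_system_of_record` is a hypothetical stand-in for a lookup against a backing system such as an ERP.

```python
# Minimal read-through cache-as-a-service sketch. A dict stands in for
# the shared in-memory cluster that multiple teams would consume.

class CacheService:
    def __init__(self, loader):
        self._store = {}       # stand-in for the distributed map
        self._loader = loader  # callback into the system of record

    def get(self, key):
        # Read-through: serve from cache, fall back to the backing
        # system on a miss, and populate the cache for later readers.
        if key not in self._store:
            self._store[key] = self._loader(key)
        return self._store[key]

    def invalidate(self, key):
        # Drop a stale entry so the next read refreshes it.
        self._store.pop(key, None)


# Hypothetical loader standing in for an ERP lookup.
def load_from_system_of_record(key):
    return {"id": key, "source": "erp"}

cache = CacheService(load_from_system_of_record)
record = cache.get("PO-1001")  # miss: loaded from the backing system
record = cache.get("PO-1001")  # hit: served from memory
```

Because every consuming team goes through the same service, the expensive call to the system of record happens once per key rather than once per team.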
The company had been actively looking for an in-memory technology to power its distributed cache-as-a-service, with a preference towards open source. Required capabilities included security and business continuity, and they believed features like near-cache (an optimization feature that places data close to the computation to reduce network hops and thus greatly reduce latency) would be necessary to get the performance they needed.
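In Hazelcast, a near-cache is enabled per map in the declarative configuration. A minimal sketch is shown below; the map name is illustrative, and exact element names can vary across IMDG versions.

```xml
<hazelcast>
  <map name="finance-data">
    <near-cache>
      <!-- Keep deserialized objects locally to avoid repeated network hops -->
      <in-memory-format>OBJECT</in-memory-format>
      <!-- Invalidate local entries when the clustered map changes -->
      <invalidate-on-change>true</invalidate-on-change>
      <!-- Bound local memory use with LRU eviction -->
      <eviction eviction-policy="LRU" max-size-policy="ENTRY_COUNT" size="10000"/>
    </near-cache>
  </map>
</hazelcast>
```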
After exploring several popular alternatives, the team decided to pilot the open source version of Hazelcast, driven primarily by the product’s ease of use, developer and community support, and documentation. The availability of Hazelcast as an open source technology made it easy to prove out its capabilities with no up-front financial commitment. And while the open source version did not include the security and disaster recovery features they wanted, it had enough functionality to get started. After running for six months and validating that they had the right technology, they decided to migrate to Hazelcast Enterprise, driven by the need for enterprise features like security.
In the solution, the main inputs to the in-memory data repository include Apache Kafka and IBM MQ. These messaging layers feed data into the Hazelcast cluster. Data is then accessed via APIs that the platform team built, which other in-house developers use to write their own internally facing services and applications on back-office business data. The APIs replaced a series of batch jobs and processes that were previously used, allowing the teams to move to a microservices-based architecture whose simplicity promotes agility among the development teams.
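The flow from the messaging layer into the cache can be sketched as below. This is a simulation rather than the team’s actual code: a list of dicts stands in for Kafka/MQ messages, and a plain dictionary stands in for the Hazelcast map behind the platform team’s APIs.

```python
# Simulated ingestion: messages from the messaging layer (Kafka / IBM MQ)
# are applied to a shared in-memory map, which the platform team's APIs
# then expose to internal services.

shared_map = {}

def ingest(messages):
    """Apply a batch of messages to the shared map (last write wins)."""
    for msg in messages:
        shared_map[msg["key"]] = msg["value"]

def api_get(key):
    """The kind of read API other teams build on; returns None on a miss."""
    return shared_map.get(key)

# Messages as they might arrive from the messaging layer (illustrative).
ingest([
    {"key": "invoice:42", "value": {"amount": 1250, "status": "open"}},
    {"key": "invoice:42", "value": {"amount": 1250, "status": "paid"}},
])
current = api_get("invoice:42")  # the latest state wins
```

Compared with the batch jobs it replaced, this push model keeps the cached view continuously current instead of hours stale.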
The microservices are exposed through an API gateway that can provide more robust services composed of fine-grained microservices. Within this shared service layer, Hazelcast IMDG provides a data plane where data from a variety of sources (including Kafka, databases, flat files, and message queues) can be loaded into maps for secure, low-latency access. For resiliency, the IMDG cluster is replicated across multiple data centers, so should any single data center fail entirely, the other data centers can quickly fill in.
“The problems we are trying to solve are constantly changing, so the Hazelcast solution has been very useful in that regard; it’s highly adaptable.”— Anonymous Principal Software Engineer/Senior Architect
With the centralized in-memory service based on Hazelcast IMDG Enterprise now in production for two years, stakeholders have realized significant gains. Standing up a new service previously took two months because each team had to write its own underlying infrastructure; now it takes two days. This not only gave IT developers the ability to react quickly, but also accelerated data access, enabling the business teams who consume the financial data to react faster.
The platform team liked Hazelcast IMDG for its ease of use, as well as its security features, monitoring tools, and dashboarding capabilities. They also valued the documentation and the technical support.
The Hazelcast solution reduced new services development time from 2 months to 2 days.
The company is looking to expand Hazelcast usage to other divisions. Other in-memory technologies are also popular within the company, but there is a lot of potential for Hazelcast because of its proven advantages in ease of use, performance, security, and reliability.
With the microservices architecture they built, if they decide to migrate to the cloud in the future, the lift-and-shift effort should be fairly easy to accomplish, especially if they turn to Hazelcast Cloud Enterprise (the Hazelcast managed service in the cloud). This would allow them to easily take advantage of cloud capabilities should they go that route. Hazelcast features like WAN Replication provide added benefits like disaster recovery, as well as multi-cloud deployment to mitigate risks of cloud vendor lock-in.
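WAN Replication is likewise configured declaratively. A minimal sketch for forwarding a map’s writes to a second cluster is shown below; the cluster name, endpoint, and map name are placeholders, and exact element names vary across Hazelcast versions.

```xml
<hazelcast>
  <wan-replication name="to-dr-site">
    <batch-publisher>
      <!-- The remote (disaster-recovery) cluster to replicate to -->
      <cluster-name>dr-cluster</cluster-name>
      <target-endpoints>10.0.0.1:5701</target-endpoints>
    </batch-publisher>
  </wan-replication>
  <map name="finance-data">
    <!-- Writes to this map are forwarded to the DR cluster -->
    <wan-replication-ref name="to-dr-site">
      <merge-policy-class-name>com.hazelcast.spi.merge.PassThroughMergePolicy</merge-policy-class-name>
    </wan-replication-ref>
  </map>
</hazelcast>
```

The same mechanism supports active-active deployments across clouds, which is what mitigates the vendor lock-in risk mentioned above.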