Live Virtual Workshop: Architecting Payment Systems for Real-time Performance

  • July 14, 2026
  • 8 AM PDT / 11 AM EDT / 4 PM BST / 5 PM CEST
  • Zoom Links: Provided upon registration

Join our interactive, virtual workshop designed for developers and architects who want to build and evolve high-performance, distributed payment systems without sacrificing consistency, latency, or observability.

What You’ll Learn:

  • Why Traditional Payment Architectures Break Down – Understand why batch processing, centralized relational state, and tightly coupled cores fail under real-time transaction loads.
  • Designing for Real-Time at Scale – Learn how to architect payment systems that handle high transaction volumes with predictable low latency.
  • ISO 20022 Reference Architecture – Explore a live, ISO 20022–based reference implementation that demonstrates modern payment flows in action.
  • Cache-First, Event-Driven Payments – See how distributed in-memory data and streaming pipelines enable instant settlement, real-time fraud detection, and liquidity management.
  • Balancing Streaming and Transactional Data – Learn how to coordinate distributed state while maintaining consistency and performance.
  • Built-In Observability – Gain real-time insight using Prometheus, Grafana, and SQL across live and historical data.
  • Architectural Walkthrough & Expert Insights – Connect proven design patterns directly to measurable performance and scalability gains.

Duration: 90 minutes

To maximize interaction and hands-on learning, this workshop is offered live only and will not be recorded.

Register Now

Hazelcast

Microservices at Scale: Solving State and Latency Challenges Without Added Complexity

Practical patterns for building high-performance distributed systems

  • March 5, 2026
  • 8 AM PST / 11 AM EST / 4 PM GMT
  • 60 minutes
  • Zoom Links: Provided upon registration

Microservices architectures promise speed and flexibility - until scale introduces new challenges. Databases become bottlenecks. Cache invalidation grows fragile. Latency compounds with every service call. Distributed state becomes harder to manage as systems evolve.

In this live webinar, we’ll explore the real-world problems that surface when microservices move from design to production. We’ll examine practical patterns for managing state, reducing latency, handling event streams, and keeping distributed systems performant - without increasing architectural complexity. Because solving distributed state at scale requires more than basic caching.

You’ll see how developers and architects can apply these approaches using Java native tooling such as Hazelcast to simplify data access, stream processing, and coordination across microservices, with concrete examples and live demonstrations drawn from production-style environments.

What You’ll Learn:

  • Why state management and latency become issues as microservices scale
  • Common performance bottlenecks in event-driven architectures and distributed systems
  • Practical patterns for handling distributed state and data access
  • How modern distributed tooling can reduce complexity without sacrificing performance, data consistency, or resiliency

Watch now to learn how to solve state and latency challenges in microservices - without adding more complexity to your architecture.

Watch Now

Join our interactive, virtual workshop designed for developers and architects who want to deepen their understanding of building high-performance, distributed data systems with Hazelcast.

What You’ll Learn:

  • Hazelcast Fundamentals & Real-World Applications – Gain a high-level understanding of the platform and explore practical use cases across industries.
  • AI Data Structures – Leverage vector search for fast similarity lookups in RAG systems.
  • Availability vs. Consistency – Learn how to apply AP and CP data structures and understand trade-offs in distributed environments.
  • Caching & Streaming – Build low-latency, event-driven applications with Hazelcast’s in-memory distributed caching, compute, and stream processing capabilities.
  • Operational Resilience & Geo-Replication – Design for high availability with strategies for fault tolerance and multi-region deployments.
  • Hands-on Lab + Expert Q&A – Apply what you’ve learned by building a real-time payment system and engage with Hazelcast experts.
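The vector search bullet above deserves a concrete picture. Below is a toy, single-JVM sketch of the core idea — score every stored vector against the query by cosine similarity and return the best match. The class and method names are illustrative only; production vector search (including Hazelcast's) uses approximate nearest-neighbor indexes rather than this brute-force scan.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Brute-force similarity lookup: the nearest stored vector wins.
class VectorIndex {
    private final Map<String, double[]> vectors = new LinkedHashMap<>();

    void add(String id, double[] v) { vectors.put(id, v); }

    // Return the id of the stored vector most similar to the query.
    String nearest(double[] query) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (Map.Entry<String, double[]> e : vectors.entrySet()) {
            double score = cosine(query, e.getValue());
            if (score > bestScore) { bestScore = score; best = e.getKey(); }
        }
        return best;
    }

    // Cosine similarity: dot(a, b) / (|a| * |b|).
    private static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }
}
```

In a RAG system the stored vectors would be document embeddings and the query vector the embedded user question; the lookup above is what "fast similarity lookup" is accelerating.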

Earn a Hazelcast Digital Badge!

Complete the training to receive a Hazelcast Digital Badge to showcase on LinkedIn or your resume.

Duration: 90 minutes

To maximize interaction and hands-on learning, this workshop is offered live only and will not be recorded.

Most payment systems weren’t built for real-time. They rely on batches, relational state, and tightly coupled cores.

Even “modernized” ones often fail under load—showing latency spikes, stale data, or scaling limits.

In this 90-minute workshop, Hazelcast architects walk through how to design and evolve payment systems to handle real-time transaction volumes at scale. The session uses a live ISO 20022–based reference implementation to show how distributed in-memory data and streaming pipelines remove bottlenecks without sacrificing consistency or operational visibility.

You’ll see how a cache-first, event-driven architecture supports instant settlement, fraud detection, and liquidity management while integrating with existing systems. Topics include balancing streaming and transactional data, coordinating state, and maintaining low latency.

Observability is built in, with Prometheus, Grafana, and SQL over live and static data for real-time insight.

The session ends with an architectural walkthrough linking design patterns to real performance gains.

Hazelcast featuring Forrester

Modernizing Real-Time Data to Power AI and Apps

Enterprises need faster decisions and better customer experiences, yet shifting priorities, data silos, legacy systems, and unclear ROI keep slowing progress. The critical enabler is real-time data—delivered with low latency, high reliability, and scalability—to fuel both AI and applications.

Join guest speaker Noel Yuhanna (VP & Principal Analyst, Forrester), and Ashish Sahu (Head of Product Marketing, Hazelcast) for a presentation and demo on making real-time data practical. We’ll cut through buzzwords, share field-tested patterns, and outline modernization approaches that shrink time-to-action from minutes to milliseconds—without risky big-bang rewrites.

We’ll address:

  • How the market and vendors define “real-time” today
  • The most common bottlenecks to extracting real-time value from data
  • Where real-time data fits in modern AI and applications—and how this differs from past approaches
  • How to deliver a real-time experience across continents while managing sovereignty, consistency, and control

Who should attend: Data, platform, and application leaders and architects seeking actionable guidance to modernize data architectures and reliably power AI and high-impact apps with real-time data.

Watch Now

Futureproofing Digital Banking: How ING Türkiye Scaled Its Microservices with Hazelcast

25 million

daily transactions processed

Challenge

  • ING Türkiye’s migration from monolithic C# applications to 300 Java-based microservices caused performance bottlenecks in its centralized Oracle database.
  • The growing volume of over 25 million daily transactions introduced latency, limiting scalability and responsiveness.
  • The bank needed a modern, future-proof infrastructure that could support rapid innovation while maintaining high reliability and compliance.

Solution

  • ING Türkiye implemented Hazelcast Platform Enterprise Edition as a centralized, distributed in-memory caching and computing layer.
  • The platform decoupled microservices from the Oracle database, enabling high-speed data access, reduced latency, and resilient session management.
  • Hazelcast’s features—such as distributed IMaps, FencedLock for consistency, and dynamic configuration via message bus—optimized both performance and operational control.

Customer Success

  • Superior Experience: Customers now enjoy fast, seamless, and reliable mobile banking with drastically reduced latency.
  • Scalability & Agility: The new architecture easily handles growing transaction volumes, supporting independent scaling of microservices.
  • Operational Efficiency: The solution lowered total cost of ownership, empowered developers, and ensured all services meet SLA requirements.

“Hazelcast Enterprise enabled us to modernize our core banking systems from a monolith to microservices—without disruption. By decoupling critical reads from Oracle and serving data at in-memory speed, we eliminated latency and unlocked independent scaling across 300 microservices, turning our architecture into a growth engine that keeps the digital banking experience fast even at over 25 million daily transactions—and gives our teams a platform they can build on quickly.”

— Doğukan Guran, Chapter Lead of Core Frameworks, Platform, and BPM, ING Türkiye

Industry

Financial Services

Year Founded

1984

Product

Hazelcast Platform Enterprise Edition

Result

Independent scaling across 300 microservices

Overview

ING Türkiye, a leader in digital banking innovation, embarked on a strategic modernization initiative to migrate monolithic applications to a Java-based microservices architecture. This transition, aimed at increasing agility and scalability, introduced significant performance challenges with their existing Oracle database infrastructure. By implementing Hazelcast Platform as a high-speed, centralized caching layer, ING Türkiye successfully eliminated system latency, enabled massive scalability for over 25 million daily transactions, and significantly enhanced its mobile banking user experience, all while achieving a lower total cost of ownership (TCO).

Business Challenge

As a digital-first institution, ING Türkiye faced rapidly increasing traffic to its mobile and online banking services. Their primary business challenge was to enhance the performance, scalability, and operational efficiency of their systems to support this growth. They needed to process over 25 million business transactions daily with minimal latency to maintain a competitive edge and deliver a superior, responsive customer experience. The goal was to build a future-proof infrastructure that could scale effortlessly with evolving customer demands and accelerate the delivery of new banking services.

Technical Challenge

The strategic migration from C# monolithic applications to 300 Java microservices deployed in Red Hat containers was designed to boost scalability. However, it created a new set of technical hurdles. The 55 development teams found that the new architecture funneled an overwhelming number of requests to the centralized Oracle database, which was used for authorization, session management, and fast data reads. This resulted in significant performance bottlenecks, including high database access latency and slow write-back operations. The Oracle database became the primary limiting factor, preventing the microservices from realizing their full potential for speed and independent scaling.

Technical Solution

To decouple their microservices from the database and resolve the performance issues, ING Türkiye's development teams benchmarked leading caching solutions, including Hazelcast and Redis. They selected Hazelcast Platform Enterprise Edition based on its superior performance at linear scale, ease of implementation, high-quality documentation, seamless integration with OpenShift, and a significantly lighter resource footprint. Hazelcast was implemented as a centralized, distributed in-memory computing and caching layer, providing the high-speed data access required by the 300 microservices. This allowed them to abstract away the Oracle database dependency for read-heavy operations and provide a resilient foundation for their applications.

“After benchmarking leading caching options—including Redis—we chose Hazelcast Enterprise for its superior performance at linear scale, developer experience, seamless OpenShift integration, and lighter footprint. We implemented Hazelcast as a centralized, distributed cache to give our 300 microservices high-speed data access and abstract read-heavy operations away from Oracle. With read-through caching and the CP Subsystem, we resolved our latency hotspots, guaranteeing the strict consistency our banking flows require—without custom code.”

— Efecan Ahmetoglu, Senior Software Design Engineer, ING Türkiye

Solution Architecture

Hazelcast was deployed on VMware virtual machines, creating a central, high-speed data fabric accessible to every microservice. This architecture immediately improved data access patterns and system responsiveness.

Key implementation patterns include:

  • Stateful Session Management: To meet banking audit and traceability requirements, the system uses Hazelcast's distributed IMap to store and track user context. A unique UUID is generated at login and persisted in a Hazelcast map, allowing for seamless context tracking across the distributed environment until the user logs out.
  • High-Consistency Locking: To manage tokens from external institutions that become invalid after a single use, ING Türkiye leverages Hazelcast's FencedLock. This feature of the CP Subsystem provides strict consistency, preventing race conditions and ensuring that critical operations are performed exactly once across different pods.¹
  • Dynamic Configuration via Message Bus: To optimize resource use, the team utilized Hazelcast's distributed ITopic to implement the Spring Cloud Bus component. This allows for dynamic, runtime configuration changes—such as adjusting log levels or starting/stopping Kafka listeners—without requiring application restarts or deployments.²
  • Database Offloading with Read-Through Caching: The read-through pattern was implemented to significantly reduce the load on the Oracle database. Microservices query Hazelcast directly for data; if the data is not in the cache, Hazelcast automatically fetches it from the database, effectively abstracting the database away from the applications.³ The team also plans to adopt the write-through pattern to further enhance efficiency.
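The read-through pattern in the last bullet can be sketched in plain Java. This is a simplified, single-JVM stand-in for what Hazelcast does across a cluster — there, the loader would be a MapLoader implementation registered on the IMap, and the cache itself would be distributed. The class name and loader function here are illustrative only.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Read-through pattern: callers always ask the cache; on a miss the cache
// itself loads from the backing store, so the application never talks to
// the database directly.
class ReadThroughCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> loader;   // stands in for the database read

    ReadThroughCache(Function<K, V> loader) { this.loader = loader; }

    V get(K key) {
        // computeIfAbsent loads at most once per key, even under
        // concurrent access from many threads
        return cache.computeIfAbsent(key, loader);
    }

    int size() { return cache.size(); }
}
```

The key property is the one the case study relies on: after the first miss, every subsequent read for that key is served at in-memory speed, and the database sees only the misses.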

Business Impact

The adoption of Hazelcast Platform delivered significant and measurable business value, reinforcing ING Türkiye’s position as a digital banking leader.

  • Superior Customer Experience: By drastically reducing system latency, ING Türkiye now offers a consistently fast, reliable, and seamless mobile banking application for its millions of customers.
  • Enhanced Scalability and Agility: The infrastructure now supports growing transaction volumes with ease and provides capacity for future growth. Microservices can be scaled independently without performance degradation, increasing business agility.
  • Improved Operational Efficiency: All services are delivered within their SLAs. Development teams are empowered with a robust platform that simplifies data management and accelerates the development lifecycle.
  • Lower Total Cost of Ownership (TCO): Hazelcast reduced the dependency on the expensive Oracle database infrastructure and enabled more streamlined resource utilization, leading to significant cost optimization.

Lessons Learned

  1. Adopt an Incremental and Phased Approach:
    • Avoid "Big-Bang" Replacements: A "big-bang" replacement of core banking systems is often out of the question due to the high risk of disruption to mission-critical operations. Instead, an iterative, phased model is recommended, allowing for progress at each step while protecting existing assets.
  2. Take Security, Compliance, and Data Governance into Consideration:
    • Regulatory Compliance: New regulations, such as ISO 20022 and PSD2, demand faster data processing, real-time reporting, and greater data retention.
  3. Focus on Developer Experience and Innovation:
    • Accelerated Development Cycles: Modern databases and microservices architectures enable rapid iteration of applications, faster time-to-market for new products and services, and improved developer productivity.
  4. Reduce Complexity and TCO:
    • Modern platforms aim to consolidate multiple tools and layers into a single, integrated platform, reducing architectural complexity, operational overhead, and Total Cost of Ownership.

Summary

By implementing Hazelcast Enterprise, ING Türkiye successfully resolved the critical performance and scalability challenges within its new microservices environment. The platform’s robust, high-performance caching capabilities have future-proofed their digital banking services, ensuring they can continue to innovate and meet the demands of an expanding customer base. Looking ahead, ING Türkiye plans to expand its use of Hazelcast by transitioning batch processes to event-driven systems and exploring active-active clusters for even greater redundancy, ensuring Hazelcast remains a central pillar of its technology strategy.

¹ Navigating Consistency in Distributed Systems: Choosing the Right Trade-Offs

² Dynamic Configuration: No Restarts Required

³ Database Caching - In-Memory Access to Frequently Used Data

Unlocking Lower Latencies, Reducing Costs, and Enabling Agentic Architectures

Unlock faster, more scalable Java applications with this expert guide to Java caching. In this RefCard, Hazelcast Architect Granville Barnett introduces the fundamentals of caching and offers a deep dive into the JCache API (JSR 107)—Java’s standard caching specification.

Explore real-world examples of how to:

  • Reduce latency and operational costs with intelligent caching strategies
  • Use JCache for provider-independent, specification-compliant caching
  • Optimize deployments with embedded, client-server, or hybrid caching architectures (see diagram below)
  • Implement cache expiry, event listeners, entry processors, and JMX management
  • Avoid vendor lock-in while building resilient, agentic architectures
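One of the bullets above is cache expiry, which JCache exposes through expiry policies such as CreatedExpiryPolicy. As a minimal illustration of the created-entry semantics — not the JCache API itself — here is a single-JVM sketch where each entry remembers its creation time and reads treat entries older than the TTL as misses. Names are illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Created-entry expiry: an entry's lifetime is measured from when it was
// put, and expired entries are lazily evicted on read.
class ExpiringCache<K, V> {
    private record Entry<V>(V value, long createdAtMillis) {}

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();
    private final long ttlMillis;

    ExpiringCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    void put(K key, V value) {
        entries.put(key, new Entry<>(value, System.currentTimeMillis()));
    }

    V get(K key) {
        Entry<V> e = entries.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() - e.createdAtMillis() > ttlMillis) {
            entries.remove(key);   // expired: evict and report a miss
            return null;
        }
        return e.value();
    }
}
```

With JCache you would instead configure this declaratively on the cache and let the provider enforce it, which is exactly the provider-independence the RefCard covers.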

Caching Deployment Strategies

Embedded

Client-Server

Embedded/Client-Server

These architectures can be combined to meet your latency, scalability, and resilience goals. Whether you’re caching session data, accelerating distributed systems, or enhancing AI inference pipelines, this guide shows you how to do it right, fast, and at scale.

Download the RefCard to accelerate your Java app performance with best practices that go beyond basic caching.

In modern Java applications, distributed systems are everywhere, and so are failure modes. But how do you know when your cluster is fragile, or if it’s on the brink of breaking?

This talk dives into practical observability and resiliency techniques for distributed Java environments. We’ll highlight key patterns, failure signals, and metrics that matter, backed by a live demo using Hazelcast, Chaos-mesh, Prometheus, and Grafana.

You’ll learn:

  • Core Patterns – Leader election, partitioning, replication
  • Metrics That Matter – Backup count, member count, JVM health, Golden Signals
  • Failure-Aware Design – Resilience patterns, chaos testing principles
  • Live Demo – Deploy a working cluster, simulate node failure, and explore metrics to observe how data integrity holds as the system nears its fault tolerance threshold
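The partitioning pattern in the first bullet can be made concrete with a short sketch: a key is hashed to one of a fixed number of partitions, and each partition has an owning member. Hazelcast applies the same idea with 271 partitions by default; the round-robin owner assignment below is a deliberate simplification (real clusters maintain a migration-aware partition table), and the class name is illustrative.

```java
// Hash partitioning: deterministic key-to-partition mapping, with a
// simplistic partition-to-member assignment on top.
class Partitioner {
    private final int partitionCount;

    Partitioner(int partitionCount) { this.partitionCount = partitionCount; }

    int partitionFor(Object key) {
        // floorMod keeps the result non-negative for negative hash codes
        return Math.floorMod(key.hashCode(), partitionCount);
    }

    int ownerFor(Object key, int memberCount) {
        // round-robin: partition i belongs to member i % memberCount
        return partitionFor(key) % memberCount;
    }
}
```

Determinism is the point: every member computes the same partition for a given key, so any node can route a request without a central lookup — and the demo's failure scenarios are about what happens to a partition's backups when its owner disappears.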

Ideal for Java developers, architects, and SREs, this session blends theory, tools, and real-world failure scenarios to help you build distributed systems that stay online—even when things go wrong.

As demand for instant access, personalization, and rapid responsiveness rises, traditional databases fall short. This webinar explores how teams are adopting modern architectures for real-time systems—leveraging distributed data layers, caching, and new consistency models.

Explore the move from monolithic to distributed architectures built for low-latency, high-throughput workloads.

Learn how event-driven design, streaming, and in-memory processing enhance speed and reliability—plus tips for reducing bottlenecks, balancing consistency and availability with CQRS and event sourcing, and evolving systems without major rewrites.
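Event sourcing, mentioned above alongside CQRS, is easy to show in miniature. In this toy sketch (names illustrative, domain invented for the example) the account's balance is never stored directly; it is derived by replaying an append-only event log, which is what lets a separate read model be rebuilt at any time.

```java
import java.util.ArrayList;
import java.util.List;

// Tiny event-sourcing sketch: state = fold over the event log.
class Account {
    interface Event {}
    record Deposited(long amount) implements Event {}
    record Withdrawn(long amount) implements Event {}

    private final List<Event> log = new ArrayList<>();

    void deposit(long amount) { log.add(new Deposited(amount)); }
    void withdraw(long amount) { log.add(new Withdrawn(amount)); }

    // Replay every event to compute the current balance.
    long balance() {
        long b = 0;
        for (Event e : log) {
            if (e instanceof Deposited d) b += d.amount();
            else if (e instanceof Withdrawn w) b -= w.amount();
        }
        return b;
    }
}
```

In a CQRS split, the same log would feed one or more read-side projections asynchronously — which is precisely where the eventual-consistency trade-offs discussed in the webinar come from.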

Whether you’re an enterprise architect, backend engineer, or system designer, you’ll gain insights into building data architectures that go beyond traditional database limits.

Key Takeaways:

  • Real-time architecture patterns for low latency and high throughput
  • Strategies for decoupling and eventual consistency
  • The role of distributed layers, caching, and in-memory tech
  • Incremental modernization of legacy systems
  • Insights and anti-patterns from real-world systems