Operationalizing Machine Learning with Java Microservices and Stream Processing
Are you ready to take your algorithms to the next step and get them working on real-world data in real time? We will walk through an architecture for deploying a machine learning model for inference on an open-source platform designed for extremely high throughput and low latency.
We’ll demonstrate a working example of a machine learning model applied to streaming data within the Hazelcast In-Memory Computing Platform, a powerful technology for distributed in-memory processing. We will also touch on important considerations for deployments that need the flexibility to run either on-premises or in the cloud.
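The core idea can be previewed with a minimal, platform-agnostic sketch: a pre-trained model is reduced to a pure scoring function that a stream-processing stage maps over incoming events. The weights and feature values below are hypothetical, and plain Java streams stand in for the Hazelcast pipeline used in the actual demonstration.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of streaming model inference: each event in the stream is
// scored by a pre-trained logistic-regression model, the same shape
// of "map" stage a distributed stream processor would execute.
public class StreamingInference {

    // Hypothetical pre-trained weights: bias first, then one per feature.
    static final double[] WEIGHTS = {-1.0, 0.8, 0.5};

    // Pure scoring function: dot product plus bias, passed through a sigmoid.
    static double score(double[] features) {
        double z = WEIGHTS[0];
        for (int i = 0; i < features.length; i++) {
            z += WEIGHTS[i + 1] * features[i];
        }
        return 1.0 / (1.0 + Math.exp(-z));
    }

    public static void main(String[] args) {
        // Simulated event stream; in the real deployment these events
        // would arrive from a distributed streaming source.
        List<double[]> events = List.of(
                new double[]{1.2, 0.7},
                new double[]{0.1, 0.3}
        );

        // The inference step is just a map over the stream of events.
        List<Double> scores = events.stream()
                .map(StreamingInference::score)
                .collect(Collectors.toList());

        scores.forEach(s -> System.out.printf("score=%.3f%n", s));
    }
}
```

Because the scoring function is stateless and side-effect free, it can be dropped into a distributed pipeline stage unchanged, which is what makes this architecture portable between on-premises and cloud deployments.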