What Is an Inference Runner?
Relevant Resources

Spotlight on Stream Processing and Machine Learning
David Brimley, Financial Services Industry Consultant at Hazelcast, speaks to FinextraTV about what financial services firms are doing with machine learning and what they should consider as they progress through their machine learning journey. He explains how streaming data fits into financial services, how firms can ease into streaming without completely re-architecting their systems, and why financial services technologists need to keep an eye on developments in in-memory computing, cloud, and containerization.

Tech Talk: Machine Learning at Scale Using Distributed Stream Processing
In this talk, Marko will show an approach that lets you write a low-latency, auto-parallelized, distributed stream processing pipeline in Java that seamlessly integrates a data scientist's work, taken almost unchanged from their Python development environment. The talk includes a live demo at the command line and a walkthrough of some Python and Java code snippets.
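
To give a feel for the kind of pipeline the talk describes, here is a minimal sketch assuming Hazelcast Jet with its Python module (hazelcast-jet-python): a Java stream pipeline hands batches of records to a Python handler that wraps the data scientist's model. The test source, the `python` base directory, and the `infer` handler module (a file defining `transform_list(items)`) are illustrative assumptions, not details taken from the talk itself.

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;
import com.hazelcast.jet.python.PythonServiceConfig;

import static com.hazelcast.jet.python.PythonTransforms.mapUsingPython;

public class PythonInferencePipeline {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        // Stream of input records as strings; a test source stands in for Kafka, IMap, etc.
        p.readFrom(TestSources.itemStream(10, (ts, seq) -> "record-" + seq))
         .withoutTimestamps()
         // Hand batches to the Python handler: ./python/infer.py is assumed to define
         // transform_list(items), returning one prediction string per input item.
         .apply(mapUsingPython(new PythonServiceConfig()
                 .setBaseDir("python")
                 .setHandlerModule("infer")))
         .setLocalParallelism(2)
         .writeTo(Sinks.logger());

        JetInstance jet = Jet.bootstrappedInstance();
        // Unbounded source, so join() blocks until the job is cancelled.
        jet.newJob(p).join();
    }
}
```

The point of the design is that the Python side stays plain Python: the data scientist's module only needs to expose a list-in, list-out function, while parallelization and distribution are handled by the Java pipeline.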

Key Considerations for Optimal Machine Learning Deployments
Machine learning (ML) is being used almost everywhere, but that ubiquity has not brought simplicity. Looking only at the operationalization side of ML, deploying models into production, especially in real-time environments, can be inefficient and time-consuming, and common approaches may not perform or scale to the levels needed. These challenges are especially acute for businesses that have not properly planned their data science initiatives.

Operationalizing Machine Learning with Java Microservices and Stream Processing
Are you ready to take your algorithms to the next step and get them working on real-world data in real time? We will walk through an architecture for deploying a machine learning model for inference within an open source platform designed for extremely high throughput and low latency.
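
As a rough illustration of the microservice flavor of such an architecture, the sketch below assumes a Hazelcast Jet pipeline that calls out to a separately deployed model-serving microservice over HTTP. The `model-service:8080/score` endpoint and the JSON wire format are hypothetical placeholders for whatever model server a real deployment uses; the non-blocking call pattern is what keeps throughput high.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.ServiceFactories;
import com.hazelcast.jet.pipeline.ServiceFactory;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;

public class InferenceMicroservicePipeline {
    // Hypothetical scoring endpoint exposed by the model-serving microservice.
    private static final String SCORING_URL = "http://model-service:8080/score";

    public static void main(String[] args) {
        // One HTTP client shared by all parallel processors on a cluster member.
        ServiceFactory<?, HttpClient> httpClient =
                ServiceFactories.sharedService(ctx -> HttpClient.newHttpClient());

        Pipeline p = Pipeline.create();
        p.readFrom(TestSources.itemStream(100, (ts, seq) -> "{\"feature\":" + seq + "}"))
         .withoutTimestamps()
         // Non-blocking call to the model service: the pipeline keeps many requests
         // in flight instead of waiting for each prediction to come back.
         .mapUsingServiceAsync(httpClient, (client, json) -> {
             HttpRequest request = HttpRequest.newBuilder(URI.create(SCORING_URL))
                     .header("Content-Type", "application/json")
                     .POST(HttpRequest.BodyPublishers.ofString(json))
                     .build();
             return client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                          .thenApply(HttpResponse::body); // prediction returned as JSON
         })
         .writeTo(Sinks.logger());

        JetInstance jet = Jet.bootstrappedInstance();
        jet.newJob(p).join();
    }
}
```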