
Machine Learning Inference at Scale with Python and Stream Processing

Webinar

There is frequently an “impedance mismatch” between developing and training a machine learning model (a data scientist’s job) and then deploying that model to perform at scale in a production environment (a data engineer’s job). How do you make a trained prediction model usable in real time, while the user is interacting with your software? What does it take to go from fast trial-and-error runs on historical data to models that perform at production scale, in real time?

In this talk we will show you how to write a low-latency, high-throughput distributed stream processing pipeline (in Java) that uses a model developed in Python.
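
For a flavor of what the session walks through, here is a minimal sketch of such a pipeline built with Hazelcast Jet's Python integration. It assumes Jet 4.x with the optional hazelcast-jet-python module on the classpath, plus an illustrative handler module (model_handler.py exposing a batch-scoring function transform_list(items)) that wraps the trained Python model; the source, sink, and paths are placeholders, not the exact code presented in the webinar.

```java
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.test.TestSources;
import com.hazelcast.jet.python.PythonServiceConfig;

import static com.hazelcast.jet.python.PythonTransforms.mapUsingPython;

public class InferencePipeline {
    public static void main(String[] args) {
        Pipeline p = Pipeline.create();

        p.readFrom(TestSources.itemStream(100))       // placeholder test source; swap in Kafka, IMap journal, etc.
         .withIngestionTimestamps()
         .map(Object::toString)                       // serialize each event to a String for the Python handler
         .apply(mapUsingPython(new PythonServiceConfig()
                 .setBaseDir("/opt/model")            // assumed directory holding model_handler.py and model artifacts
                 .setHandlerModule("model_handler"))) // module whose transform_list(items) scores a batch of events
         .setLocalParallelism(2)                      // Python worker processes per cluster member
         .writeTo(Sinks.logger());                    // placeholder sink; write to an IMap or Kafka topic in production

        JetInstance jet = Jet.bootstrappedInstance();
        try {
            jet.newJob(p).join();
        } finally {
            jet.shutdown();
        }
    }
}
```

Batching events through transform_list amortizes the cost of crossing the Java-to-Python boundary, which is how the pipeline keeps latency low while the model itself stays in the data scientist's language.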

Presented By:

Mike Yawn
Senior Solutions Architect
Hazelcast

Mike Yawn is a Senior Solutions Architect with Hazelcast, the provider of the leading operational In-Memory Computing Platform. In that role, he provides pre-sales consulting on Hazelcast IMDG, Hazelcast Jet, and Hazelcast Cloud solutions to commercial customers. Prior to joining Hazelcast, Mike performed a number of consulting and R&D functions with HP, eBay, Oracle, and EMC, supporting customers in manufacturing, banking, healthcare, and other industries.
