Applied Machine Learning in Real-Time with Distributed, Scalable, In-Memory Technology

Webinar

Brought to you by: Hazelcast and Intel

Machine learning (ML) brings exciting new opportunities, but applying the technology in production workloads has been cumbersome, time-consuming, and error-prone. In parallel, data generation patterns have evolved into streams of discrete events that require high-speed processing at extremely low response latencies. Meeting these demands requires scalable, high-performance stream processing, distributed application of ML technology, and dynamically scalable hardware resources.

In this webinar, learn how the Hazelcast In-Memory Computing Platform enables applying ML algorithms (in Java, Python, or C++) to real-time data streams with a distributed, cooperative, low-latency architecture. Additionally, we'll examine how Intel's new 2nd generation processors, coupled with Intel Optane memory capabilities, are expanding the possibilities for in-memory platform applications.
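As a flavor of what such a pipeline can look like in code, below is a minimal Java sketch using Hazelcast Jet's pipeline API. The map name "events", the threshold-based score() method, and the embedded Jet member are illustrative assumptions for this sketch, not the specific architecture presented in the webinar.

import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.pipeline.JournalInitialPosition;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;
import com.hazelcast.jet.pipeline.Sources;

public class StreamScoringJob {

    public static void main(String[] args) {
        Pipeline pipeline = Pipeline.create();

        // Consume discrete events from the journal of an IMap named "events"
        // (hypothetical name; the map's event journal must be enabled in config).
        pipeline.readFrom(Sources.<String, Double>mapJournal(
                        "events", JournalInitialPosition.START_FROM_CURRENT))
                .withIngestionTimestamps()
                // Score each event as it arrives; score() is a stand-in for
                // whatever trained model the pipeline would really invoke.
                .map(entry -> score(entry.getValue()))
                .writeTo(Sinks.logger());

        // Start an embedded Jet member and run the streaming job.
        JetInstance jet = Jet.newJetInstance();
        jet.newJob(pipeline).join();
    }

    // Placeholder "model": a trivial threshold classifier.
    private static String score(double reading) {
        return reading > 0.5 ? "anomaly" : "normal";
    }
}

In a production deployment, the scoring step would typically call out to a trained model rather than a hard-coded threshold, and the job would be submitted to a Jet cluster instead of running on an embedded member.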

Presented By:

Scott McMahon
Technical Director & Team Lead, Americas
Hazelcast

Scott McMahon is the Technical Director & Team Lead, Americas, at Hazelcast®, with over 20 years of software development and enterprise consulting experience. Before specializing in Hazelcast In-Memory Data Grid technology, he built big data analytics platforms and business process management systems for many of the world’s leading corporations. He currently lives in Portland, Oregon, and when not working on computer systems, he enjoys getting outdoors and having fun with his family.

Mel Beckman
Contributing Editor
ITPro Today

Mel Beckman has written computer-related features and product reviews for 20+ years. His focus areas include IT, data centers, networking, and communications. As a global thought leader, Mel has presented hundreds of seminars on computing technology throughout the US, Europe, and Asia.
