Optimizing your machine learning pipeline with Druid

Presented by

Vijay Narayanan | Senior Sales Engineer | Imply

About this talk

Organizations apply hundreds to thousands of automated workflows and machine learning models to decide whether to block, accept, or watch a given transaction or event. Even a slight change in one of these workflows can shift decision rates or introduce anomalies into your ML models. Anomalies in decision rates can be caused by internal changes in models and system components, or by changes on the customer's side, such as integration or decision behavior. Sometimes a change in decision rates is desirable, for example during a fraud attack, entry into a new market, or a seasonal event, but sometimes it is not. It is therefore essential to identify and triage changes in decision rates immediately to ensure users get accurate results. Imply provides a highly scalable platform for real-time monitoring of ML models, allowing users to accurately identify anomalies and generate automated alerts in real time.

Key takeaways: how you can optimize your machine learning pipeline with Imply:
- Monitor machine learning as an application using OpenTelemetry and Imply Pivot
- Improve model accuracy and training time using Imply as a source of training data
- Use Imply in a model prediction pipeline
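As a rough illustration of the second takeaway, pulling training data out of Druid can be done through its SQL HTTP API (`POST /druid/v2/sql` on the router, which is Druid's standard SQL endpoint). The sketch below is a minimal example, not the presenter's implementation: the table name `transactions`, its columns, and the router URL are assumptions for illustration.

```python
import json

# Hypothetical Druid router URL; adjust host/port for your deployment.
DRUID_SQL_URL = "http://localhost:8888/druid/v2/sql"


def build_training_query(table, start, end):
    """Build a Druid SQL request payload selecting labeled examples.

    Column names (amount, merchant_category, decision) are assumed for
    illustration; __time is Druid's standard primary timestamp column.
    """
    sql = (
        f'SELECT amount, merchant_category, decision '
        f'FROM "{table}" '
        f"WHERE __time >= TIMESTAMP '{start}' AND __time < TIMESTAMP '{end}'"
    )
    # "object" result format returns one JSON object per row.
    return {"query": sql, "resultFormat": "object"}


def rows_to_features(rows):
    """Turn Druid result rows (a list of dicts) into (X, y) for training."""
    X = [[r["amount"], r["merchant_category"]] for r in rows]
    y = [1 if r["decision"] == "block" else 0 for r in rows]
    return X, y


# The payload would be sent to DRUID_SQL_URL as a JSON POST body, e.g. with
# urllib.request or requests; the HTTP call is omitted here so the sketch
# stays self-contained.
payload = build_training_query("transactions",
                               "2023-01-01 00:00:00",
                               "2023-02-01 00:00:00")
print(json.dumps(payload, indent=2))
```

The same query shape works for the prediction-pipeline case: narrow the time window to recent events and feed the resulting feature rows to the model instead of a trainer.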

Imply, founded by the original creators of Apache Druid®, develops an innovative database purpose-built for modern analytics applications. Imply is driving a new era in data analytics, where interactive queries over real-time and historical data at unlimited scale combine with the best price/performance to realize the full potential of data.