Samit Thange, Domino & Donald Miner, Miner & Kasch
Many data scientists and their organizations have hundreds of models running in production, interacting with the real world, yet are not keeping track of how those models perform on live data. Bias and variance can creep into models over time, and we should know when that happens. The world changes, often slowly, and most models perform worse as time goes on. Ensuring everything is working well is a huge undertaking, and unfortunately, many organizations simply ignore the problem. Donald Miner, drawing on his prior experience as a data scientist, engineer, and CTO, details how to track machine learning models in production to ensure model reliability, consistency, and performance into the future.
In this webinar, Miner covers:
-why you should invest time in monitoring your machine learning models.
-real-world anecdotes about some of the dangers of not paying attention to how a model’s performance can change over time.
-metrics you should gather for each model and what they tell you, including a list of “vitals,” the value each provides, and how to measure it.
-vitals that include classification label distribution over time, distribution of regression results, measurement of bias, measurement of variance, change in output from previous models, and changes in accuracy over time.
-implementation strategies for keeping watch on model drift over time.
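The talk itself doesn't prescribe code, but one of the vitals above, classification label distribution over time, can be sketched with a Population Stability Index comparison between a baseline window and live predictions. The metric choice, the example labels, and the ~0.2 alert threshold here are illustrative assumptions, not taken from the talk:

```python
import math
from collections import Counter

def label_distribution(labels):
    """Return each label's share of the total."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def psi(baseline, live, eps=1e-6):
    """Population Stability Index between two label distributions.
    Values above roughly 0.2 are commonly read as significant drift."""
    score = 0.0
    for label in set(baseline) | set(live):
        b = baseline.get(label, 0.0) + eps  # eps guards against log(0)
        l = live.get(label, 0.0) + eps
        score += (l - b) * math.log(l / b)
    return score

# Baseline: labels predicted during validation; live: recent production output.
baseline = label_distribution(["spam"] * 200 + ["ham"] * 800)
shifted  = label_distribution(["spam"] * 450 + ["ham"] * 550)
stable   = label_distribution(["spam"] * 210 + ["ham"] * 790)

print(round(psi(baseline, shifted), 3))  # large shift in spam rate -> high PSI
print(round(psi(baseline, stable), 3))   # small shift -> near zero
```

A scheduled job could compute this over each day's predictions and page someone when the score crosses the chosen threshold.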
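In the same spirit, one minimal implementation strategy for watching accuracy change over time is a rolling window over recently labeled production examples. The class name, window size, and tolerance below are hypothetical choices for illustration:

```python
from collections import deque

class RollingAccuracyMonitor:
    """Tracks accuracy over a sliding window of recent labeled examples
    and flags when it falls below a baseline by a chosen margin."""

    def __init__(self, baseline_accuracy, window=500, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # True/False per prediction

    def record(self, prediction, truth):
        self.outcomes.append(prediction == truth)

    @property
    def accuracy(self):
        if not self.outcomes:
            return None
        return sum(self.outcomes) / len(self.outcomes)

    def drifted(self):
        acc = self.accuracy
        return acc is not None and acc < self.baseline - self.tolerance

monitor = RollingAccuracyMonitor(baseline_accuracy=0.90, window=100)
for i in range(100):
    # Simulate a model that is now right only 80% of the time.
    monitor.record(prediction=1, truth=1 if i % 5 else 0)
print(monitor.accuracy, monitor.drifted())
```

Ground-truth labels often arrive late in production, so in practice `record` would be fed by whatever delayed feedback loop the system has.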