The use of artificial intelligence (AI) and machine learning (ML) models has become common practice in many parts of the economy, and more sectors will embrace automation and data-driven decision making over the coming years. While these predictive systems can be quite accurate, they have historically been treated as inscrutable black boxes that produce only numeric predictions with no accompanying explanations. Unfortunately, recent studies and recent events have drawn attention to mathematical and sociological flaws in prominent AI and ML systems, and practitioners usually lack the right tools to pry open machine learning black boxes and debug them.
This presentation shows how to use Driverless AI to increase transparency, accountability, and trustworthiness in machine learning models. If you are a data scientist or analyst who wants to explain a machine learning model to customers or managers, or if you have concerns about documentation, validation, or regulatory requirements, then this presentation is for you!
What you will learn:
- How to build interpretable models in Driverless AI
- How to explain models in Driverless AI
- How to evaluate fairness of models in Driverless AI
- How to debug models in Driverless AI
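To give a flavor of the fairness evaluation topic above, here is a minimal, hypothetical sketch of the widely used "four-fifths rule" disparate impact check. This is a generic illustration in plain Python, not Driverless AI's API; the function name and toy data are invented for the example.

```python
# Hypothetical illustration of a disparate impact check (the "four-fifths rule").
# Not Driverless AI code; names and data are invented for this sketch.

def disparate_impact_ratio(approved, group):
    """Ratio of favorable-outcome rates: protected group vs. reference group."""
    protected = [a for a, g in zip(approved, group) if g == "protected"]
    reference = [a for a, g in zip(approved, group) if g == "reference"]
    rate_protected = sum(protected) / len(protected)
    rate_reference = sum(reference) / len(reference)
    return rate_protected / rate_reference

# Toy data: 1 = model approved, 0 = model denied
approved = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
group = ["protected"] * 5 + ["reference"] * 5

ratio = disparate_impact_ratio(approved, group)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.60 / 0.80 = 0.75
# A ratio below 0.8 is a common red flag under the four-fifths rule.
```

A ratio this far below 1.0 would typically prompt a closer look at the model's inputs and decision threshold, which is the kind of debugging workflow the webinar walks through.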
This webinar is a deep dive into responsible machine learning. For an introduction to the topic, watch the webinars below first:
- Fairness in AI and Machine Learning: https://www.h2o.ai/webinars/?commid=382828
- Towards Responsible AI: https://www.h2o.ai/webinars/?commid=387075
- Key Terms and Ideas in Responsible AI: https://www.h2o.ai/webinars/?commid=395829