As fields like explainable AI and ethical AI have continued to develop in academia and industry, we have seen a host of new methodologies that improve our ability to trust and understand our machine learning and deep learning models. Along the way, several buzzwords have emerged, such as responsible AI, explainable AI (XAI), machine learning interpretability (MLI), and ethical AI.
In this webinar, we will explore and define these emerging terms as H2O.ai sees them, in hopes of fostering discussion between machine learning practitioners and researchers and the many other professionals (e.g., social scientists, lawyers, risk specialists) it takes to make machine learning projects successful. We'll close by discussing responsible machine learning as an umbrella term and by asking for your feedback.
What you'll learn:
- New methodologies to improve our ability to trust and understand our machine learning and deep learning models
- New terms and ideas emerging from the explainable AI and ethical AI fields
- The concept of responsible AI as an umbrella term for these new terms and ideas
Speakers:
- Benjamin Cox, Product Marketing Manager at H2O.ai
- Patrick Hall, Advisory Consultant at H2O.ai