Key Terms and Ideas in Responsible AI

Presented by

Benjamin Cox, Product Marketing Manager at H2O.ai, and Patrick Hall, Advisory Consultant at H2O.ai

About this talk

As fields like explainable AI and ethical AI have continued to develop in academia and industry, a litany of new methodologies has emerged to improve our ability to trust and understand machine learning and deep learning models. As a result, several buzzwords have appeared, such as responsible AI, explainable AI (XAI), machine learning interpretability (MLI), and ethical AI. In this webinar, we will explore and define these newish terms as H2O.ai sees them, in hopes of fostering discussion between machine learning practitioners and researchers and all the diverse types of professionals (e.g., social scientists, lawyers, risk specialists) it takes to make machine learning projects successful. We'll close by discussing responsible machine learning as an umbrella term and by asking for your feedback.

What you'll learn:
- New methodologies that improve our ability to trust and understand machine learning and deep learning models
- New terms and ideas emerging from the explainable AI and ethical AI fields
- The concept of responsible AI as an umbrella term for these new terms and ideas

Presenters:
- Benjamin Cox, Product Marketing Manager at H2O.ai
- Patrick Hall, Advisory Consultant at H2O.ai
H2O.ai is the maker of H2O, the world's best machine learning platform, and Driverless AI, which automates machine learning. H2O is used by over 200,000 data scientists and more than 18,000 organizations globally. H2O Driverless AI performs automatic feature engineering and can achieve 40x speed-ups on GPUs.