AI and machine learning are front and center in the news on a daily basis. The initial approach to explaining and understanding a trained model has centered on Explainable AI: the technology-driven answer to understanding and trusting a model, using techniques such as LIME, Shapley values, disparate impact analysis, and more.
H2O.ai has been innovating in Explainable AI for the last three years. Over the past year, however, it has become clear that technology-driven Explainable AI alone is not enough.
Companies, researchers, and regulators increasingly agree that Responsible AI encompasses more than the ability to understand and trust a model: it also includes addressing ethics in AI, regulation of AI, and the human side of how we move forward with AI responsibly.
Tune in to this webinar to learn about the factors that make up Responsible AI and how H2O.ai can help.