Fairness in AI and Machine Learning

Presented by

Navdeep Gill, H2O.ai

About this talk

This webinar introduces methods that can uncover discrimination in your data and predictive models, including the adverse impact ratio (AIR), false positive and false negative rates, marginal effects, and standardized mean difference. Once discrimination is identified in a model, new models with less discrimination can usually be found, typically through more judicious feature selection or hyperparameter tuning. Mitigating discrimination matters to both consumers and operators of ML: consumers deserve equitable decisions and predictions, and operators want to avoid reputational and regulatory damage. If you are a data scientist or analyst working on decisions that affect people's lives, then this presentation is for you!
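As a flavor of the metrics named above, the adverse impact ratio and per-group error rates can be sketched in a few lines of Python. This is an illustrative toy example, not code from the talk: the data is made up, and the function names and the grouping labels "A" and "B" are assumptions for demonstration.

```python
# Hypothetical toy data (not from the webinar):
# y_true = actual outcomes, y_pred = model decisions (1 = favorable),
# group = protected-class label for each record.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def selection_rate(preds):
    """Fraction of records receiving the favorable decision."""
    return sum(preds) / len(preds)

def error_rates(y_true, y_pred, group, g):
    """False positive and false negative rates for one group."""
    pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, group) if gr == g]
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    neg = sum(1 for t, _ in pairs if t == 0)
    pos = sum(1 for t, _ in pairs if t == 1)
    return (fp / neg if neg else 0.0, fn / pos if pos else 0.0)

# Adverse impact ratio: selection rate of the protected group ("B")
# divided by that of the reference group ("A"). Under the common
# four-fifths rule, AIR below 0.8 is often treated as evidence of
# adverse impact.
rate_a = selection_rate([p for p, g in zip(y_pred, group) if g == "A"])
rate_b = selection_rate([p for p, g in zip(y_pred, group) if g == "B"])
air = rate_b / rate_a

fpr_a, fnr_a = error_rates(y_true, y_pred, group, "A")
fpr_b, fnr_b = error_rates(y_true, y_pred, group, "B")
```

Comparing `fpr_a` with `fpr_b` (and likewise the false negative rates) surfaces whether the model's errors fall disproportionately on one group, which is exactly the kind of disparity the webinar's methods are designed to detect.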

H2O.ai is the maker of H2O, the world's leading open source machine learning platform, and Driverless AI, which automates machine learning. H2O is used by over 200,000 data scientists and more than 18,000 organizations globally. H2O Driverless AI performs automatic feature engineering and can achieve 40x speed-ups on GPUs.