This webinar introduces methods for uncovering discrimination in your data and predictive models, including the adverse impact ratio (AIR), false positive and false negative rates, marginal effects, and the standardized mean difference. Once discrimination is identified in a model, a less discriminatory model can usually be found, typically through more judicious feature selection or hyperparameter tuning. Mitigating discrimination in ML matters to both consumers and operators of ML: consumers deserve equitable decisions and predictions, and operators want to avoid reputational and regulatory damage.
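To make two of the named metrics concrete, here is a minimal sketch of the adverse impact ratio (the ratio of favorable-outcome rates between a protected group and a reference group, often compared against the four-fifths threshold of 0.8) and the standardized mean difference (group mean difference divided by the pooled standard deviation). The function names and the illustrative data below are hypothetical, not from the webinar itself.

```python
from statistics import mean, stdev


def adverse_impact_ratio(outcomes_protected, outcomes_reference):
    """AIR = favorable-outcome rate of protected group /
    favorable-outcome rate of reference group (outcomes are 0/1)."""
    rate_p = sum(outcomes_protected) / len(outcomes_protected)
    rate_r = sum(outcomes_reference) / len(outcomes_reference)
    return rate_p / rate_r


def standardized_mean_difference(scores_protected, scores_reference):
    """SMD = (mean_p - mean_r) / pooled standard deviation."""
    m_p, m_r = mean(scores_protected), mean(scores_reference)
    s_p, s_r = stdev(scores_protected), stdev(scores_reference)
    n_p, n_r = len(scores_protected), len(scores_reference)
    pooled = (((n_p - 1) * s_p**2 + (n_r - 1) * s_r**2)
              / (n_p + n_r - 2)) ** 0.5
    return (m_p - m_r) / pooled


# Made-up binary outcomes (1 = favorable decision)
protected = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30% favorable
reference = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]   # 60% favorable

air = adverse_impact_ratio(protected, reference)
print(f"AIR = {air:.2f}")  # 0.30 / 0.60 = 0.50, below the common 0.8 threshold
```

An AIR below 0.8 (the EEOC four-fifths rule of thumb) is a common flag for potential adverse impact, which would prompt the remediation steps described above, such as revisiting feature selection or hyperparameters.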
If you are a data scientist or analyst working on decisions that affect people's lives, then this presentation is for you!