How To Prevent Bias in Machine Learning

Presented by

Diana Kelley, Microsoft | Deveeshree Nayak, University of Washington, Tacoma | Marcae Bryant-Omosor, USAA

About this talk

Machine learning is not immune to bias; in fact, it can often amplify it. As organizations increasingly turn to ML algorithms to review vast amounts of data, achieve new efficiencies, and help make life-changing decisions, ensuring that bias does not creep into those algorithms is more important than ever. So how can we protect ML systems from the "garbage in, garbage out" syndrome? Left undetected or unchecked, feeding biased "garbage" data to self-learning systems can lead to unintended and potentially dangerous outcomes. Join us as we discuss bias in machine learning: the risks it poses, how to detect it, and how to prevent it.
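As a concrete (and hypothetical) illustration of what detecting bias can look like in practice, the short Python sketch below measures a demographic parity gap, the difference in positive-prediction rates between groups, on made-up model output. The column names and the 0.2 tolerance are assumptions for this example only, not anything prescribed in the talk.

import pandas as pd

def demographic_parity_gap(df, group_col, pred_col):
    # Positive-prediction rate per group, then the spread between the
    # most- and least-favoured groups.
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Made-up binary predictions for two groups (purely illustrative data).
preds = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   0,   0,   1],
})

gap = demographic_parity_gap(preds, "group", "approved")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # tolerance chosen only for this example
    print("Warning: positive-outcome rates differ substantially across groups.")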

As we realize the transformative power of the cloud, AI, and machine learning, has our culture of responsibility and ethics kept pace? How do we match our new technological capabilities with an understanding of how to use them well?