A wide variety of historical and socio-economic factors have led to widespread disparities across races in both access to and consumption of healthcare resources in the USA. Despite the best of intentions, racial biases have also crept into the behavior of machine learning based automated decision making systems that are trained on historical data and are being increasingly deployed in the healthcare industry. The adoption and deployment of such biased AI solutions pose significant business risks, as well as harm to the individuals who are affected by their decisions.
In this webinar, we show how different aspects of our Trusted AI Approach can help detect such biases and mitigate the associated risks, leading to solutions that are more socially responsible while retaining business value.
What you will learn from the webinar:
- the pervasive nature of bias in healthcare decision making
- the risks of employing biased AI solutions, with examples
- how to design more reliable and trustworthy AI
- worked examples of bias detection and mitigation
About the presenter:
Dr. Joydeep Ghosh (PhD ’88) is the Chief Scientific Officer at CognitiveScale, responsible for shaping corporate vision, influencing technology strategy, overseeing algorithmic science, and positioning the company for future growth. Dr. Ghosh is also the Schlumberger Centennial Chair Professor at the University of Texas (UT), Austin, with appointments across multiple colleges involved in the theory, design, and application of AI-related technologies and systems. He is the founder-director of UT-MINDS, considered among the top academic groups worldwide researching full-stack Machine Intelligence and Decision Systems.