With black-box AI, people are granted or denied loans, accepted to or rejected from universities, offered a higher or lower price on car insurance, and more, all at the hands of AI systems that typically offer no explanation. In many cases, even the humans who work at those companies can't explain the decisions.
That's why white-box AI is now getting heaps of attention. But what does it mean in practice? And how can businesses start moving away from black-box systems toward more explainable AI?
We'll delve into the three key components of white-box AI success: more collaborative data science that involves every team, from lines of business through IT; trust in data at all levels, including tools that increase transparency in data processes; and education and the democratization of data.
And we'll address why white-box AI brings business value in the first place and why it's a necessary evolution for AI. Not only do customers care about explainable AI results; internally, white-box AI is also less risky. Don't miss this VB Live event on how to move toward explainable AI.
REGISTER FOR FREE
Key Takeaways:
+ How to make the data science process collaborative across the organization
+ How to establish trust from the data all the way through the model
+ How to move your business toward data democratization
Speakers:
+ Triveni Gandhi, Data Scientist, Dataiku
+ David Fagnan, Director, Applied Science, Zillow Offers
+ Rumman Chowdhury, Global Lead for Responsible AI, Accenture Applied Intelligence
+ Seth Colaner, AI Editor, VentureBeat