Vijay Arungurikai, Red Hat | Sinead Williamson, CognitiveScale | Luke Twardowski, CognitiveScale
In the modern AI world, it's no longer enough for algorithms to achieve state-of-the-art predictive performance. Now more than ever, users must trust that models are not disproportionately disadvantaging groups, data scientists need tools to help measure AI trust, and end users need to understand why a model made the predictions it did. To address these problems, scientists at CognitiveScale have developed Cortex Certifai, a collection of tools that measure the fairness, robustness, and transparency of black-box algorithms. CognitiveScale has joined forces with Red Hat to provide Cortex Certifai's self-service toolbox on the Red Hat OpenShift® Kubernetes platform. These tools monitor, track, and compare algorithms to provide trusted AI solutions. In this talk, we will introduce the measures of fairness, explainability, and robustness that Certifai uses to assess black-box algorithms. We will show how you can use Certifai to increase trust in your machine learning models by evaluating their fairness, explainability, robustness, and accuracy. We will demonstrate how Cortex Certifai can fit into your model development and deployment pipelines, allowing you to easily assess the trustworthiness of your organization's algorithms. Certifai lets you accelerate delivery of AI/ML workflows by identifying and resolving model problems before pushing to production.