IBMDeveloperMEA / AI-Integrity-Improving-AI-models-with-Cortex-Certifai

Explaining AI models is a difficult task that Cortex Certifai makes simpler. It evaluates AI models for robustness, fairness, and explainability, and lets users compare different models or model versions on these qualities. Certifai can be applied to any black-box model, including machine learning and predictive models, and works with a variety of input datasets.
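To make "black-box model" concrete: a scanner-style tool like Certifai only needs a prediction interface, not the model's internals. The sketch below is a generic illustration, not the Certifai SDK; the dataset, model, and predict wrapper are assumptions chosen for a self-contained example.

```python
# Minimal sketch of the black-box contract an evaluator relies on:
# rows of features in, predictions out, with no access to weights or training code.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any trained model works here; logistic regression is just a stand-in.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

def predict(rows: np.ndarray) -> np.ndarray:
    """The only interface a black-box evaluator sees."""
    return model.predict(rows)

# A tool evaluating robustness, fairness, or explainability can probe this
# function with perturbed or counterfactual inputs and compare model versions
# purely from their outputs.
print(predict(X_test[:5]))
```

Because the evaluator depends only on the predict function, swapping in a different model or model version requires no changes to the evaluation setup, which is what enables the model-to-model comparisons described above.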

