How do you make AI Explainable & Transparent?

11+

Hours of content

46

Speakers

3

Interviews

Overview

Can you trust your AI?

Explainable AI is an important and growing field in machine learning that aims to explain how the black-box decisions of AI systems are made. We must strive to detect flaws in the models we use, as well as biases in the data, in order to build user trust.

How can we increase the visibility and knowledge of how AI systems make decisions?
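One common starting point is to measure which inputs a trained model actually relies on. The sketch below uses permutation feature importance from scikit-learn, a standard model-agnostic technique: it shuffles one feature at a time on held-out data and records how much the model's accuracy drops. The dataset and model here are illustrative assumptions, not part of the event content.

```python
# A minimal sketch of one transparency technique: permutation feature importance.
# Assumptions: scikit-learn is installed; the dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the drop in accuracy;
# larger drops indicate features the model depends on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Techniques like this do not fully open the black box, but they give users and auditors a concrete, repeatable way to see which inputs drive a model's behaviour.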
