Can you trust your AI?
Explainable AI is an important and growing field in machine learning that aims to explain how AI systems arrive at their black-box decisions. To earn user trust, we must be able to detect flaws in the models we deploy, as well as biases in the data used to train them.
How can we increase visibility into, and understanding of, how AI systems make decisions?