The Future of Explainable AI: What is the Business Impact of XAI, Accountability, and Transparency?

AI is increasingly being applied to business-critical use cases across industries. As AI moves from the fringe to the mainstream, the importance of deploying it responsibly has never been greater. Explainability is one of the most effective ways to ensure AI solutions are transparent, accountable, responsible, fair, and ethical across use cases and industries. In this panel discussion, our panel of industry and research experts will shed light on the state of Explainable AI today and the key considerations for success moving forward.

Our panel of experts will discuss:

- The latest in XAI research: updates from researchers on the current state of Explainable AI

- Industry trends and applications: updates from AI business leaders on XAI applications in industry, with examples of how their organizations have approached XAI

- Where XAI is headed: discussion of what the future holds for XAI

Merve Hickok, Founder at AIEthicist

Merve Hickok is the founder of AIEthicist and Lighthouse Career Consulting. She is an independent consultant and trainer focused on capacity building in ethical and responsible AI and the governance of AI systems. Merve is a Senior Researcher at the Center for AI & Digital Policy; a founding editorial board member of the Springer Nature AI and Ethics journal; one of the 100 Brilliant Women in AI Ethics 2021; a Fellow at ForHumanity Center; a regional lead for the Women in AI Ethics Collective; and a member of a number of IEEE and IEC working groups that set global standards for autonomous systems.

Mary Reagan, Data Scientist at Fiddler

Mary is currently a Data Scientist at Fiddler. She completed her PhD in Mineral Physics at Stanford University, where her thesis focused on understanding the effects of high pressures and temperatures on the spin state, deformation, and isotope fractionation of iron compounds. She joins us from DataKind, where she partnered with the NGO Humans Against Trafficking to develop an ML model that identifies teens vulnerable to being groomed for trafficking through social media.

Sara Hooker, Artificial Intelligence Resident at Google Brain

Sara Hooker is an Artificial Intelligence Resident at Google Brain, where she does deep learning research on model compression and reliable explanations of black-box model predictions. Her main research interests are interpretability, model compression, and security. In 2014, she founded Delta Analytics, a non-profit dedicated to building the technical capacity of non-profits around the world to use machine learning for good. She spent her childhood in Africa, growing up in South Africa, Swaziland, Mozambique, Lesotho, and Kenya. Her family now lives in Monrovia, Liberia.

Narine Kokhlikyan, Research Scientist at Facebook

Narine is a Research Scientist at Facebook AI focusing on explainable AI. She is the main creator of Captum, the PyTorch library for model interpretability. Narine studied at the Karlsruhe Institute of Technology in Germany and was a Research Visitor at Carnegie Mellon University. Her research focuses on explainable AI, cognitive systems, and natural language processing. She is also an enthusiastic contributor to open-source software packages such as scikit-learn and Apache Spark.
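For readers unfamiliar with Captum, here is a minimal sketch of how its attribution API is typically used. The toy model and input below are hypothetical placeholders for any trained PyTorch classifier; only the Captum calls themselves reflect the library's actual interface.

```python
# Minimal sketch of Captum's attribution API (toy model and input are
# hypothetical stand-ins for a real trained classifier).
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Stand-in classifier: 4 input features, 3 output classes.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

inputs = torch.randn(1, 4)

# Attribute the class-0 score back to the input features.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, target=0, return_convergence_delta=True)
print(attributions)  # per-feature importance scores
print(delta)         # convergence delta of the integral approximation
```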
