This video is part of the AI for Government Summit, Toronto 2018.

AI Policy & Ethical Dilemmas

What are the major policy and ethical risks of AI systems? How is AI development unique, and what are the top challenges in governing it? How should diverse stakeholders trade off beneficial AI innovation (e.g. AI for the UN SDGs) against privacy, safety, and other ethical risks? An AI system does not exist in a vacuum; it reflects society within a complex ‘socio-technical’ system. Governance of AI requires careful balancing of risks and benefits, consideration of cultural differences in ‘risk appetite’ for innovation, and global, multi-stakeholder coordination. This presentation will challenge your views on AI ethics and raise key governance and policy challenges, along with potential solutions, for steering the rise of AI to be broadly beneficial for society.

Yolanda Lannquist, AI Policy Researcher at The AI Initiative of The Future Society

Yolanda is an AI Policy Researcher at The AI Initiative of The Future Society, a think-and-do-tank incubated at Harvard Kennedy School of Government. Her research focuses on governance and policy to shape the rise of AI to benefit society broadly while mitigating societal risks, including algorithmic bias, fairness, privacy, cybersecurity, safety, and impact on employment and inclusion. Yolanda has a Master in Public Policy from Harvard University (Harvard Kennedy School) and a Bachelor’s in Economics from Columbia University in New York City. Yolanda previously advised Fortune 500 multinationals on innovation and market entry strategy as a business consultant in Copenhagen. She also worked on digital trade and regulation at the U.S. Embassy in Paris and authored several publications on global economic and labor trends at The Conference Board in New York.
