Can Internationally Accepted Principles Yield Trustworthy AI?

Thursday, June 4, 2020

11:00 AM – 12:00 PM EDT

When you use spell-check, shop on Amazon, or find a movie on Netflix, you are using AI. While AI may improve our quality of life and standard of living, poorly designed AI may undermine human autonomy, reduce employment, and yield discriminatory outcomes. To forestall such negative spillovers, in 2019 the 37 members of the OECD (and 7 non-members) approved the Principles on Artificial Intelligence, the first internationally accepted principles for AI. The principles include recommendations for policymakers and all stakeholders.

The OECD is not the only body working on such principles. The members of the G-7 are also working on mutually agreed principles to govern trustworthy, explainable AI.

In this webinar, on Thursday, June 4, from 11:00 AM to noon EDT, we will explore these principles, focusing in particular on those developed at the OECD, which our speakers helped design. We will discuss whether these principles can help all stakeholders. Moreover, we will examine whether such principles should evolve into an internationally shared rules-based system, given the wide diversity in national capacity to produce and govern AI. We will begin with a moderated discussion and then move on to your questions. Please join us. Please note that some of our speakers have changed.

Speakers:

– Ryan Budish, Assistant Research Director, Berkman Klein Center for Internet and Society, Harvard University

– Adam Murray, U.S. Diplomat, Office of International Communications and Information Policy, U.S. Department of State

– Nicolas Miailhe, Founder and President, The Future Society