AI Policy Congress – Part 1: Governance Challenges


What Is the MIT AI Policy Congress?

On January 15, 2019, the MIT Internet Policy Research Initiative (IPRI) and Quest for Intelligence (QI) hosted the first MIT AI Policy Congress. The conference brought together global policymakers, technical experts, and industry executives to discuss the impact of AI across sectors, with panels on transportation and safety, manufacturing and labor, healthcare, criminal justice and fairness, national security and defense, and international perspectives.

Each panel sought to align the benefits of AI with the obligations of public trust and help determine how to address challenges, including:

  • Transportation and safety: What is the right threshold for safety risks, who decides, and how do we build and validate these systems in the meantime?
  • Manufacturing and labor: Where is displacement happening? How do we address the issue today, and how do we plan for tomorrow?
  • Healthcare: How do we realize the benefits AI can offer to medical decision-making while keeping in mind important goals such as safety, reliability, and equity?
  • Criminal justice and fairness: How can we detect and control the proliferation of biases in new AI systems that currently reflect long-standing social and legal inequities?
  • National security and defense: Can we build arms control regimes for technologies that are partially open source?
  • International perspectives: How do international organizations build consensus on issues that are still unresolved at the national level?
  • Toward the governance of AI systems: What are the concrete steps that policymakers, technical researchers, companies, and individual citizens should take to make this agenda real?


The Global Challenges of AI Governance

As Daniel J. Weitzner, founding director of IPRI, outlined in his opening remarks, the AI governance challenge is overarching: we need to find the right mix of innovation that delivers broad economic and social benefits, alongside tools and laws that advance our values of fairness, human rights, and economic and social inclusion. The groundwork we lay today will be the foundation for our future. Stakeholders around the world have recognized this and begun to act through national strategies, principles, capacity-building programs, and research investments.

Through the AI Policy Congress, MIT facilitated international engagement on AI policy principles. Members of the Organisation for Economic Co-operation and Development (OECD) Artificial Intelligence Expert Group (AIGO) presented attendees with their draft AI policy principles, a final version of which the group plans to release in Dubai in February 2019.

While principles are a necessary start, the next step is to move on to concrete solutions, which requires both sector-specific and cross-pollinated conversations. In outlining the concept for the AI Policy Congress and its agenda, Dr. R. David Edelman, Director of the IPRI Project on Technology, the Economy, & National Security (TENS), reminded attendees that no single stakeholder group holds a monopoly on moving this agenda forward: researchers and developers need to design trustworthy AI systems, policymakers need to act on sound technical insight, and civil society advocates need to keep checking whether public expectations are being met. Nor is there any reason to accept technology as immutable; how it develops is up to the societies it is intended to serve.

Such a dynamic of invention and governance is, at its best, mutually reinforcing; we cannot pretend that simply announcing rules will, in itself, do the hard work of actually shaping the technology we use.

During his presentation on AI’s technical capabilities, Professor Antonio Torralba, director of QI, focused primarily on the challenges of user trust and of access to data in a rapidly growing field. To give the audience a sense of the pace, Torralba explained that although deep neural networks began performing at a notable level only a few years ago, error rates on machine learning tasks have fallen steadily since they were first deployed; by 2015, an AI system had surpassed average human performance on image classification. The amount of data used today to train AI systems is comparable to the cumulative data a toddler is exposed to over its lifetime.

Given the pace of advances in AI and machine learning, the potential for impact across sectors is great, but access to, and availability of, data remains a challenge. Some areas of research, such as natural language processing (NLP), have far more data available than others, such as healthcare. In part, researchers may struggle to access data because of existing legal and regulatory restrictions; healthcare data is more sensitive and more heavily protected than language data. Progress in each sector tracks the amount of data that is accessible and available.

Challenges remain even after data are made available. Problems in data collection introduce bias, and bias in turn undermines both user trust and explainability. Understanding a system’s rationale is an important part of establishing user trust.

Today, for explainability to work, the representation of the data itself may have to serve as the explanation. This leaves researchers and policymakers grappling with how to balance the descriptive and normative views that can emerge from data collection. And while researchers are working on ways to dissect AI systems to make them more meaningful, we may be faced, at least for now, with a trade-off between accuracy and explainability.
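To make that trade-off concrete, here is a minimal sketch in Python with scikit-learn. The dataset and models are our own illustrative assumptions, not examples presented at the Congress: it pits a depth-two decision tree, whose full decision logic can be printed and read, against a gradient-boosted ensemble of hundreds of trees, which is typically more accurate but offers no comparably compact rationale.

```python
# Illustrative sketch of the accuracy/explainability trade-off.
# Assumes scikit-learn; the bundled breast-cancer dataset is a stand-in
# for any tabular prediction task.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: a depth-2 tree, small enough to read as if-then rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)

# Higher-capacity model: a boosted ensemble with no compact explanation.
boost = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print("tree accuracy: ", tree.score(X_test, y_test))
print("boost accuracy:", boost.score(X_test, y_test))
print(export_text(tree))  # prints the tree's entire decision logic
```

On a task like this, the ensemble typically edges out the shallow tree by a few points of accuracy, yet only the tree’s reasoning can be laid out in full for a regulator or an affected individual to inspect.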

Looking to the Future

Throughout the day, the MIT AI Policy Congress sought to align the benefits of AI with the obligations of public trust and explored how we should govern AI systems — and how we should design AI systems to meet society’s needs, domestically and internationally. As MIT launches its Stephen A. Schwarzman College of Computing, IPRI will contribute, in part, by memorializing and continuing these conversations.