Suresh Venkatasubramanian - Moles, Turtles, and Snakes: On what it means to do practical AI governance research
Speaker
Suresh Venkatasubramanian
Brown University
Host
Crystal Lee
Abstract:
Over the last decade or so, we've built an impressive list of examples of AI gone wrong, and a fairly comprehensive list of reasons why. Critique of technological systems, especially those based on ML and AI, is a common and arguably necessary counterweight to the hype around AI. But I'd argue that perhaps our desire to critique has gone a little too far, in that we seem unwilling to answer the question "if not this, then what" with anything but "nothing".
I think we can do better than that, while still not falling into the trap of technosolutionism. We're at a moment where the door has been opened to provide methods, tools, and general sociotechnical systems - for auditing, for measurement, and for mitigation. These will necessarily be imperfect, and will have to be iterated on and improved, again and again. But they can help us reimagine more expansively what's possible, and more importantly help show policymakers what's possible, when thinking about the next wave of AI governance work. I'll illustrate this with a few examples from my own recent research.
Bio:
Suresh Venkatasubramanian directs the Center for Technological Responsibility, Reimagination, and Redesign (CNTR) within the Data Science Institute at Brown University, and is a Professor of Computer Science and Data Science. He recently finished a stint in the Biden-Harris administration, where he served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy. In that capacity, he helped co-author the Blueprint for an AI Bill of Rights.
Prior to Brown University, Suresh was at the University of Utah, where he held the John and Marva Warnock chair as an assistant professor. He has received a CAREER award from the NSF for his work in the geometry of probability, a test-of-time award at ICDE 2017 for his work in privacy, and a KAIS Journal award for his work on auditing black-box models. His research on algorithmic fairness has received press coverage across the globe, including on NPR's Science Friday, NBC, and CNN, among other media outlets. He is a past member of the Computing Community Consortium Council of the CRA, spent four years (2017-2021) on the board of the ACLU of Utah, and is a past member of New York City's Failure to Appear Tool (FTA) Research Advisory Council, the Research Advisory Council for the First Judicial District of Pennsylvania, and the Utah State Auditor's Commission on protecting privacy and preventing discrimination. He was recently named by Fast Company to their AI20 list of thinkers shaping the world of generative AI.
This talk will also be streamed over Zoom: https://mit.zoom.us/j/94023976132.