Artificial Intelligence and the “Barrier of Meaning”
Speaker
Melanie Mitchell
Portland State University, Santa Fe Institute
Host
Una-May O'Reilly
ALFA Group, CSAIL
Abstract:
In 1986, the mathematician and philosopher Gian-Carlo Rota wrote, “I wonder whether or when artificial intelligence will ever crash the barrier of meaning.” Here, the phrase “barrier of meaning” refers to a belief about humans versus machines: humans are able to “actually understand” the situations they encounter, whereas AI systems (at least current ones) do not possess such understanding. The internal representations learned by (or programmed into) AI systems do not capture the rich “meanings” that humans bring to bear in perception, language, and reasoning.
In this talk I will assess the state of the art of artificial intelligence in several domains and describe some of these systems' current limitations and vulnerabilities, which can be attributed to their lack of true understanding of the domains in which they operate. I will explore the following questions: (1) What do AI systems actually need to “understand” in order to be reliable in human domains? (2) Which domains require human-like understanding? (3) What does such understanding entail?
Speaker Biography:
Melanie Mitchell is Professor of Computer Science at Portland State University and External Professor at the Santa Fe Institute. Her research interests include artificial intelligence, machine learning, and complex systems, and she is the author of numerous scholarly papers and several books in these fields. Her general-audience book, Complexity: A Guided Tour, won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com one of the ten best science books of 2009. Her newest book, Artificial Intelligence: A Guide for Thinking Humans, was recently published by Farrar, Straus and Giroux.
Melanie can be contacted via her website, melaniemitchell.me, and on Twitter, @melmitchell1.