Dangerous Ideas Seminar
Speaker: Ian Eslick and Push Singh, MIT Media Lab
Date: February 24, 2005
Time: 1:00 PM to 2:00 PM
Location: D449 Kiva
Host: Metin Sezgin
Contact: Tevfik Metin Sezgin, 617-253-2663, firstname.lastname@example.org
Relevant URL: http://projects.csail.mit.edu/dangerous-ideas/dangerous/www/
Humans as a domain for AI
How you, too, can help give computers common sense
Historically, disciplines such as machine vision and computer graphics have progressed from recognizing edges and rendering teacups toward more human-oriented tasks such as recognizing faces and rendering realistic-looking people. We predict that work on general inference will follow a similar path, from narrowly purposed "expert systems" to large-scale, broad-spectrum inference about human psychology and about the everyday human world.
Most recent work on giving computers such "common sense" has focused on the use of formal logic and manual knowledge engineering. Instead, can we try a wider range of approaches? Can we build commonsense models using graphical models learned from sensory data or the web? Can we adopt a Wikipedia-like divide-and-conquer strategy? Can we use case-based reasoning, genetic programming, or reinforcement learning?
The jury is still out on what methods people themselves use, but in building "human aware" systems we can draw on a broader array of strategies that, together, could far exceed our own capabilities at commonsense thinking -- leading, perhaps, to systems that understand people better than people understand themselves.
In this talk we will argue that commonsense AI should be considered a kind of general domain, like machine vision or computer graphics, one that all AI researchers can apply their favored methods to. We will discuss some of the characteristics of this domain, what the problems are, and what needs to be done to make progress.