Speaker: Deb K. Roy, Media Lab, MIT
Date: August 15, 2003
We are able to use and understand words because we have experienced the physical world in which their meanings are grounded, and we have learned the conventions by which words map onto these experiences. Computers, on the other hand, are trapped in sensory deprivation tanks, cut off from the physical world. Current language processing systems represent words as ungrounded symbols that have meaning solely due to their relations to other symbols. Consequently, computers can neither learn words from human actions nor rely on input from the world to understand what human users want.
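The contrast between ungrounded and grounded word meanings can be sketched in a few lines. The following is a minimal, hypothetical illustration (the lexicon entries, words, and RGB values are invented for this example, not taken from the talk): in an ungrounded lexicon, following a word's definition only ever lands on another symbol, while a grounded entry pairs the word with a perceptual prototype that can be checked against sensor input.

```python
# Illustrative sketch (hypothetical data): an ungrounded lexicon defines each
# word only in terms of other words, so lookups never bottom out in experience.
ungrounded = {
    "red":     ["color", "crimson"],
    "color":   ["red", "hue"],
    "hue":     ["color"],
    "crimson": ["red"],
}

def definition_chain(word, lexicon, depth=3):
    """Follow definitions; every step lands on another symbol, never the world."""
    chain = [word]
    for _ in range(depth):
        word = lexicon[word][0]
        chain.append(word)
    return chain

# A grounded entry instead pairs the word with a perceptual prototype, e.g. a
# mean RGB vector measured from camera input (values invented for illustration).
grounded = {"red": {"rgb_prototype": (0.9, 0.1, 0.1)}}

def matches(word, observed_rgb, lexicon, tol=0.3):
    """Ground a word by comparing an observation against its stored prototype."""
    proto = lexicon[word]["rgb_prototype"]
    return all(abs(p - o) <= tol for p, o in zip(proto, observed_rgb))

print(definition_chain("red", ungrounded))          # ['red', 'color', 'red', 'color']
print(matches("red", (0.85, 0.15, 0.2), grounded))  # True
```

The ungrounded lookup cycles among symbols indefinitely, while the grounded check can succeed or fail depending on what the sensors actually observe, which is the gap the research described here aims to close.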
Our long-term goal is to create machines that ground the meaning of words in the physical world. To achieve this, elements of embodiment including perception, action, planning, and attention must be addressed in a computational framework and brought to bear on the design of linguistic representations and algorithms for language learning and use. We are also interested in using computational models as a source of predictions and possible accounts for a number of cognitive phenomena, including aspects of children's language acquisition, concept formation, and attention.
As a vehicle for this research, our group develops conversational robots that are designed to communicate with human partners using natural language. I will present recent results in several areas related to this effort, including sensorimotor representations used to ground words, mental models and mental imagery for situated dialog, perceptually grounded word and grammar learning algorithms, and models of attention for language processing and learning.
Part of the Brains and Machines Seminar Series, Fall 2003.