Our goal is to develop computational learning models that allow an agent to intelligently use various types of human feedback to adapt and transfer knowledge to new situations.

Currently, robots are limited in their ability to transfer knowledge and generalize across a wide range of tasks, whereas people are highly adaptable and readily detect similarities across situations. Using human feedback to guide the transfer process can speed up learning significantly, and the robot can also learn human preferences from that feedback. We are investigating approaches for incorporating feedback into machine learning systems that let people guide the agent more naturally. For example, people often use analogical reasoning, mapping a new situation onto a previously seen scenario, to generalize quickly; learning agents can benefit substantially from this kind of high-level feedback to learn new tasks more quickly. Such interactive learning systems can improve the ability of service robots, assistive robots, smart home devices, and other systems to leverage human insight and perform well in unanticipated situations.
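
As one concrete illustration of this kind of high-level feedback, the sketch below shows a hypothetical tabular Q-learning agent that transfers value estimates from a familiar grid-world task to a larger one through a human-provided state analogy. The tasks, the `human_analogy` mapping, and all parameter values are illustrative assumptions, not a description of our actual system.

```python
# Minimal sketch (illustrative only): a human-provided analogy -- a mapping from
# states in a familiar task to states in a new task -- seeds the agent's value
# estimates so learning in the new task does not start from scratch.
import random
from collections import defaultdict

ACTIONS = ["up", "down", "left", "right"]

def q_learning(step_fn, start, q, episodes=200, alpha=0.1, gamma=0.95,
               eps=0.1, max_steps=500):
    """Tabular Q-learning; `q` may be pre-populated with transferred values."""
    for _ in range(episodes):
        state = start
        for _ in range(max_steps):
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward, done = step_fn(state, action)
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = next_state
            if done:
                break
    return q

def make_grid_task(width, height, goal):
    """A simple grid world: reach `goal` for +1 reward; -0.01 per step."""
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    def step(state, action):
        x, y = state
        dx, dy = moves[action]
        nxt = (min(max(x + dx, 0), width - 1), min(max(y + dy, 0), height - 1))
        return nxt, (1.0 if nxt == goal else -0.01), nxt == goal
    return step

# 1. The agent learns a small source task from scratch.
source_q = q_learning(make_grid_task(4, 4, goal=(3, 3)), start=(0, 0),
                      q=defaultdict(float))

# 2. A person points out the analogy: the new 8x8 task "looks like" the old
#    4x4 one scaled up, so each new state corresponds to a source state at
#    half scale (a hypothetical mapping supplied as human feedback).
def human_analogy(new_state):
    x, y = new_state
    return (min(x // 2, 3), min(y // 2, 3))

# 3. Transfer: initialize the new task's Q-values through the human mapping.
target_q = defaultdict(float)
for x in range(8):
    for y in range(8):
        for a in ACTIONS:
            target_q[((x, y), a)] = source_q[(human_analogy((x, y)), a)]

# 4. Continue learning in the new task from the transferred estimates.
q_learning(make_grid_task(8, 8, goal=(7, 7)), start=(0, 0), q=target_q,
           episodes=100)
```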
