Multi-agent learning: Regret, filters, and ad-hoc networking
Speaker: Yu-Han Chang, Ph.D. candidate, MIT CSAIL
Date: May 4, 2005
Time: 2:00PM to 4:00PM
Location: Kiva, 32-G449
Host: Professor Leslie Kaelbling, MIT CSAIL
Contact: Teresa Cataldo, 2-5005, firstname.lastname@example.org
Systems involving multiple autonomous entities are becoming increasingly prominent. Sensor networks, teams of robotic vehicles, and software agents are just a few examples. To design such systems, we need methods that allow our agents to autonomously learn and adapt to the changing environments in which they find themselves. This thesis explores ideas from game theory, online prediction, and reinforcement learning, tying them together to address problems in multi-agent learning.
In particular, in this talk I will discuss hedged learning, which combines regret-minimizing algorithms with standard learning techniques to yield a more flexible and robust approach to behaving in competitive situations. I will also touch on reward filtering, which may be useful in cooperative settings where agent communication is limited yet the agents seek to optimize a global reward signal, and I will present results from applying this work to a mobile ad-hoc networking domain.
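For context, regret-minimizing algorithms of the kind hedged learning builds on are often based on the classic Hedge (multiplicative-weights) scheme: maintain a weight per candidate strategy, play strategies in proportion to their weights, and exponentially down-weight strategies that incur loss. The sketch below illustrates that core idea only; the function names, parameters, and structure are illustrative assumptions, not the speaker's algorithm.

```python
import math
import random

def hedge(num_experts, losses_fn, rounds, eta=0.5):
    """Minimal Hedge (multiplicative-weights) sketch.

    losses_fn(t) returns a list of losses in [0, 1], one per expert,
    for round t. This is an illustrative sketch, not the hedged-learning
    algorithm from the talk.
    """
    weights = [1.0] * num_experts
    total_loss = 0.0
    for t in range(rounds):
        total = sum(weights)
        probs = [w / total for w in weights]
        # Play an expert with probability proportional to its weight.
        choice = random.choices(range(num_experts), weights=probs)[0]
        losses = losses_fn(t)
        total_loss += losses[choice]
        # Exponentially penalize each expert by its observed loss.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return weights, total_loss
```

Because weights shrink exponentially with accumulated loss, the learner's long-run average loss approaches that of the best single expert in hindsight, which is the regret guarantee hedged learning extends to adaptively chosen learning strategies.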