What if machines could understand our minds? What if AI systems could infer our goals from our physical and virtual actions, estimate our strengths over time, identify the moments when our cognitive load and emotional state change, and predict what we will do next, all in order to support us? Integrating virtual assistants and robots with these capabilities into our lives would make things easier in all sorts of settings, from daily chores to emergencies.
Our project combines the power of artificial intelligence and virtual assistant technology to optimize communication between individuals and teams. Working with DARPA as part of the Artificial Social Intelligence for Successful Teams (ASIST) project, we are developing a socially intelligent team coacher agent that observes the actions of each human team member and gives advice on their plans and collaborations.
In the first phase of the project, the research goal is to develop algorithms that infer the mental states (goals, strategies, beliefs about the world, cognitive loads) and future actions of the human players in real time by observing their actions in the environment. We believe there is a generalizable framework to do this using a combination of probabilistic generative models and symbolic systems.
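As a toy illustration of that idea (and not the project's actual model), here is a minimal Bayesian sketch of goal inference: we assume a small discrete set of candidate goals and hand-picked action likelihoods, and update a posterior over goals as each action is observed. The goal names, action names, and probabilities below are all hypothetical.

```python
# Hypothetical sketch of Bayesian goal inference from observed actions.
# The goal set and the likelihood table P(action | goal) are assumed values
# chosen for illustration, not estimates from real mission data.
GOALS = ["rescue_victim", "clear_debris", "explore"]

LIKELIHOOD = {
    "rescue_victim": {"move_to_victim": 0.7, "dig": 0.1, "wander": 0.2},
    "clear_debris":  {"move_to_victim": 0.1, "dig": 0.8, "wander": 0.1},
    "explore":       {"move_to_victim": 0.2, "dig": 0.1, "wander": 0.7},
}

def infer_goal(actions, prior=None):
    """Return the posterior P(goal | observed actions) via Bayes' rule."""
    posterior = dict(prior or {g: 1.0 / len(GOALS) for g in GOALS})
    for action in actions:
        # Multiply in the likelihood of this action under each goal ...
        for g in GOALS:
            posterior[g] *= LIKELIHOOD[g][action]
        # ... then renormalize so the probabilities sum to one.
        total = sum(posterior.values())
        posterior = {g: p / total for g, p in posterior.items()}
    return posterior

posterior = infer_goal(["dig", "dig", "move_to_victim"])
print(max(posterior, key=posterior.get))  # prints "clear_debris"
```

A real system would need far richer models (beliefs, plans, and partial observability, which is where the symbolic machinery comes in), but the same observe-and-update loop sits at the core.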
Our evaluation domain is urban search-and-rescue missions, modeled in Minecraft. Typically, urban search and rescue (USAR) involves a number of teams (e.g., local firefighters and other first responders) assembled ad hoc. Each team member has different skill sets, tools, and goals, and they need to work together quickly and safely to remove victims from a building during an emergency such as an earthquake. Oftentimes, their mission is complicated by unpredictable factors: debris that must be stabilized before it can be removed; hazards (such as glass, downed power lines, fires, and flooding) that could compromise the safety of victims and rescuers alike; victims who cannot be moved easily because of the nature or severity of their injuries; and more. Triage systems can help prioritize those who need medical aid most, but with each rescuer having different assignments and areas of expertise, coordinating these and other systems during a mission is not always efficient.
That’s where the coacher agent comes in. If you and your teammate each have an AI system that understands you, and those systems can communicate with each other, the system could explain any delays in communication or arrival. For example, if you ask your teammate to come help you and she is not responding, the AI system could explain right away that she needs two more minutes to help a victim and has temporarily taken off her communication equipment in order to work efficiently. Another scenario involves matching search-and-rescue teams with the search style best suited to the situation. For example, if the building you enter is dark and disorienting, the AI system can help find the team that uses a schematic-based search style.
The coacher agent can learn from these missions and adapt, developing new strategies based on what it observes of human behavior. Modeling human behavior in such missions is challenging: strategies must be updated robustly and in real time, and our agents need human-like inference about other humans’ mental states. We are working to go beyond rigid, hand-specified sets of parameters for human behavior and instead learn them from the human data we collect. As we feed more human knowledge into the system, it can learn more freely and better serve the goals of the mission. We are also developing new algorithms and heuristics for real-time inference.
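To give a sense of what learning behavior parameters from human data could look like in the simplest case, here is a hedged sketch: a smoothed maximum-likelihood estimate of per-goal action probabilities from logged play records. The log format, goal annotations, and counts are all invented for illustration, not the project's actual data pipeline.

```python
from collections import Counter, defaultdict

# Hypothetical logged records of human play: (annotated_goal, action) pairs.
logs = [
    ("clear_debris", "dig"), ("clear_debris", "dig"),
    ("clear_debris", "move_to_victim"),
    ("rescue_victim", "move_to_victim"), ("rescue_victim", "move_to_victim"),
    ("rescue_victim", "dig"),
]

def fit_likelihoods(records, smoothing=1.0):
    """Estimate P(action | goal) from logs, with add-k smoothing so that
    actions never seen under a goal still get nonzero probability."""
    counts = defaultdict(Counter)
    for goal, action in records:
        counts[goal][action] += 1
    actions = sorted({a for _, a in records})
    model = {}
    for goal, ctr in counts.items():
        total = sum(ctr.values()) + smoothing * len(actions)
        model[goal] = {a: (ctr[a] + smoothing) / total for a in actions}
    return model

model = fit_likelihoods(logs)
print(round(model["clear_debris"]["dig"], 2))  # prints 0.6
```

Replacing hand-coded tables with estimates like these is the easy end of the spectrum; the harder part is learning richer, structured models of strategy from the same kind of observational data.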