Project
Uhura: Personal Assistant that Manages Risk
Many of today’s virtual assistants support only simple information-retrieval tasks and are not tailored to drivers’ personal needs. Uhura is an on-board personal assistant that listens to the user and helps elicit what he or she wants to do, then fills in the details of the travel plan. If the goals turn out to be infeasible, Uhura negotiates with the user to relax some constraints while satisfying the user’s needs as fully as possible.
Traditionally, many AI algorithms simply fail when the goals are infeasible. The systems we are developing instead identify the problems with the requirements, explain what is not feasible, and come up with solutions that the user would find acceptable. By framing infeasibility as a fault-diagnosis problem, the assistant can negotiate with the user and suggest alternative options.
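To make the diagnosis framing concrete, here is a minimal, hypothetical sketch (not Uhura's actual code) that treats each requirement as a named bound on a single quantity, detects the conflicting pair that makes the plan infeasible, and phrases a relaxation question for the user. All names and numbers here are illustrative assumptions.

```python
# Hypothetical sketch of conflict-directed relaxation over simple bound
# constraints on a single quantity (arrival time at a restaurant, in
# minutes after 6 pm). Illustrative only, not Uhura's implementation.

from dataclasses import dataclass
from itertools import combinations

@dataclass
class Bound:
    name: str    # human-readable label used when negotiating
    lo: float    # earliest acceptable value
    hi: float    # latest acceptable value

def feasible(bounds):
    """The bounds are jointly satisfiable iff their intervals overlap."""
    return max(b.lo for b in bounds) <= min(b.hi for b in bounds)

def conflicts(bounds):
    """Return conflicting pairs, a stand-in for minimal infeasible subsets."""
    return [pair for pair in combinations(bounds, 2) if not feasible(pair)]

def propose_relaxations(bounds, step=10.0):
    """For each conflict, phrase a relaxation the user can accept or reject."""
    suggestions = []
    for a, b in conflicts(bounds):
        tight, loose = (a, b) if a.hi < b.lo else (b, a)
        shift = loose.lo - tight.hi + step
        suggestions.append(
            f"Are you okay with moving '{tight.name}' about {shift:.0f} minutes later?")
    return suggestions

if __name__ == "__main__":
    plan = [
        Bound("table reservation", lo=0, hi=15),      # reservation held until 6:15
        Bound("arrival given traffic", lo=25, hi=60),  # cannot arrive before 6:25
    ]
    if not feasible(plan):
        for question in propose_relaxations(plan):
            print(question)
```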
For example, the assistant can relax temporal constraints, asking the user whether he or she is okay with being 10 minutes late. It can also suggest alternative locations or restaurants if the user prefers, say, Chinese food. These negotiations can also take the form of a tradeoff, with the assistant asking and assessing how much risk the user is willing to take and how strictly each constraint must be satisfied. Once the plan is put into execution, Uhura proactively plans for contingencies.
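The risk side of such a tradeoff can be pictured with a small chance-constraint check. Assuming a normally distributed travel time (a modeling assumption for this sketch, not necessarily Uhura's risk model), the assistant could quote the probability of missing a deadline by more than a given slack and ask whether that risk is acceptable.

```python
# Hypothetical risk check behind a question like "are you okay with being
# 10 minutes late?". The normal travel-time model and the numbers are
# illustrative assumptions.

from statistics import NormalDist

def lateness_risk(depart, deadline, travel_mean, travel_std, slack=0.0):
    """Probability of arriving more than `slack` minutes past `deadline`."""
    arrival = NormalDist(mu=depart + travel_mean, sigma=travel_std)
    return 1.0 - arrival.cdf(deadline + slack)

if __name__ == "__main__":
    # Leave at 6:00 pm (t = 0), reservation at 6:30 pm, drive ~35 +/- 8 minutes.
    on_time = lateness_risk(depart=0, deadline=30, travel_mean=35, travel_std=8)
    ten_late = lateness_risk(depart=0, deadline=30, travel_mean=35, travel_std=8, slack=10)
    print(f"Chance of being late at all:      {on_time:.0%}")   # ~73%
    print(f"Chance of being >10 minutes late: {ten_late:.0%}")  # ~27%
```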
Further, the distributed version of Uhura builds on this single-user negotiation: each assistant also represents its user whenever a conflict arises among multiple users. Scheduling conflicts, for example, often occur because people’s availability differs. At that point, the agents negotiate on their users’ behalf toward a solution that works for everyone involved.
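As a rough, hypothetical illustration of that distributed behavior (not Uhura's actual protocol), the agents below pool their users' free slots, and when no common slot exists, they identify whose calendar blocks a candidate time so that user's agent can be asked about relaxing it.

```python
# Hypothetical sketch: agents intersect their users' availability and, when
# no common slot exists, identify whom to ask about relaxing a constraint.

def common_slots(availability):
    """Slots in which every user is free."""
    users = iter(availability.values())
    free = set(next(users))
    for slots in users:
        free &= set(slots)
    return sorted(free)

def negotiation_targets(availability, slot):
    """Users whose calendars block a candidate slot."""
    return [user for user, slots in availability.items() if slot not in slots]

if __name__ == "__main__":
    availability = {
        "alice": ["Mon 10", "Tue 14"],
        "bob":   ["Mon 11", "Tue 14"],
        "carol": ["Mon 10", "Mon 11"],
    }
    slots = common_slots(availability)
    if slots:
        print("Everyone is free at:", slots)
    else:
        # "Tue 14" works for everyone except carol, so her agent is the one
        # to ask about relaxing that constraint.
        print("No common slot; ask to relax:", negotiation_targets(availability, "Tue 14"))
```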
Planning and scheduling for teams is challenging for AI because the design must be deeply human-centric. Traditional AI algorithms focus on optimizing tasks and enforcing optimality to arrive at the best solution. On a team of robots, you could simply impose that solution, but you cannot impose it on people, who have their own changing requirements and preferences that we want to respect; the AI must adapt to any updates along the way. Making good suggestions is also challenging because it requires knowing a user’s preferences, requirements, schedule, and more. When these are unknown, the system has to plan based on prior distributions over what it believes they are, and then refine its suggestions until it respects what the user actually needs.
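One simple way to picture refining suggestions under unknown preferences, purely as an illustrative sketch and not Uhura's actual model, is to keep a Beta-distributed belief per option, update it each time the user accepts or rejects a suggestion, and rank future suggestions by the posterior mean.

```python
# Hypothetical sketch of refining suggestions from feedback: a Beta belief
# per option, updated when the user accepts or rejects. Illustrative only.

from dataclasses import dataclass

@dataclass
class PreferenceBelief:
    accepted: int = 1   # Beta(1, 1) pseudo-counts: an uninformative prior
    rejected: int = 1

    @property
    def estimate(self) -> float:
        """Posterior mean probability that the user likes this option."""
        return self.accepted / (self.accepted + self.rejected)

    def update(self, liked: bool) -> None:
        if liked:
            self.accepted += 1
        else:
            self.rejected += 1

if __name__ == "__main__":
    belief = {"chinese": PreferenceBelief(), "italian": PreferenceBelief()}
    # Feedback gathered while negotiating earlier plans.
    for cuisine, liked in [("chinese", True), ("chinese", True), ("italian", False)]:
        belief[cuisine].update(liked)
    ranked = sorted(belief, key=lambda c: belief[c].estimate, reverse=True)
    print("Suggest cuisines in this order:", ranked)  # ['chinese', 'italian']
```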
Assuming you do know each user’s preferences, schedule, and requirements, how do you make sure that, in a group setting, all users reach consensus on how they are going to execute? If there is a conflict, what suggestions do you make? These are the problems we are currently working on in this project.
Ultimately, our goal with Uhura is to develop user-friendly virtual agents that help humans in their team collaborations, whatever those may be. Inferring humans’ mental models and knowledge can take us beyond temporal coordination in daily-life applications and let such systems serve more mission-critical scenarios, such as search-and-rescue and science deployments, by managing users’ cognitive load and helping them reach consensus quickly.
Contact us
If you would like to contact us about our work, please refer to our members below and reach out to one of the group leads directly.