SLS: Cross-domain language understanding and dialogue failure recovery
We have been investigating the deployment of smartphone-based spoken dialogue applications (on Android and iPhone/iPad devices) in several domains, such as movies, restaurants, flights, and weather, with each domain implemented as a stand-alone application. A major remaining challenge is the development of cross-domain speech interfaces that automatically analyze the conversational context and route the user to the appropriate dialogue path. One potentially useful application is the vehicle environment, where navigation of the apps must rely on speech input rather than touch gestures such as typing or tapping.
The student will help us explore incorporating conversational context into language understanding. We will investigate a topic-detection approach that acts as a moderator, determining which app is the appropriate one to field an incoming spoken query. We will also investigate dialogue recovery mechanisms and dialogue state tracking strategies in the presence of speech technology failures.
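To make the moderator idea concrete, here is a minimal sketch of how such a router might work. The keyword lists, the function name `route_query`, and the context-weighting scheme are all illustrative assumptions for this posting, not the group's actual approach; a real system would use a trained topic classifier over the dialogue history.

```python
# Hypothetical topic-detection "moderator" that routes a spoken query to one
# of several stand-alone domain apps. Keyword lists are illustrative only.
DOMAIN_KEYWORDS = {
    "movies": {"movie", "film", "showtimes", "theater", "actor"},
    "restaurants": {"restaurant", "dinner", "menu", "reservation", "cuisine"},
    "flights": {"flight", "airport", "depart", "arrive", "airline"},
    "weather": {"weather", "forecast", "rain", "temperature", "snow"},
}

def route_query(query, previous_domain=None, context_bonus=0.5):
    """Score each domain by keyword overlap with the query; bias toward the
    domain of the previous turn so the conversation tends to stay on topic."""
    tokens = set(query.lower().split())
    scores = {}
    for domain, keywords in DOMAIN_KEYWORDS.items():
        score = len(tokens & keywords)
        if domain == previous_domain:
            score += context_bonus  # conversational-context prior
        scores[domain] = score
    return max(scores, key=scores.get)

print(route_query("is it going to rain tomorrow"))  # -> weather
# With no matching keywords, the context prior keeps the dialogue in-domain:
print(route_query("book a table for two", previous_domain="restaurants"))
```

The second call shows why context matters: "book a table for two" matches no keyword list, so a context-free classifier would have to guess, while the prior from the previous turn resolves it to the restaurants app.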
We are looking for someone with Java expertise and experience with machine learning. This project is suitable as a SuperUROP topic and could be extended to an MEng thesis (see the page of MEng research opportunities). If interested, please send a CV to Jingjing Liu (firstname.lastname@example.org).