Context-Aware Online Adaptation of Mixed Reality Interfaces
Host
Stefanie Mueller
ABSTRACT
Mixed Reality has the potential to transform the way we interact with digital information. By blending virtual and real worlds, it promises a rich set of applications, ranging from manufacturing and architecture to interaction with smart devices and gaming, to name only a few. By their nature, MR interfaces will be context-sensitive: since users are no longer bound to a particular location, such systems will need to adapt to a rich variety of environmental conditions (e.g., indoors versus outdoors), external states (e.g., the current task), and internal states (e.g., the current concentration level). This inherent context-awareness does, however, pose significant challenges for the design of MR systems: many UI decisions can no longer be taken at design time but need to be made in situ, depending on the current context.
In this talk, I will present an optimization-based approach that automatically adapts MR interfaces to users’ current context: we estimate users’ cognitive load and use knowledge about their task and environment to modify when, where, and how virtual content is displayed. I will detail this approach, which uses a mix of rule-based decision making and combinatorial optimization and can be solved efficiently and in real time. I will embed this work within the larger context of my research, which aims to bridge the virtual and physical worlds and to allow users to seamlessly transition between the two.
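To give a flavor of the combinatorial part, the following is a minimal, purely illustrative Python sketch, not the system presented in the talk: the element names, per-task utilities, and cognitive-load costs are all invented assumptions, and a brute-force search stands in for a real solver. It assigns each virtual element a level of detail so that total utility is maximized under a cognitive-load budget.

    # Illustrative sketch only: a toy combinatorial optimizer in the spirit
    # of the approach described above. Element names, utilities, and load
    # costs are hypothetical values chosen for illustration.
    from itertools import product

    # Each virtual element can be shown at a level of detail (LOD):
    # 0 = hidden, 1 = minimized, 2 = full content.
    ELEMENTS = ["navigation", "messages", "music"]

    # Hypothetical per-LOD utility for the current task (higher = more
    # useful) and cognitive-load cost (higher = more demanding to perceive).
    UTILITY = {"navigation": [0.0, 0.5, 1.0],
               "messages":   [0.0, 0.3, 0.7],
               "music":      [0.0, 0.2, 0.4]}
    COST =    {"navigation": [0.0, 0.2, 0.6],
               "messages":   [0.0, 0.2, 0.5],
               "music":      [0.0, 0.1, 0.3]}

    def adapt(load_budget):
        """Pick one LOD per element, maximizing total utility while keeping
        the summed cognitive-load cost within the current budget. Rule-based
        layers (e.g., hide everything during a safety-critical task) would
        run before this optimization step."""
        best, best_utility = None, -1.0
        # Exhaustive search is fine for a handful of elements; larger
        # instances would call for a proper integer-programming solver.
        for lods in product(range(3), repeat=len(ELEMENTS)):
            cost = sum(COST[e][l] for e, l in zip(ELEMENTS, lods))
            if cost > load_budget:
                continue
            utility = sum(UTILITY[e][l] for e, l in zip(ELEMENTS, lods))
            if utility > best_utility:
                best, best_utility = dict(zip(ELEMENTS, lods)), utility
        return best

    # A tight budget (user under high cognitive load) yields a sparser
    # interface; a generous budget lets all elements appear at full detail.
    print(adapt(load_budget=0.4))
    print(adapt(load_budget=1.5))

In this toy formulation, the load budget is the hook for context-awareness: an estimate of the user’s current cognitive load shrinks or grows the budget, and the optimizer re-solves in real time as the estimate changes.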
BIO
David Lindlbauer is a postdoctoral researcher in the field of Human–Computer Interaction, working at ETH Zurich in the Advanced Interaction Technologies Lab, led by Prof. Otmar Hilliges. He holds a PhD from TU Berlin, where he worked with Prof. Marc Alexa in the Computer Graphics group, and interned at Microsoft Research in the Perception & Interaction Group. His research focuses on the intersection of the virtual and the physical world: how the two can be blended and how the borders between them can be overcome. David explores ways that allow users to seamlessly transition between different levels of virtuality, and computational tools to make such approaches more usable. He has worked on projects to expand and understand the connection between humans and technology, from dynamic haptic interfaces to 3D eye tracking. His research has been published in premier venues such as ACM CHI and UIST, and has attracted media attention in outlets such as Fast Company Design, MIT Technology Review, and Shiropen Japan.