Our goal is to develop a framework for selecting between the visual and haptic modalities in navigation systems used by operators in high-workload environments.

When humans must navigate terrain quickly and accurately, they often rely on portable electronic navigation systems. Such aids are particularly prevalent where users operate in extreme environments characterized by high stress, high workload, and high risk. While these systems have conventionally presented navigation information in the visual modality, newer technology makes it possible to provide haptic feedback instead. Motivated by Multiple Resource Theory, recent research has explored using haptic feedback to replace or augment visual feedback in navigation systems. Prior work, however, has considered only three configurations: visual feedback alone, haptic feedback alone, or both presented simultaneously.

This research project investigates a fourth option: switching between visual and haptic feedback over the course of a single navigation task. Switching may outperform existing designs by avoiding sensory adaptation and exploiting the specific strengths of each modality, but it may also incur a switching cost. We are pursuing a three-phase approach to determine the size and significance of the relevant effects and to use those data, combined with results of prior work, to develop an intelligent modality-selection algorithm for navigation systems.
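To make the underlying trade-off concrete, the sketch below shows one hypothetical form such a modality-selection policy could take: an estimated sensory-adaptation penalty accumulates with continuous exposure to a single modality and is weighed against a fixed switching cost. All names, parameters, and cost values here are illustrative assumptions, not findings or designs from this project.

```python
from dataclasses import dataclass

VISUAL, HAPTIC = "visual", "haptic"

@dataclass
class ModalitySelector:
    """Hypothetical policy: switch modalities when the estimated
    sensory-adaptation penalty of staying exceeds the switching cost.
    All parameter values below are illustrative assumptions."""
    adaptation_rate: float = 0.02   # assumed penalty accrued per second on one modality
    switching_cost: float = 0.5     # assumed one-time cost of a modality switch
    current: str = VISUAL
    exposure_s: float = 0.0         # continuous time spent on the current modality

    def step(self, dt: float) -> str:
        """Advance time by dt seconds and return the modality to present."""
        self.exposure_s += dt
        adaptation_penalty = self.adaptation_rate * self.exposure_s
        if adaptation_penalty > self.switching_cost:
            # Staying is now estimated to cost more than switching.
            self.current = HAPTIC if self.current == VISUAL else VISUAL
            self.exposure_s = 0.0
        return self.current

# Example: over a 60-second task sampled at 1 Hz, the policy alternates
# whenever accumulated adaptation outweighs the assumed switching cost.
selector = ModalitySelector()
schedule = [selector.step(dt=1.0) for _ in range(60)]
print(schedule[:5], "...", schedule[-5:])
```

An actual algorithm of this kind would presumably replace the fixed constants with adaptation and switching-cost estimates measured empirically in the three study phases, and could further condition on operator workload and task context.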