I am interested in leveraging machine learning algorithms to build embodied and gestural interfaces that enrich interaction in VR narratives.

There is a growing need for embodied and gestural interfaces in VR narratives. My research seeks to develop theories and technologies that advance our understanding of embodied identity expression in virtual reality (VR) narratives by: (1) designing interfaces that use speech, gestural, physiological, and other inputs to reflect the nuance of real-world human interaction; (2) building machine learning algorithms that make it easier to identify nuanced human affective-cognitive states and behavioral patterns; and (3) developing new methodologies to evaluate the impact of digital representations and human-computer interaction (HCI) paradigms in virtual environments (VEs).