Satyanarayan selected for NSF CAREER Award

Arvind S.

This week it was announced that MIT assistant professor Arvind Satyanarayan has received a distinguished NSF CAREER Award for his project “Effective Interaction Design for Data Visualization.”

Satyanarayan, who leads the MIT Visualization Group, uses visualization to study intelligence augmentation: how software systems can amplify our cognition and creativity while respecting our agency.

About the project:

Prior research has developed theories of effective visual encoding (i.e., how best to map data values to visual properties such as position, shape, or size), and implementing these theories in software has helped drive society's broad adoption of visualization as a medium for recording, analyzing, and communicating about data.
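At its simplest, "implementing these theories in software" means expressing a visual encoding as a scale function from data values to visual properties such as position or size. A minimal sketch in Python (the field names and ranges are hypothetical, purely for illustration; this is not code from any of the group's systems):

```python
# Illustrative sketch: a visual encoding as a linear scale from a data
# domain to a visual range (pixels, mark area, etc.).

def linear_scale(domain, rng):
    """Return a function mapping values in `domain` to `rng` linearly."""
    d0, d1 = domain
    r0, r1 = rng
    span = (d1 - d0) or 1  # guard against a degenerate domain
    return lambda v: r0 + (v - d0) / span * (r1 - r0)

# Hypothetical dataset: encode "price" as x-position, "qty" as mark area.
data = [{"price": 10, "qty": 3}, {"price": 25, "qty": 7}, {"price": 40, "qty": 5}]

x = linear_scale((10, 40), (0, 300))   # price -> horizontal position in pixels
area = linear_scale((0, 10), (4, 64))  # qty   -> mark area

marks = [{"x": x(d["price"]), "area": area(d["qty"])} for d in data]
print(marks[0])  # {'x': 0.0, 'area': 22.0}
```

Effective-encoding theory is precisely about which such mappings (position vs. size vs. shape) best support a given data type and task.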

However, there has been little analogous theory-building for interactivity – a component widely considered critical for enabling a tight feedback loop between generating and answering hypotheses. For instance, how do different interaction design choices affect dataset coverage, the rate of insights, and people's confidence in their findings? This lack of theory has impeded support for interaction design in visualization systems and has made it difficult to establish interaction conventions. For example, in different tools, dragging may pan a chart, highlight brushed points, or zoom into a selected region.
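The drag example can be made concrete with a toy sketch: three hypothetical tools that wire the very same gesture to different behaviors, which is exactly the kind of inconsistency a theory of interaction design would help resolve (all names here are invented for illustration):

```python
# Toy illustration: the same "drag" gesture bound to different behaviors
# in three hypothetical tools.

def pan(chart, dx, dy):
    chart["x0"] += dx
    chart["y0"] += dy

def brush(chart, dx, dy):
    chart["selection"] = {"w": abs(dx), "h": abs(dy)}

def zoom(chart, dx, dy):
    chart["scale"] *= 1 + dx / 100

# Each tool maps the identical gesture to a different handler.
drag_bindings = {"toolA": pan, "toolB": brush, "toolC": zoom}

chart = {"x0": 0, "y0": 0, "scale": 1.0, "selection": None}
drag_bindings["toolB"](chart, 30, 20)
print(chart["selection"])  # {'w': 30, 'h': 20}
```

A user moving between these tools must relearn what dragging does each time; shared conventions, grounded in empirical theory, would remove that friction.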

This research project will begin to develop such a theory by evaluating alternate interaction techniques via crowdsourced studies. We will examine different design choices, data distributions, and analytic tasks to understand their effect on measurable outcomes such as usability, completion time, accuracy, and higher-level cognition. We will then study the impact of this theory on systems and techniques for interactive visualization. For instance, how might study results be implemented in a recommendation system to automatically augment static visualizations with effective interactivity? Could such a system also suggest unexplored interactive states, and could exposing this information (e.g., as perceived affordances or information scent) lead to more engaged and thorough exploration of interactive visualizations?