December 11

Reflecting on the Past and Future of HCI Research
1:00–2:00 pm, Kiva (32-G449)

Abstract: Ideas have histories and, like people, can only be fully appreciated in the context of those histories. Bill Buxton describes what he terms the "long nose" of innovation and heralds the wisdom of mining and drawing inspiration from past research. The objective of my talk today is twofold. First, I will follow Bill's advice and reflect on my past and current research to identify the underlying ideas worth future mining. Second, I will argue for the promise of a new project to develop a cognitive physics for information, designed to ease information-based tasks by operating in accordance with cognitively motivated rules sensitive to tasks, personal and group interaction histories, and context.

Bio: Jim Hollan is Professor of Cognitive Science and Computer Science at UC San Diego and Co-Director of the Design Lab. After completing a postdoc in AI at Stanford, he spent the early part of his career on the faculty at UCSD, working with Ed Hutchins and Don Norman and leading the Intelligent Systems Group. He also consulted at Xerox PARC. He then led the MCC Human-Computer Interaction Lab and established the Computer Graphics and Interactive Media research group at Bellcore. He subsequently chaired the Computer Science Department at the University of New Mexico before returning to UCSD in 1997 with appointments in the Department of Cognitive Science and the Department of Computer Science and Engineering. In 2003 he was elected to the Association for Computing Machinery's CHI Academy as one who "has made extensive contributions to the study of HCI and led the shaping of the field." In 2015 he received the ACM CHI Lifetime Research Award, and he was recently honored with the title of Distinguished Professor of Cognitive Science at UC San Diego.

December 10

Making New Devices Real
4:00–5:00 pm, 32-G449 (Kiva Room)

Abstract: In the first part of this talk I will describe SenseCam, one of the first wearable cameras to be developed, and its application in support of patients with memory impairments. As a researcher who aims to seed new types of hardware device in the market and change people's perceptions of how they can use technology, in many ways SenseCam was the "perfect" project. The device was adopted enthusiastically, both by memory-impaired patients wishing to improve their recall and by researchers and clinicians as a tool to support their work. Unfortunately, in the long term SenseCam has not (yet) proven to be a viable commercial product. Despite recent advances in the tools, processes, and resources that support hardware design and prototyping, I believe it's actually becoming more difficult to make the transition from research prototype to commercially viable product. In the second part of the talk I will present my perspectives on why this might be, along with some ideas about how it might be addressed. My ultimate aim is to enable a "long tail" of hardware products which fuel innovation in the device space whilst simultaneously providing greater customer choice.

Bio: Steve Hodges is a Principal Researcher at Microsoft, where he combines device-related research insights with emerging technologies to create new hardware concepts, tools, and technologies. By seeding adoption of these beyond the lab he ultimately aims to demonstrate new ways in which technology can empower individuals, organizations, and communities. Examples of his work include Azure Sphere, BBC micro:bit, SenseCam, .NET Gadgeteer, the EPC, and software-defined batteries.

December 04

Mobile, Social, and Fashion: Three Stories from Data-Driven Design

University of Illinois at Urbana-Champaign Department of Computer Science
1:00–2:00 pm, Kiva (32-G449)

Abstract: Having access to the right types of data at scale is increasingly the key to designing innovation. In this talk, I'll discuss how my group has created original datasets for three domains (mobile app design, fashion retail, and social networks) and leveraged them to build novel user experiences. First, I'll present a system for capturing and aggregating interaction data from third-party Android apps to identify effective mobile design patterns: open-sourcing analytics that were previously locked away. Next, I'll discuss fashion data collected with Wizard-of-Oz chatbots and used to build deep learning models for automating personal styling advice. Finally, I'll introduce an emoji-based social media app designed to incentivize curation and map the "taste graph" of its users.

Bio: Ranjitha Kumar is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign (UIUC), where she leads the Data-Driven Design group. She is the recipient of a 2018 NSF CAREER award and UIUC's 2018 C.W. Gear Outstanding Junior Faculty Award. Her research has won best paper awards and nominations at premier conferences in HCI and is supported by grants from Google, Amazon, and Adobe. She received her PhD from the Computer Science Department at Stanford University in 2014, and was formerly the Chief Scientist at Apropose, Inc., a data-driven design company she founded that was backed by Andreessen Horowitz and New Enterprise Associates.

December 03

SwarmTouch: Tactile Interaction of Human with Impedance Controlled Swarm of Nano-Quadrotors
11:00 am–12:00 pm, Kiva (32-G449)

Abstract: We propose a novel interaction strategy for human-swarm communication, in which a human operator guides a formation of quadrotors with impedance control and receives tactile feedback. The presented approach takes into account the human hand velocity and changes the formation shape and dynamics accordingly using impedance interlinks simulated between quadrotors, which helps to achieve life-like swarm behavior. Experimental results with the Crazyflie 2.0 quadrotor platform validate the proposed control algorithm. We also propose tactile patterns representing the dynamics of the swarm (extension or contraction). The user feels the state of the swarm at his fingertips and receives valuable information that improves the controllability of the complex life-like formation. The user study revealed high recognition rates for the patterns. Subjects stated that tactile sensation improves the ability to guide the drone formation and makes human-swarm communication much more interactive. The proposed technology can potentially have a strong impact on human-swarm interaction, providing a new level of intuitiveness and immersion in swarm navigation.

Bio: Dzmitry Tsetserukou received the Ph.D. degree in Information Science and Technology from the University of Tokyo, Japan, in 2007. From 2007 to 2009, he was a JSPS Post-Doctoral Fellow at the University of Tokyo. He worked as Assistant Professor at the Electronics-Inspired Interdisciplinary Research Institute, Toyohashi University of Technology, from 2010 to 2014. Since August 2014 he has worked at the Skolkovo Institute of Science and Technology as Head of the Intelligent Space Robotics Laboratory. Dzmitry has been a member of the Institute of Electrical and Electronics Engineers (IEEE) since 2006 and is the author of over 70 technical publications, 3 patents, and a book. His research interests include swarms of drones, wearable haptic and tactile displays, robot manipulator design, telexistence, human-robot interaction, affective haptics, virtual reality, and artificial intelligence. Dzmitry is the winner of the Best Demonstration Award (Bronze prize, AsiaHaptics 2018), the Laval Virtual Award (ACM SIGGRAPH 2016), the Best Presentation Award (IRAGO 2013), and the Best Paper Award (ACM Augmented Human 2010). He was an organizer of the first Workshop on Affective Haptics at the IEEE Haptics Symposium 2012.
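The impedance interlinks described above can be thought of as virtual mass-spring-damper couplings between neighboring quadrotors: each drone's target position responds compliantly to its neighbor's motion rather than tracking it rigidly. A minimal one-dimensional sketch in Python follows; the `impedance_step` helper and all parameter values are illustrative inventions, not the authors' actual controller or gains.

```python
# Sketch of one impedance interlink: a virtual mass-spring-damper
# governed by m*a + d*v + k*(x - x_neighbor) = 0, integrated with
# semi-implicit Euler. Parameters (m, d, k, dt) are illustrative.

def impedance_step(x, v, x_neighbor, dt, m=1.0, d=2.0, k=5.0):
    """Advance the follower's virtual position one time step."""
    a = (-d * v - k * (x - x_neighbor)) / m  # spring-damper force / mass
    v = v + a * dt
    x = x + v * dt
    return x, v

# Example: a follower relaxing toward a leader held at position 1.0
x, v = 0.0, 0.0
for _ in range(2000):          # 20 s of simulated time at dt = 0.01 s
    x, v = impedance_step(x, v, 1.0, dt=0.01)
# by now x has settled close to the leader position 1.0
```

Tuning the virtual damping `d` and stiffness `k` trades off how "stiff" or "life-like" the formation feels as it deforms around the operator's hand motion.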

November 27

The Value of Visualization for Exploring, Presenting, and Understanding Data
1:00–2:00 pm, 32-G449

Abstract: Everyone's talking about data these days. People, organizations, and businesses are seeking better ways to analyze, understand, and communicate their data. While a variety of approaches can be taken to address this challenge, my own work has focused on data visualization. In this talk, I'll describe the unique advantages and benefits that visualization provides, and I'll support these arguments through examples from recent projects in my research group. Two specific themes that I'll emphasize are the importance of interaction in visualization and the challenge of determining a visualization's value and utility.

Bio: John Stasko is a Regents Professor in the School of Interactive Computing (IC) at the Georgia Institute of Technology, where he has been on the faculty since 1989. His Information Interfaces Research Group develops ways to help people and organizations explore, analyze, and make sense of data to solve problems. Stasko is a widely published and internationally recognized researcher in the areas of information visualization and visual analytics, approaching each from a human-computer interaction perspective. He has received Best Paper or Most Influential/Test of Time Paper awards from the IEEE InfoVis and VAST, ACM CHI, INTERACT, and ICSE conferences. Stasko has been Papers/Program Co-Chair for the IEEE Information Visualization (InfoVis) and IEEE Visual Analytics Science and Technology (VAST) conferences and has served on multiple journal editorial boards. He received the IEEE Visualization and Graphics Technical Committee (VGTC) Visualization Technical Achievement Award in 2012, and was named an ACM Distinguished Scientist in 2011, an IEEE Fellow in 2014, and a member of the ACM CHI Academy in 2016. In 2013 he also became an Honorary Professor in the School of Computer Science at the University of St Andrews in Scotland.

November 19

Augmenting Visualization Tools with Automated Design & Recommendation
2:00–3:00 pm, 32-G882

Abstract: Visualization is a critical tool for data science. Analysts use plots to explore and understand distributions and relationships in their data. Machine learning developers also use diagrams to understand and communicate complex model structures. Yet visualization authoring requires substantial manual effort and many non-trivial decisions, demanding expertise, discipline, and time from authors who want to visualize and analyze data effectively.

My research in human-computer interaction focuses on the design of tools that augment visualization authoring with automated design and recommendation. By automating repetitive parts of authoring while preserving user control to guide the automation, people can leverage their domain knowledge and creativity to achieve their goals with less effort and fewer errors. In my PhD dissertation, I developed new formal languages and systems for chart specification and recommendation, including the Vega-Lite visualization grammar and the CompassQL query language. On top of these languages, I developed and studied graphical interfaces that enable new forms of recommendation-powered visual data exploration, including the Voyager visualization browser and Voyager 2, which blends manual and automated chart authoring in a single tool. To help developers inspect deep learning architectures, I also built a tool that combines automatic layout techniques with user interaction to visualize dataflow graphs of TensorFlow models as part of TensorBoard, TensorFlow's official dashboard tool. These projects have won awards at premier academic venues and are used by the Jupyter/Python data science communities and leading tech companies including Apple, Airbnb, Google, Microsoft, Netflix, Twitter, and Uber.

Bio: Kanit (Ham) Wongsuphasawat is a research scientist at Apple, where he works on visualization and interactive systems for data science and machine learning. Kanit has a PhD in Computer Science from the University of Washington (UW), where he worked with Jeffrey Heer and the Interactive Data Lab on visualization tools. Kanit also previously worked at a number of leading data-driven technology companies including Google, Tableau Software, Thomson Reuters, and Trifacta.
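To give a flavor of the declarative style a visualization grammar like Vega-Lite enables, here is a minimal bar-chart specification expressed as a plain Python dict: the chart is described by a mark type plus mappings from data fields to visual channels, rather than by drawing commands. The sample data and field names are invented for illustration.

```python
import json

# A minimal Vega-Lite-style specification: one mark, two encodings.
# The data values ("month", "sales") are made-up sample records.
spec = {
    "$schema": "https://vega.github.io/schema/vega-lite/v5.json",
    "data": {"values": [
        {"month": "Jan", "sales": 28},
        {"month": "Feb", "sales": 43},
        {"month": "Mar", "sales": 31},
    ]},
    "mark": "bar",
    "encoding": {
        # each encoding maps a data field to a visual channel
        "x": {"field": "month", "type": "nominal"},
        "y": {"field": "sales", "type": "quantitative"},
    },
}

print(json.dumps(spec, indent=2))  # ready to hand to a Vega-Lite renderer
```

Because the specification is just data, a recommender such as Voyager can enumerate and rank candidate charts by programmatically varying the `mark` and `encoding` fields.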

November 13

Avoiding Dystopia: User Privacy Preferences and Empowerment Online
1:00–2:00 pm, 32-G449 (Kiva)

Abstract: Online sharing, combined with opaque mass surveillance and powerful analytic tools, has led us to a place where data is collected and transformed into incredibly personal insights, often without users' knowledge or consent. This impacts the information users see and the way they interact, and it can be used in deeply manipulative ways. This talk will look at users' feelings about these practices and how they tie back to classic sociological understandings of trust, power, and privacy. I discuss possible ways forward to avoid an impending dystopia, especially in light of GDPR on one side and Chinese Social Credit on the other.

Bio: Jen Golbeck is a Professor in the College of Information Studies at the University of Maryland, College Park. Her research focuses on artificial intelligence and social media, privacy, and trust on the web. Her dogs are also famous on the internet, and she runs their social media empire at theGoldenRatio4 on all platforms. She received an AB in Economics and an SB and SM in Computer Science at the University of Chicago, and a Ph.D. in Computer Science from the University of Maryland, College Park.

November 06

Explicit Direct Instruction in Programming Education
1:00–2:00 pm, 32-G449 (Kiva Room)

Abstract: In education, there is and has always been debate about how to teach. One of these debates centers on the role of the teacher: should their role be minimal, allowing students to find and classify knowledge independently, or should the teacher be in charge of what happens in the classroom, explaining to students all they need to know? These forms of teaching have many names, but the most common are, respectively, exploratory learning and direct instruction. While the debate is not settled, researchers are presenting more and more evidence that explicit direct instruction is more effective than exploratory learning in teaching language, mathematics, and science. These findings raise the question of whether the same might be true for programming education. This is of particular interest since programming education is deeply rooted in the constructionist philosophy, leading many programmers to follow exploratory learning methods, often without being aware of it. This talk outlines the history of programming education and the additional beliefs in programming that lead to the prevalence of exploratory forms of teaching. We subsequently explain the didactic principles of direct instruction, explore them in the context of programming, and hypothesize what direct instruction might look like for programming.

Bio: I am an assistant professor at Delft University of Technology, where I research end-user programming: programming for everyone who does not think of themselves as a programmer. In my PhD dissertation I worked on applying methods from software engineering to spreadsheets. During my PhD I founded a company called Infotron, which sells a tool called PerfectXL, based on techniques I developed to spot errors in spreadsheets. My research and my company have received some media coverage over the last few years. One of my biggest passions in life is to share my enthusiasm for programming and tech with others. I teach a group of kids LEGO Mindstorms programming every Saturday in a local community center. Furthermore, I am one of the founders of the Joy of Coding conference, a one-day developer conference in Rotterdam, and one of the hosts of the Software Engineering Radio podcast, one of the biggest software podcasts on the web. When I am not coding, blogging, or teaching, I am probably dancing Lindy Hop with my beau Rico, out running, watching a movie, or playing a (board) game.

October 30

Human-Centered Autonomous Vehicles
1:00–2:00 pm, 32-G882

Abstract: I will present a human-centered paradigm for building autonomous vehicle systems, contrasting it with how the problem is currently formulated and approached in academia and industry. The talk will include discussion and video demonstration of new work on driver state sensing, voice-based transfer of control, annotation of large-scale naturalistic driving data, and the challenges of building and testing a human-centered autonomous vehicle at MIT.

Bio: Lex Fridman is a research scientist at MIT, working on deep learning approaches to perception, control, and planning in the context of semi-autonomous vehicles and, more generally, human-centered artificial intelligence systems. His work focuses on learning-based methods that leverage large-scale, real-world data. Lex received his BS, MS, and PhD from Drexel University, where he worked on applications of machine learning, computer vision, and decision fusion techniques in a number of fields including robotics, active authentication, and activity recognition. Before joining MIT, Lex was at Google, leading deep learning efforts for large-scale behavior-based authentication. Lex is a recipient of a CHI 2017 Best Paper Award and a CHI 2018 Best Paper Honorable Mention Award.

October 16

Seeing What We (Should) Think Through Visualization Interaction
1:00–2:00 pm, Kiva Room (32-G449)

Abstract: Charts, graphs, and other information visualizations amplify cognition by enabling users to visually perceive trends and differences in quantitative data. While guidelines dictate how to choose visual encodings and metaphors to support accurate perception, it is less obvious how to design visualizations that encourage rational decisions from a statistical perspective. I'll motivate two challenges that must be overcome to support effective reasoning with visualizations. First, people's intuitions about uncertainty often conflict with statistical definitions. I'll describe research in my lab that shows how visualization techniques for conveying uncertainty through discrete samples can improve non-experts' ability to understand and make decisions from distributional information. Second, people often bring prior beliefs and expectations about data-driven phenomena to their interactions with data (e.g., "I thought unemployment was down this year") which influence their interpretations. Most design and evaluation techniques do not account for these influences. I'll describe what we've learned by developing and studying visualization interfaces that encourage reflecting on data in light of one's own or others' prior knowledge. I'll conclude by reflecting on how better representations of uncertainty and prior knowledge can contribute to a Bayesian model of visualization interpretation.

Bio: Jessica Hullman is an Assistant Professor in Computer Science and Journalism at Northwestern. The goal of her research is to develop computational tools that improve how people reason with data. She is particularly inspired by how science and data are presented to non-expert audiences in data and science journalism, where a shift toward digital news provides opportunities for informing through interactivity and visualization. Her work has provided automated tools and empirical findings around the use of visualizations to support communication and reasoning. Her current research focuses on how understandable presentations of uncertainty and interactive visualizations that let users articulate and reason with prior beliefs can transform how lay people and analysts alike interact with data. Jessica has received numerous paper awards from top visualization and HCI venues and is the recipient of NSF CRII and CAREER awards, among other grants. Prior to joining Northwestern in 2018, she spent three years as an Assistant Professor at the University of Washington Information School. She completed her Ph.D. at the University of Michigan and spent a year as the inaugural Tableau Software Postdoctoral Scholar in Computer Science at the University of California, Berkeley in 2014 before joining the University of Washington in 2015.
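One way to convey uncertainty through discrete samples, in the spirit of the quantile-based displays this line of research studies, is to replace a continuous distribution with a small set of equally likely values that a chart can then draw as individual dots. A minimal standard-library sketch, with illustrative parameters:

```python
import statistics

def quantile_samples(mu, sigma, k=20):
    """Return k equally likely values summarizing N(mu, sigma):
    the midpoints of k equal-probability bins of the distribution."""
    dist = statistics.NormalDist(mu, sigma)
    return [dist.inv_cdf((i + 0.5) / k) for i in range(k)]

# e.g. a forecast of "10 minutes, give or take 2" as 20 discrete outcomes;
# each dot then represents a 1-in-20 chance.
samples = quantile_samples(mu=10.0, sigma=2.0, k=20)
```

Counting dots ("17 of 20 outcomes are under 13 minutes") lets non-experts reason about probabilities as frequencies instead of interpreting a continuous density curve.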

October 03

September 18

HCI Seminar: Interpretability and Safety
1:00–2:00 pm

Abstract: How can we understand the inner workings of neural networks? In computer vision, neural networks greatly exceed anything humans can design directly by building up their own hierarchy of internal visual concepts. So, what are they detecting? How do they implement these detectors? How do the detectors fit together to create the behavior of the network as a whole? At a more practical level, can we use these techniques to audit neural networks? Or find cases where the right decision is made for bad reasons? To allow human feedback on the decision process, rather than just the final decision? Or to improve our ability to design models?

Bio: Chris Olah is best known for DeepDream, the Distill journal, and his blog. He spent five years at Google Brain, where he focused on neural network interpretability and safety. He has also worked on various other projects, including early TensorFlow, generative models, and NLP. Prior to Google Brain, Chris dropped out of university and did deep learning research independently as a Thiel Fellow. Chris will be joining OpenAI in October to start a new interpretability team there.
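A common way to ask "what is a unit detecting?" is activation maximization: gradient ascent on the input to find a pattern that most excites the unit, the idea behind DeepDream-style feature visualization. The toy sketch below applies it to a single hand-built linear unit, where the answer is known in closed form (the input converges to the unit's own weight vector, rescaled); the weights, step size, and norm constraint are invented for illustration, and real feature visualization works on deep networks with extra regularizers.

```python
import math

w = [0.5, -1.0, 2.0]   # weights of one linear unit (illustrative values)
x = [0.0, 0.0, 0.0]    # the input we optimize

def activation(x):
    """Response of the unit to input x: a simple dot product."""
    return sum(wi * xi for wi, xi in zip(w, x))

# For a linear unit, d(activation)/dx is just w, so gradient ascent
# repeatedly steps x in the direction of w. A unit-norm constraint
# keeps the input from growing without bound.
for _ in range(100):
    x = [xi + 0.1 * wi for wi, xi in zip(w, x)]     # ascent step
    norm = math.sqrt(sum(xi * xi for xi in x))
    if norm > 1.0:
        x = [xi / norm for xi in x]                  # project back

# x ends up proportional to w: the unit "detects" its weight pattern.
```

In a deep network the gradient is no longer constant, so the same loop (with backpropagation supplying the gradient) can surface the textures and shapes a hidden unit responds to.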

September 14