May 07

Accelerating Human Intelligence

Aniket (Niki) Kittur
Carnegie Mellon University - Human-Computer Interaction Institute
11:00 AM - 12:00 PM, 32-G449 (Kiva)

ABSTRACT: A fundamental problem in the world is that the explosion of information is making it take longer and longer to learn any given domain. This leads to serious challenges for learning and decision making, whether deciding which programming API to use, what to do after a cancer diagnosis, or where to go in an unfamiliar city. Furthermore, creative breakthroughs in science and technology often come from finding analogies between multiple domains, exponentially compounding the problem. In this talk I discuss our efforts over the past 10 years to address this problem by building a universal knowledge accelerator: a platform in which the sensemaking people engage in online is captured and made useful for others, leading to virtuous cycles of constantly improving information sources that in turn help people more effectively synthesize and innovate. I will demonstrate how tapping into the deep cognitive processing of the human mind can lead to fundamental advances in AI and help other users more deeply understand their data. I conclude by posing a grand challenge: capturing the deep cognitive processing involved in complex web search (1/10th of all labor hours) and developing new AI systems that can help scaffold future users' knowledge and creativity.

BIO: Aniket (Niki) Kittur is an Associate Professor and holds the Cooper-Siegel Chair in the Human-Computer Interaction Institute at Carnegie Mellon University. His research looks at how we can augment the human intellect using crowds and computation. He has authored and co-authored more than 80 peer-reviewed papers, 14 of which have received best paper awards or honorable mentions. Dr. Kittur is a Kavli fellow; has received an NSF CAREER award, the Allen Newell Award for Research Excellence, and major research grants from NSF, NIH, Google, and Microsoft; and his work has been reported in venues including Nature News, The Economist, The Wall Street Journal, NPR, Slashdot, and the Chronicle of Higher Education. He received a BA in Psychology and Computer Science from Princeton and a PhD in Cognitive Psychology from UCLA.

April 10

Governing Human and Machine Behavior in an Experimenting Society

J. Nathan Matias
Princeton University
1:00 PM - 2:00 PM, 32-G449 (Kiva)

ABSTRACT: Today's social technologies observe and intervene in the lives of billions of people, exercising tremendous power in society. Experimentation infrastructures, which manage tens of thousands of behavioral studies a year, offer one avenue for guiding the use and accountability of platform power. In this talk, I will describe CivilServant, a citizen behavioral science infrastructure that supports the public in testing ideas for a fairer, safer, more understanding internet, independently of the tech industry. Communities with tens of millions of people have used CivilServant to test ideas for responding to human and algorithmic misinformation, preventing harassment, managing politically partisan conflict, and monitoring the effects of AI law enforcement systems on civil liberties. As social technologies grow in power and reach, many have come to expect them to address enduring social problems, including misinformation, conflict, and public health. Consequently, engineers, designers, and data scientists are becoming policymakers whose systems govern the behavior of humans and machines at scale. Re-engineering these systems for public-interest purposes in democratic societies requires substantial changes in the design, accountability, and statistical capabilities of software for mass experimentation.

BIO: Dr. J. Nathan Matias is a computer scientist and social scientist who organizes citizen behavioral science for a fairer, safer, and more understanding internet. He advances this work in collaboration with tens of millions of people through his nonprofit CivilServant and as a postdoc at Princeton University in the Department of Psychology, the Center for Information Technology Policy, and the Department of Sociology. Before Princeton, Nathan completed a PhD at the MIT Media Lab's Center for Civic Media, served as a fellow at Harvard's Berkman Klein Center, worked in tech startups that have reached over a billion devices, and helped start a series of education and journalistic charities. His journalism has appeared in The Atlantic, PBS, the Guardian, and other international media.

February 20

Understanding and Supporting Communication Across Language Boundaries

Professor Susan R. Fussell
Cornell University - Departments of Communication and Information Science
1:00 PM - 2:00 PM, 32-G449 (Kiva)

ABSTRACT: Computer-mediated communication (CMC) tools and social media potentially allow people to interact fluidly across national, cultural, and linguistic boundaries in ways that would have been difficult, if not impossible, in the past. To date, however, much of this potential fails to be realized. A single individual is unlikely to be fluent in a wide array of languages. The use of a lingua franca such as English permits a degree of interaction with speakers of other native languages, but it can have negative effects on non-native speakers. Advances in machine translation (MT) and other technologies could allow people to communicate with one another in their native language, but translation errors can create sizeable misunderstandings when MT is used in conversational settings. In a series of studies, my students and I have been exploring the problem space of inter-lingual communication, with the goals of better understanding the challenges of interaction across language boundaries and of informing the design of new tools to support this interaction. I will first describe two interview studies exploring how the need to use a non-native language affects communication and coordination in both formal and informal settings. I will then describe several tools we have developed to make MT more usable in everyday conversation and present the results of lab studies evaluating these tools. Taken together, these studies help advance the area of inter-lingual computer-mediated communication.

BIO: Susan R. Fussell is a Liberty Hyde Bailey Professor in the Department of Communication and the Department of Information Science at Cornell University. She received her BS in psychology and sociology from Tufts University, and her PhD in social and cognitive psychology from Columbia University. Dr. Fussell's primary interests lie in the areas of computer-supported cooperative work and computer-mediated communication. Her current projects focus on intercultural and multilingual communication, telepresence robotics, collaborative intelligence analysis, public deliberation, and tools to motivate people to reduce their energy usage.

December 05

November 28

Achieving Real Virtuality: Closing the Gap Between the Digital and the Physical

Daniel Wigdor
University of Toronto
2:00 PM - 3:00 PM, 32-G449 (Kiva/Patil)

ABSTRACT: As digital interaction spreads to an increasing number of devices, direct physical manipulation has become the dominant metaphor in HCI. The promise made by this approach is that digital content will look, feel, and respond like content from the real world. Current commercial systems fail to keep that promise, leaving a broad gulf between what users are led to expect and what they see and feel. In this talk, Daniel will discuss two areas where his lab has been making strides to address this gap. First, in the area of passive haptics, he will describe technologies intended to enable users to feel virtual content without having to wear gloves or hold "poking" devices. Second, in the area of systems performance, he will describe his team's work in achieving nearly zero-latency responses to touch and stylus input.

BIO: Daniel Wigdor is an associate professor of computer science and co-director of the Dynamic Graphics Project at the University of Toronto. His research is in the area of human-computer interaction, with major areas of focus in the architecture of highly performant UIs, development methods for ubiquitous computing, and post-WIMP interaction methods. Before joining the faculty at U of T in 2011, Daniel was a researcher at Microsoft Research, the user experience architect of the Microsoft Surface Table, and a company-wide expert in user interfaces for new technologies. Simultaneously, he served as an affiliate assistant professor in both the Department of Computer Science & Engineering and the Information School at the University of Washington. Prior to 2008, he was a fellow at the Initiative in Innovative Computing at Harvard University, and conducted research as part of the DiamondSpace project at Mitsubishi Electric Research Labs. He is co-founder of Iota Wireless, a startup dedicated to the commercialization of his research in mobile-phone gestural interaction, and of Tactual Labs, a startup dedicated to the commercialization of his research in high-performance, low-latency user input. For his research, he has been awarded an Ontario Early Researcher Award (2014) and the Alfred P. Sloan Foundation's Research Fellowship (2015), as well as best paper awards or honorable mentions at CHI 2016, CHI 2015, CHI 2014, Graphics Interface 2013, CHI 2011, and UIST 2004. Three of his projects were selected as the People's Choice Best Talks at CHI 2014 and CHI 2015. Daniel is the co-author of "Brave NUI World: Designing Natural User Interfaces for Touch and Gesture", the first practical book for the design of touch and gesture interfaces. He has also published dozens of other works as invited book chapters and papers in leading international publications and conferences, and is an author of over three dozen patents and pending patent applications. Daniel is sought after as an expert witness, and has testified before courts in the United Kingdom and the United States. Further information, including publications and videos demonstrating some of his research, can be found on Professor Wigdor's website.

November 07

Computational Ecosystems: Tech-enabled Communities to Advance Human Values at Scale

Haoqi Zhang
Northwestern University
2:00 PM - 3:00 PM, 32-G449 (Kiva)

ABSTRACT: Despite the continued development of individual technologies and processes for supporting human endeavors, major leaps in solving complex human problems will require advances in system-level thinking and orchestration. In this talk, I describe efforts to design, build, and study computational ecosystems that interweave community process, social structures, and intelligent systems to unite people and machines to solve complex problems and advance human values at scale. Computational ecosystems integrate various components to support ecosystem function; the interplay among components synergistically advances desired values and problem-solving goals in ways that isolated technologies and processes cannot. Taking a systems approach to design, computational ecosystems emphasize (1) computational thinking, to decompose and distribute problem solving to the diverse people or machines most able to address it; and (2) ecological thinking, to create sustainable processes and interactions that jointly support the goals of ecosystem members and proper ecosystem function. I present examples of computational ecosystems designed to advance community-based planning and research training, which respectively engage thousands of people in planning an event and empower a single faculty member to provide authentic research training to more than 20 students. These solutions demonstrate how to combine wedges of human and machine competencies into integrative, technology-supported, community-based solutions. I will preview what's ahead for computational ecosystems, and close with a few thoughts on the role of computing technologies in advancing human values at scale.

BIO: Haoqi Zhang is the Allen K. and Johnnie Cordell Breed Junior Chair of Design and assistant professor in Computer Science at Northwestern University. His work advances the design of integrated socio-technical models that solve complex problems and advance human values at scale. His research bridges the fields of Human-Computer Interaction, Artificial Intelligence, Social & Crowd Computing, Learning Science, and Decision Science, and is generously supported by National Science Foundation grants in Cyber-Human Systems, Cyberlearning, and the Research Initiation Initiative. Haoqi received his PhD in Computer Science and BA in Computer Science and Economics from Harvard University. At Northwestern he founded and directs the Design, Technology, and Research (DTR) program, which provides an original model of research training for 50 graduate and undergraduate students. With Matt Easterday, Liz Gerber, and Nell O'Rourke, Haoqi co-directs the Delta Lab, an interdisciplinary research lab and design studio spanning computer science, learning science, and design.

November 01

Work in Progress Demo from Pen- and Touch Computing: Vizdom and Dash

Andries van Dam, Emanuel Zgraggen, and Robert Zeleznik
Brown University
10:00 AM - 11:00 AM, 32-G449 (Kiva)

ABSTRACT: In this talk we will present two current research projects from our Pen- and Touch Computing lab at Brown University. First, we will demonstrate Vizdom (and its processing backend, IDEA), which is being developed in collaboration with Professor Tim Kraska's database management group and is sponsored by NSF and DARPA awards, as well as by gifts from Microsoft Research and Adobe. Vizdom is a pen- and touch-based interactive data exploration application with three salient features: 1) an emphasis on progressive computation that we argue (and to some degree have tested in usability studies) greatly improves the user experience on larger datasets; 2) a tight integration of visualizations, machine learning, and statistics, all within the same tool and through an accessible interaction paradigm, with the goal of empowering "data enthusiasts" - people who are not mathematicians or programmers, and only know a bit of statistics; and 3) the embedding of visual data exploration in a statistical framework to prevent common problems and statistical pitfalls (e.g., the multiple comparisons problem). In the second part of our talk, we demonstrate Dash, an early-stage prototype of an integrated environment for document-based knowledge work, enhanced with pen and touch interactions; this work is sponsored by Microsoft Research and Adobe. With Dash we aim to streamline common knowledge-worker tasks by allowing users to create, collect, and relate heterogeneous documents in both structured and free-form workspaces. In contrast to most applications, which have special-purpose databases that aren't exposed as databases, Dash not only allows application-specific views but also exposes database views of its document and metadata information. This allows computational operators and data visualizations to be applied to any feature of the repository. Thus, with Dash, users create custom "dashboards" on their data as a byproduct of their natural workflow, since Dash treats all searches, visualizations, and layouts as first-class interactive documents on par with all other documents.

BIO: Andries van Dam is the Thomas J. Watson Jr. University Professor of Technology and Education and Professor of Computer Science at Brown University. He has been a member of Brown's faculty since 1965, was a co-founder of Brown's Computer Science Department and its first Chairman from 1979 to 1985, and was also Brown's first Vice President for Research from 2002 to 2006. His research includes work on computer graphics, hypermedia systems, post-WIMP and natural user interfaces (NUI), including pen and touch computing, and educational software. He has been working for over four decades on systems for creating and reading electronic books with interactive illustrations for use in teaching and research. In 1967 Prof. van Dam co-founded ACM SICGRAPH (the precursor of SIGGRAPH) and from 1985 through 1987 was Chairman of the Computing Research Association. He is a Fellow of ACM, IEEE, and AAAS, a member of the National Academy of Engineering, and a member of the American Academy of Arts & Sciences. He has received the ACM Karl V. Karlstrom Outstanding Educator Award, the SIGGRAPH Steven A. Coons Award for Outstanding Creative Contributions to Computer Graphics, and the IEEE Centennial Medal, and holds four honorary doctorates, from Darmstadt Technical University in Germany, Swarthmore College, the University of Waterloo in Canada, and ETH Zurich. He has authored or co-authored over 100 papers and nine books, including "Fundamentals of Interactive Computer Graphics" and three editions of "Computer Graphics: Principles and Practice".

Emanuel Zgraggen received his Fachhochschuldiplom in Informatik (diploma in computer science) from HSR Hochschule für Technik Rapperswil in Switzerland and his MS in computer science from Brown University. He is currently a PhD candidate at Brown University working in the graphics group, advised by Professor Andy van Dam and Professor Tim Kraska. His main research areas are Human-Computer Interaction, Information Visualization, and Data Science.

Robert Zeleznik is Director of User Interface Research for Brown University's Computer Graphics Group. He has worked broadly in the area of post-WIMP and pen-based human-computer interaction, with over two decades of experience developing both 2D and 3D gestural user interfaces and interaction techniques. In addition, he has worked extensively in the application domains of 2D drawing and 3D modeling, scientific and information visualization, and hypermedia.

October 17

Interactive Systems based on Electrical Muscle Stimulation

Pedro Lopes
Hasso Plattner Institute - Human Computer Interaction Lab
2:00 PM - 3:00 PM, 32-G449 (Kiva)

ABSTRACT: Today's interfaces get closer and closer to our body and are now literally attached to it, e.g., wearable devices and virtual reality headsets. These provide a very direct and immersive interaction with virtual worlds. But what if, instead, these interfaces were a "part of our body"? In this talk I introduce the idea of an interactive system based on electrical muscle stimulation (EMS). EMS is a technique from medical rehabilitation in which a signal generator and electrodes attached to the user's skin send electrical impulses that involuntarily contract the user's muscles. While EMS devices have been used to regenerate lost motor functions in rehabilitation medicine since the '60s, it has only been a few years since researchers started to explore EMS as a means for creating interactive systems. These more recent projects, including six of our own, explore EMS as a means for teaching users new motor skills, increasing immersion in virtual experiences by simulating impact and walls in VR/AR, communicating with remote users, and allowing users to read and write information using eyes-free wearable devices.

BIO: Pedro is a researcher at Prof. Baudisch's Human Computer Interaction Lab at the Hasso Plattner Institute, Germany. Pedro's work is published at ACM CHI/UIST and has been demonstrated at venues such as ACM SIGGRAPH and IEEE Haptics. Pedro received the ACM CHI Best Paper award for his work on Affordance++, has received several nominations, and exhibited at Ars Electronica 2017. His work has also captured the interest of media such as MIT Technology Review, NBC, Discovery Channel, New Scientist, and Wired.

Video demos:
VR Walls: https://www.youtube.com/watch?v=OcSmCamMKfs
Muscle Plotter: https://www.youtube.com/watch?v=On738nXm5AM
Affordance++: https://www.youtube.com/watch?v=Gz4dphzBb6I

September 26

Implicit User Interfaces

Robert Jacob
Tufts University
2:00 PM - 3:00 PM, 32-G449 (Kiva)

ABSTRACT: Implicit user interfaces passively obtain information from their users, typically in addition to mouse, keyboard, or other explicit inputs. They fit into the emerging trends of physiological computing and affective computing. Our work focuses on using brain input for this purpose, measured through functional near-infrared spectroscopy (fNIRS), as a way of increasing the narrow communication bandwidth between human and computer. Most previous brain-computer interfaces have been designed for people with severe motor disabilities, and use explicit signals as the primary input; but these are too slow and inaccurate for wider use. Instead, we use brain measurement to obtain more information about the user and their context directly, without asking additional effort from them. We have obtained good results in a number of systems we created, as measured by objective task performance metrics. I will discuss our work on brain-computer interfaces and the more general area of implicit interaction. I will also discuss our concept of Reality-Based Interaction (RBI) as a unifying framework that ties together a large subset of emerging new, non-WIMP user interfaces. It attempts to connect current paths of research in HCI, and to provide a framework that can be used to understand, compare, and relate these new developments. Viewing them through the lens of RBI can provide insights for designers, and allow us to find gaps or opportunities for future development. I will briefly discuss some past work in my research group on a variety of next-generation interfaces, such as tangible interfaces and implicit eye-movement-based interaction techniques.

BIO: Robert Jacob is a Professor of Computer Science at Tufts University, where his research interests are new interaction modes and techniques and user interface software; his current work focuses on implicit brain-computer interfaces. He has been a Visiting Professor at the University College London Interaction Centre, Université Paris-Sud, and the MIT Media Laboratory. Before coming to Tufts, he was in the Human-Computer Interaction Lab at the Naval Research Laboratory. He received his PhD from Johns Hopkins University, and he is a member of the editorial board for the journal Human-Computer Interaction and a founding member of the editorial board of ACM Transactions on Computer-Human Interaction. He has served as Vice-President of ACM SIGCHI, Papers Co-Chair of the CHI and UIST conferences, and General Co-Chair of UIST and TEI. He was elected to the ACM CHI Academy in 2007 and as an ACM Fellow in 2016.

September 19

May 23

Enhancing the Expressivity of Augmentative Communication Technologies for People with ALS

Meredith Ringel Morris
Microsoft Research
1:00 PM - 2:00 PM, Seminar Room G449 (Patil/Kiva)

ABSTRACT: ALS (amyotrophic lateral sclerosis) is a degenerative neuromuscular disease; people with late-stage ALS typically retain cognitive function but lose the motor ability to speak, relying on gaze-controlled AAC (augmentative and alternative communication) devices for interpersonal interactions. State-of-the-art AAC technologies used by people with ALS do not facilitate natural communication; gaze-based AAC communication is extremely slow, and the resulting synthesized speech is flat and robotic. This lecture presents a series of novel technology prototypes from the Microsoft Research Enable team that aim to address the challenges of improving the expressivity of AAC for people with ALS.

BIO: Meredith Ringel Morris is a Principal Researcher at Microsoft Research, where she is affiliated with the Ability, Enable, and neXus research teams. She is also an affiliate faculty member at the University of Washington, in both the Department of Computer Science and Engineering and the School of Information. Dr. Morris earned a PhD in computer science from Stanford University in 2006, and did her undergraduate work in computer science at Brown University. Her primary research area is human-computer interaction, specifically computer-supported cooperative work and social computing. Her current research focuses on the intersection of CSCW and accessibility ("social accessibility"), creating technologies that help people with disabilities connect with others in social and professional contexts. Past research contributions include foundational work in facilitating cooperative interactions in the domain of surface computing, and in supporting collaborative information retrieval via collaborative web search and friendsourcing.

May 16

Technology for Social Impact

Nicola Dell
Cornell Tech
1:00 PM - 2:00 PM, Seminar Room G449 (Patil/Kiva)

ABSTRACT: The goal of my research is to design, build, deploy, and evaluate novel computing systems that improve the lives of underserved populations in low-income regions. As computing technologies become affordable and accessible to diverse populations across the globe, it is critical that we expand the focus of HCI research to study the social, technical, and infrastructural challenges faced by these diverse communities and build systems that address problems in critical domains such as health care and education. In this talk, I describe my general approach to building technologies for underserved communities: identifying opportunities for technology, conducting formative research to fully understand the space, developing novel technologies, iteratively testing and deploying them, evaluating with target populations, and handing off to global development organizations for long-term sustainability.

BIO: Nicki Dell is an Assistant Professor in Information Science at Cornell Tech. Her research spans Human-Computer Interaction (HCI) and Information and Communication Technologies for Development (ICTD), with a focus on designing, building, and evaluating novel computing systems that improve the lives of underserved populations in low-income regions. Nicki's research and outreach activities have been recognized through numerous paper awards and fellowships. Nicki was born and raised in Zimbabwe and received a BSc in Computer Science from the University of East Anglia (UK) in 2004, and an MS and PhD in Computer Science and Engineering from the University of Washington in 2011 and 2015, respectively.

May 02

Personalized Behavior-Powered Systems

Jeff Huang
Brown University
1:00 PM - 2:00 PM, Seminar Room G449 (Patil/Kiva)

ABSTRACT: I will present work that leverages user behavioral data to build personalized applications, which I call "behavior-powered systems". Two applications use online user interactions: 1) WebGazer uses interaction data on any website to continuously calibrate a webcam-based eye tracker, so that users can manipulate any web page solely by looking; 2) Drafty tracks interactions with a detailed table of computer science professors, inferring readers' interests in order to ask the crowd of readers to help keep the structured data up to date. Two further applications use mobile sensing data: 3) SleepCoacher uses smartphone sensors to capture noise and movement data while people sleep, automatically generating recommendations for how to sleep better through a continuous cycle of mini-experiments; 4) Rewind uses passive location tracking on smartphones to recreate a person's past memory through a fusion of geolocation, street-side imagery, and weather data. Together, these systems show how subtle footprints of user behavior, collected remotely, can reimagine the way we gaze at websites, improve our sleep, experience the past, and maintain changing data.

BIO: Jeff Huang is an Assistant Professor in Computer Science at Brown University. His research in human-computer interaction focuses on behavior-powered systems, spanning the domains of mobile devices, personal informatics, and web search. Jeff's PhD is in Information Science from the University of Washington in Seattle, and his master's and undergraduate degrees are in Computer Science from the University of Illinois at Urbana-Champaign. Before joining Brown, he analyzed search behavior at Microsoft Research, Google, Yahoo, and Bing, and co-founded World Blender, a Techstars-backed company that made geolocation mobile games. Jeff has been a Facebook Fellow and has received a Google Research Award and an NSF CAREER Award.

April 21

Human-Computer Partnerships

Wendy Mackay
Inria / Université Paris-Saclay
1:00 PM - 2:00 PM, Seminar Room G449 (Patil/Kiva)

ABSTRACT: Incredible advances in hardware have not been matched by equivalent advances in software; we remain mired in the graphical user interface of the 1970s. I argue that we need a paradigm shift in how we design, implement, and use interactive systems. Classical artificial intelligence treats the human user as a cog in the computer's process, the so-called "human-in-the-loop"; classical human-computer interaction focuses on creating and controlling the "user experience". We seek a third approach: a true human-computer partnership, which takes advantage of machine learning but leaves the user in control. I describe a series of projects that illustrate our approach to making interactive systems discoverable, appropriable, and expressive, using the principles of instrumental interaction and reciprocal co-adaptation. The goal is to create robust interactive systems that significantly augment human capabilities and are actually worth learning over time.

BIO: Wendy Mackay is a Research Director, Classe Exceptionnelle, at Inria, France, where she heads the ExSitu (Extreme Situated Interaction) research group in Human-Computer Interaction at the Université Paris-Saclay. After receiving her PhD from MIT, she managed research groups at Digital Equipment and Xerox EuroPARC, which were among the first to explore interactive video and tangible computing. She has been a visiting professor at the University of Aarhus and Stanford University and recently served as Vice President for Research at the University of Paris-Sud. Wendy is a member of the ACM CHI Academy, is a past chair of ACM SIGCHI, chaired CHI 2013, and received the ACM SIGCHI Lifetime Service Award. She also received a prestigious ERC Advanced Grant for her research on co-adaptive instruments. She has published over 150 peer-reviewed research articles in the area of human-computer interaction. Her current research interests include human-computer partnerships, co-adaptive instruments, creativity, mixed reality and interactive paper, and participatory design and research methods.

April 19

Interactive Design Tools for the Maker Movement

Bjoern Hartmann
University of California, Berkeley
2:00 PM - 3:00 PM, Seminar Room G449 (Patil/Kiva)

ABSTRACT: My group's research in Human-Computer Interaction focuses on design, prototyping, and implementation tools for the era of ubiquitous embedded computing and digital fabrication. We focus especially on supporting the growing ranks of amateur designers and engineers in the Maker Movement. Over the past decade, a resurgence of interest in how the artifacts in our world are designed, engineered, and fabricated has led to new approaches for teaching art and engineering, new methods for creating artifacts for personal use, and new models for launching hardware products. The Maker Movement is enabled by a confluence of new technologies like digital fabrication and a sharing ethos built around online tutorials and open-source design files. A crucial missing building block is appropriate design tools that enable Makers to translate their intent into appropriate machine instructions, whether code or 3D prints. Makers' expertise and work practices differ significantly from those of professional engineers, a reality that design tools have to reflect. I will present research that enables Makers and designers to rapidly prototype, fabricate, and program interactive products. Making headway in this area involves working in both hardware and software. Our group creates new physical fabrication hardware, such as augmented power tools and custom CNC machines; new design software to make existing digital fabrication tools more useful; software platforms for the type of connected IoT devices many Makers are creating; and debugging tools for working at the intersection of hardware and software. We also create expertise-sharing tools that lower the cost and increase the quality of the online tutorials and videos through which knowledge is disseminated in this community. Our work on these tools is motivated by the daily experience of teaching and building in the Jacobs Institute for Design Innovation, a 24,000-square-foot space for 21st-century design education that opened in 2015. I will give an overview of institute activities and projects, and how they inform our research agenda.

BIO: Bjoern Hartmann is an Associate Professor in EECS at UC Berkeley. He is the faculty director of the new Jacobs Institute for Design Innovation. He previously co-founded the CITRIS Invention Lab and also co-directs the Berkeley Institute of Design. His research has received numerous Best Paper Awards at top human-computer interaction conferences, a Sloan Fellowship, an Okawa Research Award, and an NSF CAREER Award. He received both the Diane S. McEntyre Award and the Jim and Donna Gray Faculty Award for Excellence in Teaching. He completed his PhD in Computer Science at Stanford University in 2009, and received degrees in Digital Media Design, Communication, and Computer and Information Science from the University of Pennsylvania in 2002. Before academia, he had a previous career as the owner of an independent record label and as a traveling DJ.

March 21

What Makes Robots Special? Lessons from Building Robots that Teach

Brian Scassellati
Yale University
1:00 PM - 2:00 PM, Seminar Room G449 (Patil/Kiva)

ABSTRACT: For the past 15 years, I have been building robots that teach social and cognitive skills to children. Typically, we construct these robots to be social partners, engaging individuals with social skills that encourage the person to respond to the robot as a social agent rather than as a mechanical device. Most of the time, interactions with artificial agents (both robots and virtual characters) follow the same rules as interactions with people. The first part of this talk will focus on how human-robot interactions are uniquely different from both human-agent interactions and human-human interactions. These differences, taken together, provide a case for why robots might be unique tools for learning. The second part of this talk will describe some of our ongoing work on building robots that teach. In particular, I will describe some of our efforts to use robots to enhance the therapy and diagnosis of autism spectrum disorder.

BIO: Brian Scassellati is a Professor of Computer Science, Cognitive Science, and Mechanical Engineering at Yale University and Director of the NSF Expedition on Socially Assistive Robotics. His research focuses on building embodied computational models of human social behavior, especially the developmental progression of early social skills. Dr. Scassellati received his PhD in Computer Science from the Massachusetts Institute of Technology in 2001. His dissertation work ("Foundations for a Theory of Mind for a Humanoid Robot") with Rodney Brooks used models drawn from developmental psychology to build a primitive system for allowing robots to understand people. His work at MIT focused mainly on two well-known humanoid robots named Cog and Kismet. Dr. Scassellati's research in social robotics and assistive robotics has been recognized within the robotics community, the cognitive science community, and the broader scientific community. He was named an Alfred P. Sloan Fellow in 2007 and received an NSF CAREER award in 2003. His work has been awarded five best-paper awards. He was the chairman of the IEEE Autonomous Mental Development Technical Committee from 2006 to 2007, the program chair of the IEEE International Conference on Development and Learning (ICDL) in both 2007 and 2008, and the program chair for the IEEE/ACM International Conference on Human-Robot Interaction (HRI) in 2009.

March 07

Measuring Sleep, Stress and Wellbeing with Wearable Sensors and Mobile Phones

Akane Sano
MIT Media Lab
1:00 PM - 2:00 PM, Seminar Room G449 (Patil/Kiva)

ABSTRACT: Sleep, stress, and mental health have become major health issues in modern society. Poor sleep habits and high stress, as well as reactions to stressors and sleep habits, can depend on many factors. Internal factors include personality types and physiological factors, and external factors include behavioral, environmental, and social factors. What if 24/7 rich data from mobile devices could identify which factors influence your bad sleep or stress problem and provide personalized early warnings to help you change behaviors, before you slide from a good health condition to a bad one such as depression? In my talk, I will present a series of studies and systems we have developed to investigate how to leverage multi-modal data from mobile and wearable devices to measure, understand, and improve mental wellbeing. First, I will talk about the methodology and tools we developed for the SNAPSHOT study, which seeks to measure Sleep, Networks, Affect, Performance, Stress, and Health using Objective Techniques. To learn about behaviors and traits that impact health and wellbeing, we have measured over 200,000 hours of multi-sensor and smartphone use data, as well as trait data such as personality, from about 300 college students exposed to sleep deprivation and high stress. Second, I will describe statistical analysis and machine learning models to characterize, model, and forecast mental wellbeing using the SNAPSHOT study data. I will discuss behavioral and physiological markers and models that may provide early detection of a changing mental health condition. Third, I will introduce recent projects that might help people reflect on and change their behaviors to improve their wellbeing. I will conclude my talk by presenting future directions in measuring, understanding, and improving mental wellbeing.

BIO: Akane Sano is a Research Scientist in the Affective Computing group at the MIT Media Lab. Her research focuses on mobile health and affective computing. She has been working on measuring and understanding stress, sleep, mood, and performance from long-term ambulatory human data, and on designing intervention systems that help people be aware of their behaviors and improve their health conditions. She completed her PhD at the MIT Media Lab in 2015. Before she came to MIT, she worked for Sony Corporation as a researcher and software engineer on wearable computing, human-computer interaction, and personal health care. Recent awards include the AAAI Spring Symposium Best Presentation Award and the MIT Global Fellowship.

February 21

Extreme Crowdsourcing: From Balloons to Ethics

Iyad Rahwan
MIT Media Lab
2:00 PM - 3:00 PM, Seminar Room G449 (Patil/Kiva)

ABSTRACT: This talk explores the physical and cognitive limits of crowds, by following a number of real-world experiments that utilized social media to mobilize the masses in tasks of unprecedented complexity. From finding people in remote cities to reconstructing shredded documents, the power of crowdsourcing is real, but so are exploitation, sabotage, and hidden biases that undermine the power of crowds.

BIO: Iyad Rahwan is the AT&T Career Development Professor and an Associate Professor of Media Arts & Sciences at the MIT Media Lab, where he leads the Scalable Cooperation group. He holds a PhD from the University of Melbourne, Australia, and is an affiliate faculty member at the MIT Institute for Data, Systems, and Society (IDSS).