December 05

HCI Seminar - Nadya Peek - Machine Agency: Toolkits for Creative Automation
4:00–5:00 pm (America/New_York)

Abstract: How can we harness the precision of machines for the creativity of individuals? Robotics gives us access to precision and repeatability, but there is a high threshold to automating. Domain experts such as ceramicists, plant biologists, woodworkers, and chemical engineers have extensive knowledge of intricate processes and workflows, but are not also experts in control systems or programming. Because of this, highly skilled individuals conduct thousands of hours of manual work to support their goals. In my research, I develop end-user systems that lower the threshold to automation without loss of complexity. In particular, I develop low-cost, modular, open-source toolkits that domain experts can build and customize while iterating on and refining their workflows. Cost forms only one barrier; the main problem is that robots for many workflows simply do not exist, because not all niche applications form a viable market segment. I show how using rapid prototyping equipment such as 3D printers is a viable strategy for domain experts to build application-specific machines and robots. In this talk, I will show example machines, workflows, and processes we are developing to support machine agency.

Bio: Nadya Peek develops unconventional digital fabrication tools, small-scale automation, networked controls, and advanced manufacturing systems. Spanning electronics, firmware, software, and mechanics, her research focuses on harnessing the precision of machines for the creativity of individuals. Nadya directs the Machine Agency at the University of Washington, where she is an assistant professor in Human-Centered Design and Engineering. Machines and systems Nadya has built have been shared widely, including at the White House Office of Science and Technology Policy, the World Economic Forum, TED, and many Maker Faires and outreach events. Her research has been supported by the National Science Foundation, the Alfred P. Sloan Foundation, and the Gordon and Betty Moore Foundation, and her teaching has been recognized with the University of Washington's Distinguished Teaching Award for Innovation with Technology. She received the MIT Technology Review 35 Under 35 award in 2020. Nadya is an active member of the global fab lab community, making digital fabrication more accessible with better CAD/CAM tools and developing open-source hardware machines and control systems. She is on the board of the Open Source Hardware Association, the editor-in-chief of the Journal of Open Hardware, half of the design studio James and the Giant Peek, plays drum machines and synths in the band Construction, and received her PhD at MIT in the Center for Bits and Atoms.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/97928948160
Location: 32-D463 (Star)

November 28

Alex Leavitt - Ten Provocations for Safety in the Metaverse
4:00–5:00 pm (America/New_York)

Abstract: What does it mean to be safe in an interactive digital environment? Trust & Safety has become a formalized tech-industry discipline for addressing this question, but most companies still struggle to define harm, design interventions, and determine the impact of so-called "safety" initiatives. Interactive digital environments pose significant additional challenges: in social VR, complications arise around investigating, operationalizing, and measuring real-time interactions among thousands of individuals, in addition to building content moderation frameworks to govern such environments and providing users support through safety-related interactive tools and resources. This talk proposes an agenda for research and product development aimed at addressing harm in interactive digital environments and improving the safety of people's experiences online.

Bio: Alex Leavitt, PhD (they/them) is a computational ethnographer and Principal Researcher in Trust & Safety at Roblox, where they lead social science research on content moderation, international harm measurement, child safety, and emerging interactive technologies. Previously, they worked at Meta on Trust & Safety research initiatives spanning misinformation and politics; polarization and social conflict; COVID-19 and the pandemic; and global humanitarian and activist support networks.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/99585052077
Location: 32-D463 (Star)

November 14

Tawanna Dillahunt - Amplifying Voices: Community-Engaged Surveillance Redesign via Speculative Design and Photovoice
4:00–5:00 pm (America/New_York)

Abstract: Moving toward equitable, inclusive, and sustainable futures requires new and evolved approaches to conducting human-computer interaction research. It requires that we, as academics, practitioners, and policymakers, embrace community-engaged approaches to our work, reaching out to the often-overlooked voices in society who possess profound insights into technology's ethics, potential, and future. I will present two such cases: a journey through speculative design and an illuminating exploration through photovoice, both undertaken in collaboration with two vibrant Detroit community organizations. Within these narratives lies the power to redefine the fabric of our technological infrastructure, especially the critical realm of safety systems. This presentation will unravel the theoretical and methodological implications of redesigning and rethinking surveillance infrastructure through the wisdom and voices of those who have long been marginalized.

Bio: Tawanna Dillahunt is a 2023-2024 MIT MLK Fellow in the Department of Urban Studies and Planning and an associate professor at the University of Michigan's School of Information, with a courtesy appointment in the Electrical Engineering and Computer Science Department. She leads the Social Innovations Group (SIG), an interdisciplinary group whose vision is to design, build, and enhance technologies that solve real-world problems affecting marginalized groups and individuals, primarily in the U.S. Her current projects aim to address unemployment, environmental sustainability, and technical literacy by fostering social and socio-technical capital within these communities. At MIT, she is working to explore and raise the visibility of alternative economic futures for Black and Brown Detroiters. Tawanna is a 2022-2023 William Bentinck-Smith Fellow at the Harvard Radcliffe Institute, an ACM Distinguished Member, and an inaugural Skip Ellis Early Career Award recipient.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/98097552312
Location: 32-G882 (Hewlett)

November 07

HCI Seminar - Michael Terry - Interactive Alignment and the Design of Interactive AI
4:00–5:00 pm (America/New_York)

Abstract: AI alignment considers the overall problem of ensuring an AI produces desired outcomes without undesirable side effects. These goals are not dissimilar to those of researchers developing novel interfaces to the latest AI (e.g., LLMs and generative AI systems). What points of intersection exist between these communities? In this talk, I'll map the concepts of AI alignment onto a basic interaction cycle to highlight three potential focus areas in the design of interactive AI: 1) specification alignment (ensuring users can efficiently and reliably communicate objectives to an AI and verify their correct interpretation), 2) process alignment (supporting users in verifying and optionally controlling the AI's execution process), and 3) evaluation support (ensuring the user can verify and understand the AI's output). I'll use a set of case studies to illustrate the descriptive and prescriptive value of these concepts, and draw implications for future research.

Bio: Michael Terry is a Research Scientist at Google, where he co-leads the People and AI Research (PAIR) group. Prior to Google, Michael was an Associate Professor at the University of Waterloo, where he co-founded the HCI Research Lab with Ed Lank. His current research focuses on designing new tools and interfaces to AI, which has led to external offerings such as Google's MakerSuite prompt programming environment.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/99976311141
Location: 32-G882 (Hewlett)

October 17

Lydia Chilton - Designing with AI: Enabling Human-AI Co-Creativity
4:00–5:00 pm (America/New_York)

Abstract: AI has recently demonstrated unprecedented new abilities to generate text, images, and code. Although there is well-deserved excitement around AI, there are also limitations: it is highly unreliable in its ability to separate fact from fiction, to understand the context of a problem, and to evaluate its own outputs. We present several projects that illustrate how AI can assist the design process: brainstorming, prototyping, and iterating. We show that humans and AI have complementary skills that can be combined to achieve outputs neither could achieve alone, and that human input is critical for conceptualization, framing, guidance, feedback, and editing. Applications include making news illustrations, explaining science on Twitter, and creating TikTok videos for news.

Bio: Lydia Chilton is an Assistant Professor in the Computer Science Department at Columbia University. Her research is in computational design: how computation and AI can help people with design, innovation, and creative problem-solving. Applications include creating media for journalism, developing technology for public libraries, improving risk communication during hurricanes, helping scientists explain their work, and improving mental health in marginalized communities. Dr. Chilton received her bachelor's degree in computer science from MIT in 2007, her master's in engineering from MIT in 2009, and her PhD from the University of Washington in 2016. She was a postdoc at Stanford before joining Columbia Engineering in 2017.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/91028540582
Location: 32-D463 (Star)

October 03

Narges Mahyar - Harnessing Data for Social Impact: Empowering Communities through Visualization and Social Computing
4:00–5:00 pm (America/New_York)

Abstract: Today's world faces several complex problems in areas such as climate change, transportation, infrastructure, education, and healthcare. Technology, if designed right, can play an essential role in informing people, raising awareness, sharing data, and connecting communities and decision-makers to take data-informed actions. In this talk, I present examples of my recent work building and studying community-centered tools that empower the general public to engage with real-world sociotechnical problems such as urban planning and climate change, and to bring their ideas and comments to shaping future policies. These examples demonstrate my multidisciplinary approach, combining information visualization, HCI, applied ML, and human-centered AI to design and build innovative tools and technologies that address complex sociotechnical problems. I then describe a vision for expanding my research to further advance democracy, equity, well-being, and sustainability by fostering the inclusion and empowerment of marginalized populations. I also briefly present my work on inclusive data visualization, which empowers the public to understand the data that is increasingly part of their lives and make better data-informed decisions. I close with a discussion of how my work can be applied to other sociotechnical problems, such as health informatics and the learning sciences.

Bio: Narges Mahyar is an Assistant Professor in the Manning College of Information and Computer Sciences at the University of Massachusetts Amherst. Her research falls at the intersection of human-computer interaction, information visualization, social computing, and design. She designs, develops, and evaluates novel social computing and visualization techniques that help people explore, understand, and make data-informed decisions. In addition, over the past nine years, she has focused on the emerging interdisciplinary area of "digital civics," which explores new strategies for scaling and diversifying public engagement in massive decision-making processes related to civic issues. She holds a Ph.D. in Computer Science from the University of Victoria, an MS in Information Technology from the University of Malaya, and a BS in Electrical Engineering from Tehran Azad University. She was a postdoctoral fellow in the Department of Computer Science at the University of British Columbia from 2014 to 2016, and in the Design Lab at the University of California San Diego from 2016 to 2018. Her research has been recognized with many accolades, including five Best Paper Awards (CHI 2023, EuroVis 2022, CSCW 2020, the Council of Educators in Landscape Architecture 2017, and VAST 2014) and three Best Paper Honorable Mention Awards (TiiS 2022, DIS 2021, and ISS 2016).

The talk will also be streamed over Zoom: https://mit.zoom.us/j/99213422200
Location: 32-G882 (Hewlett)

September 22

Daniel Weld - Intelligence Augmentation: Effective Human-AI Interaction to Supercharge Scientific Research
1:00–2:00 pm (America/New_York)

Abstract: Recent advances in artificial intelligence are powering revolutionary interactive tools that will transform the capabilities of everyone, especially knowledge workers. But in order to create synergy, where humans' augmented intelligence and creativity reaches its true potential, we need improved interaction methods. AI presents several challenges to existing UI paradigms, including nondeterminism, inexplicable behavior, and significant errors (such as LLM hallucinations). We discuss principles and pitfalls for effective human-AI interaction, grounding our discussion in Semantic Scholar, a free, open, AI-powered scientific discovery platform aimed at augmenting the intelligence of human researchers. We present ML- and NLP-based approaches for paper recommendation, summarization, discovery of emerging concepts, and question answering, along with the adaptive interfaces that let users control and verify their behavior. Semantic Scholar supports effective use via its Web interface, API, and downloadable academic knowledge graph.

Bio: Daniel S. Weld is Chief Scientist and General Manager of Semantic Scholar at the Allen Institute for Artificial Intelligence and Professor Emeritus at the University of Washington. After formative education at Phillips Academy, he received bachelor's degrees in both Computer Science and Biochemistry at Yale University in 1982. He received a Ph.D. from the MIT Artificial Intelligence Lab in 1988, a Presidential Young Investigator's award in 1989, and an Office of Naval Research Young Investigator's award in 1990; he is a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the American Association for the Advancement of Science (AAAS), and the Association for Computing Machinery (ACM). Dan was a founding editor of the Journal of AI Research, an area editor for the Journal of the ACM, and on the editorial board of the Artificial Intelligence journal. Weld is a Venture Partner at the Madrona Venture Group and has co-founded three companies: Netbot (sold to Excite), Adrelevance (sold to Media Metrix), and Nimble Technology (sold to Actuate).

The talk will also be streamed over Zoom: https://mit.zoom.us/j/98770446797
Location: 32-G882 (Hewlett)

September 12

Put-What-Where?
4:00–5:00 pm (America/New_York)

Abstract: Back in the day, MIT was the home of something called The Architecture Machine Group. It was largely their innovative work which spawned the MIT Media Lab. One of my all-time favourite projects, Put-That-There, was undertaken there. It was done by two (then) graduate students, Chris Schmandt and Eric Hulteen, along with their supervisor Richard Bolt. Given the importance that I give to learning from history, and the fact that I'm speaking at MIT, it seemed fitting that I build my presentation around that project. Seeking a good fit in form and function, this will not be a talk by some old geezer ranting on about the good old days. Rather, it will focus on what we can learn from that work, along many dimensions. Hopefully it will help provide additional optics through which to think about the nature of interaction, language-model AI, "natural" language interfaces, and how our work fits into the broader ecosystem within which it is, and will be, situated. All that from a project from around 1979! I hope.

Bio: Bill has an over 40-year involvement in research, practice, and commentary around design, innovation, and human aspects of technology. Following a 20-year career as a professional musician, he morphed into a researcher and interaction designer at the University of Toronto, Xerox PARC, Alias|Wavefront, SGI Inc. (Chief Scientist at the last two), and Microsoft Research, from which he "rewired" in December 2022. He has been awarded four honorary doctorates, is co-recipient of an Academy Award for Scientific and Technical Achievement, received an ACM SIGCHI Lifetime Achievement Award, and is a Fellow of the Association for Computing Machinery (ACM). Bill has published, lectured, and consulted widely, and is an Adjunct Professor at the University of Toronto and a Distinguished Professor of Industrial Design at the Technical University Eindhoven. He is currently focused on curating his collection of over 850 devices which document the history of human interaction with computers. Beyond work, his passions are his family, mountains, rivers, bikes, and books.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/96454082075
Location: 32-D463 (Star)

May 17

Yingtao Tian - Computational Creativity in Abstract Art: Exploring 2D and 3D Art with Evolutionary Algorithms
4:00–5:00 pm (America/New_York)

Abstract: Computational creativity plays a significant role in modern-era abstract art, where an everlasting quest is to assist artists in creating high-quality abstract art with computational approaches. In this context, two challenges have recently emerged: achieving high-quality abstract painting creation comparable to that of promising deep learning-based approaches, and extending beyond two-dimensional art to abstract 3D art with high quality and controllability. These challenges stem from the difficulty of defining gradient-based models and the varying interpretations of abstract art. In this talk, we will present our research aimed at advancing computational creativity in abstract art creation, with an emphasis on both 2D and 3D forms. We will discuss two key studies: "Modern Evolution Strategies for Creativity: Fitting Concrete Images and Abstract Concepts" and "Evolving Three-Dimensional (3D) Abstract Art: Fitting Concepts by Language." Both investigations use modern evolutionary algorithms to tackle the challenges of abstract art creation, where a well-defined gradient-based optimization process is hard to come by. In the first study, we explore the use of modern evolutionary algorithms for generating two-dimensional abstract art that conforms to textual or visual prompts. In the second study, we propose to bridge modern evolution strategies and 3D rendering through customizable parameterization to produce 3D scenes; these scenes, when rendered into films, follow artists' textual specifications. While our works focus on a limited set of abstract art forms, we hope they offer a fresh perspective for artists to easily express creative ideas for abstract art, and serve as inspiration for future work in the field.

Bio: Yingtao Tian is a Research Scientist at Google Brain Tokyo. He obtained his PhD in Computer Science at Stony Brook University and his B.S. in Computer Science and Technology at Fudan University. His research interests lie in generative models and representation learning, as well as their applications in image generation, natural language processing, knowledge base modeling, social network modeling, and bioinformatics. In addition to his core research interests, he is passionate about evolution strategies and the interdisciplinary study of machine intelligence and creativity within the context of humanities research. His work aims to advance the understanding and development of computational creativity and its impact on various artistic domains.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/93441815357
Location: 32-G449 (Kiva)

May 16

Scott Hudson - Fabrication in HCI: Materials, Machines, Mechanisms, and Tools, Oh My!
4:00–5:00 pm (America/New_York)

Abstract: In this talk I will provide a (semi-)random walk through some fabrication research results from my group(s) over the last decade or so, and consider why fabrication is, and is not, a human-computer interaction problem. I will look primarily, but not exclusively, at new materials and processes for fabricating with them, especially soft materials. These include work in printed optics, pneumatics, acoustics, and electromechanical devices, as well as a heavy dose of textiles as a fabrication medium. I will also talk a bit about work in fabricatable mechanisms (and applications more generally), as well as tools.

Bio: Scott Hudson is a Professor of Human-Computer Interaction at Carnegie Mellon and previously held positions at the University of Arizona and Georgia Tech. He has published extensively in technical HCI. He recently received the ACM SIGCHI Lifetime Research Award; previously, he received the ACM SIGCHI Lifetime Service Award, was elected to the CHI Academy, and received the Allen Newell Award for Research Excellence at CMU. His research interests within HCI are wide-ranging, but tend to focus on technical aspects of HCI. Much of his recent work considers advanced fabrication technologies such as new machines, processes, and materials for 3D printing, as well as computational knitting and weaving, and applications of mechanical metamaterials.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/94837523930
Location: 32-D463 (Star)

May 02

Q. Vera Liao - Human-Centered Explainable AI: From Algorithms to User Experiences
4:00–5:00 pm (America/New_York)

Abstract: Artificial intelligence technologies are increasingly used to aid human decisions and perform autonomous tasks in critical domains. The need to understand AI in order to improve it, contest it, develop appropriate trust, and better interact with AI systems has spurred great academic and public interest in Explainable AI (XAI). The technical field of XAI has produced a vast collection of algorithms in recent years. However, explainability is an inherently human-centric property, and the field is starting to embrace human-centered approaches. Human-computer interaction (HCI) research and user experience (UX) design in this area are increasingly important, especially as practitioners begin to leverage XAI algorithms to build XAI applications. In this talk, I will draw on my own research and broader HCI work to highlight the central role that human-centered approaches should play in shaping XAI technologies, including driving technical choices by understanding users' explainability needs, uncovering pitfalls of existing XAI methods, and providing conceptual frameworks for human-compatible XAI.

Bio: Q. Vera Liao is a Principal Researcher at Microsoft Research Montréal, where she is part of the FATE (Fairness, Accountability, Transparency, and Ethics in AI) group. Her current research interests are in human-AI interaction, explainable AI, and responsible AI. Prior to joining MSR, she worked at IBM Research and studied at the University of Illinois at Urbana-Champaign and Tsinghua University. Her research has received multiple paper awards at ACM and AAAI venues. She currently serves as Co-Editor-in-Chief of the Springer HCI Book Series, on the editorial board of ACM Transactions on Interactive Intelligent Systems (TiiS), as an Editor for CSCW, and as an Area Chair for FAccT 2023.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/96445121768
Location: 32-D463 (Star)

April 18

Liang He - Beyond Shape: Fabricating Kinetic Objects with 3D Printable Spring-Based Mechanisms for Interactivity
4:00–5:00 pm (America/New_York)

Abstract: 3D printing technology has long been touted as a technique to revolutionize manufacturing, transform rapid prototyping, and enable personalized fabrication. In the past few decades, 3D printers have evolved to create multi-material, multi-color, multi-scale objects and, more recently, to convert static 3D models into non-static objects. My research aims to increase the expressivity of consumer-grade 3D printing beyond fixed and rigid shapes for interaction and computing. In this talk, I will present my Ph.D. research of the past few years on kinetic fab I/O using spring-based mechanisms. In these projects, I explored parametric design techniques and developed interactive, computational systems that lower the barriers to designing and controlling desired I/O behaviors for 3D printing across application domains such as physical computing, prototyping, accessibility, and education.

Bio: Liang He is an Assistant Professor in the Department of Computer Graphics Technology, Polytechnic Institute at Purdue University. His research in human-computer interaction (HCI) focuses on designing and developing novel interactive techniques and tools that mediate human interactions with custom physical objects, devices, and interfaces for computing. In his research, Liang draws on his cross-disciplinary background in computational design, engineering, and computer science to build large, complex systems and make technical contributions in digital fabrication, physical computing, haptic/tactile interfaces, accessibility, and ubiquitous computing.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/95391418975
Location: 32-D463 (Star)

April 11

Zhicheng "Leo" Liu - Human-Machine Symbiosis in Visual Analytics and Visualization Design
4:00–5:00 pm (America/New_York)

Abstract: A tight coupling of humans and machines is often required to address many challenges in data visualization research. In this talk, I will present two threads of work that explore different configurations of human-machine symbiosis to effectively support analytic and design tasks. The first thread focuses on interactive analysis of event sequence data, where the machine automatically generates visual summaries of the data, and the user employs those summaries as overviews to guide analysis. The second thread focuses on visualization authoring and reuse through manipulable semantic components, where the user specifies desired designs, and the machine assists by performing data binding and example deconstruction. Based on these two threads, I will discuss research opportunities for enhancing human-machine symbiosis in data visualization.

Bio: Zhicheng "Leo" Liu is an assistant professor in the Department of Computer Science at the University of Maryland, College Park. He directs the Human-Data Interaction research group, focusing on human-centered techniques and systems that support interactive data analysis and visual communication. Before joining UMD, he worked at Adobe Research as a research scientist and at Stanford University as a postdoctoral fellow. He obtained his Ph.D. at Georgia Tech. His work has been recognized with a Test-of-Time Award at IEEE VIS, and multiple Best Paper Awards and Honorable Mentions at ACM CHI and IEEE VIS.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/97604964013
Location: 32-D463 (Star)

April 04

Dr. Chao Mbogho - Teaching Computer Science in Kenya for Global Competitiveness
4:00–5:00 pm (America/New_York)

Abstract: Skills in responsible and ethical computing are crucial for Kenyan educators preparing the next generation of computer scientists, who will solve the region's most pressing socio-economic issues. This talk will explore the growing and thriving landscape of Kenya's tech ecosystem and delve into how computer science (CS) education is taught within the country's higher education space. With specific examples of how CS pedagogy has impacted Kenyan students via online learning, mobile learning, and structured mentorship, the talk will highlight how CS education can be taught using multidisciplinary models. The audience will also learn about the inaugural Responsible Computing Challenge in Kenya by the Mozilla Foundation and what is envisioned over the next year and beyond.

Bio: Dr. Chao Mbogho is a multi-award-winning Kenyan computer science educator, mentor, speaker, and leader whose work intersects teaching computer programming in resource-constrained environments, supporting educators in Kenya to create impactful curricula, and designing effective structured mentoring programs. Her innovation-oriented approach, academic excellence, and leadership expertise have earned her nearly 30 awards, scholarships, and fellowships; for example, in 2020 she was the first Kenyan to receive the distinguished global OWSD-Elsevier award in Engineering, Technology, and Innovation. Chao's work has been featured in Forbes, VOGUE, and The Conversation Africa, and she delivered a TEDx talk titled "Holding up the Ladder." She recently joined the Mozilla Foundation as the Responsible Computing Challenge Fellow to support Kenyan educators in delivering ethical and responsible curricula. Chao is also the Founder and CEO of KamiLimu, an impactful structured mentorship social enterprise that complements classroom learning for university students in tech in Kenya. Before transitioning to industry, Chao worked in academia as a Senior Lecturer of Computing and the Dean of the School of Science and Technology at Kenya Methodist University. Dr. Chao holds a PhD in Computer Science from the University of Cape Town, an MSc in Computer Science from the University of Oxford, and a BSc in Mathematics and Computer Science from Kenya Methodist University.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/95282967348
Location: 32-D463 (Star)

March 21

Ken Holstein - Designing for complementarity in AI-augmented work
4:00–5:00 pm (America/New_York)

Abstract: AI systems are increasingly used to augment human work in complex social and creative contexts, including social work, education, design, and content moderation. These technologies are typically introduced with the promise of overcoming human limitations and biases. However, AI judgments in such settings are themselves likely to be imperfect and biased, even if in different ways than humans'. In this talk, I argue that today's worker-facing AI systems often represent missed opportunities to meaningfully complement and enhance human workers' abilities. I will present real-world case studies and conceptual frameworks from our research that illustrate critical challenges to achieving human-AI complementarity in practice. I will then share some of our efforts to overcome these challenges, targeting various points across the AI project lifecycle: from the earliest problem formulation stages, to the design of AI system evaluations, to the development of worker-AI interfaces.

Bio: Ken Holstein is an Assistant Professor in the Human-Computer Interaction Institute at Carnegie Mellon University, where they lead the Co-Augmentation, Learning, & AI (CoALA) Lab: https://www.thecoalalab.com/. Their research focuses on supporting effective forms of AI-augmented work in real-world contexts, and on scaffolding more responsible AI development practices in industry and the public sector. Throughout their work, they draw on approaches from human-computer interaction, AI, design, psychology, statistics, and the learning sciences. Their work has received awards at ACM CHI, IEEE SaTML, AIED, and ICLS, and has been covered by outlets such as PBS, Wired, Forbes, The Boston Globe, The Hechinger Report, and Marketplace Tech.

The talk will also be streamed live over Zoom: https://mit.zoom.us/j/91645609513
Location: 32-D463 (Star)

March 14

Andrew Head - Designing the Interactive Paper

Andrew Head
Department of Computer & Information Science, University of Pennsylvania
Add to Calendar 2023-03-14 16:00:00 2023-03-14 17:00:00 America/New_York

Abstract: In this talk, I share a vision of interactive research papers, where user interfaces surface information for readers when and where they need it. Grounded in tools that my collaborators and I have developed, I discuss what it takes to design reading interfaces that (1) surface definitions of terms where readers need them, (2) explain the meaning of math notation, and (3) convey the meaning of jargon-dense passages in simpler terms. In our research, we have found that effective reading support requires not only sufficient document processing techniques, but also the careful presentation of derived information atop visually complex documents. I discuss tensions and solutions in designing interactive papers, and identify future research directions that can bring about powerful augmented reading experiences.

Bio: Andrew Head is an assistant professor at the University of Pennsylvania in the Department of Computer and Information Science. He co-leads Penn HCI, Penn's research group in human-computer interaction. Andrew's research focuses on how interactive systems can support the tasks of reading, writing, and programming. His most recent work has focused on how scientific documents can be enhanced with interactivity to help people understand them. Often, his systems incorporate novel backends for document authoring, processing, and understanding. Andrew partners with the Allen Institute for AI in conducting this line of research. His research frequently appears in the top ACM- and IEEE-sponsored venues for HCI research, and has been recognized with best paper awards and nominations on numerous occasions.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/93009269842.

32-G882 (Hewlett)

February 21

Add to Calendar 2023-02-21 16:00:00 2023-02-21 17:00:00 America/New_York

Fernanda Viégas and Martin Wattenberg - From visual thinking to artificial intelligence

Abstract: How can we make people smarter? How can we understand computers that appear to be smart themselves? We'll discuss how data visualization, a broad and expressive medium, can play both of these roles. We'll touch on some of the visual techniques for powerful exploratory data analysis that look at different kinds of rich data, such as text and images. Through a series of examples, we will then discuss how visualization can shed light on how AI works, and how using this lens can broaden participation in the field of AI.

Bio: Fernanda Viégas and Martin Wattenberg are Gordon McKay Professors of Computer Science at Harvard, where Fernanda is also Sally Starling Seaver Professor at the Radcliffe Institute for Advanced Study. In addition, Viégas and Wattenberg are Principal Scientists at Google, where they co-founded the PAIR (People+AI Research) initiative. Their work in machine learning focuses on transparency and interpretability, as part of a broad agenda to improve human/AI interaction. They are well known for their contributions to social and collaborative visualization, and the systems they have created are used daily by millions of people. Viégas and Wattenberg are also known for visualization-based artwork, which has been exhibited in venues such as the Museum of Modern Art in New York, the Institute of Contemporary Arts in London, and the Whitney Museum of American Art.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/92140011462.

32-D463 (Star)

February 14

Add to Calendar 2023-02-14 15:00:00 2023-02-14 16:00:00 America/New_York

Pedro Lopes - Integrating interactive devices with the user's body

Abstract: The main question that drives my research is: what is the next interface paradigm that supersedes wearable devices? I argue that the new paradigm is one in which interactive devices integrate with the user's biological senses and actuators. This way of engineering devices that intentionally borrow parts of the user's biology puts forward a new generation of miniaturized devices, allowing us to circumvent traditional physical constraints. For instance, my devices based on electrical muscle stimulation demonstrate how body-device integration circumvents the constraints imposed by the size of motors used in traditional haptic devices (e.g., robotic exoskeletons). Taking this further, we can apply this integrated approach to other modalities. For instance, we engineered a device that delivers chemicals to the user to generate temperature sensations, without relying on cumbersome thermal actuators such as air conditioners or heaters. My approach to miniaturizing devices is especially useful for advancing mobile interactions, such as in virtual or augmented reality, where users want to remain untethered and free.

Integrating devices with the user's body also allows us to give users new physical abilities. For example, we have engineered a device that lets users locate odor sources by "smelling in stereo," as well as a device that physically accelerates the user's reaction time using muscle stimulation, allowing users to steer to safety or even catch a falling object that they would normally miss.

While this integration can offer many benefits (e.g., faster reaction time, realistic simulations in VR/AR, or faster skill acquisition), it also requires tackling new challenges, such as the question of agency: do we feel in control when our body is integrated with an interface? Together with our colleagues in neuroscience, we have been measuring how our brain encodes agency to improve the design of this new type of integrated interface. We found that, even in the extreme case of interfaces that electrically control the user's muscles, it is possible to improve the sense of agency. More importantly, we found that it is only by preserving the user's sense of agency that these integrated devices provide benefits even after the user takes them off.

Bio: Pedro Lopes is an Assistant Professor in Computer Science at the University of Chicago. Pedro focuses on integrating computer interfaces with the human body, exploring the interface paradigm that supersedes wearable computing. These new integrated devices include a muscle-stimulation wearable that allows users to manipulate tools they have never seen before or that accelerates their reaction time, and a device that leverages the sense of smell to create an illusion of temperature. Pedro's work has received a number of academic awards, including five ACM CHI/UIST Best Papers, a Sloan Fellowship, and an NSF CAREER award. Pedro's research has also captured the interest of the public and media; it has been covered by The New York Times and exhibited at Ars Electronica and the World Economic Forum.

The talk will also be streamed over Zoom: https://mit.zoom.us/j/96214841975.

32-D463 (Star)