September 23

HCI Seminar - Kate Isaacs - Strategies for Visualization in the Specific: Building Interactive Visualizations For and With Computing Experts

Kate Isaacs
University of Utah

4:00–5:00 pm ET

Abstract: Despite advances in visualization authoring and analysis support, even technically trained communities struggle to effectively and efficiently achieve their data analysis goals. These scenarios frequently occur in computing and simulation science, when experts in these domains seek an understanding of the computing processes that led to a measured behavior, such as an optimization failure. This understanding is then used as a basis to develop new algorithmic or systems solutions. Deriving what occurred during program execution, and why, is a complicated analysis task. I will present examples from multiple computing systems domains, along with visualization strategies that have helped address the corresponding analysis needs. Drawing on these projects, I will further discuss the barriers domain experts face in building their own visualizations for their analyses.

Bio: Katherine (Kate) Isaacs is an Associate Professor of Computer Science at the Scientific Computing and Imaging Institute and the Kahlert School of Computing at the University of Utah. Before joining the University of Utah, she was an Assistant Professor of Computer Science at the University of Arizona. She received her Ph.D. in computer science from the University of California, Davis, and has undergraduate degrees in computer science and mathematics from San José State University and in physics from the California Institute of Technology. She primarily publishes in data visualization and high performance computing venues. She is a recipient of the NSF CAREER award, the Department of Energy Early Career Research Program award, and the Presidential Early Career Award for Scientists and Engineers (PECASE).

This talk will also be streamed over Zoom: https://mit.zoom.us/j/97866869874.

Location: TBD

September 16

HCI Seminar - Ziv Epstein - Re-inventing the attention machine (& building the serendipity machine)

Ziv Epstein
MIT (Schwarzman College of Computing and School of Architecture)
4:00–5:00 pm ET

Abstract: Today, algorithmic systems such as social media feeds and generative AI systems increasingly mediate human interactions and experiences. But interactions with these black boxes reflect the worst of us, due to impoverished objectives that amplify problematic content and induce algorithmic overreliance and monoculture. This funhouse mirror-room in turn raises new questions about representation, agency, and creativity. Whose perspectives and values are being implicitly and explicitly amplified by these algorithms? Where does accountability lie, and where does insight come from? What do users really want in the long run, and how do we encode that into machines? In this talk, I will discuss two lines of work. The first explores how to reinvent the engagement-based "attention machine" of social media by aligning its interfaces and algorithms with users' values. I will discuss the role of attention and distraction in browsing patterns online, and how to design mitigations that fight misinformation at scale by shifting attention to accuracy. I will then discuss how to measure which human values are being algorithmically amplified by social media algorithms, and whether those align with people's own values. The second explores generative AI for creative applications, and how we can encourage active and divergent interactions with generative models, fostering "serendipity" by re-injecting randomness into the models. Together, this work underscores the promise of new forms of interaction with algorithmic systems that center human agency to produce prosocial outcomes.

Bio: Ziv Epstein is a postdoctoral associate at MIT, sitting between the Schwarzman College of Computing (SCC) and History, Theory & Criticism (HTC) in the School of Architecture. His current research explores how to audit the values amplified by social media ranking algorithms, and how to steer these algorithms to align with human values. Beyond social media, he is also interested in the impacts of AI on creative production in settings such as visual media and interpretative labor (e.g., divination). He was previously a postdoctoral fellow at Stanford University (2023-2025), and received his PhD from the MIT Media Lab (2023), where his dissertation focused on new ways to operationalize and measure attention on social media, and the implications for fighting misinformation online. He has published papers in general-interest journals such as Nature, Science, and PNAS, as well as in top-tier computer science proceedings such as CHI and CSCW. His work has received widespread media attention in outlets like the New York Times, Scientific American, and Fast Company. He is also a practicing multimedia artist whose work has been featured at Ars Electronica, the MIT Museum, and Burning Man.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/97497384805.

Location: TBD

May 20

What is so rapid about prototyping anyway?

Abdullah Muhammad
Hasso Plattner Institute
4:00–5:00 pm ET

Abstract: The term "rapid prototyping" was coined in the late 80s and commonly refers to the use of 3D printers, milling machines, and laser cutters as means to speed up the creative process in industrial design. But is rapid prototyping really rapid? And is it even prototyping? When we take the idea of rapid prototyping seriously, the fabrication steps find themselves embedded in a series of other creative activities, such as design sessions, brainstorming sessions, and critique sessions, all of which commonly take place on a one-minute to one-hour scale, often with multiple people involved. And this is where today's fabrication technology fails: design, fabrication, and assembly take too long, the meeting is over, and everyone goes home. By the time a new date is found and everyone reconvenes, the flow has been interrupted, and the creative process slows to a crawl. At this point, the actual speed of the fabrication step is of little importance; it might as well take a week, or until whenever the next meeting takes place.

In this talk, I will challenge this traditional notion of rapid prototyping. I will try to convince you that (1) in order to make fabrication technology a serious contender for the creative process, we need to eliminate the interruption, meaning: fabricate not between meetings, but within a meeting. And I will try to convince you that (2) doing so is possible, i.e., we can indeed fabricate within the time frame of a meeting. Picking laser cutting as my weapon of choice, I will lay out the core idea of a fast fabrication process and analyze its timing to identify bottlenecks (spoiler: they are all in assembly). I will then aggregate designs, techniques, and algorithms from my four most recent CHI/UIST papers into a process that allows prototyping even human-scale objects (I will show a 4' piece of furniture) in a one-hour session, including designing, fabricating, and assembling, before anyone leaves the meeting. I will complete my talk by zooming out to the bigger picture of "design for manufacturing and assembly" and how it could (and, I would argue, should) form the basis of a more encompassing notion of fabrication and rapid prototyping.

The talk will also be streamed over Zoom: https://uni-potsdam.zoom.us/my/muhammadabdullah69755517

Location: TBD

April 22

HCI Seminar - Cindy Bennett - Accessibility and Disability Considerations for Responsible AI

Cindy Bennett
Google Research

4:00–5:00 pm ET

Abstract: Generative (gen) AI is widely considered to have the potential to scale accessibility solutions. For example, users can turn on AI-generated captions on most virtual conference platforms, and blind and low vision users can receive detailed image and video descriptions on demand, capabilities and scales unheard of until recently. However, the broader responsible AI literature shows how gen AI, leveraged in particular domains (e.g., creative work), is transforming professions and threatening workers such as artists, who frequently work in precarious conditions. Further, gen AI exhibits bias in how it represents various groups of people. In this talk I will share two projects addressing these topics: (1) how disabled artists make their workflows accessible and negotiate recent gen AI advancements, and (2) the representational tropes participants with disabilities identified in AI-generated images. Through these projects I will show how gen AI has the potential to enhance disabled people's work by relieving them of certain access barriers and taking over undesired administrative labor, but that favoring it over hiring artists raises concerns about the cost of leveraging it as an accessibility tool. Further, AI-generated images represented people with disabilities extremely poorly, amplifying longstanding stereotypes that disabled advocates have countered for decades. I will argue that these limitations cannot be read separately from AI applied to solve perennial digital inaccessibility; rather, they must motivate multi-pronged approaches to responsible AI development.

Bio: Dr. Cynthia Bennett is a senior research scientist at Google Research. She researches making technology-mediated experiences, such as those leveraging generative AI, accessible to and representative of people with disabilities, while mitigating harmful applications. Previously, Bennett was a researcher at Apple and a postdoctoral research fellow at Carnegie Mellon University, after receiving her Ph.D. in Human Centered Design and Engineering from the University of Washington. Bennett's research has been recognized with awards from top scientific publication venues and funding agencies in her field. She is also a disabled woman scholar committed to raising the participation of people with disabilities in the tech industry.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/96239100489.

Location: TBD

April 08

HCI Seminar - Hugo Garcia - Controllable and Expressive Generative Modeling for the Sound Arts

Hugo Flores García
Northwestern University

4:00–5:00 pm ET

Abstract: State-of-the-art generative audio models rely on text prompting as a primary form of interaction with users. While text prompting can be a powerful supplement to more gestural interfaces, a sound is worth more than a thousand words: sonic structures like a syncopated rhythm or the timbral morphology of a moving texture are hard to describe in text. They can be more easily described through a sonic gesture. This talk describes two research projects exploring generative audio modeling with gestural and interactive control mechanisms: VampNet (via masked acoustic token modeling) and Sketch2Sound (via fine-grained, interpretable control signals).

Bio: Hugo Flores García (he/they) is a Honduran computer musician, improviser, programmer, and scientist. Hugo's creative practice spans improvised music for guitars, sound objects and electronics, sound installations, bespoke digital musical instruments, and interactive art. He is a PhD candidate at Northwestern University, doing research at the intersection of applied machine learning, music, and human-computer interaction. Hugo's research centers on designing new instruments for creative expression, focusing on artist-centered machine learning interfaces for the sound arts.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/93099356333.

Location: TBD

April 01

HCI Seminar - John Stasko - Reflections on the Value of Visualization

John Stasko
Georgia Institute of Technology

4:00–5:00 pm ET

Abstract: Although everyone today seems focused on AI and LLMs to solve problems and make decisions, many everyday activities still benefit from a more human touch and presence. Data visualization, the focus of my research, is fundamentally a tool to help people perform analysis better and communicate information about that analysis more effectively. In this talk, I'll recount multiple examples from my career that illustrate the value of visualization and the lessons I learned from them. Additionally, I will attempt to explain more precisely how visualization helps analysis and communication, and I will describe the situations in which visualization can be most beneficial.

Bio: John Stasko is a Regents Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he has been on the faculty since 1989. From 2021 to 2022, he also served as the School's Interim Chair. Stasko is a widely published and internationally recognized researcher in the areas of information visualization and visual analytics, approaching each from a human-computer interaction perspective. He was inducted into the ACM CHI Academy in 2016 and the IEEE Visualization Academy in 2019. Stasko received the IEEE Visualization and Graphics Technical Committee (VGTC) Visualization Technical Achievement Award in 2012 and the Visualization Lifetime Achievement Award in 2023. He was named an IEEE Fellow in 2014 and an ACM Fellow in 2022.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/97746469057.

Location: TBD

March 11

HCI Seminar - Amy Bruckman - Patterns of Polarization: Parallels Between Online Discussion of Men's Rights and Gun Policy

Amy Bruckman
Georgia Institute of Technology

4:00–5:00 pm ET

Abstract: Why do people join online groups that promote extreme views? What draws people in, and what value do they find in their participation? In this talk, I will draw connections between our results from mixed-methods studies of Reddit groups for discussing men's rights and groups for discussing gun politics. In both, the reward for expressing more extreme views is social approval and a strong, supportive sense of membership in a community. However, we find that many members privately articulate more moderate views than they would be comfortable expressing online. I'll review the social and political science literature showing that the same person may express different views in different contexts. One possible solution is to create a context that validates moderate views and civil discussion across difference. Towards this end, we launched the subreddit r/guninsights in 2022. I'll review our results to date and suggest broader implications for understanding and remediating polarization.

Bio: Amy Bruckman is Regents' Professor in the School of Interactive Computing at the Georgia Institute of Technology. Her research focuses on social computing, with interests in online communities, the nature of knowledge construction online, content moderation, CSCW, and technology ethics. Bruckman received her Ph.D. from the MIT Media Lab in 1997, and a B.A. in physics from Harvard University in 1987. She is a Fellow of the ACM and a member of the SIGCHI Academy. She is the author of the book "Should You Believe Wikipedia? Online Communities and the Construction of Knowledge" (2022).

This talk will also be streamed over Zoom: https://mit.zoom.us/j/98000061929.

Location: TBD

February 25

HCI Seminar - Rahul Bhargava - Data Beyond the Visual

Rahul Bhargava
Northeastern University

4:00–5:00 pm ET

Description: Our standard toolkit of charts and graphs is poorly suited to the new community-oriented settings where data is now commonly used. Inspired by the arts, we can break free from outdated data practices and embrace creative, community-centered approaches that empower and engage people in public settings. Data sculptures, data murals, data theatre, and multi-sensory data experiences offer a broader and more appropriate set of approaches. Using this larger toolbox of data visualization techniques can bring people together around data in ways that more fully reflect, embrace, and uplift their communities.

Bio: Rahul Bhargava is an educator, designer, and artist working on creative data storytelling and computational journalism in support of social justice and community empowerment. He has run over 100 workshops on data literacy, designed arts-based data murals and theatre, built award-winning museum exhibits, co-created AI-powered civic technologies with civil society organizations, and delivered keynote talks across the globe. Rahul's first book, "Community Data: Creative Approaches to Empowering People with Information," is now available from Oxford University Press. He leads the Data Culture Group as an Assistant Professor of Journalism and Art + Design at Northeastern University.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/95955852702.

Location: TBD

February 04

HCI Seminar - Shriram Krishnamurthi - The Human Factors of Formal Methods

Shriram Krishnamurthi
Brown University

4:00–5:00 pm ET

Abstract: "Formal methods" include specification, programming, and more: from logics for expressing desired program behavior to algorithms for checking correctness. Lean is a formal method, SMT is a formal method, LTL is a formal method, Rust's type system is a formal method. As formal methods improve in expressiveness and power, they create new opportunities for non-expert adoption. In principle, formal tools are now powerful enough to enable developers to scalably validate realistic systems artifacts without extensive formal training. However, realizing this potential for adoption requires attention not only to the technical side but also to the human side, which has received extraordinarily little attention from formal-methods research. This talk presents some of our efforts to address this paucity. We apply ideas from cognitive science, human-factors research, and education theory to improve the usability of formal methods. Along the way, we find misconceptions among users, see how technically appealing designs that experts value may fail to help, and learn how our tools may even mislead users.

Bio: Shriram is the Vice President for Programming Languages at Brown University in Providence, RI, USA. He's not, really, but that's what it says on his business card. At heart, he's a person of ill repute: a Schemer, a Racketeer, and a Pyreteer. He believes tropical fruit are superior to all other kinds. He is terrified of success, because he may be forced to buy a suit. On a more serious note, he's a professor at Brown who has created several influential systems (such as DrRacket, Margrave, Flapjax, and Lambda-JS) and written multiple widely used books. He has won SIGPLAN's Robin Milner Young Researcher Award, SIGPLAN's Software Award (jointly), SIGSOFT's Influential Educator Award, SIGPLAN's Distinguished Educator Award (jointly), and other recognitions.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/97298991671.

Location: TBD

December 16

Alexander Lex - The reVISit User Study Platform and Applications in Studying Misinformation

Alexander Lex
University of Utah

4:00–5:00 pm ET

Abstract: In this talk I introduce the reVISit framework for designing and running empirical studies online. Traditional survey tools limit the flexibility and reproducibility of online experiments. To remedy this, we introduce a domain-specific language, the reVISit Spec, that researchers can use to design complex online user studies. A reVISit Spec, combined with the relevant stimuli, is compiled into a ready-to-deploy website that handles all aspects of a user study, including sophisticated provenance-based data tracking, randomization, and more. reVISit is a community-focused project and ready to use; visit https://revisit.dev/ to get started. I will then pivot to data-driven misinformation in the form of charts shared on social networks. I will demonstrate that "lying with charts" doesn't work the way we (used to) think it does, and introduce a few strategies to "protect" charts and charting tools from being abused by malicious users. I will connect back to reVISit by illustrating how we leveraged it to run a series of crowd-sourced experiments.

Bio: Alexander Lex is an Associate Professor of Computer Science at the Scientific Computing and Imaging Institute and the Kahlert School of Computing at the University of Utah. He directs the Visualization Design Lab, where he and his team develop visualization methods and systems to help solve today's scientific problems. Recently he has been working on visualization accessibility, visual misinformation, provenance and reproducibility, and user-study infrastructure. He is the recipient of an NSF CAREER award and multiple best paper awards or honorable mentions at IEEE VIS, ACM CHI, and other conferences. He also received a best dissertation award from his alma mater. He co-founded datavisyn, a startup company developing visualization solutions for the pharmaceutical industry.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/99703652090.

Location: D463 (Star)

November 19

Jeff Huang - Reshaping the Creative Process through Peer Review, Media Formats, and Copyright

Jeff Huang
Brown University

4:00–5:00 pm ET

Abstract: Digital artists compose and share their work on online platforms that are meant to support creativity. Recently, collaboration and AI tools have been the primary new features on these platforms, enabling teams to do more, with or without artists. I propose shifting the focus back towards the individual artist by exploring three aspects of these platforms: peer review, media formats, and copyright. I share examples of platforms, Sketchy and UX Factor, that incorporate peer review, providing self-assessment along with peer feedback for the artist or designer. Filtered.ink extends an existing media format to enable new capabilities for the artist without depending on a platform or proprietary format. Finally, I propose a workflow for generative AI platforms that can support artists' authorship of the finished work, based on the idea-expression doctrine in copyright law.

Bio: Jeff Huang is an Associate Professor and Associate Chair of Computer Science at Brown University. His research is in human-computer interaction, focusing on building personalized systems based on behavior data. These systems enable new user-centric capabilities and are applied to attention, mobile interaction, user experience, and health. Jeff is primarily funded by the NSF, NIH, and ARO, and he has received the NSF CAREER award, a Facebook Fellowship, and an ARO Young Investigator Award.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/99877634306.

Location: D463 (Star)

October 29

Amber Horvath - Meta-Information to Support Sensemaking by Developers

Amber Horvath
Massachusetts Institute of Technology

4:00–5:00 pm ET

Abstract: Software development requires developers to juggle and balance many information-seeking and understanding tasks. From determining how a bug was introduced, to choosing which API method to use to resolve it, to deciding how to properly integrate the change, even the smallest implementation tasks can lead to many questions. These questions range from hard-to-answer questions about the rationale behind the original code to common questions such as how to use an API. Once this challenging sensemaking is done, the rich thought history behind it is often lost, given the high cost of externalizing these details, despite being potentially useful to future developers. In this talk, I discuss the design principles necessary to capture this rich set of data and make it useful, and the systems I have developed that instantiate these principles. Specifically, I have developed annotation systems that support developers' natural sensemaking when understanding information-dense sources such as software documentation and source code. I then demonstrate how to automate and scale the capture of other forms of meta-information to assist with reasoning about design. Lastly, I explore how this information can be utilized by LLMs to assist in the applied developer sensemaking task of print debugging. Looking towards the future of developer information needs, I discuss how these processes and systems may change to adapt to the new classes of information needs that the shift towards AI-driven software engineering is creating.

Bio: Amber Horvath is a postdoctoral researcher at the Massachusetts Institute of Technology, working with Arvind Satyanarayan and David Karger. She received her Ph.D. from Carnegie Mellon University in the Human-Computer Interaction Institute, where she was advised by Brad Myers. She works at the intersection of human-computer interaction (HCI), software engineering, and applied AI. She uses human-centered methods to design and build novel tools to help developers better manage their information. She has also done work related to fostering more inclusive environments for underrepresented populations in computing, using novel methodologies and large-scale data analysis. She publishes at premier venues in HCI and software engineering, including CHI, UIST, ICSE, and CSCW, with award-winning papers at CHI and CSCW.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/98354678322.

Location: 32-D463 (Star)

October 22

Kim Marriott - Visualization without Vision

Kim Marriott
Monash University

4:00–5:00 pm ET

Abstract: Tactile graphics have been used by blind people for hundreds of years and remain the recommended way for blind people to access graphics in which spatial layout is important, such as maps or charts. In this talk I will sketch the history of tactile graphics and explore the cognitive and perceptual similarities and differences between tactile and visual graphics. Finally, I will look at how new technologies such as 3D printing and refreshable tactile displays are transforming the provision of tactile graphics.

Bio: Kim Marriott leads the Monash Assistive Tech & Society (MATS) Centre at Monash University in Australia. MATS is a multidisciplinary centre bringing together more than 100 researchers and educators interested in technology and disability. Kim's research spans data visualization and accessibility, with a particular focus on the use of emerging technologies to support people who are blind or have low vision in accessing graphical materials. He has just published a history of data visualization, The Golden Age of Data Visualization: How Did We Get Here?, which includes a chapter on the history of tactile graphics.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/91729958241.

Location: 32-D463 (Star)

October 08

Remco Chang - Conceptualizing Visualizations as Functions, Spaces, and Grammars

Remco Chang
Tufts University

4:00–5:00 pm ET

Abstract: Visualization is often regarded as a static artifact: an image-based representation of data. However, from a mathematical and programmatic perspective, it can be more accurately described as a function, an action that transforms data and parameters into visual form. By framing visualization as a function, we can investigate its properties by examining its inputs (domain) and outputs (range), both of which can be conceptualized as distinct spaces. In this talk, I first present our work on learning the input and output spaces of visualizations using neural networks. I then introduce other spaces considered by the visualization research community, such as pixel space, interaction space, and design space. Finally, I discuss our research on viewing visualizations through the lens of grammars, demonstrating how this approach helps us uncover key properties and delineate the boundaries between data, task, and visualization spaces.

Bio: Remco Chang is a Professor in the Computer Science Department at Tufts University. He received his BA in Computer Science and Economics from Johns Hopkins University, his MSc from Brown University, and his PhD from the University of North Carolina (UNC) at Charlotte. Prior to his PhD, he worked at Boeing, developing real-time flight tracking and visualization software, and later served as a research scientist at UNC Charlotte. His research interests include visual analytics, information visualization, human-computer interaction (HCI), and databases. His work has been supported by the NSF, DARPA, the Navy, the DOD, the Walmart Foundation, Merck, DHS, MIT Lincoln Lab, and Draper, and he is a co-founder of two startups, Hopara.io and GraphPolaris. He has received best paper, best poster, and honorable mention awards at InfoVis, VAST, CHI, and EuroVis. He served as program chair of the IEEE VIS conference in 2018 and 2019 and is the general chair of VIS in 2024. Additionally, he is an associate editor for the ACM TiiS and IEEE TVCG journals, and he received the NSF CAREER Award in 2015. He has mentored 11 PhD students and postdocs who now hold faculty positions at institutions such as Smith College (x2), DePaul University, Washington University in St. Louis, the University of Washington, the University of San Francisco, the University of Colorado Boulder, WPI, San Francisco State, Utrecht University, and Brandeis, as well as 7 researchers now working at companies and government agencies including Google, Draper, Facebook, MIT Lincoln Lab (x2), the National Renewable Energy Lab, and Idaho National Lab.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/99222844035.

Location: D463 (Star)
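The talk's framing of a visualization as a function from data and parameters to visual form can be made concrete with a small type sketch. The types and toy "renderer" below are hypothetical illustrations, not from the talk; they simply show a chart treated as a mapping whose inputs (domain) and outputs (range) are spaces one can study.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical parameter space for a chart: everything that is not data.
@dataclass(frozen=True)
class Params:
    width: int
    height: int
    color_scheme: str

# Stand-in for a rendered output; a real system would produce raster
# or vector graphics.
Image = list

# The talk's framing: a visualization is a function (data, params) -> image.
Visualization = Callable[[list[float], Params], Image]

def bar_chart(data: list[float], params: Params) -> Image:
    """Toy renderer: maps each value to a bar height in pixels."""
    peak = max(data) if data else 1.0
    return [round(v / peak * params.height) for v in data]

# The function's domain is the space of (data, params) pairs; its range is
# the space of images it can produce.
img = bar_chart([1.0, 2.0, 4.0], Params(width=100, height=100, color_scheme="viridis"))
```

Under this framing, questions like "what images can this chart type produce?" become questions about the range of the function, which is what makes neural approximation of these spaces conceivable.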

October 01

Suresh Venkatasubramanian - Moles, Turtles, and Snakes: On what it means to do practical AI governance research

Suresh Venkatasubramanian
Brown University

4:00–5:00 pm ET

Abstract: Over the last decade or so, we've built an impressive list of examples of AI gone wrong, and a fairly comprehensive list of reasons why. Critique of technological systems, especially those based on ML and AI, is a common and arguably necessary counterweight to the hype around AI. But I'd argue that perhaps our desire to critique has gone a little too far, in that we seem unwilling to answer the question "if not this, then what?" with anything but "nothing." I think we can do better than that, while still not falling into the trap of technosolutionism. We're at a moment where the door has been opened to provide methods, tools, and general sociotechnical systems for auditing, for measurement, and for mitigation. These will necessarily be imperfect, and will have to be iterated on and improved, again and again. But they can help us reimagine more expansively what's possible, and, more importantly, help show policymakers what's possible when thinking about the next wave of AI governance work. I'll illustrate this with a few examples from my own recent research.

Bio: Suresh Venkatasubramanian directs the Center for Technological Responsibility, Reimagination, and Redesign (CNTR) within the Data Science Institute at Brown University, and is a Professor of Computer Science and Data Science. He recently finished a stint in the Biden-Harris administration, where he served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy. In that capacity, he helped co-author the Blueprint for an AI Bill of Rights. Prior to Brown University, Suresh was at the University of Utah, where he was the John and Marva Warnock Assistant Professor. He has received a CAREER award from the NSF for his work on the geometry of probability, a test-of-time award at ICDE 2017 for his work in privacy, and a KAIS journal award for his work on auditing black-box models. His research on algorithmic fairness has received press coverage across the globe, including NPR's Science Friday, NBC, and CNN, as well as other media outlets. He is a past member of the Computing Community Consortium Council of the CRA, spent four years (2017-2021) on the board of the ACLU of Utah, and is a past member of New York City's Failure to Appear Tool (FTA) Research Advisory Council, the Research Advisory Council for the First Judicial District of Pennsylvania, and the Utah State Auditor's Commission on protecting privacy and preventing discrimination. He was recently named by Fast Company to their AI20 list of thinkers shaping the world of generative AI.

This talk will also be streamed over Zoom: https://mit.zoom.us/j/94023976132.

Location: 32-D463 (Star)

September 24

Add to Calendar 2024-09-24 16:00:00 2024-09-24 17:00:00 America/New_York HCI Seminar - Lane Harrison - Shaping Visualization Ecosystems in a Changing Technosocial Landscape Abstract:Progress across visualization systems, data journalism, and social media has brought charts and interactives into people’s daily lives. But this progress brings new challenges: How do people engage with visualizations they encounter? How might people differ in their ability to read and use visualizations, and can these skills be improved? Do visualization tools and creators favor audiences with particular social or cultural characteristics over others? This talk will cover research initiatives that interrogate these challenges through experiments and design, and propose how we might anticipate and respond to coming shifts in visualization ecosystems.Bio:Lane Harrison is an Associate Professor in the Department of Computer Science at Worcester Polytechnic Institute. Before joining WPI, Lane was a postdoctoral fellow in the Department of Computer Science at Tufts University. Lane directs the Visualization and Information Equity lab (VIEW) at WPI, where he and his students leverage computational methods to understand and shape how people engage with data visualizations and visual analytics systems. Lane’s work has been supported by the NSF, DoED, DoD, and industry.This talk will also be streamed over Zoom: https://mit.zoom.us/j/91991608861. 32-G882 (Hewlett)

September 17

Add to Calendar 2024-09-17 16:00:00 2024-09-17 17:00:00 America/New_York Ethan Zuckerman - The Quotidian Web Abstract:Internet researchers have a bias towards the extraordinary. We pay special attention to unusual phenomena like mis/disinformation, to successful activist campaigns, to authors and creators who reach large audiences - and for good reason. But what might we learn from studying ordinary online behavior? Our lab has developed tools to take random samples of YouTube and TikTok by guessing at valid video addresses. The videos we collect often have fewer than 100 views and frequently were not intended for viewership by broad audiences. What can we learn about the role of online video in different languages and cultures from this data? How does an archive of random videos allow us to study cultural change over time? What are the ethical pitfalls of studying data that is public but obscure? Bio:Ethan Zuckerman is associate professor of public policy, information and communication at the University of Massachusetts at Amherst and director of the Initiative for Digital Public Infrastructure. His research focuses on the use of media as a tool for social change, the use of new media technologies by activists, and alternative business and governance models for the internet. He is the author of Mistrust: How Losing Trust in Institutions Provides Tools to Transform Them (2021), Rewire: Digital Cosmopolitans in the Age of Connection (2013), and co-author, with Chand Rajendra-Nicolucci, of "The Illustrated Field Guide to Social Media," forthcoming from MIT Press. With Rebecca MacKinnon, Zuckerman co-founded the international blogging community Global Voices. It showcases news and opinions from citizen media in more than 150 nations and 30 languages, publishing editions in 20 languages. Previously, Zuckerman directed the Center for Civic Media at MIT and taught at the MIT Media Lab. He and his family live in Berkshire County in western Massachusetts. 
This talk is remote over Zoom: https://mit.zoom.us/j/97272203935.

July 09

Add to Calendar 2024-07-09 16:00:00 2024-07-09 17:00:00 America/New_York Wendy Mackay - WWW: Wendy’s Words of Wisdom Abstract:The original title for my SIGCHI Lifetime Research Award talk was: “The Design of Interactive Things: From Theory to Design and Back”. However, after I joked that I should just entitle it “WWW” for “Wendy’s Words of Wisdom”, I was surprised to see the latter title appear in the CHI '24 program. Yet when I considered how to structure this talk (just how do you compress 40 years of research into 40 minutes?), I realized that I can trace both my history and SIGCHI’s through a series of insights that each launched a new research theme.This talk offers a whirlwind tour of my research interests, including interactive video, tangible computing, multi-disciplinary design, collaborative systems, human-computer partnerships, and generative theories of interaction. Of course, such research is highly collaborative, and I appreciate this opportunity to show off the contributions of the many students, colleagues, friends and mentors who have influenced my thinking and collaborated on this work.Bio:Professor Wendy Mackay is a Research Director, Classe Exceptionnelle, at Inria, France's national research institute for computer science, and a full Professor at the Université Paris-Saclay, where she also served as Vice President of Research for the Computer Science Department. She runs the joint ExSitu research lab in Human-Computer Interaction, with five faculty members plus 20 Ph.D. students, post-doctoral fellows and research engineers. She received her Ph.D. from MIT and managed research groups at Digital Equipment and Xerox EuroPARC, where she pioneered research in customizable software, interactive video and mixed reality systems. 
In addition to receiving the ACM/SIGCHI Lifetime Research award, she was the 2021-2022 Annual Chair for Computer Science at the Collège de France, holds a Doctor Honoris Causa from Aarhus University, and is an ACM Fellow and a member of the ACM CHI Academy. She received a six-year European Research Council Advanced Grant for her research on human-computer partnerships, where she introduced the theory of reciprocal co-adaptation. She has published over 200 peer-reviewed research articles in the area of Human-Computer Interaction. Her work combines theoretical, empirical and design contributions with a current focus on re-envisioning the interaction between human users and intelligent systems. She has introduced numerous multi-disciplinary design and evaluation methods, and is currently exploring how to design systems where users and intelligent agents share agency, both interactively and over long time periods, to avoid deskilling and instead increase human capabilities. Current application areas range from work with creative professionals (choreographers, designers, and musicians) to safety-critical settings (smart cockpits, hospitals and emergency control rooms). This talk will also be streamed over Zoom: https://mit.zoom.us/j/96939675190. 32-G882

May 07

Add to Calendar 2024-05-07 16:00:00 2024-05-07 17:00:00 America/New_York Siva Vaidhyanathan - Digital Hegemony and Digital Sovereignty Abstract: Through the first 30 years of the development of the internet, we were promised a global “network of networks” that would offer free speech, democratic empowerment, and the spread of democracy. Leaders from Ronald Reagan to Margaret Thatcher to Barack Obama all promised that technology would unite and enlighten the world. Somehow it all went differently, and now we live in a world traversed by networks dominated by hegemons like the United States, Russia, and China. In this talk, Professor Siva Vaidhyanathan will explain the idea of “digital sovereignty” - the ways that a nation-state creates and enforces its own sense of what should be allowed and watched on digital networks, resisting digital hegemony through strategies of digital sovereignty. There are many models of “digital sovereignty,” each offering a distinct set of values and opportunities, as well as methods of oppression. This talk will focus on how the Russian invasion of Ukraine exposes the dangers and necessities of digital sovereignty.Bio:Siva Vaidhyanathan is the Robertson Professor of Media Studies and director of the Center for Media and Citizenship at the University of Virginia. He is the author of Antisocial Media: How Facebook Disconnects Us and Undermines Democracy (2018), Intellectual Property: A Very Short Introduction (2017), The Googlization of Everything -- and Why We Should Worry (2011), Copyrights and Copywrongs: The Rise of Intellectual Property and How it Threatens Creativity (2001), and The Anarchist in the Library: How the Clash between Freedom and Control is Hacking the Real World and Crashing the System (2004). He also co-edited (with Carolyn Thomas) the collection, Rewiring the Nation: The Place of Technology in American Studies (2007). 
Vaidhyanathan is a columnist for The Guardian and has written for many other periodicals, including The New York Times, Wired, Bloomberg View, American Scholar, Reason, Dissent, The Chronicle of Higher Education, The New York Times Magazine, Slate.com, BookForum, Columbia Journalism Review, Washington Post, The Virginia Quarterly Review, The New York Times Book Review, and The Nation. He is a frequent contributor to public radio programs and has appeared on news programs on BBC, CNN, NBC, CNBC, MSNBC, ABC, and on The Daily Show with Jon Stewart on Comedy Central. In 2015 he was portrayed on stage at the Public Theater in a play called Privacy. After five years as a professional journalist, he earned a Ph.D. in American Studies from the University of Texas at Austin. Vaidhyanathan has also taught at Wesleyan University, the University of Wisconsin at Madison, Columbia University, New York University, McMaster University, and the University of Amsterdam. He is a fellow at the New York Institute for the Humanities and a Faculty Associate of the Berkman Center for Internet and Society at Harvard University. He was born and raised in Buffalo, New York and resides in Charlottesville, Virginia.This talk will also be streamed over Zoom: https://mit.zoom.us/j/95568018736. Kiva (G449)

April 23

Add to Calendar 2024-04-23 16:00:00 2024-04-23 17:00:00 America/New_York Cindy Hsin-Liu Kao - Designing Hybrid Skins Abstract:Hybrid Skins are an emerging form of conformable interface situated at all scales of the human experience. These conformable interfaces are hybrid in their integration of technological function with social and cultural perspectives, blending historical craft with miniaturized robotics, machines, and materials in their development. The resulting skins also serve social, cultural, and technological purposes while supporting the construction of individual identities. This seminar examines recent work from the Hybrid Body Lab in designing Hybrid Skins through under-explored approaches of textile robotics, bio-fluid sensing, modular flexible electronics, and sustainable materials exploration. With their seamless and conformable form factor, Hybrid Skins afford unprecedented intimacy to the human experience and an opportunity for us to carefully rethink and redesign what our relationship with technology can, should (or should not) be. By blending engineering, design, and committed engagement with diverse communities, Kao and her lab’s research aims to foster inclusive design for future wearable technology that can celebrate (instead of constrict) the diversity of the human experience. Bio:Cindy Hsin-Liu Kao is an assistant professor at Cornell University. She directs the Hybrid Body Lab, which focuses on integrating cultural and social perspectives into the design of on-body interfaces. Through her research, she aims to foster inclusive designs for soft wearable technologies, such as smart tattoos and textiles, and develops novel digital fabrication methods. Kao, honored with a National Science Foundation CAREER Award, has received accolades in major ACM Human-Computer Interaction venues and media attention from Forbes, CNN, WIRED, and VOGUE. 
Her work has been showcased internationally, including at the Pompidou Centre in Paris and New York Fashion Week, earning multiple design awards. Kao holds a Ph.D. from the MIT Media Lab.This talk will also be streamed over Zoom: https://mit.zoom.us/j/99183558682. Star (D463)