September 18

Yoshua Bengio: Learning High-Level Representations for Agents
September 18, 2019, 2:00–3:30 PM (America/New_York), Grier 34-401

Abstract: A dream of the deep learning project was that a learner could discover a hierarchy of representations, with the highest level capturing abstract concepts of the kind we can communicate with language, reason with, and generally use to understand how the world works. This remains a challenge, but recent progress in machine learning could help us approach that objective. We will discuss how the ability to discover causal structure, and in particular causal variables (from low-level perception), would be progress in that direction, and how recent advances in meta-learning and taking the perspective of an agent (rather than a passive learner) could also play an important role. Because we are talking about high-level variables, this discussion touches on the old divide between system 1 cognition (intuitive and anchored in perception) and system 2 cognition (conscious and more sequential): these high-level variables sit at the interface between the two types of cognitive computation. Unlike what some advocate when they talk about disentangling factors of variation, I do not believe that these high-level variables should be considered independent of each other in a statistical sense. They might be independent in a different sense: we can independently modify some rather than others, and in fact they are connected to each other through a rich web of dependencies of the kind we communicate with language. The agent and meta-learning perspectives also force us to leave the safe ground of the i.i.d. data of current learning theory and start thinking about non-stationarity, which a learning agent is necessarily confronted with.

Instead of viewing such non-stationarity as a hurdle, we propose to view it as a source of information, because these changes are often due to interventions by agents (the learner or other agents) and can thus help a learner figure out causal structure. In return, we might be able to build learning systems that are much more robust to changes in the environment, because they capture what is stationary and stable in the long run throughout these non-stationarities, and they build models of the world that can quickly adapt to such changes and sometimes may even be able to correctly infer what caused those changes (thus requiring no additional examples to make sense of the change in distribution).

Bio: Recognized as one of the world’s leading experts in artificial intelligence (AI), Yoshua Bengio is a pioneer in deep learning. He began his education in Montreal, where he earned his Ph.D. in computer science from McGill University, then completed his postdoctoral studies at the Massachusetts Institute of Technology (MIT). Since 1993, he has been a professor in the Department of Computer Science and Operational Research at the Université de Montréal. In 2000, he became the holder of the Canada Research Chair in Statistical Learning Algorithms. At the same time, he founded and became scientific director of Mila, the Quebec Institute of Artificial Intelligence, the world’s largest university-based research group in deep learning. He is also the Scientific Director of IVADO. His research contributions have been substantial: in 2018, Yoshua Bengio collected the largest number of new citations in the world for a computer scientist, thanks to his three reference works and some 500 publications. Professor Bengio aspires to discover the principles that lead to intelligence through learning, and his research has earned him multiple awards.

In 2019, he received the prestigious Killam Prize in computer science from the Canada Council for the Arts and was co-winner of the A.M. Turing Award, considered the “Nobel of computer science,” which he received jointly with Geoffrey Hinton and Yann LeCun. He is also an Officer of the Order of Canada, a Fellow of the Royal Society of Canada, a recipient of the 2019 Excellence Award of the Fonds de recherche du Québec – Nature et technologies and the Marie-Victorin prize, and was named Scientist of the Year by Radio-Canada in 2017. These honours reflect the profound influence of his work on the evolution of our society. Concerned about the social impact of AI, he has actively contributed to the development of the Montreal Declaration for the Responsible Development of Artificial Intelligence.

October 16

David Patterson: Domain Specific Architectures for Deep Neural Networks: Three Generations of Tensor Processing Units (TPUs)
October 16, 2019, 4:30–5:30 PM (America/New_York), 32-123

Abstract: The recent success of deep neural networks (DNNs) has inspired a resurgence in domain-specific architectures (DSAs) to run them, partly as a result of the deceleration of microprocessor performance improvement due to the ending of Moore’s Law. DNNs have two phases: training, which constructs accurate models, and inference, which serves those models. Google’s first-generation Tensor Processing Unit (TPUv1) offered a 50X improvement in performance per watt over conventional architectures for inference. We naturally asked whether a successor could do the same for training. This talk reviews TPUv1 and explores how Google built the first production DSA supercomputer for the much harder problem of training, deployed in 2017. Google’s TPUv2/TPUv3 supercomputers with up to 1024 chips train production DNNs at close to perfect linear speedup, with 10X–40X higher floating-point operations per watt than general-purpose supercomputers running the high-performance computing benchmark Linpack.

Bio: David Patterson is a Berkeley CS professor emeritus, a Google distinguished engineer, and the RISC-V Foundation Vice-Chair. He received his BA, MS, and PhD degrees from UCLA. His Reduced Instruction Set Computer (RISC), Redundant Array of Inexpensive Disks (RAID), and Network of Workstations projects helped lead to multibillion-dollar industries. This work led to 40 awards for research, teaching, and service, plus many papers and seven books. The best known book is ‘Computer Architecture: A Quantitative Approach,’ and the newest is ‘The RISC-V Reader: An Open Architecture Atlas.’ In 2018 he and John Hennessy shared the ACM A.M. Turing Award.

September 30

Fireside with Sen. Ron Wyden
September 30, 2020, 4:00–5:00 PM (America/New_York)

Oregon Senator Ron Wyden was elected to the U.S. Senate in 1996. Wyden has consistently pushed for smart tech policies that put users – not powerful corporations – first. Wyden coauthored the bipartisan Section 230 of the Communications Decency Act, wrote the first net neutrality bill, and has defended strong encryption against threats from short-sighted government officials.

Wyden, a fierce advocate for strong data privacy protections, last fall introduced the most comprehensive bill to protect Americans’ personal details online, the Mind Your Own Business Act. This sweeping piece of legislation would also hold corporate executives accountable for abusing personal information.

March 10

Prof. Ross Anderson: Infrastructure – the Good, the Bad and the Ugly
March 10, 2021, 4:00–5:00 PM (America/New_York), Zoom

Abstract: Computer technology, like the railroad, gives us infrastructure that empowers innovators. The Internet and cloud computing let startups like YouTube and Instagram soar to huge valuations almost overnight, with only a handful of staff. But 21st century tech differs from the 19th century variety in that criminals also build infrastructure, from botnets through malware-as-a-service. There's also dual-use infrastructure, from Tor to bitcoins, with entangled legitimate and criminal applications. So crime can scale too. And even "respectable" infrastructure has disruptive uses. Social media enabled both Barack Obama and Donald Trump to outflank the political establishment and win power; they have also been used to foment communal violence in Asia. How are we to make sense of all this? Is it simply a matter for antitrust lawyers and cybercrime fighters, or do computer scientists have some insights to offer?

For the past twenty years, we have been studying the economics of information security. If Alice guards a system while Bob pays the cost of failure, you can expect trouble. This subject started out with concerns about infrastructure, namely payment card fraud and the insecurity of Windows. It worked on topics from the patch cycle through the behavioural economics of privacy to cybercrime. We learned that many persistent problems are down to misaligned incentives. We are now realising that when problems scale, infrastructure is usually involved; that we need computer-science insights into scaling as well as economists' insights into incentives; and that both of us have underestimated the role of institutions. We need to understand all this better to put controls at the right level in the stack and to develop better strategies to fight cybercrime. We may also find some new directions as the regulation of technology moves up the political agenda.

Bio: Ross Anderson has devoted his career to developing security engineering as a discipline. He was a pioneer of hardware tamper-resistance, API security, peer-to-peer systems, prepayment metering and powerline communications. His other research extends from cryptography through side channels and the safety and privacy of clinical systems to technology policy. He was one of the founders of the discipline of security economics, and is PI of the Cambridge Cybercrime Centre, which collects and analyses data about online crime and abuse. He is a Fellow of the Royal Society and the Royal Academy of Engineering, as well as a winner of the Lovelace Medal – the UK's top award in computing. He holds faculty positions at both Cambridge and Edinburgh universities.

Join Zoom Meeting ID: 971 5985 1800
One tap mobile: +16465588656,,97159851800# US (New York); +16699006833,,97159851800# US (San Jose)
Dial-in: +1 646 558 8656 or +1 669 900 6833

February 23

Dertouzos Distinguished Lecture: Connectivity, Dr. Robert M. Metcalfe
February 23, 2022, 4:00–5:00 PM (America/New_York), Zoom

Abstract: The Internet is 52+ years old. It has by now suddenly reached two thirds of the human race. It has moved us considerably toward our goals of freedom and prosperity. And when COVID hit, the Internet was ready. Zoom! And yet we are overwhelmed by all the connectivity the Internet is delivering. For example, fake news. It’s time to treat CONNECTIVITY as a thing, with its own science, engineering, dimensions, disruptions, and … pathologies. I will try this by telling you stories from the evolution of the Internet, the invention of Ethernet, the founding of 3Com Corporation, and Metcalfe’s Law (V ~ N^2).

Bio: Dr. Robert M. Metcalfe has for the last 11 years been Professor of Innovation in the Cockrell School of Engineering and Professor of Entrepreneurship in the McCombs School of Business, at The University of Texas at Austin. Last month he retired from UT Austin to start his sixth career, TBD. Metcalfe was an Internet pioneer beginning in 1970 at MIT, Harvard, Xerox Palo Alto Research Center, Stanford, and 3Com. He invented Ethernet at Xerox PARC on May 22, 1973. Today Ethernet is the Internet’s standard plumbing. It now adds, especially if we count Wireless Ethernet (Wi-Fi), billions of standard Internet ports per year. Metcalfe has won the Bell, Hopper, Japan C&C, Marconi, McCluskey, Shannon and Stibitz Prizes. He is a Life Trustee Emeritus of MIT and a member of the National Academy of Engineering. Metcalfe received the IEEE Medal of Honor in 1996 and the National Medal of Technology in 2005, both for his leadership in the invention, standardization and commercialization of Ethernet.

Join Zoom Meeting ID: 937 3841 7481
One tap mobile: +16465588656,,93738417481# US (New York); +16699006833,,93738417481# US (San Jose)
Dial-in: +1 646 558 8656 or +1 669 900 6833
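Metcalfe's Law, mentioned at the end of the abstract, says the value V of a network grows roughly with the square of its number of users N. A minimal sketch of the quadratic shape (the constant k and the user counts are made-up illustrative values; the law only claims proportionality, not absolute value):

```python
def metcalfe_value(n_users, k=1.0):
    """Metcalfe's Law: network value V ~ k * N^2.

    k is an arbitrary proportionality constant; only the
    quadratic growth with N is the claim of the law.
    """
    return k * n_users ** 2

# Doubling the number of users roughly quadruples the network's value.
ratio = metcalfe_value(2000) / metcalfe_value(1000)  # -> 4.0
```

The intuition behind N^2 is that N users can form on the order of N*(N-1)/2 ~ N^2 pairwise connections.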

November 02

What we see and what we value: AI with a human perspective
November 2, 2022, 5:00–6:00 PM (America/New_York), 32-123

Please join us on Wednesday, November 2 at 5:00pm in the Stata Center, Kirsch Auditorium, 32-123 for a special event as part of the Dertouzos Distinguished Speakers series. We will welcome Prof. Fei-Fei Li from Stanford University for her talk entitled ‘What we see and what we value: AI with a human perspective’.

Abstract: One of the most ancient sensory functions, vision emerged in prehistoric animals more than 540 million years ago. Since then, animals, empowered first by the ability to perceive the world and then to move around and change the world, developed more and more sophisticated intelligence systems, culminating in human intelligence. Throughout this process, visual intelligence has been a cornerstone of animal intelligence. Enabling machines to see is hence a critical step toward building intelligent machines. In this talk, I will explore a series of projects with my students and collaborators, all aiming to develop intelligent visual machines using machine learning and deep learning methods. I begin by explaining how neuroscience and cognitive science inspired the development of algorithms that enabled computers to see what humans see. Then I discuss intriguing limitations of human visual attention and how we can develop computer algorithms and applications to help, in effect allowing computers to see what humans don’t see. Yet this leads to important social and ethical considerations about what we do not want to see or do not want to be seen, inspiring work on privacy computing in computer vision, as well as the importance of addressing data bias in vision algorithms.

Finally, I address the tremendous potential and opportunity to develop smart cameras and robots that help people see or do what they want machines’ help seeing or doing, shifting the narrative from AI’s potential to replace people to AI’s opportunity to help people. We present our work in ambient intelligence in healthcare as well as household robots as examples of AI’s potential to augment human capabilities. Last but not least, the cumulative observations of developing AI from a human-centered perspective have led to the establishment of Stanford’s Institute for Human-centered AI (HAI). I will showcase a small sample of interdisciplinary projects supported by HAI.

Bio: Dr. Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University, and Co-Director of Stanford’s Human-Centered AI Institute. She served as the Director of Stanford’s AI Lab from 2013 to 2018, and during her sabbatical from Stanford from January 2017 to September 2018, she was Vice President at Google and served as Chief Scientist of AI/ML at Google Cloud. Dr. Fei-Fei Li obtained her B.A. degree in physics from Princeton in 1999 with High Honors, and her PhD degree in electrical engineering from the California Institute of Technology (Caltech) in 2005. She also holds an honorary doctorate from Harvey Mudd College. Dr. Li joined Stanford in 2009 as an assistant professor. Prior to that, she was on the faculty at Princeton University (2007-2009) and the University of Illinois Urbana-Champaign (2005-2006).

Dr. Fei-Fei Li’s current research interests include cognitively inspired AI, machine learning, deep learning, computer vision, and AI+healthcare, especially ambient intelligent systems for healthcare delivery. In the past she has also worked on cognitive and computational neuroscience. Dr. Li has published more than 200 scientific articles in top-tier journals and conferences, including Nature, PNAS, Journal of Neuroscience, CVPR, ICCV, NIPS, ECCV, ICRA, IROS, RSS, IJCV, IEEE-PAMI, New England Journal of Medicine, and Nature Digital Medicine. Dr. Li is the inventor of ImageNet and the ImageNet Challenge, a critical large-scale dataset and benchmarking effort that has contributed to the latest developments in deep learning and AI. In addition to her technical contributions, she is a national leading voice for advocating diversity in STEM and AI. She is co-founder and chairperson of the national non-profit AI4ALL, aimed at increasing inclusion and diversity in AI education.

Dr. Li is an elected Member of the National Academy of Engineering (NAE), the National Academy of Medicine (NAM) and the American Academy of Arts and Sciences (AAAS). She is also a Fellow of ACM, a member of the Council on Foreign Relations (CFR), and a recipient of the 2022 IEEE PAMI Thomas Huang Memorial Prize, the 2019 IEEE PAMI Longuet-Higgins Prize, the 2019 National Geographic Society Further Award, the 2017 Athena Award for Academic Leadership, the IAPR 2016 J.K. Aggarwal Prize, the 2016 IEEE PAMI Mark Everingham Award, the 2016 nVidia Pioneer in AI Award, the 2014 IBM Faculty Fellow Award, the 2011 Alfred Sloan Faculty Award, the 2012 Yahoo Labs FREP award, the 2009 NSF CAREER award, and the 2006 Microsoft Research New Faculty Fellowship, among others. Dr. Li has been a keynote speaker at many academic and influential conferences, including the World Economic Forum (Davos), the Grace Hopper Conference 2017, and the TED2015 main conference. Work from Dr. Li's lab has been featured in a variety of magazines and newspapers including the New York Times, Wall Street Journal, Fortune Magazine, Science, Wired Magazine, MIT Technology Review, and the Financial Times. She was selected as a 2017 Women in Tech by ELLE Magazine, received a 2017 Awesome Women Award from Good Housekeeping, was named a Global Thinker of 2015 by Foreign Policy, and was one of the “Great Immigrants: The Pride of America” in 2016 by the Carnegie Foundation (past winners include Albert Einstein, Yo-Yo Ma, and Sergey Brin).

December 07

Pat Hanrahan: Shading Languages and the Emergence of Programmable Graphics Systems
December 7, 2022, 5:00–6:00 PM (America/New_York), 32-123

Abstract: A major challenge in using computer graphics for movies and games is to create a rendering system that can create realistic pictures of a virtual world. The system must handle the variety and complexity of the shapes, materials, and lighting that combine to create what we see every day. The images must also be free of artifacts, emulate cameras to create depth of field and motion blur, and compose seamlessly with photographs of live action. Pixar's RenderMan was created for this purpose, and has been widely used in feature film production. A key innovation in the system is the use of a shading language to procedurally describe appearance. Shading languages were subsequently extended to run in real time on graphics processing units (GPUs), and now shading languages are widely used in game engines. The final step was the realization that the GPU is a data-parallel computer, and that the shading language could be extended into a general-purpose data-parallel programming language. This enabled a wide variety of applications in high-performance computing, such as physical simulation and machine learning, to be run on GPUs. Nowadays, GPUs are the fastest computers in the world. This talk will review the history of shading languages and GPUs, and discuss the broader implications for computing.

Bio: Pat Hanrahan is the Canon Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University. His research focuses on rendering algorithms, graphics systems, and visualization. Hanrahan received a Ph.D. in biophysics from the University of Wisconsin-Madison in 1985. As a founding employee at Pixar Animation Studios in the 1980s, Hanrahan led the design of the RenderMan Interface Specification and the RenderMan Shading Language. In 1989, he joined the faculty of Princeton University. In 1995, he moved to Stanford University. More recently, Hanrahan served as a co-founder and CTO of Tableau Software. He has received three Academy Awards for Science and Technology, the SIGGRAPH Computer Graphics Achievement Award, the SIGGRAPH Stephen A. Coons Award, and the IEEE Visualization Career Award. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences. In 2019, he received the ACM A. M. Turing Award.
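The key idea in the abstract above, that a shader is "the same small program run independently over every pixel", is what makes the GPU a data-parallel computer. A minimal sketch of that pattern (in Python with NumPy standing in for a GPU kernel; the Lambertian shading function and the example normals are illustrative, not from the talk):

```python
import numpy as np

def lambert_shade(normals, light_dir):
    """Diffuse (Lambertian) shading evaluated for all pixels at once.

    This is the data-parallel pattern a shading language expresses:
    one dot product and clamp per pixel, with no dependence between
    pixels, so the hardware can run them all in parallel.
    """
    light = np.asarray(light_dir, dtype=float)
    light /= np.linalg.norm(light)        # normalize light direction
    intensity = normals @ light           # per-pixel dot(N, L)
    return np.clip(intensity, 0.0, 1.0)   # clamp: no negative light

# Four example per-pixel surface normals (unit length).
normals = np.array([
    [0.0, 0.0, 1.0],   # facing the light: fully lit
    [0.0, 1.0, 0.0],   # perpendicular: unlit
    [0.0, 0.0, -1.0],  # facing away: clamped to 0
    [0.0, 0.0, 1.0],
])
shading = lambert_shade(normals, [0.0, 0.0, 1.0])  # -> [1.0, 0.0, 0.0, 1.0]
```

The same per-element independence is why the generalization described in the talk works: replace the shading arithmetic with simulation or machine-learning arithmetic and the parallel execution model carries over unchanged.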