February 19

Add to Calendar 2025-02-19 17:00:00 2025-02-19 18:00:00 America/New_York Dertouzos Distinguished Lecture: Deborah Estrin, Transforming longitudinal care with digital biomarkers and therapeutics

Abstract: This talk explores how patient-generated data from wearables, ambient devices, and digital health tools can transform the delivery and quality of individualized clinical care. Digital Biomarkers (DBx) and Digital Therapeutics (DTx) leverage AI to convert raw data into action, helping clinicians adjust treatments, patients manage conditions, and researchers understand differentiated outcomes. Successes in Parkinson’s management and metabolic interventions demonstrate the value of integrating these technologies into specific care pathways. However, realizing scalable and affordable benefits for patients, providers, and payers will require implementing hybrid care systems that optimize patient-clinician collaboration across conditions and episodes of care.

Bio: Deborah Estrin is a Professor of Computer Science at Cornell Tech in New York City, where she holds The Robert V. Tishman Founder's Chair, serves as the Associate Dean for Impact, and is an Affiliate Faculty member at Weill Cornell Medicine. Her research interests are in digitally enabled innovations that support patients and providers in optimizing clinical outcomes and quality of life. Estrin founded the Public Interest Technology Initiative (PiTech) at Cornell Tech, which promotes public impact as a component of students' training and future careers. She was previously the Founding Director of the NSF Center for Embedded Networked Sensing (CENS) at UCLA, pioneering the development of mobile and wireless systems to collect and analyze real-time data about the physical world. Estrin's honors include the IEEE Internet Award (2017), a MacArthur Fellowship (2018), and the IEEE John von Neumann Medal (2022). She is an elected member of the American Academy of Arts and Sciences (2007), the National Academy of Engineering (2009), and the National Academy of Medicine (2019).

32-123

September 11

Add to Calendar 2024-09-11 17:00:00 2024-09-11 18:00:00 America/New_York Reading Alan Turing

Abstract: I will discuss some well-known and less-known papers of Turing, exemplify the scope of deep, prescient ideas he put forth, and mention follow-up work on these by the theoretical CS community. No special background will be assumed.

Bio: Avi Wigderson is the Herbert H. Maass Professor in the School of Mathematics at the Institute for Advanced Study. He received his B.Sc. in Computer Science from the Technion in 1980, and his Ph.D. in Computer Science from Princeton University in 1983. After postdoctoral positions at UC Berkeley, IBM Research, and MSRI, Avi joined the faculty of the Computer Science department at the Hebrew University in 1986. In 1999, he joined IAS as faculty in the School of Mathematics and founded the Computer Science and Discrete Mathematics program. His research interests are in computational complexity theory, algorithms and optimization, randomness and cryptography, parallel and distributed computation, combinatorics, and graph theory, as well as connections between theoretical computer science and mathematics and science. Avi has received many awards, including the 2021 Abel Prize (shared with László Lovász) and, most recently, the 2023 ACM A.M. Turing Award for foundational contributions to the theory of computation and for his decades of intellectual leadership in theoretical computer science.

32-123

March 20

Add to Calendar 2024-03-20 17:00:00 2024-03-20 18:00:00 America/New_York Algorithmic Discrepancy Theory and Randomized Controlled Trials

Abstract: Theorems in discrepancy theory tell us that it is possible to partition a set of vectors into two sets that look surprisingly similar to each other. In particular, these sets can be much more similar to each other than those produced by a random partition. For many measures of similarity, computer scientists have developed algorithms that produce these partitions efficiently. A natural application for these algorithms is the design of randomized controlled trials (RCTs). Randomized controlled trials are used to test the effectiveness of interventions, like medical treatments and educational innovations. Randomization is used to ensure that the test and control groups are probably similar. When we know nothing about the experimental subjects, a random partition into test and control groups is the best choice. When experimenters have measured quantities about the experimental subjects that they expect could influence a subject's response to a treatment, the experimenters try to ensure that these quantities are evenly distributed between the test and control groups. That is, they want a random partition of low discrepancy. In this talk, I will survey some fundamental results in discrepancy theory, present a model for the analysis of RCTs, and summarize results from my joint work with Chris Harshaw, Fredrik Sävje, and Peng Zhang that uses algorithmic discrepancy theory to improve the design of randomized controlled trials.

Bio: Daniel Alan Spielman is the Sterling Professor of Computer Science, and Professor of Statistics and Data Science, and of Mathematics at Yale. He received his B.A. in Mathematics and Computer Science from Yale in 1992, and his Ph.D. in Applied Mathematics from M.I.T. in 1995. After spending a year as an NSF Mathematical Sciences Postdoctoral Fellow in the Computer Science Department at U.C. Berkeley, he became a professor in the Applied Mathematics Department at M.I.T. He moved to Yale in 2005. He has received many awards, including the 1995 ACM Doctoral Dissertation Award, the 2002 IEEE Information Theory Paper Award, the 2008 and 2015 Gödel Prizes, the 2009 Fulkerson Prize, the 2010 Nevanlinna Prize, the 2014 Pólya Prize, the 2021 NAS Held Prize, the 2023 Breakthrough Prize in Mathematics, a Simons Investigator Award, and a MacArthur Fellowship. He is a Fellow of the Association for Computing Machinery and a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the Connecticut Academy of Science and Engineering. His main research interests include the design and analysis of algorithms, network science, machine learning, digital communications, and scientific computing.

32-123
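The contrast at the heart of the abstract, a random partition versus a deliberately balanced one, can be sketched in a few lines of Python. The greedy heuristic below is only a toy stand-in, not the algorithm from the joint work with Harshaw, Sävje, and Zhang, and all function names are illustrative; it assigns each subject to whichever group keeps the running covariate sums small:

```python
import random

def imbalance(vectors, signs):
    """Largest absolute coordinate of the signed sum: how unlike the two groups are."""
    dims = len(vectors[0])
    totals = [0.0] * dims
    for v, s in zip(vectors, signs):
        for i in range(dims):
            totals[i] += s * v[i]
    return max(abs(t) for t in totals)

def random_partition(vectors, rng):
    """Coin-flip assignment: +1 = treatment, -1 = control."""
    return [rng.choice((-1, 1)) for _ in vectors]

def greedy_partition(vectors):
    """Assign each subject the sign that keeps the running covariate sums small."""
    dims = len(vectors[0])
    totals = [0.0] * dims
    signs = []
    for v in vectors:
        plus = sum((t + x) ** 2 for t, x in zip(totals, v))
        minus = sum((t - x) ** 2 for t, x in zip(totals, v))
        s = 1 if plus <= minus else -1
        signs.append(s)
        totals = [t + s * x for t, x in zip(totals, v)]
    return signs

# 200 subjects, each with 5 measured covariates.
rng = random.Random(0)
subjects = [[rng.gauss(0.0, 1.0) for _ in range(5)] for _ in range(200)]
rand_gap = imbalance(subjects, random_partition(subjects, rng))
greedy_gap = imbalance(subjects, greedy_partition(subjects))
print(rand_gap, greedy_gap)
```

On typical draws the greedy signs leave a far smaller worst-coordinate imbalance than the coin flips do, which is the effect low-discrepancy designs exploit; the real results in the talk achieve this with provable guarantees while preserving enough randomness for valid inference.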

November 08

Add to Calendar 2023-11-08 17:00:00 2023-11-08 18:00:00 America/New_York TURING LECTURE: CONNECTIVITY is a thing, is THE thing

Abstract: Hear stories from Arpanet, Alohanet, Ethernet, and Internet, many of which are true. After 50+ years since the first Internet packets were switched, the most important new fact about the human condition is that we are now suddenly connected. The suddenness has caused pathologies -- hacking, porno, advertising, censorship, polarization, fake news -- but then extreme poverty has suddenly declined for the first time, thanks to the World Wide Web? Connectivity has evolved with various transitions and reversals, of which we can expect many more. And some of our AIs, still short of AGI, are approaching 100 trillion connection parameters.

Bio: Bob Metcalfe '68 was an Internet pioneer starting at MIT in 1970. He continued switching Internet packets at Harvard, Xerox PARC, Stanford, and 3Com Corporation, which he founded in 1979. He invented Ethernet at Xerox PARC in 1973. He took 3Com public in 1984. He's been defending Metcalfe's Law (V~N^2) since the 1980s. This year Bob received the million-dollar ACM Turing Award for the invention, standardization, and commercialization of Ethernet.

32-123
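Metcalfe's Law (V~N^2) in the bio reflects simple counting: N compatible devices can form N(N-1)/2 distinct pairwise connections, so a network's potential connectivity grows quadratically in its size. A minimal sketch (the function name is just for illustration):

```python
def pairwise_links(n):
    """Number of distinct device-to-device connections in a network of n nodes."""
    return n * (n - 1) // 2

# Doubling the network size roughly quadruples its potential connections,
# which is the intuition behind valuing a network at V ~ N^2.
for n in (10, 20, 40):
    print(n, pairwise_links(n))  # → 10 45, 20 190, 40 780
```

The law is a heuristic about value, not a theorem, but the quadratic count of possible links is exact.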

March 15

Add to Calendar 2023-03-15 17:00:00 2023-03-15 18:00:00 America/New_York The search for evidence of quantum advantage

Abstract: The field of quantum computation relies heavily on the belief that quantum computation violates the extended Church-Turing thesis, namely, that quantum many-body systems cannot be simulated by classical ones with only polynomially growing overhead. Importantly, we must ask: what experimental evidence do we have for this assumption? A natural candidate for such evidence is quantum simulations of highly complex many-body quantum evolutions. However, there are inherent difficulties in viewing such simulations as providing evidence for quantum advantage. Assuming the high-complexity regimes of quantum systems are hard to simulate classically, verification of quantum evolution requires either exponential time or sophisticated interactive protocols; unfortunately, so far no (conjectured to be) computationally hard problem has been identified and verified to be solved by quantum simulations. A major effort towards providing such evidence for scalable quantum advantage has concentrated in the past decade on quantum random circuit sampling (RCS); such is the famous supremacy experiment by Google from 2019. The RCS experiments can be modeled as sampling from a random quantum circuit where each gate is subject to a small amount of noise. I will describe a recent work in which we give a polynomial-time classical algorithm for sampling from the output distribution of a noisy random quantum circuit under an anti-concentration assumption, to within inverse polynomial total variation distance. It should be noted that our algorithm is not practical in its current form, and does not address finite-size RCS-based quantum supremacy experiments. Nevertheless, our result gives strong evidence that random circuit sampling (RCS) cannot be the basis of a scalable experimental violation of the extended Church-Turing thesis. I will end with a discussion regarding the prospects of providing evidence for scalable violation of the ECTT in the pre-quantum-fault-tolerance era (also known as the NISQ era). Based on recent joint work with Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani; arXiv: 2211.03999, QIP 2023, STOC 2023.

Bio: Dorit Aharonov is a Professor at the School of Computer Science and Engineering at the Hebrew University of Jerusalem and the CSO of Qedma quantum computing. In her PhD, Aharonov proved the quantum fault tolerance theorem together with her advisor Ben-Or; this theorem is one of the main pillars of quantum computation today. She later contributed several pioneering works in a variety of areas, including quantum algorithms (specifically quantum walks, quantum adiabatic computation, and topologically related algorithms), as well as Hamiltonian complexity, quantum cryptography, and quantum verification. Much of her research can be viewed as creating a bridge between physics and computer science, attempting to study fundamental physics questions using computational language. Aharonov was educated at the Hebrew University in Jerusalem (BSc in Mathematics and Physics, PhD in Computer Science and Physics) and then continued to postdocs at IAS Princeton (Mathematics) and UC Berkeley (Computer Science). She joined the faculty of the computer science department of the Hebrew University of Jerusalem in 2001. In 2005, Aharonov was featured by the journal Nature as one of four theoreticians making waves in their chosen field; in 2006 she won the Krill Prize, and in 2014 she was awarded the Michael Bruno Award. In 2020 she joined forces with Dr. Asif Sinay and Prof. Netanel Lindner to co-found QEDMA quantum computing.

32-123

February 15

Add to Calendar 2023-02-15 17:00:00 2023-02-15 18:00:00 America/New_York Generative Machines and Ground Truth

Abstract: We are living in a period of rapid acceleration for generative AI, where large language and text-to-image diffusion models are being deployed in a multitude of everyday contexts. From ChatGPT’s training set of hundreds of billions of words to LAION-5B’s corpus of almost 6 billion image-text pairs, these vast datasets – scraped from the internet and treated as “ground truth” – play a critical role in shaping the epistemic boundaries that govern machine learning models. Yet training data is beset with complex social, political, and epistemological challenges. What happens when data is stripped of context, meaning, and provenance? How does training data limit *what* and *how* machine learning systems interpret the world? And most importantly, what forms of power do these approaches enhance and enable? This lecture is an invitation to reflect on the epistemic foundations of generative AI, and to consider the wide-ranging impacts of the current generative turn.

Bio: Professor Kate Crawford is a leading international scholar of the social implications of artificial intelligence. She is a Research Professor at USC Annenberg in Los Angeles, a Senior Principal Researcher at MSR in New York, an Honorary Professor at the University of Sydney, and the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris. Her latest book, Atlas of AI (Yale, 2021), won the Sally Hacker Prize from the Society for the History of Technology and the ASIS&T Best Information Science Book Award, and was named one of the best books of 2021 by New Scientist and the Financial Times. Over her twenty-year research career, she has also produced groundbreaking creative collaborations and visual investigations. Her project Anatomy of an AI System with Vladan Joler is in the permanent collection of the Museum of Modern Art in New York and the V&A in London, won the Design of the Year Award in 2019, and was included in the Design of the Decades by the Design Museum of London. Her collaboration with the artist Trevor Paglen, Excavating AI, won the Ayrton Prize from the British Society for the History of Science. She has advised policymakers in the United Nations, the White House, and the European Parliament, and she currently leads the Knowing Machines Project, an international research collaboration that investigates the foundations of machine learning.

32-123

December 07

Add to Calendar 2022-12-07 17:00:00 2022-12-07 18:00:00 America/New_York Shading Languages and the Emergence of Programmable Graphics Systems

Abstract: A major challenge in using computer graphics for movies and games is to create a rendering system that can create realistic pictures of a virtual world. The system must handle the variety and complexity of the shapes, materials, and lighting that combine to create what we see every day. The images must also be free of artifacts, emulate cameras to create depth of field and motion blur, and compose seamlessly with photographs of live action. Pixar's RenderMan was created for this purpose, and has been widely used in feature film production. A key innovation in the system is to use a shading language to procedurally describe appearance. Shading languages were subsequently extended to run in real time on graphics processing units (GPUs), and now shading languages are widely used in game engines. The final step was the realization that the GPU is a data-parallel computer, and that the shading language could be extended into a general-purpose data-parallel programming language. This enabled a wide variety of applications in high-performance computing, such as physical simulation and machine learning, to be run on GPUs. Nowadays, GPUs are the fastest computers in the world. This talk will review the history of shading languages and GPUs, and discuss the broader implications for computing.

Bio: Pat Hanrahan is the Canon Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University. His research focuses on rendering algorithms, graphics systems, and visualization. Hanrahan received a Ph.D. in biophysics from the University of Wisconsin-Madison in 1985. As a founding employee at Pixar Animation Studios in the 1980s, he led the design of the RenderMan Interface Specification and the RenderMan Shading Language. In 1989, he joined the faculty of Princeton University. In 1995, he moved to Stanford University. More recently, Hanrahan served as a co-founder and CTO of Tableau Software. He has received three Academy Awards for Science and Technology, the SIGGRAPH Computer Graphics Achievement Award, the SIGGRAPH Stephen A. Coons Award, and the IEEE Visualization Career Award. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences. In 2019, he received the ACM A.M. Turing Award.

32-123
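The abstract's observation that the GPU is a data-parallel computer can be shown in miniature: a shader is one small per-pixel function applied independently to every pixel, with no dependence between pixels, which is exactly the structure a GPU runs in parallel. A toy sketch in plain Python (the gradient shader itself is invented for illustration):

```python
def shade(x, y, width, height):
    """A tiny 'shader': computes one pixel's brightness from its coordinates alone."""
    # Normalized coordinates in [0, 1]. Each pixel depends only on its own
    # position, so every call could run simultaneously on a GPU core.
    u, v = x / (width - 1), y / (height - 1)
    return 0.5 * u + 0.5 * v  # a simple diagonal gradient

def render(width, height):
    """A data-parallel map over all pixels; on a GPU these run concurrently."""
    return [[shade(x, y, width, height) for x in range(width)] for y in range(height)]

image = render(4, 4)
```

Because shade reads only its own coordinates, the nested loops in render are just a sequential stand-in for a parallel map; generalizing that map beyond pixels is the step that turned shading languages into general-purpose GPU programming.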

November 02

Add to Calendar 2022-11-02 17:00:00 2022-11-02 18:00:00 America/New_York What we see and what we value: AI with a human perspective

Please join us on Wednesday, November 2 at 5:00pm in the Stata Center, Kirsch Auditorium, 32-123 for a special event as part of the Dertouzos Distinguished Speakers series. We will welcome Prof. Fei-Fei Li from Stanford University for her talk entitled ‘What we see and what we value: AI with a human perspective’.

Abstract: One of the most ancient sensory functions, vision emerged in prehistoric animals more than 540 million years ago. Since then, animals, empowered first by the ability to perceive the world, and then to move around and change the world, developed more and more sophisticated intelligence systems, culminating in human intelligence. Throughout this process, visual intelligence has been a cornerstone of animal intelligence. Enabling machines to see is hence a critical step toward building intelligent machines. In this talk, I will explore a series of projects with my students and collaborators, all aiming to develop intelligent visual machines using machine learning and deep learning methods. I begin by explaining how neuroscience and cognitive science inspired the development of algorithms that enabled computers to see what humans see. Then I discuss intriguing limitations of human visual attention and how we can develop computer algorithms and applications to help, in effect allowing computers to see what humans don’t see. Yet this leads to important social and ethical considerations about what we do not want to see or do not want to be seen, inspiring work on privacy computing in computer vision, as well as the importance of addressing data bias in vision algorithms. Finally, I address the tremendous potential and opportunity to develop smart cameras and robots that help people see or do what we want machines’ help seeing or doing, shifting the narrative from AI’s potential to replace people to AI’s opportunity to help people. We present our work in ambient intelligence in healthcare as well as household robots as examples of AI’s potential to augment human capabilities. Last but not least, the cumulative observations of developing AI from a human-centered perspective have led to the establishment of Stanford’s Institute for Human-centered AI (HAI). I will showcase a small sample of interdisciplinary projects supported by HAI.

Bio: Dr. Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University and Co-Director of Stanford’s Human-Centered AI Institute. She served as the Director of Stanford’s AI Lab from 2013 to 2018. During her sabbatical from Stanford from January 2017 to September 2018, she was a Vice President at Google and served as Chief Scientist of AI/ML at Google Cloud. Dr. Li obtained her B.A. degree in physics from Princeton in 1999 with High Honors, and her PhD degree in electrical engineering from the California Institute of Technology (Caltech) in 2005. She also holds an honorary doctorate from Harvey Mudd College. Dr. Li joined Stanford in 2009 as an assistant professor. Prior to that, she was on the faculty at Princeton University (2007-2009) and the University of Illinois Urbana-Champaign (2005-2006). Dr. Li’s current research interests include cognitively inspired AI, machine learning, deep learning, computer vision, and AI+healthcare, especially ambient intelligent systems for healthcare delivery. In the past she has also worked on cognitive and computational neuroscience. Dr. Li has published more than 200 scientific articles in top-tier journals and conferences, including Nature, PNAS, Journal of Neuroscience, CVPR, ICCV, NIPS, ECCV, ICRA, IROS, RSS, IJCV, IEEE-PAMI, New England Journal of Medicine, and Nature Digital Medicine. Dr. Li is the inventor of ImageNet and the ImageNet Challenge, a critical large-scale dataset and benchmarking effort that has contributed to the latest developments in deep learning and AI. In addition to her technical contributions, she is a national leading voice advocating for diversity in STEM and AI. She is co-founder and chairperson of the national non-profit AI4ALL, aimed at increasing inclusion and diversity in AI education. Dr. Li is an elected Member of the National Academy of Engineering (NAE), the National Academy of Medicine (NAM), and the American Academy of Arts and Sciences (AAAS). She is also a Fellow of the ACM, a member of the Council on Foreign Relations (CFR), and a recipient of the 2022 IEEE PAMI Thomas Huang Memorial Prize, the 2019 IEEE PAMI Longuet-Higgins Prize, the 2019 National Geographic Society Further Award, the 2017 Athena Award for Academic Leadership, the IAPR 2016 J.K. Aggarwal Prize, the 2016 IEEE PAMI Mark Everingham Award, the 2016 nVidia Pioneer in AI Award, the 2014 IBM Faculty Fellow Award, the 2011 Alfred Sloan Faculty Award, the 2012 Yahoo Labs FREP Award, the 2009 NSF CAREER Award, and the 2006 Microsoft Research New Faculty Fellowship, among others. Dr. Li is a keynote speaker at many academic and influential conferences, including the World Economic Forum (Davos), the Grace Hopper Conference 2017, and the TED2015 main conference. Work from Dr. Li's lab has been featured in a variety of magazines and newspapers including the New York Times, Wall Street Journal, Fortune Magazine, Science, Wired Magazine, MIT Technology Review, Financial Times, and more.
She was selected as a 2017 Woman in Tech by ELLE Magazine, received a 2017 Awesome Women Award from Good Housekeeping, was named a Global Thinker of 2015 by Foreign Policy, and was honored as one of the “Great Immigrants: The Pride of America” in 2016 by the Carnegie Corporation; past honorees include Albert Einstein, Yo-Yo Ma, and Sergey Brin.

32-123

February 23

Add to Calendar 2022-02-23 16:00:00 2022-02-23 17:00:00 America/New_York Dertouzos Distinguished Lecture: Connectivity, Dr. Robert M. Metcalfe

Abstract: The Internet is 52+ years old. It has by now suddenly reached two thirds of the human race. It has moved us considerably toward our goals of freedom and prosperity. And when COVID hit, the Internet was ready. Zoom! And yet we are overwhelmed by all the connectivity the Internet is delivering. For example, fake news. It’s time to treat CONNECTIVITY as a thing, with its own science, engineering, dimensions, disruptions, and … pathologies. I will try this by telling you stories from the evolution of the Internet, the invention of Ethernet, the founding of 3Com Corporation, and Metcalfe’s Law (V~N^2).

Bio: Dr. Robert M. Metcalfe has for the last 11 years been Professor of Innovation in the Cockrell School of Engineering and Professor of Entrepreneurship in the McCombs School of Business at The University of Texas at Austin. Last month he retired from UT Austin to start his sixth career, TBD. Metcalfe was an Internet pioneer beginning in 1970 at MIT, Harvard, Xerox Palo Alto Research Center, Stanford, and 3Com. He invented Ethernet at Xerox PARC on May 22, 1973. Today Ethernet is the Internet’s standard plumbing. It now adds, especially if we count Wireless Ethernet (Wi-Fi), billions of standard Internet ports per year. Metcalfe has won the Bell, Hopper, Japan C&C, Marconi, McCluskey, Shannon, and Stibitz Prizes. He is a Life Trustee Emeritus of MIT and a member of the National Academy of Engineering. Metcalfe received the IEEE Medal of Honor in 1996 and the National Medal of Technology in 2005, both for his leadership in the invention, standardization, and commercialization of Ethernet.

March 10

Add to Calendar 2021-03-10 16:00:00 2021-03-10 17:00:00 America/New_York Prof. Ross Anderson: Infrastructure – the Good, the Bad and the Ugly

Abstract: Computer technology, like the railroad, gives us infrastructure that empowers innovators. The Internet and cloud computing let startups like YouTube and Instagram soar to huge valuations almost overnight, with only a handful of staff. But 21st-century tech differs from the 19th-century variety in that criminals also build infrastructure, from botnets through malware-as-a-service. There's also dual-use infrastructure, from Tor to bitcoins, with entangled legitimate and criminal applications. So crime can scale too. And even "respectable" infrastructure has disruptive uses. Social media enabled both Barack Obama and Donald Trump to outflank the political establishment and win power; they have also been used to foment communal violence in Asia. How are we to make sense of all this? Is it simply a matter for antitrust lawyers and cybercrime fighters, or do computer scientists have some insights to offer? For the past twenty years, we have been studying the economics of information security. If Alice guards a system while Bob pays the cost of failure, you can expect trouble. This subject started out with concerns about infrastructure, namely payment card fraud and the insecurity of Windows. It worked on topics from the patch cycle through the behavioural economics of privacy to cybercrime. We learned that many persistent problems are down to misaligned incentives. We are now realising that when problems scale, infrastructure is usually involved; that we need computer-science insights into scaling as well as economists' insights into incentives; and that both of us have underestimated the role of institutions. We need to understand all this better to put controls at the right level in the stack and to develop better strategies to fight cybercrime. We may also find some new directions as the regulation of technology moves up the political agenda.

Bio: Ross Anderson has devoted his career to developing security engineering as a discipline. He was a pioneer of hardware tamper-resistance, API security, peer-to-peer systems, prepayment metering, and powerline communications. His other research extends from cryptography through side channels and the safety and privacy of clinical systems to technology policy. He was one of the founders of the discipline of security economics, and is PI of the Cambridge Cybercrime Centre, which collects and analyses data about online crime and abuse. He is a Fellow of the Royal Society and the Royal Academy of Engineering, as well as a winner of the Lovelace Medal – the UK's top award in computing. He holds faculty positions at both Cambridge and Edinburgh universities.

September 30

Add to Calendar 2020-09-30 16:00:00 2020-09-30 17:00:00 America/New_York Fireside with Sen. Ron Wyden

**Register for this talk here: http://web.mit.edu/webcast/csail/f20/seminar_series/ **

Oregon Senator Ron Wyden was elected to the U.S. Senate in 1996. Wyden has consistently pushed for smart tech policies that put users – not powerful corporations – first. Wyden coauthored the bipartisan Section 230 of the Communications Decency Act, wrote the first net neutrality bill, and has defended strong encryption against threats from short-sighted government officials. Wyden, a fierce advocate for strong data privacy protections, last fall introduced the most comprehensive bill to protect Americans’ personal details online, the Mind Your Own Business Act. This sweeping piece of legislation would also hold corporate executives accountable for abusing personal information.

October 16

Add to Calendar 2019-10-16 16:30:00 2019-10-16 17:30:00 America/New_York David Patterson: Domain Specific Architectures for Deep Neural Networks: Three Generations of Tensor Processing Units (TPUs)

Abstract: The recent success of deep neural networks (DNNs) has inspired a resurgence in domain-specific architectures (DSAs) to run them, partially as a result of the deceleration of microprocessor performance improvement due to the ending of Moore’s Law. DNNs have two phases: training, which constructs accurate models, and inference, which serves those models. Google’s first-generation Tensor Processing Unit (TPUv1) offered a 50X improvement in performance per watt over conventional architectures for inference. We naturally asked whether a successor could do the same for training. This talk reviews TPUv1 and explores how Google built the first production DSA supercomputer for the much harder problem of training, which was deployed in 2017. Google’s TPUv2/TPUv3 supercomputers with up to 1024 chips train production DNNs at close to perfect linear speedup, with 10X-40X higher floating point operations per watt than general-purpose supercomputers running the high-performance computing benchmark Linpack.

Bio: David Patterson is a Berkeley CS professor emeritus, a Google distinguished engineer, and the RISC-V Foundation Vice-Chair. He received his BA, MS, and PhD degrees from UCLA. His Reduced Instruction Set Computer (RISC), Redundant Array of Inexpensive Disks (RAID), and Network of Workstations projects helped lead to multibillion-dollar industries. This work led to 40 awards for research, teaching, and service, plus many papers and seven books. The best known book is ‘Computer Architecture: A Quantitative Approach,’ and the newest is ‘The RISC-V Reader: An Open Architecture Atlas.’ In 2018 he and John Hennessy shared the ACM A.M. Turing Award.

32-123

September 18

Add to Calendar 2019-09-18 14:00:00 2019-09-18 15:30:00 America/New_York Yoshua Bengio: Learning High-Level Representations for Agents

Abstract: A dream of the deep learning project was that a learner could discover a hierarchy of representations, with the highest level capturing abstract concepts of the kind we can communicate with language, reason with, and generally use to understand how the world works. It is still a challenge, but recent progress in machine learning could help us approach that objective. We will discuss how the ability to discover causal structure, and in particular causal variables (from low-level perception), would be progress in that direction, and how recent advances in meta-learning and taking the perspective of an agent (rather than a passive learner) could also play an important role. Because we are talking about high-level variables, this discussion touches on the old divide between system 1 cognition (intuitive and anchored in perception) and system 2 cognition (conscious and more sequential): these high-level variables sit at the interface between the two types of cognitive computations. Unlike what some advocate when they talk about disentangling factors of variation, I do not believe that these high-level variables should be considered independent of each other in a statistical sense. They might be independent in a different sense, in that we can modify some independently of others, and in fact they are connected to each other through a rich web of dependencies of the kind we communicate with language. The agent and meta-learning perspectives also force us to leave the safe ground of the iid data of current learning theory and start thinking about non-stationarity, which a learning agent is necessarily confronted with. Instead of viewing such non-stationarity as a hurdle, we propose to view it as a source of information, because these changes are often due to interventions by agents (the learner or other agents) and can thus help a learner figure out causal structure. In return, we might be able to build learning systems that are much more robust to changes in the environment, because they capture what is stationary and stable in the long run throughout these non-stationarities, and they build models of the world that can quickly adapt to such changes and sometimes may even be able to correctly infer what caused those changes (thus requiring no additional examples to make sense of the change in distribution).

Bio: Recognized as one of the world’s leading experts in artificial intelligence (AI), Yoshua Bengio is a pioneer in deep learning. He began his education in Montreal, where he earned his Ph.D. in computer science from McGill University, then completed his postdoctoral studies at the Massachusetts Institute of Technology (MIT). Since 1993, he has been a professor in the Department of Computer Science and Operational Research at the Université de Montréal. In 2000, he became the holder of the Canada Research Chair in Statistical Learning Algorithms. At the same time, he founded and became scientific director of Mila, the Quebec Institute of Artificial Intelligence, which is the world’s largest university-based research group in deep learning. He is also the Scientific Director of IVADO. His research contributions have been undeniable: in 2018, Yoshua Bengio collected the largest number of new citations in the world for a computer scientist, thanks to his three reference works and some 500 publications. Professor Bengio aspires to discover the principles that lead to intelligence through learning, and his research has earned him multiple awards. In 2019, he earned the prestigious Killam Prize in computer science from the Canada Council for the Arts and was a co-winner of the A.M. Turing Award, considered the “Nobel of computer science,” which he received jointly with Geoffrey Hinton and Yann LeCun. He is also an Officer of the Order of Canada, a Fellow of the Royal Society of Canada, a recipient of the Excellence Awards of the Fonds de recherche du Québec – Nature et technologies 2019 and the Marie-Victorin Prize, and was named Scientist of the Year by Radio-Canada in 2017. These honours reflect the profound influence of his work on the evolution of our society. Concerned about the social impact of AI, he has actively contributed to the development of the Montreal Declaration for the Responsible Development of Artificial Intelligence.

Grier 34-401

May 03

The Era of Artificial Intelligence

Kai-Fu Lee
Chairman and CEO, Sinovation Ventures
May 3, 2019, 11:00am–12:00pm

*There will be a book sale and signing during the reception prior to the lecture, at 10:30am.*

Abstract: In this talk, I will discuss the four waves of Artificial Intelligence (AI) and how AI will permeate every part of our lives in the next decade. I will also discuss how this will differ from previous technology revolutions: it will be faster, and it will be driven not by one superpower but by two (the US and China). AI will add $16 trillion to global GDP, but it will also cause many challenges that will be hard to solve. I will talk in particular about AI replacing routine jobs: the consequences, the proposed solutions that don't work (such as UBI), and, finally, a blueprint for co-existence between humans and AI.

Bio: Dr. Kai-Fu Lee is the Chairman and CEO of Sinovation Ventures (www.sinovationventures.com/) and President of Sinovation Ventures' Artificial Intelligence Institute. Sinovation Ventures, managing US$2 billion in dual-currency investment funds, is a leading venture capital firm focused on developing the next generation of Chinese high-tech companies. Prior to founding Sinovation in 2009, Dr. Lee was the President of Google China. Previously, he held executive positions at Microsoft, SGI, and Apple. Dr. Lee received his bachelor's degree in computer science from Columbia University and his Ph.D. from Carnegie Mellon University, as well as honorary doctorates from both Carnegie Mellon and the City University of Hong Kong. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), was named to the Time 100 in 2013 and WIRED's 25 Icons, was named Asian Business Leader 2018 by Asia House, and is followed by over 50 million people on social media. In the field of artificial intelligence, Dr. Lee built one of the first game-playing programs to defeat a world champion (1988, Othello), as well as the world's first large-vocabulary, speaker-independent continuous speech recognition system. Dr. Lee founded Microsoft Research China, which MIT Technology Review named the hottest research lab. Later renamed Microsoft Research Asia, this institute trained the great majority of AI leaders in China, including CTOs or AI heads at Baidu, Tencent, Alibaba, Lenovo, Huawei, and Haier. While with Apple, Dr. Lee led AI projects in speech and natural language that were featured on ABC Television's Good Morning America and on the front page of the Wall Street Journal. He has authored 10 U.S. patents and more than 100 journal and conference papers. Altogether, Dr. Lee has been engaged in artificial intelligence research, development, and investment for more than 30 years. His New York Times and Wall Street Journal bestselling book AI Superpowers (aisuperpowers.com) discusses US–China co-leadership in the age of AI as well as the greater societal impacts brought about by the AI technology revolution.

Kirsch 32-123

March 06

March 6, 2019, 4:30pm–5:30pm
John Hennessy: The End of the Road for General Purpose Processors & the Future of Computing

Abstract: After 40 years of remarkable progress in VLSI microprocessors, a variety of factors are combining to produce a much slower rate of performance growth in the future. These limitations arise in three different areas: IC technology, architectural limitations, and changing applications and usage. The end of Dennard scaling and the slowdown of Moore's Law will require more efficient architectural approaches than those we have relied on. Although progress on general-purpose processors may hit an asymptote, domain-specific architectures may be one attractive path for important classes of problems. Such an approach will pose new challenges for software and chip designers, as well as increase the need for more advanced design tools.

Bio: John L. Hennessy, Professor of Electrical Engineering and Computer Science, served as President of Stanford University from September 2000 until August 2016. In 2017, he initiated the Knight-Hennessy Scholars Program, the largest fully endowed graduate-level scholarship program in the world, and he currently serves as Director of the program. Hennessy joined Stanford's faculty in 1977. In 1981, he drew together researchers to focus on a technology known as RISC (Reduced Instruction Set Computer), which revolutionized computing by increasing performance while reducing costs. Hennessy helped transfer this technology to industry by cofounding MIPS Computer Systems in 1984. He served as chair of Computer Science, dean of the School of Engineering, and university provost before being appointed Stanford's 10th president. As president, he focused on increasing financial aid and on developing new initiatives in multidisciplinary research and teaching. He was the founding board chair of Atheros Communications, one of the early developers of WiFi technology, and has served on the boards of Cisco and Alphabet (Google's parent company). He is the coauthor of two internationally used textbooks in computer architecture. His honors include the 2012 Medal of Honor of the Institute of Electrical and Electronics Engineers and the ACM Turing Award (jointly with David Patterson). He is an elected member of the National Academy of Engineering, the National Academy of Sciences, the American Academy of Arts and Sciences, the Royal Academy of Engineering, and the American Philosophical Society. Hennessy earned his bachelor's degree in electrical engineering from Villanova University and his master's and doctoral degrees in computer science from Stony Brook University.

32-123

September 26

September 26, 2018, 4:30pm–5:30pm
Vladimir Vapnik: Learning Using Statistical Invariants (Revision of Machine Learning Problem)

Abstract: This talk covers a new learning paradigm. In the classical paradigm, the learning machine uses a data-driven model of learning. In the LUSI paradigm, the learning machine computes statistical invariants that are specific to the problem, and then minimizes the expected error in a way that preserves these invariants; learning is thus both data-driven and intelligence-driven. Mathematically, methods of the new paradigm employ both strong and weak convergence mechanisms, increasing the rate of convergence. LUSI describes a complete theory of learning and can be considered a mathematical alternative to the "deep learning" heuristic. The talk includes content from a paper published in Machine Learning (Springer, 2018).

Bio: Vladimir Vapnik has taught and conducted research in computer science and in theoretical and applied statistics for over 30 years. His major achievements include a general theory of minimizing the expected risk using empirical data, and a new type of learning machine, the Support Vector machine, that possesses a high level of generalization ability. These techniques have been used in constructing intelligent machines. Prof. Vapnik received his master's degree in mathematics from Uzbek State University, Samarkand, USSR, in 1958, and completed his Ph.D. in statistics at the Institute of Control Sciences, Moscow, in 1964, where he became Head of the Computer Science Research Department before joining AT&T Bell Laboratories, NJ. He has held a professorship in computer science and statistics at Royal Holloway, University of London, since 1995, and a professorship in computer science at Columbia University, New York City, since 2003.

Kirsch Auditorium 32-123

May 02

May 2, 2018, 4:30pm–5:30pm
Dertouzos Distinguished Lecture: From Utopia to Dystopia in 29 Short Years

This year marks a milestone in the history of the World Wide Web: more than half of the world's population will be online. However, the threats to the web are real and many, from misinformation and questionable political advertising to a loss of control over our personal data. If a future web were to empower the hopes we had for the original web, what would it look like? A new web is possible in which people have complete control of their own data; applications and data are separated from each other; and a user's choice of each is separate. Let's assemble the brightest minds from business, technology, government, civil society, the arts, and academia to build a new web that will again empower science and democracy.

32-123/Kirsch Auditorium