September 18

2019-09-18, 14:00–15:30 (America/New_York)
Yoshua Bengio: Learning High-Level Representations for Agents

Abstract: A dream of the deep learning project was that a learner could discover a hierarchy of representations, with the highest level capturing abstract concepts of the kind we can communicate with language, reason with, and generally use to understand how the world works. This remains a challenge, but recent progress in machine learning could help us approach that objective. We will discuss how the ability to discover causal structure, and in particular causal variables (from low-level perception), would be progress in that direction, and how recent advances in meta-learning and in taking the perspective of an agent (rather than a passive learner) could also play an important role. Because we are talking about high-level variables, this discussion touches on the old divide between system 1 cognition (intuitive and anchored in perception) and system 2 cognition (conscious and more sequential): these high-level variables sit at the interface between the two types of cognitive computation. Unlike what some advocate when they talk about disentangling factors of variation, I do not believe that these high-level variables should be considered independent of each other in a statistical sense. They might be independent in a different sense, in that we can independently modify some rather than others, and in fact they are connected to each other through a rich web of dependencies of the kind we communicate with language. The agent and meta-learning perspectives also force us to leave the safe ground of the i.i.d. data of current learning theory and start thinking about non-stationarity, which a learning agent is necessarily confronted with.
Instead of viewing such non-stationarity as a hurdle, we propose to view it as a source of information, because these changes are often due to interventions by agents (the learner or other agents) and can thus help a learner figure out causal structure. In return, we might be able to build learning systems that are much more robust to changes in the environment, because they capture what is stationary and stable in the long run throughout these non-stationarities, and they build models of the world that can quickly adapt to such changes and sometimes may even be able to correctly infer what caused them (thus requiring no additional examples to make sense of the change in distribution).

Bio: Recognized as one of the world’s leading experts in artificial intelligence (AI), Yoshua Bengio is a pioneer in deep learning. He began his education in Montreal, where he earned his Ph.D. in computer science from McGill University, then completed his postdoctoral studies at the Massachusetts Institute of Technology (MIT). Since 1993, he has been a professor in the Department of Computer Science and Operational Research at the Université de Montréal. In 2000, he became the holder of the Canada Research Chair in Statistical Learning Algorithms. At the same time, he founded and became scientific director of Mila, the Quebec Institute of Artificial Intelligence, which is the world’s largest university-based research group in deep learning. He is also the Scientific Director of IVADO. His research contributions are substantial: in 2018, Yoshua Bengio collected the largest number of new citations in the world for a computer scientist, thanks to his three reference works and some 500 publications. Professor Bengio aspires to discover the principles that lead to intelligence through learning, and his research has earned him multiple awards.
In 2019, he earned the prestigious Killam Prize in computer science from the Canada Council for the Arts and was a co-winner of the A.M. Turing Award, considered the “Nobel of computer science,” which he received jointly with Geoffrey Hinton and Yann LeCun. He is also an Officer of the Order of Canada, a Fellow of the Royal Society of Canada, a recipient of the Excellence Awards of the Fonds de recherche du Québec – Nature et technologies 2019 and of the Marie-Victorin prize, and was named Scientist of the Year by Radio-Canada in 2017. These honours reflect the profound influence of his work on the evolution of our society.

Concerned about the social impact of AI, he has actively contributed to the development of the Montreal Declaration for the Responsible Development of Artificial Intelligence.

Location: Grier 34-401

October 16

2019-10-16, 16:30–17:30 (America/New_York)
David Patterson: Domain Specific Architectures for Deep Neural Networks: Three Generations of Tensor Processing Units (TPUs)

Abstract: The recent success of deep neural networks (DNNs) has inspired a resurgence in domain specific architectures (DSAs) to run them, partly as a result of the deceleration of microprocessor performance improvement due to the ending of Moore’s Law. DNNs have two phases: training, which constructs accurate models, and inference, which serves those models. Google’s first-generation Tensor Processing Unit (TPUv1) offered a 50X improvement in performance per watt over conventional architectures for inference. We naturally asked whether a successor could do the same for training. This talk reviews TPUv1 and explores how Google built the first production DSA supercomputer for the much harder problem of training, which was deployed in 2017. Google’s TPUv2/TPUv3 supercomputers with up to 1024 chips train production DNNs at close to perfect linear speedup, with 10X–40X more floating-point operations per watt than general-purpose supercomputers running the high-performance computing benchmark Linpack.

Bio: David Patterson is a Berkeley CS professor emeritus, a Google distinguished engineer, and the RISC-V Foundation Vice-Chair. He received his BA, MS, and PhD degrees from UCLA. His Reduced Instruction Set Computer (RISC), Redundant Array of Inexpensive Disks (RAID), and Network of Workstations projects helped lead to multibillion-dollar industries. This work led to 40 awards for research, teaching, and service, plus many papers and seven books. The best known book is ‘Computer Architecture: A Quantitative Approach,’ and the newest is ‘The RISC-V Reader: An Open Architecture Atlas.’ In 2018 he and John Hennessy shared the ACM A.M. Turing Award.

Location: 32-123

September 30

2020-09-30, 16:00–17:00 (America/New_York)
Fireside with Sen. Ron Wyden

Oregon Senator Ron Wyden was elected to the U.S. Senate in 1996. Wyden has consistently pushed for smart tech policies that put users, not powerful corporations, first. Wyden coauthored the bipartisan Section 230 of the Communications Decency Act, wrote the first net neutrality bill, and has defended strong encryption against threats from short-sighted government officials. Wyden, a fierce advocate for strong data privacy protections, last fall introduced the most comprehensive bill to protect Americans’ personal details online, the Mind Your Own Business Act. This sweeping piece of legislation would also hold corporate executives accountable for abusing personal information.

March 10

2021-03-10, 16:00–17:00 (America/New_York)
Prof. Ross Anderson: Infrastructure – the Good, the Bad and the Ugly

Abstract: Computer technology, like the railroad, gives us infrastructure that empowers innovators. The Internet and cloud computing let startups like YouTube and Instagram soar to huge valuations almost overnight, with only a handful of staff. But 21st-century tech differs from the 19th-century variety in that criminals also build infrastructure, from botnets to malware-as-a-service. There's also dual-use infrastructure, from Tor to bitcoin, with entangled legitimate and criminal applications. So crime can scale too. And even "respectable" infrastructure has disruptive uses. Social media enabled both Barack Obama and Donald Trump to outflank the political establishment and win power; they have also been used to foment communal violence in Asia. How are we to make sense of all this? Is it simply a matter for antitrust lawyers and cybercrime fighters, or do computer scientists have some insights to offer?

For the past twenty years, we have been studying the economics of information security. If Alice guards a system while Bob pays the cost of failure, you can expect trouble. This subject started out with concerns about infrastructure, namely payment card fraud and the insecurity of Windows, and went on to topics from the patch cycle through the behavioural economics of privacy to cybercrime. We learned that many persistent problems are down to misaligned incentives. We are now realising that when problems scale, infrastructure is usually involved; that we need computer-science insights into scaling as well as economists' insights into incentives; and that both of us have underestimated the role of institutions. We need to understand all this better to put controls at the right level in the stack and to develop better strategies to fight cybercrime.
We may also find some new directions as the regulation of technology moves up the political agenda.

Bio: Ross Anderson has devoted his career to developing security engineering as a discipline. He was a pioneer of hardware tamper-resistance, API security, peer-to-peer systems, prepayment metering and powerline communications. His other research extends from cryptography through side channels and the safety and privacy of clinical systems to technology policy. He was one of the founders of the discipline of security economics, and is PI of the Cambridge Cybercrime Centre, which collects and analyses data about online crime and abuse. He is a Fellow of the Royal Society and the Royal Academy of Engineering, as well as a winner of the Lovelace Medal, the UK's top award in computing. He holds faculty positions at both Cambridge and Edinburgh universities.

Join via Zoom. Meeting ID: 971 5985 1800

February 23

2022-02-23, 16:00–17:00 (America/New_York)
Dertouzos Distinguished Lecture: Connectivity, Dr. Robert M. Metcalfe

Abstract: The Internet is 52+ years old. It has by now suddenly reached two thirds of the human race. It has moved us considerably toward our goals of freedom and prosperity. And when COVID hit, the Internet was ready. Zoom! And yet we are overwhelmed by all the connectivity the Internet is delivering; for example, fake news. It’s time to treat CONNECTIVITY as a thing, with its own science, engineering, dimensions, disruptions, and … pathologies. I will try this by telling you stories from the evolution of the Internet, the invention of Ethernet, the founding of 3Com Corporation, and Metcalfe’s Law (V~N^2).

Bio: Dr. Robert M. Metcalfe has for the last 11 years been Professor of Innovation in the Cockrell School of Engineering and Professor of Entrepreneurship in the McCombs School of Business at The University of Texas at Austin. Last month he retired from UT Austin to start his sixth career, TBD. Metcalfe was an Internet pioneer beginning in 1970 at MIT, Harvard, Xerox Palo Alto Research Center, Stanford, and 3Com. He invented Ethernet at Xerox PARC on May 22, 1973. Today Ethernet is the Internet’s standard plumbing. It now adds, especially if we count Wireless Ethernet (Wi-Fi), billions of standard Internet ports per year. Metcalfe has won the Bell, Hopper, Japan C&C, Marconi, McCluskey, Shannon and Stibitz Prizes. He is a Life Trustee Emeritus of MIT and a member of the National Academy of Engineering. Metcalfe received the IEEE Medal of Honor in 1996 and the National Medal of Technology in 2005, both for his leadership in the invention, standardization and commercialization of Ethernet.

Join via Zoom. Meeting ID: 937 3841 7481
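Metcalfe’s Law, cited in the abstract as V~N^2, rests on a simple counting argument: N users of a network can form roughly N(N-1)/2 distinct pairwise connections, which grows quadratically in N. A minimal sketch of that count (the function name and the sample sizes are ours, for illustration only):

```python
def pairwise_connections(n: int) -> int:
    """Number of distinct pairwise links among n users: n choose 2."""
    return n * (n - 1) // 2

# Doubling the user base roughly quadruples the potential connections,
# which is the quadratic growth Metcalfe's Law describes.
print(pairwise_connections(10))  # 45
print(pairwise_connections(20))  # 190
```

Going from 10 to 20 users multiplies the link count by a little more than four (45 to 190), and the ratio approaches exactly four as N grows.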

November 02

2022-11-02, 17:00–18:00 (America/New_York)
What we see and what we value: AI with a human perspective

Please join us on Wednesday, November 2 at 5:00pm in the Stata Center, Kirsch Auditorium, 32-123 for a special event as part of the Dertouzos Distinguished Speakers series. We will welcome Prof. Fei-Fei Li from Stanford University for her talk entitled ‘What we see and what we value: AI with a human perspective’.

Abstract: One of the most ancient sensory functions, vision emerged in prehistoric animals more than 540 million years ago. Since then animals, empowered first by the ability to perceive the world and then to move around and change it, have developed more and more sophisticated intelligence systems, culminating in human intelligence. Throughout this process, visual intelligence has been a cornerstone of animal intelligence. Enabling machines to see is hence a critical step toward building intelligent machines. In this talk, I will explore a series of projects with my students and collaborators, all aiming to develop intelligent visual machines using machine learning and deep learning methods. I begin by explaining how neuroscience and cognitive science inspired the development of algorithms that enabled computers to see what humans see. Then I discuss intriguing limitations of human visual attention and how we can develop computer algorithms and applications to help, in effect allowing computers to see what humans don’t see. Yet this leads to important social and ethical considerations about what we do not want to see or do not want to be seen, inspiring work on privacy computing in computer vision, as well as the importance of addressing data bias in vision algorithms.
Finally, I address the tremendous potential and opportunity to develop smart cameras and robots that help people see or do what they want machines’ help seeing or doing, shifting the narrative from AI’s potential to replace people to AI’s opportunity to help people. I present our work in ambient intelligence in healthcare, as well as household robots, as examples of AI’s potential to augment human capabilities. Last but not least, the cumulative observations from developing AI from a human-centered perspective have led to the establishment of Stanford’s Institute for Human-centered AI (HAI). I will showcase a small sample of interdisciplinary projects supported by HAI.

Bio: Dr. Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University and Co-Director of Stanford’s Human-Centered AI Institute. She served as the Director of Stanford’s AI Lab from 2013 to 2018, and during her sabbatical from Stanford from January 2017 to September 2018 she was a Vice President at Google and served as Chief Scientist of AI/ML at Google Cloud. Dr. Li obtained her B.A. degree in physics from Princeton in 1999 with High Honors, and her Ph.D. degree in electrical engineering from the California Institute of Technology (Caltech) in 2005. She also holds an honorary doctorate from Harvey Mudd College. Dr. Li joined Stanford in 2009 as an assistant professor. Prior to that, she was on the faculty at Princeton University (2007-2009) and the University of Illinois Urbana-Champaign (2005-2006).

Dr. Li’s current research interests include cognitively inspired AI, machine learning, deep learning, computer vision, and AI+healthcare, especially ambient intelligent systems for healthcare delivery. In the past she has also worked on cognitive and computational neuroscience. Dr. Li has published more than 200 scientific articles in top-tier journals and conferences, including Nature, PNAS, Journal of Neuroscience, CVPR, ICCV, NIPS, ECCV, ICRA, IROS, RSS, IJCV, IEEE-PAMI, New England Journal of Medicine, and Nature Digital Medicine. Dr. Li is the inventor of ImageNet and the ImageNet Challenge, a critical large-scale dataset and benchmarking effort that has contributed to the latest developments in deep learning and AI. In addition to her technical contributions, she is a national leading voice for advocating diversity in STEM and AI. She is co-founder and chairperson of the national non-profit AI4ALL, aimed at increasing inclusion and diversity in AI education.

Dr. Li is an elected member of the National Academy of Engineering (NAE), the National Academy of Medicine (NAM) and the American Academy of Arts and Sciences (AAAS). She is also a Fellow of the ACM, a member of the Council on Foreign Relations (CFR), and a recipient of the 2022 IEEE PAMI Thomas Huang Memorial Prize, the 2019 IEEE PAMI Longuet-Higgins Prize, the 2019 National Geographic Society Further Award, the 2017 Athena Award for Academic Leadership, the 2016 IAPR J.K. Aggarwal Prize, the 2016 IEEE PAMI Mark Everingham Award, the 2016 NVIDIA Pioneer in AI Award, the 2014 IBM Faculty Fellow Award, the 2012 Yahoo Labs FREP Award, the 2011 Alfred Sloan Faculty Award, the 2009 NSF CAREER Award, and the 2006 Microsoft Research New Faculty Fellowship, among others. Dr. Li has been a keynote speaker at many academic and influential conferences, including the World Economic Forum (Davos), the Grace Hopper Conference 2017 and the TED2015 main conference. Work from Dr. Li’s lab has been featured in a variety of magazines and newspapers, including the New York Times, Wall Street Journal, Fortune, Science, Wired, MIT Technology Review, the Financial Times, and more.
She was selected as a 2017 Woman in Tech by ELLE Magazine, given a 2017 Awesome Women Award by Good Housekeeping, named a Global Thinker of 2015 by Foreign Policy, and honored as one of the “Great Immigrants: The Pride of America” in 2016 by the Carnegie Foundation (past honorees include Albert Einstein, Yo-Yo Ma, and Sergey Brin).

Location: 32-123

December 07

2022-12-07, 17:00–18:00 (America/New_York)
Shading Languages and the Emergence of Programmable Graphics Systems

Abstract: A major challenge in using computer graphics for movies and games is to create a rendering system that can create realistic pictures of a virtual world. The system must handle the variety and complexity of the shapes, materials, and lighting that combine to create what we see every day. The images must also be free of artifacts, emulate cameras to create depth of field and motion blur, and compose seamlessly with photographs of live action. Pixar's RenderMan was created for this purpose, and has been widely used in feature film production. A key innovation in the system is the use of a shading language to procedurally describe appearance. Shading languages were subsequently extended to run in real time on graphics processing units (GPUs), and now shading languages are widely used in game engines. The final step was the realization that the GPU is a data-parallel computer, and that the shading language could be extended into a general-purpose data-parallel programming language. This enabled a wide variety of applications in high performance computing, such as physical simulation and machine learning, to be run on GPUs. Nowadays, GPUs are among the fastest computers in the world. This talk will review the history of shading languages and GPUs, and discuss the broader implications for computing.

Bio: Pat Hanrahan is the Canon Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University. His research focuses on rendering algorithms, graphics systems, and visualization. Hanrahan received a Ph.D. in biophysics from the University of Wisconsin-Madison in 1985.
As a founding employee at Pixar Animation Studios in the 1980s, Hanrahan led the design of the RenderMan Interface Specification and the RenderMan Shading Language. In 1989, he joined the faculty of Princeton University. In 1995, he moved to Stanford University. More recently, Hanrahan served as a co-founder and CTO of Tableau Software. He has received three Academy Awards for Science and Technology, the SIGGRAPH Computer Graphics Achievement Award, the SIGGRAPH Stephen A. Coons Award, and the IEEE Visualization Career Award. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences. In 2019, he received the ACM A.M. Turing Award.

Location: 32-123

February 15

2023-02-15, 17:00–18:00 (America/New_York)
Generative Machines and Ground Truth

Abstract: We are living in a period of rapid acceleration for generative AI, where large language and text-to-image diffusion models are being deployed in a multitude of everyday contexts. From ChatGPT’s training set of hundreds of billions of words to LAION-5B’s corpus of almost 6 billion image-text pairs, these vast datasets, scraped from the internet and treated as “ground truth,” play a critical role in shaping the epistemic boundaries that govern machine learning models. Yet training data is beset with complex social, political, and epistemological challenges. What happens when data is stripped of context, meaning, and provenance? How does training data limit *what* and *how* machine learning systems interpret the world? And most importantly, what forms of power do these approaches enhance and enable? This lecture is an invitation to reflect on the epistemic foundations of generative AI, and to consider the wide-ranging impacts of the current generative turn.

Bio: Professor Kate Crawford is a leading international scholar of the social implications of artificial intelligence. She is a Research Professor at USC Annenberg in Los Angeles, a Senior Principal Researcher at Microsoft Research in New York, an Honorary Professor at the University of Sydney, and the inaugural Visiting Chair for AI and Justice at the École Normale Supérieure in Paris. Her latest book, Atlas of AI (Yale, 2021), won the Sally Hacker Prize from the Society for the History of Technology and the ASIS&T Best Information Science Book Award, and was named one of the best books of 2021 by New Scientist and the Financial Times. Over her twenty-year research career, she has also produced groundbreaking creative collaborations and visual investigations.
Her project Anatomy of an AI System, with Vladan Joler, is in the permanent collections of the Museum of Modern Art in New York and the V&A in London; it won the Design of the Year Award in 2019 and was included in the Design of the Decades by the Design Museum of London. Her collaboration with the artist Trevor Paglen, Excavating AI, won the Ayrton Prize from the British Society for the History of Science. She has advised policy makers in the United Nations, the White House, and the European Parliament, and she currently leads the Knowing Machines Project, an international research collaboration that investigates the foundations of machine learning.

Location: 32-123

March 15

2023-03-15, 17:00–18:00 (America/New_York)
The search for evidence of quantum advantage

Abstract: The field of quantum computation relies heavily on the belief that quantum computation violates the extended Church-Turing thesis, namely, that quantum many-body systems cannot be simulated by classical ones with only polynomially growing overhead. Importantly, we must ask: what experimental evidence do we have for this assumption? A natural candidate for such evidence is quantum simulations of highly complex many-body quantum evolutions. However, there are inherent difficulties in viewing such simulations as providing evidence for quantum advantage. Assuming the high-complexity regimes of quantum systems are hard to simulate classically, verification of quantum evolution requires either exponential time or sophisticated interactive protocols; unfortunately, so far no (conjectured to be) computationally hard problem has been identified and verified to be solved by quantum simulations. A major effort toward providing such evidence of scalable quantum advantage has concentrated in the past decade on quantum random circuit sampling (RCS), as in Google's famous 2019 supremacy experiment. The RCS experiments can be modeled as sampling from a random quantum circuit in which each gate is subject to a small amount of noise. I will describe a recent work in which we give a polynomial-time classical algorithm for sampling from the output distribution of a noisy random quantum circuit, under an anti-concentration assumption, to within inverse-polynomial total variation distance. It should be noted that our algorithm is not practical in its current form and does not address finite-size RCS-based quantum supremacy experiments. Nevertheless, our result gives strong evidence that random circuit sampling cannot be the basis of a scalable experimental violation of the extended Church-Turing thesis.
I will end with a discussion of the prospects of providing evidence for a scalable violation of the extended Church-Turing thesis in the pre-quantum-fault-tolerance era (also known as the NISQ era). Based on recent joint work with Xun Gao, Zeph Landau, Yunchao Liu, and Umesh Vazirani (arXiv:2211.03999; QIP 2023, STOC 2023).

Bio: Dorit Aharonov is a Professor at the School of Computer Science and Engineering at the Hebrew University of Jerusalem and the CSO of Qedma Quantum Computing. In her Ph.D., Aharonov proved the quantum fault tolerance theorem together with her advisor Ben-Or; this theorem is one of the main pillars of quantum computation today. She later contributed several pioneering works in a variety of areas, including quantum algorithms (notably quantum walks, quantum adiabatic computation and topologically related algorithms), as well as Hamiltonian complexity, quantum cryptography and quantum verification. Much of her research can be viewed as creating a bridge between physics and computer science, attempting to study fundamental physics questions using computational language. Aharonov was educated at the Hebrew University of Jerusalem (BSc in Mathematics and Physics, PhD in Computer Science and Physics) and then continued to postdocs at IAS Princeton (Mathematics) and UC Berkeley (Computer Science). She joined the faculty of the computer science department of the Hebrew University of Jerusalem in 2001. In 2005 Aharonov was featured by the journal Nature as one of four theoreticians making waves in their chosen field; in 2006 she won the Krill Prize, and in 2014 she was awarded the Michael Bruno Award. In 2020 she joined forces with Dr. Asif Sinay and Prof. Netanel Lindner to co-found Qedma Quantum Computing.

Location: 32-123
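The abstract measures the quality of the classical sampler in total variation distance, which for distributions over a finite outcome set is half the sum of the absolute probability differences: 0 means identical distributions, 1 means disjoint support. A minimal sketch of the definition (the function name and the two example distributions are ours, for illustration only):

```python
def total_variation_distance(p, q):
    """TV(p, q) = (1/2) * sum_x |p(x) - q(x)| over a finite outcome set.

    p and q are dicts mapping outcomes to probabilities; missing
    outcomes are treated as probability 0.
    """
    outcomes = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in outcomes)

# A made-up ideal two-qubit output distribution vs. a noisy approximation:
ideal = {"00": 0.5, "11": 0.5}
noisy = {"00": 0.45, "11": 0.45, "01": 0.05, "10": 0.05}
print(total_variation_distance(ideal, noisy))  # 0.1 (up to float rounding)
```

A sampler accurate to within inverse-polynomial total variation distance, as in the result described above, means this quantity shrinks polynomially as the problem size grows.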

November 08

2023-11-08, 17:00–18:00 (America/New_York)
TURING LECTURE: CONNECTIVITY is a thing, is THE thing

Abstract: Hear stories from Arpanet, Alohanet, Ethernet, and Internet, many of which are true. After 50+ years since the first Internet packets were switched, the most important new fact about the human condition is that we are now suddenly connected. The suddenness has caused pathologies -- hacking, porno, advertising, censorship, polarization, fake news -- but then extreme poverty has suddenly declined for the first time, thanks to the World Wide Web? Connectivity has evolved with various transitions and reversals, of which we can expect many more. And some of our AIs, still short of AGI, are approaching 100 trillion connection parameters.

Bio: Bob Metcalfe '68 was an Internet pioneer starting at MIT in 1970. He continued switching Internet packets at Harvard, Xerox PARC, Stanford, and 3Com Corporation, which he founded in 1979. He invented Ethernet at Xerox PARC in 1973. He took 3Com public in 1984. He's been defending Metcalfe's Law (V~N^2) since the 1980s. This year Bob received the million-dollar ACM Turing Award for the invention, standardization, and commercialization of Ethernet.

Location: 32-123

March 20

2024-03-20, 17:00–18:00 (America/New_York)
Algorithmic Discrepancy Theory and Randomized Controlled Trials

Abstract: Theorems in discrepancy theory tell us that it is possible to partition a set of vectors into two sets that look surprisingly similar to each other. In particular, these sets can be much more similar to each other than those produced by a random partition. For many measures of similarity, computer scientists have developed algorithms that produce these partitions efficiently.

A natural application for these algorithms is the design of randomized controlled trials (RCTs). Randomized controlled trials are used to test the effectiveness of interventions, like medical treatments and educational innovations. Randomization is used to ensure that the test and control groups are probably similar. When we know nothing about the experimental subjects, a random partition into test and control groups is the best choice. When experimenters have measured quantities about the experimental subjects that they expect could influence a subject's response to a treatment, they try to ensure that these quantities are evenly distributed between the test and control groups. That is, they want a random partition of low discrepancy.

In this talk, I will survey some fundamental results in discrepancy theory, present a model for the analysis of RCTs, and summarize results from my joint work with Chris Harshaw, Fredrik Sävje, and Peng Zhang that uses algorithmic discrepancy theory to improve the design of randomized controlled trials.

Bio: Daniel Alan Spielman is the Sterling Professor of Computer Science, and Professor of Statistics and Data Science, and of Mathematics, at Yale. He received his B.A. in Mathematics and Computer Science from Yale in 1992, and his Ph.D. in Applied Mathematics from M.I.T. in 1995. After spending a year as an NSF Mathematical Sciences Postdoctoral Fellow in the Computer Science Department at U.C. Berkeley, he became a professor in the Applied Mathematics Department at M.I.T. He moved to Yale in 2005. He has received many awards, including the 1995 ACM Doctoral Dissertation Award, the 2002 IEEE Information Theory Paper Award, the 2008 and 2015 Gödel Prizes, the 2009 Fulkerson Prize, the 2010 Nevanlinna Prize, the 2014 Pólya Prize, the 2021 NAS Held Prize, the 2023 Breakthrough Prize in Mathematics, a Simons Investigator Award, and a MacArthur Fellowship. He is a Fellow of the Association for Computing Machinery and a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the Connecticut Academy of Science and Engineering. His main research interests include the design and analysis of algorithms, network science, machine learning, digital communications, and scientific computing.

Location: 32-123
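To make the abstract's notion of a low-discrepancy partition concrete, here is a toy sketch (our own illustrative code with made-up covariates, not the algorithm from the joint work described above): each subject is a vector of measured covariates, an assignment gives each subject +1 (test) or -1 (control), and the discrepancy is the largest coordinate-wise imbalance of the signed sum. For eight subjects we can simply brute-force the best split:

```python
from itertools import product

def discrepancy(vectors, signs):
    """L-infinity discrepancy: max absolute coordinate of sum_i signs[i] * vectors[i]."""
    dims = len(vectors[0])
    return max(abs(sum(s * v[d] for v, s in zip(vectors, signs)))
               for d in range(dims))

# Hypothetical covariates for 8 subjects: (age, baseline score).
subjects = [(34, 7), (51, 3), (29, 9), (45, 5), (38, 6), (60, 2), (27, 8), (48, 4)]

# Exhaustively search all 2^8 test/control assignments for the most balanced one.
best = min(product((-1, 1), repeat=len(subjects)),
           key=lambda signs: discrepancy(subjects, signs))

print(discrepancy(subjects, best))  # 0: both covariates balance exactly
```

A random assignment will usually leave a visible imbalance in at least one covariate; the point of algorithmic discrepancy theory is to find near-balanced splits efficiently, without the exponential search used in this tiny example.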