Charles E. Leiserson






Parallel computing is tremendously underutilized in today’s computer systems, especially for application programming, because of its complexity. As Moore’s law comes to an end, however, software strategies like parallel programming will be relied upon to continue the trend of ever more powerful computing. Professor Charles E. Leiserson, Professor of Computer Science and Engineering at MIT and Edwin Sibley Webster Professor in Electrical Engineering and Computer Science, envisions making parallel computing a seamless extension of commodity serial computing and, ultimately, a world where every programmer can easily and efficiently produce fast code.

Since the 1960s, we have relied on Moore’s law as the fuel that drives continuous innovation in computing. Moore’s law is a predictive economic and technological trend that has enabled computational scientists and engineers to steadily develop more powerful computers with increasingly advanced capabilities at lower cost. As the driver behind the miniaturization of processor chips, Moore’s law has served us well for over 60 years.

The reality of physics, though, has spelled the end of this golden age: Moore’s law is ending, and only a couple of generations of miniaturization remain. Semiconductor hardware cannot shrink forever, because technologists cannot make wires thinner than atoms. The end of Moore’s law is curtailing the growth in computing capability of semiconductor processors, threatening the trend of ever more powerful computer applications and products.

How much of an impact will the end of this trend have? To put it in perspective, if Moore’s law had ended 20 years ago, many of the innovations we use and enjoy today—electric vehicles, smartphones, high-resolution medical imaging, machine learning, and many more—would not have been possible. After all, microprocessor performance increased 1,000-fold over that time period, enabling the capabilities embodied in these inventions.

Prof. Leiserson leads CSAIL’s Supertech Research Group, which investigates alternatives to semiconductor miniaturization as drivers for performance. Specifically, Leiserson’s research focuses on performance engineering: creating technologies—algorithms, software, and hardware—for developing fast code easily. Leiserson’s group builds computing infrastructure that makes it easier for programmers to obtain performance for their applications. Application performance is important, because it enables new capabilities for users of those applications. Moreover, for scientists and engineers who use computation to simulate physical or social phenomena, performance can provide them with a stronger “lens,” allowing them to “see” the objects of their studies more clearly.  

Parallel computing is one of the most important strategies for performance engineering. Parallel programming allows many operations to be performed at once by breaking a task into subtasks that can be executed simultaneously. Although virtually all computers today have parallel-computing capabilities, most applications are serial: they perform only one operation at a time. But parallel programming introduces the problem of coordinating the parallel tasks. Consider how much more capability a business or organization has than a single individual for accomplishing its mission. Similarly, a parallel program has much more capability than a comparable serial program, but managing the many tasks a parallel program must perform can be a daunting chore. Programmers face a choice: simple but slow code they can understand, or fast but complicated code they struggle to get right.
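The fork-join, divide-and-conquer style of parallelism described above can be sketched in a few lines. The following is an illustrative Python sketch, not Cilk itself: it splits a summation into independent subtasks and runs them on a thread pool. (In CPython, threads illustrate the coordination pattern but do not speed up CPU-bound work; a process pool, or a language like Cilk/C, would be needed for a real speedup.)

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    """Sum `data` by splitting it into chunks that are summed concurrently."""
    if not data:
        return 0
    step = (len(data) + workers - 1) // workers            # chunk size, rounded up
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each chunk is an independent subtask; the final sum recombines them.
        return sum(pool.map(sum, chunks))
```

Even in this toy example, the coordination problem the text describes is visible: the work must be split, scheduled, and recombined correctly, which a plain serial `sum(data)` avoids entirely.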

Leiserson’s approach is to use mathematical rigor and deep knowledge of performance to understand how to obtain the best of both worlds: fast parallel code that is easy to develop and understand. Beyond parallelism, his group studies all other aspects of performance engineering, such as caching, compiler optimization, and algorithms. Leiserson hopes to put fast code within close reach of average programmers so that society can continue to enjoy steadily increasing computing performance and the benefits it engenders.
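As a small illustration of the caching concerns mentioned above, the order in which a program walks through memory can matter as much as the amount of work it does. The hypothetical Python sketch below computes the same matrix sum two ways; in languages with contiguous arrays (C, Fortran, or NumPy), the row-major order touches memory sequentially and is typically much faster on cache-based hardware, even though both versions perform identical arithmetic.

```python
def sum_row_major(matrix):
    """Traverse each row in order: sequential, cache-friendly access."""
    total = 0
    for row in matrix:
        for x in row:
            total += x
    return total

def sum_col_major(matrix):
    """Traverse column by column: strided access that defeats the cache
    when rows are stored contiguously."""
    total = 0
    rows, cols = len(matrix), len(matrix[0])
    for j in range(cols):
        for i in range(rows):
            total += matrix[i][j]
    return total
```

Both functions return identical results; the difference is invisible in the arithmetic and shows up only in memory behavior, which is exactly the kind of effect performance engineering targets.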


Charles E. Leiserson is Professor of Computer Science and Engineering at the Massachusetts Institute of Technology in Cambridge, Massachusetts, USA.  He joined the faculty of MIT in 1981, where he is now the Edwin Sibley Webster Professor in MIT’s Electrical Engineering and Computer Science Department.  He is former Associate Director and Chief Operating Officer of the MIT Computer Science and Artificial Intelligence Laboratory, the largest on-campus laboratory at MIT, where he leads the Supertech research group and is a member of the Theory of Computation group.  He is Faculty Director of the MIT-Air Force AI Accelerator, which is designed to make fundamental advances in artificial intelligence to improve Department of the Air Force operations while also addressing broader societal needs.  He received his B.S. from Yale University in 1975 and his Ph.D. from Carnegie Mellon University in 1981.
Professor Leiserson's research centers on the theory of parallel computing, especially as it relates to engineering reality.  He coauthored the first paper on systolic architectures.  He invented the retiming method of digital-circuit optimization and developed the algorithmic theory behind it.  On leave from MIT at Thinking Machines Corporation, he designed and led the implementation of the network architecture for the Connection Machine Model CM-5 Supercomputer.  This machine was the world's most powerful supercomputer in the early 1990s, and it incorporated the "universal" fat-tree interconnection network he developed at MIT.  Fat-trees are now the preferred interconnect strategy for InfiniBand technology.  He introduced the notion of cache-oblivious algorithms, which exploit the memory hierarchy near optimally while containing no tuning parameters for cache size or cache-line length.  He developed the Cilk multithreaded programming language and runtime system, which featured the first provably efficient work-stealing scheduler.  He led the development of several Cilk-based parallel chess-playing programs, including StarSocrates and Cilkchess, which won numerous prizes in international competition.  On leave from MIT as Director of System Architecture at Akamai Technologies, he led the engineering team that developed a worldwide content-distribution network with tens of thousands of Internet servers.  He founded Cilk Arts, Inc., which developed the Cilk++ multicore concurrency platform.  Intel Corporation acquired Cilk Arts in 2009.  He leads the NSF- and Air Force-sponsored OpenCilk project, which is making a comprehensive open-source platform for Cilk programming widely available.

Professor Leiserson has made numerous contributions to computer-science education.  He is well known as a coauthor of the textbook Introduction to Algorithms (The MIT Press), which was named "Best 1990 Professional and Scholarly Book in Computer Science and Data Processing" by the Association of American Publishers.  Currently in its third edition, it is the leading textbook on computer algorithms, having sold over 850,000 copies, and is one of the most cited publications in all of computer science.  He developed the MIT undergraduate courses on algorithms and discrete mathematics for computer science.  He was for many years the head of the computer-science program for the Singapore-MIT Alliance, one of the first distance-education collaborations, which produced popular video lectures of his undergraduate course on algorithms, viewable through MIT OpenCourseWare.  He developed MIT's undergraduate class on software performance engineering, which teaches parallel programming not as an end in itself, but as one of several software technologies for writing fast code.  His annual workshop on Leadership Skills for Engineering and Science Faculty has educated hundreds of faculty at MIT and around the world in the human issues involved in leading technical teams in academia.  He was the founding Workshop Chair for the MIT Undergraduate Practice Opportunities Program (UPOP), which teaches MIT Engineering sophomores how leadership skills can leverage their technical skills in professional environments.  He has graduated over two dozen Ph.D. students and supervised more than 60 Master's and Bachelor's theses.

Professor Leiserson has won many academic awards.  He received the 2014 ACM-IEEE Computer Society Ken Kennedy Award for his "enduring influence on parallel computing systems and their adoption into mainstream use through scholarly research and development."  He was also cited for "distinguished mentoring of computer science leaders and students."  He received the IEEE Computer Society 2014 Taylor L. Booth Education Award "for worldwide computer science education impact through writing a best-selling algorithms textbook, and developing courses on algorithms and parallel programming."  He received the ACM 2013 Paris Kanellakis Theory and Practice Award "for contributions to efficient and robust parallel computation through both provably efficient randomized scheduling protocols and a set of parallel-language primitives constituting the Cilk framework."  He has received numerous Best Paper awards at prestigious conferences.  He received the 1982 ACM Doctoral Dissertation Award for his Ph.D. thesis, Area-Efficient VLSI Computation.  He is a Margaret MacVicar Faculty Fellow at MIT, the highest recognition at MIT for undergraduate teaching.  He has been elected Fellow of four professional societies — AAAS, ACM, IEEE, and SIAM — and he is a member of the National Academy of Engineering.



Determining Wikipedia's Influence on Science

Wikipedia is one of the most widely accessed encyclopedias in the world, including among scientists. Our project aims to investigate just how far Wikipedia’s influence goes in shaping science.


Performance Engineering of Cache Profilers

Our goal is to develop lightweight tools that allow programmers to better understand the cache performance of their applications. Tasks include designing profilers, performance engineering existing ones, and exploring different metrics for cache interactions.




Systems Community of Research

The Systems CoR is focused on building and investigating large-scale software systems that power modern computers, phones, data centers, and networks, including operating systems, the Internet, wireless networks, databases, and other software infrastructure.


Theory of Computation Community of Research

The goal of the Theory of Computation CoR is to study the fundamental strengths and limits of computation as well as how these interact with mathematics, computer science, and other disciplines.
