PI: Charles E. Leiserson, Professor
Phone: 253-5833
Room: 32-G768

Parallel computing is tremendously underutilized in today’s computer systems, especially for application programming, due to its complexity. As Moore’s law comes to an end, however, software strategies like parallel programming will be relied upon to continue the trend of ever more powerful computing. Charles E. Leiserson, Professor of Computer Science and Engineering at MIT and Webster Professor in Electrical Engineering and Computer Science, envisions making parallel computing a seamless extension of commodity serial computing and, ultimately, a world where every programmer can easily and efficiently produce fast code.

Since the 1960s, we have relied on Moore’s law as the fuel driving continuous innovation in computing. Moore’s law, a predictive economic and technological trend, has enabled computational scientists and engineers to steadily develop more powerful computers with increasingly advanced capabilities at ever lower cost. As the driver behind the miniaturization of processor chips, Moore’s law has served us well for over 60 years.

The reality of physics, though, has spelled the end of this golden age: Moore’s law is over, and only a couple of generations of miniaturization remain. Unfortunately, semiconductor hardware cannot become smaller forever, because technologists cannot make wires thinner than atoms. The end of Moore’s law is stalling the growth in the computing capability of semiconductor processors, threatening the trend of ever more powerful computer applications and products.

How much of an impact will the end of this trend have? To put it in perspective, if Moore’s law had ended 20 years ago, many of the innovations we use and enjoy today (electric vehicles, smartphones, high-resolution medical imaging, machine learning, and many more) would not have been possible. After all, microprocessor performance increased 1,000-fold over that time period, enabling the capabilities embodied in these inventions.

Prof. Leiserson leads CSAIL’s Supertech Research Group, which investigates alternatives to semiconductor miniaturization as drivers for performance. Specifically, Leiserson’s research focuses on performance engineering: creating technologies—algorithms, software, and hardware—for developing fast code easily. Leiserson’s group builds computing infrastructure that makes it easier for programmers to obtain performance for their applications. Application performance is important, because it enables new capabilities for users of those applications. Moreover, for scientists and engineers who use computation to simulate physical or social phenomena, performance can provide them with a stronger “lens,” allowing them to “see” the objects of their studies more clearly.  

Parallel computing is one of the most important strategies for performance engineering. Parallel programming speeds up a large task by breaking it into subtasks that can be executed simultaneously. Although virtually all computers today have parallel-computing capabilities, most applications are still serial: they perform only one operation at a time. But parallel programming introduces the problem of how to coordinate the parallel tasks. Just as a business or organization has far more capability for accomplishing its mission than a single individual does, a parallel program has much more capability than a comparable serial program, but managing the many tasks a parallel program must perform can be a daunting chore. Programmers face a choice: simple but slow code they can understand, or fast but complicated code they struggle to get right.
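
To make the idea concrete, here is a minimal sketch, not code from Leiserson’s group, that sums an array serially and then in parallel by splitting the work into subtasks with C++’s std::async. The function names and the choice of four tasks are illustrative; the single loop that combines the partial results is the coordination step the paragraph describes, and in real applications that coordination is where much of the difficulty lies.

// Minimal illustration: the same sum computed serially and by parallel subtasks.
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// Serial version: one operation at a time.
double serial_sum(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0);
}

// Parallel version: split the range into chunks, sum each chunk in its own
// task, then combine the partial results.
double parallel_sum(const std::vector<double>& v, int tasks = 4) {
    std::vector<std::future<double>> partials;
    const std::size_t chunk = v.size() / tasks;
    for (int t = 0; t < tasks; ++t) {
        auto begin = v.begin() + t * chunk;
        auto end = (t == tasks - 1) ? v.end() : begin + chunk;
        partials.push_back(std::async(std::launch::async,
            [begin, end] { return std::accumulate(begin, end, 0.0); }));
    }
    double total = 0.0;
    for (auto& p : partials) total += p.get();  // coordination: wait and combine
    return total;
}

int main() {
    std::vector<double> v(1 << 22, 1.0);  // about 4 million elements
    std::cout << serial_sum(v) << " " << parallel_sum(v) << "\n";
}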

Leiserson’s approach is to use mathematical rigor and deep knowledge of performance to obtain the best of both worlds: fast parallel code that is easy to develop and understand. Beyond parallelism, his group studies all the other aspects of performance engineering, such as caching, compiler optimization, and algorithms. Leiserson hopes to put fast code within reach of average programmers so that society can continue to enjoy steadily increasing computing performance and the benefits it engenders.
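
As one concrete example of a non-parallel optimization of the kind performance engineers study, the sketch below computes the same matrix product with two loop orders. It is a standard illustration, not code from Leiserson’s group: the i-k-j order walks both B and C row by row in the inner loop, which typically uses the cache far better than the textbook i-j-k order, whose inner loop strides down a column of B.

// Cache-oriented loop reordering: two ways to compute C += A * B.
// C is assumed to be zero-initialized and all matrices n-by-n.
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Textbook i-j-k order: the inner loop accesses B[k][j] with a large stride.
void multiply_ijk(const Matrix& A, const Matrix& B, Matrix& C) {
    const std::size_t n = A.size();
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            for (std::size_t k = 0; k < n; ++k)
                C[i][j] += A[i][k] * B[k][j];
}

// i-k-j order: the inner loop walks B[k] and C[i] contiguously, so each
// cache line fetched from memory is fully used before it is evicted.
void multiply_ikj(const Matrix& A, const Matrix& B, Matrix& C) {
    const std::size_t n = A.size();
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t k = 0; k < n; ++k) {
            const double a = A[i][k];
            for (std::size_t j = 0; j < n; ++j)
                C[i][j] += a * B[k][j];
        }
}

Both versions compute the same result; timing them on large matrices usually reveals a wide gap, and that is the kind of effect the cache-profiling project listed below helps programmers see.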

Projects

Determining Wikipedia's Influence on Science

Wikipedia is one of the most widely accessed encyclopedia sites in the world, and scientists are among its readers. Our project aims to investigate just how far Wikipedia’s influence goes in shaping science.

Performance Engineering of Cache Profilers

Our goal is to develop lightweight tools that allow programmers to better understand the cache performance of their applications. Tasks include designing profilers, performance engineering existing ones, and exploring different metrics for cache interactions.
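
The project description above does not specify an implementation, but as a rough, hypothetical sketch of the kind of measurement a lightweight cache profiler builds on, the code below times traversals of one large buffer at increasing strides. Every traversal touches the same number of bytes, so the extra time at larger strides mostly reflects cache misses.

// Hypothetical sketch (not the project's tooling): sequential vs. strided
// walks over the same buffer. Larger strides waste most of each cache line,
// so the same number of accesses takes noticeably longer.
#include <chrono>
#include <cstdio>
#include <vector>

double walk_seconds(const std::vector<char>& buf, std::size_t stride) {
    const auto start = std::chrono::steady_clock::now();
    volatile char sink = 0;
    for (std::size_t offset = 0; offset < stride; ++offset)
        for (std::size_t i = offset; i < buf.size(); i += stride)
            sink = sink + buf[i];  // same total accesses for every stride
    return std::chrono::duration<double>(
        std::chrono::steady_clock::now() - start).count();
}

int main() {
    std::vector<char> buf(64 * 1024 * 1024, 1);  // 64 MiB, larger than cache
    for (std::size_t stride : {1, 16, 64, 256, 4096})
        std::printf("stride %5zu: %.3f s\n", stride, walk_seconds(buf, stride));
}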

Groups

Theory of Computation Community of Research

The goal of the Theory of Computation CoR is to study the fundamental strengths and limits of computation as well as how these interact with mathematics, computer science, and other disciplines.