Parallel computing is tremendously underutilized in today's computer systems, especially in application programming, because of its complexity. As Moore's law comes to an end, however, software strategies like parallel programming will be relied upon to continue the trend of ever more powerful computing. Charles E. Leiserson, Professor of Computer Science and Engineering at MIT and Webster Professor in Electrical Engineering and Computer Science, envisions making parallel computing a seamless extension of commodity serial computing, and ultimately a world where every programmer can easily and efficiently produce fast code.
Since the 1960s, we have relied on Moore's law as the fuel we need to drive continuous innovation in computing. Moore's law is a predictive economic and technological trend that has enabled computational scientists and engineers to steadily develop more powerful computers with increasingly advanced computing capabilities at a lower cost. As the driver behind the miniaturization of processor chips, Moore's law has served us well for over 60 years.
The reality of physics, though, has spelled the end of this golden age: Moore's law is over, and only a couple of generations of miniaturization remain. Semiconductor hardware cannot shrink forever, because semiconductor technologists cannot make wires thinner than atoms. The end of Moore's law has slowed the growth in the computing capability of semiconductor processors, threatening the trend of ever more powerful computer applications and products.
How much of an impact will the end of this trend have? To put it in perspective, if Moore's law had ended 20 years ago, many of the innovations we use and enjoy today—electric vehicles, smartphones, high-resolution medical imaging, machine learning, and many more—would not have been possible. After all, microprocessor performance increased 1,000-fold over that time period, enabling the capabilities embodied in these inventions.
Prof. Leiserson leads CSAIL’s Supertech Research Group, which investigates alternatives to semiconductor miniaturization as drivers for performance. Specifically, Leiserson’s research focuses on performance engineering: creating technologies—algorithms, software, and hardware—for developing fast code easily. Leiserson’s group builds computing infrastructure that makes it easier for programmers to obtain performance for their applications. Application performance is important, because it enables new capabilities for users of those applications. Moreover, for scientists and engineers who use computation to simulate physical or social phenomena, performance can provide them with a stronger “lens,” allowing them to “see” the objects of their studies more clearly.
Parallel computing is one of the most important strategies for performance engineering. A parallel program speeds up a large task by breaking it into subtasks that can be executed simultaneously. Although virtually all computers today have parallel-computing capabilities, most applications are serial: they perform only one operation at a time. Parallelism, however, introduces the problem of coordinating all those simultaneous tasks. Just as a business or organization can accomplish far more than a single individual, a parallel program has much more capability than a comparable serial program, but managing the many tasks it must perform can be a daunting chore. Programmers thus face a choice: simple but slow code they can understand, or fast but complicated code they struggle to get right.
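The decomposition described above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from Leiserson's group: a list sum is split into independent chunk sums, the subtasks run concurrently, and their partial results are combined.

```python
# Illustrative sketch: summing a list serially vs. by parallel decomposition.
from concurrent.futures import ThreadPoolExecutor

def serial_sum(data):
    # The serial baseline: one operation at a time.
    total = 0
    for x in data:
        total += x
    return total

def parallel_sum(data, workers=4):
    # Decompose the task: split the data into one chunk per worker.
    chunk = (len(data) + workers - 1) // workers
    pieces = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Run the independent subtasks concurrently, then combine their results.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(serial_sum, pieces))

data = list(range(1_000_000))
assert serial_sum(data) == parallel_sum(data)  # same answer either way
```

The coordination problem the paragraph mentions is visible even here: the chunking, the worker pool, and the final combining step are all bookkeeping that the serial version never needs. (In CPython, threads will not speed up CPU-bound work like this because of the global interpreter lock; real speedups require processes or a language with true parallel execution.)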
Leiserson's approach is to use mathematical rigor and deep knowledge of performance to obtain the best of both worlds: fast parallel code that is easy to develop and understand. Beyond parallelism, his group studies other aspects of performance engineering, such as caching, compiler optimization, and algorithms. Leiserson hopes to put fast code within reach of average programmers so that society can continue to enjoy steadily increasing computing performance and the benefits it engenders.
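Caching, one of the levers mentioned above, can be illustrated with a small sketch (hypothetical, and written in Python for readability; the effect is far more dramatic in languages with contiguous arrays, such as C). The two functions compute the same sum, but one visits the matrix in memory-layout order while the other strides across it, sacrificing cache locality.

```python
# Illustrative sketch of cache-friendly vs. cache-unfriendly traversal.

def sum_row_major(matrix):
    # Visits elements row by row, matching how the data is laid out:
    # consecutive accesses touch neighboring memory, so caches help.
    total = 0
    for row in matrix:
        for x in row:
            total += x
    return total

def sum_col_major(matrix):
    # Visits elements column by column: each access jumps to a different
    # row, a large stride that defeats cache locality in array languages.
    total = 0
    rows, cols = len(matrix), len(matrix[0])
    for j in range(cols):
        for i in range(rows):
            total += matrix[i][j]
    return total

m = [[i * 100 + j for j in range(100)] for i in range(100)]
assert sum_row_major(m) == sum_col_major(m)  # same result, different locality
```

Performance engineering in this style changes only the order of the work, not the answer, which is why it pairs naturally with the mathematical rigor the group applies: the transformations must be proven equivalent before they can be trusted to be fast.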