Ryan Williams
Computer science has many layers of complexity that have yet to be discovered, much less mathematically proved. Understanding what is easy and what is hard to compute can help us answer questions about the power of computing itself: we know a lot about what computers can currently do, but have little idea of what they can’t do. For example, problems that are difficult for computers to solve today may be easy to solve tomorrow, if computer scientists could only find the right program; alternatively, these problems could be intrinsically difficult, such that no program can solve them efficiently. Until we have a better understanding of the reasoning behind computation, we won’t know for sure.
Ryan Williams, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a member of CSAIL, wants to uncover and develop new forms of reasoning for understanding computation. He is interested in the connections between algorithm design and complexity theory, and works with other researchers and students in the Algorithms and Complexity Theory groups to find new algorithms and new limitations in computing. Professor Williams previously taught at Stanford and earned his PhD from Carnegie Mellon. Throughout his career, and through his research at MIT, he has taken key steps toward solving some of the biggest problems in theoretical computer science, including the famous P vs. NP problem, a question about the limits of computation that remains wide open.
Prof. Williams’s overall research vision is to understand the true power of algorithms by uncovering their limitations. These limitations are called complexity lower bounds, and questions about them are fundamental to the way we think about computation. With P vs. NP, for example, NP is the class of problems whose solutions can be quickly recognized, while P is the class of problems whose solutions can be quickly generated. Intuition says that P ≠ NP (that being able to recognize a good solution does not mean one can quickly generate it), because P = NP would mean computers are far more powerful than we expect. But so far, by all measures, mathematics is extraordinarily far from proving P ≠ NP. The sketch below illustrates the gap between recognizing and generating a solution.
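To make the recognize-versus-generate distinction concrete, here is a minimal Python sketch (not from Prof. Williams’s work; the formula and function names are illustrative) using Boolean satisfiability, the canonical NP problem. Checking a proposed assignment takes time linear in the formula, while the only obvious way to find one is to try all 2^n assignments.

    from itertools import product

    # A formula in conjunctive normal form: each clause is a list of literals,
    # where +i means "variable i is true" and -i means "variable i is false".
    CLAUSES = [[1, 2], [-1, 3], [-2, -3]]
    NUM_VARS = 3

    def verify(assignment, clauses):
        # The "NP" side: recognizing a solution is fast (one pass over the formula).
        return all(
            any(assignment[abs(lit)] == (lit > 0) for lit in clause)
            for clause in clauses
        )

    def search(clauses, num_vars):
        # The "P" question: nobody knows whether generating a solution
        # can always be done much faster than trying all 2^n possibilities.
        for bits in product([False, True], repeat=num_vars):
            assignment = {i + 1: b for i, b in enumerate(bits)}
            if verify(assignment, clauses):
                return assignment
        return None

    print(search(CLAUSES, NUM_VARS))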
Lower bounds are mathematical proofs that limit the power of algorithms, but we know little about them compared to our knowledge of efficient algorithms. Prof. Williams considers the problem of proving lower bounds to be among the greatest scientific mysteries of our time: very little is known about what is impossible. By proving that no clever algorithm can solve certain problems, we can obtain a deeper understanding of computing and a stronger foundation for an information society.
Among his research groups’ current work contributing to this understanding are projects in fine-grained complexity, the minimum circuit size problem, and hardness magnification.
1. Fine-grained complexity studies problems that can, in theory, be solved efficiently, but whose algorithms still take too long in practice because the inputs are enormous. His group works to understand the implications of seemingly minor fine-grained improvements to fundamental algorithms, which in many cases have quite surprising consequences. For instance, for many of the best-known circuit complexity limitations, the only mathematical proofs we have make use of a (minor-looking) slightly faster algorithm for the satisfiability problem. (A standard problem from this area is sketched after this list.)
2. The minimum circuit size problem (MCSP) has been studied for decades, but many questions about its complexity are still open. MCSP is a “meta-computational” problem in that it is a computational problem about computation itself: imagine a computer is given a table of inputs and outputs of a function, and the computer’s goal is to determine whether there is an efficient program (a circuit) that implements the function. Is there a computer that can always achieve this goal efficiently, no matter what function is given? That is the mystery of MCSP, and it is a major open problem. Asking whether MCSP can be solved quickly is effectively asking whether there are methods for learning how to compute generic functions in an ultra-efficient way. (A brute-force version of the problem is sketched after this list.)
3. Hardness magnification is a general phenomenon in which any "good" method for solving a problem can be provably "accelerated" into a hyper-efficient method for solving the same problem. The circumstances under which such acceleration phenomena exist are not well understood. Prof. Williams believes that proofs of major lower bounds (such as P ≠ NP) may be achievable by exploiting such phenomena.
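To illustrate the first item: a textbook example from fine-grained complexity (not specific to this profile) is the Orthogonal Vectors problem. The straightforward algorithm below runs in roughly n² steps, and a central conjecture in the area is that no algorithm runs substantially faster; even a slightly faster algorithm would, via known reductions, yield faster satisfiability algorithms. A minimal Python sketch, with illustrative names:

    def orthogonal_pair(vectors_a, vectors_b):
        # Orthogonal Vectors: given two sets of 0/1 vectors, find a pair with
        # dot product zero. Two nested loops give O(n^2 * d) time; beating n^2
        # substantially is conjectured to be impossible.
        for a in vectors_a:
            for b in vectors_b:
                if all(x * y == 0 for x, y in zip(a, b)):
                    return (a, b)
        return None

    # (1, 0, 1) and (0, 1, 0) are orthogonal, so a pair is found.
    print(orthogonal_pair([(1, 0, 1), (1, 1, 1)], [(1, 1, 0), (0, 1, 0)]))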
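And for the second item, a minimal brute-force sketch of the MCSP-style search, assuming circuits built only from NAND gates (the representation and names are illustrative, not a standard API). Given the full truth table of a function, it tries every wiring of at most a given number of gates; the search is exponential, and whether anything dramatically faster is possible is exactly the open question.

    from itertools import product

    def has_small_circuit(truth_table, n, max_gates):
        # Is there a circuit of at most max_gates NAND gates whose outputs on
        # all 2^n inputs match the given truth table?
        inputs = list(product([0, 1], repeat=n))
        for num_gates in range(max_gates + 1):
            # Each gate reads two earlier wires (an input or a previous gate).
            wire_choices = [range(n + g) for g in range(num_gates) for _ in (0, 1)]
            for wiring in product(*wire_choices):
                if all(evaluate(x, wiring, num_gates) == want
                       for x, want in zip(inputs, truth_table)):
                    return True
        return False

    def evaluate(x, wiring, num_gates):
        # Run the straight-line NAND program; the last wire is the output.
        wires = list(x)
        for g in range(num_gates):
            a, b = wiring[2 * g], wiring[2 * g + 1]
            wires.append(1 - (wires[a] & wires[b]))  # NAND
        return wires[-1]

    # XOR on 2 bits (truth table for inputs 00, 01, 10, 11) needs 4 NAND gates.
    print(has_small_circuit([0, 1, 1, 0], n=2, max_gates=4))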
Proving limitations (lower bounds) on computers, and thereby gaining a full understanding of computing, could eventually let us know precisely how much time, space, and other resources are required to solve a given problem in the real world. In the future, this would mean we could go forward confidently, knowing that the program we are running is optimal: the best possible algorithm at that point in time. Such deep knowledge would have a major impact on how we think about and use computing in the long term, including significant applications to cryptography. (For example, “proofs of work” in blockchain technologies require a problem that provably needs at least a certain amount of resources to be solved; a toy version is sketched below.)
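As a toy illustration of the proof-of-work point, here is a standard hash-based puzzle (a generic sketch, not tied to any particular blockchain): find a nonce that makes a hash begin with a given number of zero bits. On average this costs about 2^difficulty hash evaluations, and the security argument implicitly needs a lower bound saying that no shortcut exists.

    import hashlib

    def proof_of_work(data: bytes, difficulty_bits: int) -> int:
        # Find a nonce so that SHA-256(data || nonce) starts with
        # difficulty_bits zero bits; expected cost is ~2**difficulty_bits hashes.
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    # Takes roughly 2**16 = 65536 hash attempts on average.
    print(proof_of_work(b"block header", difficulty_bits=16))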