ML Tea: Algorithm Design with Learned Predictions

Speaker: Justin Chen

Abstract: The classic framing of algorithm design goes something like this: I give you a mathematical formulation of a problem by specifying valid inputs and outputs, and you give me an algorithm which, on any input, will produce a valid output using limited resources (e.g., time or memory). This “worst-case” analysis, which guarantees algorithmic correctness and efficiency over all possible inputs, is the foundation of computer science theory. This is for good reason: algorithms designed to perform well in the worst case are reliable, composable, and require no assumptions or prior knowledge about their applications.

In modern applications, algorithms are not run once in a vacuum on an unknown input. Procedures which process huge amounts of data are run hour after hour, day after day, on similar input datasets. These data may contain structure which is hard to formally incorporate into the algorithmic problem description. On the other hand, machine learning excels at extracting patterns which do not necessarily conform to simple-to-state, mathematical rules. Learned models make probabilistic predictions of outputs given inputs without a formal problem description or algorithm designer; they tune themselves on training data.

In this talk, I will give an overview of the developing area of "algorithms with predictions" and discuss several opportunities and challenges in applying machine learning to algorithm design.

Bio: Justin Y. Chen is a fifth-year PhD student studying theoretical computer science in the Electrical Engineering and Computer Science department at MIT, where he is advised by Piotr Indyk. He works on problems at the intersection of algorithm design, data analysis, and machine learning.