Thesis Defense: Below P vs NP: Fine-Grained Hardness for Big Data Problems

Speaker

Arturs Backurs
MIT CSAIL

Host

Piotr Indyk
MIT CSAIL

Abstract

The theory of NP-hardness has been remarkably successful in identifying problems that are unlikely to be solvable in polynomial time. However, many other important problems do have polynomial-time algorithms, but large exponents in their runtime bounds can make them inefficient in practice. For example, quadratic-time algorithms, although practical on moderately sized inputs, can become inefficient on big data problems that involve gigabytes or more of data. Yet although no sub-quadratic-time algorithms are known for many data analysis problems, evidence of quadratic-time hardness has remained elusive.

In this thesis we present hardness results for several text analysis and machine learning tasks:
* Lower bounds for edit distance, regular expression matching, and other pattern matching and string processing problems.
* Lower bounds for empirical risk minimization problems, such as kernel support vector machines and other kernel machine learning problems.
All of these problems have polynomial-time algorithms, but despite extensive research, no near-linear time algorithms have been found. We show that, under a natural complexity-theoretic conjecture, such algorithms do not exist. We also show how these lower bounds have inspired the development of efficient algorithms for some variants of these problems.
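For concreteness, the quadratic-time algorithms in question are typified by the classic dynamic program for edit distance. The sketch below is a standard textbook implementation (not code from the thesis), illustrating the kind of O(n·m) running time that the lower bounds above suggest cannot be substantially improved.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic quadratic-time edit distance (Wagner-Fischer recurrence),
    assuming unit costs for insertions, deletions, and substitutions."""
    n, m = len(a), len(b)
    # dp[j] holds the distance between a[:i] and b[:j] for the current row i;
    # row 0 is the distance from the empty string, i.e. j insertions.
    dp = list(range(m + 1))
    for i in range(1, n + 1):
        prev_diag, dp[0] = dp[0], i  # remember dp[i-1][0], start row i
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            prev_diag, dp[j] = dp[j], min(
                dp[j] + 1,         # delete a[i-1]
                dp[j - 1] + 1,     # insert b[j-1]
                prev_diag + cost,  # substitute (or match)
            )
    return dp[m]

print(edit_distance("kitten", "sitting"))  # prints 3
```

The two nested loops over the input strings make the quadratic behavior explicit: on gigabyte-scale inputs, the number of table cells evaluated becomes prohibitive, which is exactly the regime the thesis's conditional lower bounds speak to.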