Using Reductions to Understand Polynomial Time Algorithms
For many problems, the best known algorithms run in polynomial but superlinear time. We want to understand why problems like diameter, longest common subsequence, and collinear point detection cannot be solved in linear time.
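As a concrete example of such a problem, here is the classic quadratic-time dynamic program for longest common subsequence (the example is illustrative and not taken from the text; under plausible conjectures, this quadratic running time is believed hard to improve):

```python
def lcs_length(a: str, b: str) -> int:
    """Classic O(n*m) dynamic program for longest common subsequence."""
    n, m = len(a), len(b)
    # prev[j] holds the LCS length of a[:i-1] versus b[:j].
    prev = [0] * (m + 1)
    for i in range(1, n + 1):
        curr = [0] * (m + 1)
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                # Characters match: extend the best alignment of the prefixes.
                curr[j] = prev[j - 1] + 1
            else:
                # Otherwise drop the last character of one string or the other.
                curr[j] = max(prev[j], curr[j - 1])
        prev = curr
    return prev[m]
```

Despite decades of effort, no algorithm substantially faster than this quadratic one is known for general strings.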
Fine-grained complexity is a bridge between classical complexity and classical algorithms. Classical complexity seeks to classify problems into complexity classes like P, NP, and PSPACE. These are coarse distinctions that do not separate problems solvable in time n, n^2, n^3, or n^10. On most computational models (RAM, Turing machine, etc.) there are time hierarchy theorems: for suitable time bounds t(n), there exist problems solvable in t(n) time that are not solvable in time t(n)^(1-epsilon) for any epsilon > 0.

Fine-grained complexity seeks to place specific problems at a precise level of this hierarchy. For example, one might exhibit an O(n^2) algorithm for a problem and prove a matching lower bound showing that it requires n^(2-o(1)) time. We lack the tools to prove such lower bounds unconditionally, so instead we build networks of reductions showing equivalences between problems or conditional hardness of problems.
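A standard source problem for such reduction networks (not named in the text above, but canonical in this area) is Orthogonal Vectors: given two sets of d-dimensional 0/1 vectors, decide whether some pair is orthogonal. A minimal sketch of the obvious brute force:

```python
from itertools import product

def has_orthogonal_pair(A, B):
    """Brute-force Orthogonal Vectors in O(n^2 * d) time.

    A and B are lists of equal-length 0/1 tuples. Under the Strong
    Exponential Time Hypothesis (SETH), no n^(2-epsilon) algorithm
    exists, and many conditional quadratic lower bounds (e.g. for
    longest common subsequence) are proved by reduction from this
    problem.
    """
    for u, v in product(A, B):
        # u and v are orthogonal iff their inner product is zero.
        if all(x * y == 0 for x, y in zip(u, v)):
            return True
    return False
```

A reduction from Orthogonal Vectors to a target problem shows that a truly subquadratic algorithm for the target would refute SETH, which is the sense in which the hardness is conditional.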