Optimal PAC Bounds without Uniform Convergence

Host: Noah Golowich (MIT)
Abstract: In statistical learning theory, the problem of determining the optimal sample complexity of binary classification was a longstanding challenge, only recently resolved by Hanneke. Unfortunately, these arguments rely on the so-called uniform convergence principle, which limits their applicability to more general learning settings. In this talk, we will discuss a new technique that overcomes this limitation and allows us to derive optimal sample complexity bounds in settings where uniform convergence is provably suboptimal and, in some cases, does not yield any quantitative finite-sample guarantees at all, e.g., multiclass classification, partial concept classification, and bounded regression. Our bounds resolve a few open questions in the literature, and our learning algorithm is based on a novel analysis of the online-to-batch framework of Littlestone and the celebrated one-inclusion graph algorithm of Haussler, Littlestone, and Warmuth.
Based on joint work with Ishaq Aden-Ali, Abhishek Shetty, and Nikita Zhivotovskiy.
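Background (a standard statement of Hanneke's bound, for readers unfamiliar with it): for a concept class of VC dimension d, the optimal number of samples needed to PAC-learn to accuracy epsilon with confidence 1 - delta in the realizable setting is

\[
  m(\epsilon, \delta) \;=\; \Theta\!\left( \frac{d + \log(1/\delta)}{\epsilon} \right),
\]

whereas analyses based on uniform convergence (e.g., for empirical risk minimization) incur an extra \log(1/\epsilon) factor in general.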