Algorithmic Stability in Learning Theory
Speaker: Sasha Rakhlin, CBCL, MIT
Two key properties of learning algorithms are consistency (convergence to the smallest possible error) and stability (robustness to small changes in the data). Surprisingly, these properties are related and, in certain situations, equivalent. In this talk, I will discuss some of the stability results in Statistical Learning Theory. In particular, I will focus on our recent proof of L_1 stability for the most-studied learning procedure, empirical risk minimization (ERM). We will see how the analysis of the behavior of an empirical process is essential in understanding the robustness of ERM. Applications of this result to clustering methods and other optimization procedures will be discussed.
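To make the central object concrete: ERM simply selects, from a hypothesis class, the hypothesis with the lowest average loss on the training sample, and L_1 stability asks how much the learned hypothesis changes when one sample point is replaced. The following is a minimal illustrative sketch (not from the talk) using 0-1 loss over a hypothetical finite class of threshold classifiers:

```python
import random

def empirical_risk(h, data):
    """Average 0-1 loss of hypothesis h on the sample."""
    return sum(h(x) != y for x, y in data) / len(data)

def erm(hypotheses, data):
    """Empirical risk minimization: return the hypothesis
    with the lowest empirical risk on the sample."""
    return min(hypotheses, key=lambda h: empirical_risk(h, data))

# Toy setup (illustrative): threshold classifiers on [0, 1],
# data labeled by the true threshold 0.5.
random.seed(0)
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(200))]
thresholds = [i / 20 for i in range(21)]
hypotheses = [(lambda t: (lambda x: int(x > t)))(t) for t in thresholds]

h_hat = erm(hypotheses, data)

# Stability probe: replace one sample point and re-run ERM;
# stability concerns how little the learned hypothesis changes.
perturbed = data[:-1] + [(0.9, 0)]
h_hat2 = erm(hypotheses, perturbed)
```

Here the class, the sample size, and the perturbation are arbitrary choices for illustration; the talk's result concerns L_1 stability of ERM in far greater generality, via empirical process arguments rather than direct computation.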