Accurate Model Compression at GPT Scale

Speaker

Dan Alistarh
IST Austria

Host

Nir Shavit
CSAIL MIT
Abstract: A key barrier to the wide deployment of highly-accurate machine learning models is their high computational and memory overhead. Although we have long had the mathematical tools for highly-accurate compression of such models (LeCun et al., 1990), these theoretically-elegant techniques require second-order (curvature) information about the model's loss function, which is hard to even approximate efficiently at scale. In this talk, I will describe our work on bridging this computational divide, which enables, for the first time, accurate second-order pruning and quantization of models at truly massive scale. Our running example will be the 175-billion-parameter GPT-3/OPT language generation model: compressed using our techniques, it can now be run efficiently on a single GPU, with speedups of up to 5x and negligible accuracy loss.
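As a brief sketch of the classical second-order criterion alluded to above (in the Optimal Brain Surgeon style that builds on LeCun et al., 1990; notation here is illustrative, not the speaker's), removing a single weight $w_q$ from a trained model incurs a loss increase of approximately

\[
s_q = \frac{w_q^2}{2\,[\mathbf{H}^{-1}]_{qq}},
\qquad
\delta \mathbf{w} = -\frac{w_q}{[\mathbf{H}^{-1}]_{qq}}\,\mathbf{H}^{-1}\mathbf{e}_q,
\]

where $\mathbf{H}$ is the Hessian of the loss at the trained weights, $\mathbf{e}_q$ is the $q$-th canonical basis vector, $s_q$ is the saliency used to rank weights for removal, and $\delta \mathbf{w}$ is the compensating update to the remaining weights. Both quantities require the inverse Hessian, which is the second-order information that is intractable to form exactly for models with billions of parameters.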

Bio: Dan Alistarh is a Professor at IST Austria, in Vienna. Previously, he was a Researcher at Microsoft, a Postdoc at MIT CSAIL, and received his PhD from EPFL. His research is on algorithms for efficient machine learning and high-performance computing, with a focus on scalable DNN inference and training, for which he was awarded an ERC Starting Grant in 2018. In his spare time, he works with the ML research team at Neural Magic, a startup based in Somerville, on making compression faster, more accurate, and more accessible to practitioners.