This project explores the Lottery Ticket Hypothesis: the conjecture that neural networks contain much smaller sparse subnetworks capable of training to full accuracy on their own. In the course of this project, we have demonstrated that these subnetworks exist at initialization in small networks and emerge early in training in larger networks. In addition, we have shown that these lottery ticket subnetworks are state-of-the-art among pruned neural networks.
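The procedure for finding these subnetworks can be illustrated with a minimal sketch of iterative magnitude pruning with rewinding: train the network, prune the smallest-magnitude surviving weights, reset the survivors to their initial values, and repeat. The function names, the toy `train_fn` hook, and the specific pruning fraction below are illustrative assumptions, not part of any library or this project's actual codebase.

```python
import numpy as np

def magnitude_prune(weights, mask, fraction):
    """Remove the `fraction` of surviving weights with smallest magnitude."""
    alive = np.abs(weights[mask])
    k = int(fraction * alive.size)
    if k == 0:
        return mask
    # Threshold at the k-th smallest surviving magnitude; prune at or below it.
    threshold = np.sort(alive)[k - 1]
    return mask & (np.abs(weights) > threshold)

def find_lottery_ticket(init_weights, train_fn, rounds=3, prune_fraction=0.2):
    """Iterative magnitude pruning, rewinding survivors to initialization.

    `train_fn` stands in for a full training run: it takes masked weights
    and returns trained weights of the same shape (hypothetical hook).
    """
    mask = np.ones_like(init_weights, dtype=bool)
    for _ in range(rounds):
        trained = train_fn(init_weights * mask)        # train the masked network
        mask = magnitude_prune(trained, mask, prune_fraction)
        # Rewind: surviving weights return to their initial values next round.
    return init_weights * mask, mask

# Toy usage: with an identity "training" step, three rounds at 20% leave
# 0.8^3 = 51.2% of 1000 weights, i.e. a mask of 512 surviving weights.
rng = np.random.default_rng(0)
w0 = rng.normal(size=1000)
ticket, mask = find_lottery_ticket(w0, lambda w: w)
```

The rewinding step is what distinguishes a lottery ticket from ordinary pruning: the sparse structure is kept, but the surviving weights are restored to their values from initialization (or from early in training) before retraining.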
The practical goal of this project is to develop sparse neural networks that can be trained from scratch, or from early in training, creating the opportunity to dramatically reduce the cost of training. The scientific goal is to better understand neural network optimization by empirically studying the behavior of practical, large-scale networks.