MIT hosts workshop on theoretical foundations of deep learning

CSAIL

Last week MIT’s Institute for Foundations of Data Science (MIFODS), supported by the NSF TRIPODS program, held an interdisciplinary workshop aimed at tackling the underlying theory behind deep learning. Led by MIT professor Aleksander Madry, the event centered on a series of research discussions at the intersection of mathematics, statistics, and theoretical computer science.

Unless you’ve been living under a rock, you’ve probably heard the term “deep learning.” It’s a machine learning technique that teaches computers to learn by example, a skill that comes easily to us humans. Deep learning has reshaped how machines process information and perceive and understand language, but much of why these methods work so well still escapes meaningful understanding.
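
As a rough illustration of what “learning by example” means in practice, here is a minimal sketch (assuming PyTorch and a toy dataset, neither of which comes from the workshop) in which a small neural network adjusts its weights to fit a handful of labeled examples:

    import torch
    import torch.nn as nn

    # Toy "learning by example": a tiny network learns the XOR mapping
    # from four labeled examples. Data and architecture are illustrative.
    examples = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    labels = torch.tensor([[0.], [1.], [1.], [0.]])

    model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
    optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
    loss_fn = nn.BCELoss()

    for step in range(2000):
        optimizer.zero_grad()
        loss = loss_fn(model(examples), labels)  # compare predictions with labels
        loss.backward()                          # compute gradients of the error
        optimizer.step()                         # nudge weights to better fit the examples

    print(model(examples).detach().round())      # the trained network reproduces the labels

The network is never given a rule for the mapping; it only sees input-output pairs and gradually adjusts itself to match them, which is the sense of “learning by example” used here.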

To get a step closer to that understanding, the workshop brought together theory experts to attack the question from statistical, computational, and representational angles. Specific discussions covered the strengths and limitations of concepts underlying deep learning, such as generalization bounds, robustness to adversarial corruptions, and the representational abilities of deep neural networks.
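
To give a concrete flavor of one of those topics, the sketch below illustrates an adversarial corruption in the style of the fast gradient sign method; the model, loss, and perturbation size are assumptions for illustration, not material presented at the workshop:

    import torch

    def fgsm_perturb(model, x, y, loss_fn, epsilon=0.03):
        # Nudge the input in the direction that most increases the loss.
        x = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x), y)
        loss.backward()
        # A small step along the sign of the input gradient can flip the
        # model's prediction even when the change is hard for a human to see.
        return (x + epsilon * x.grad.sign()).detach()

Understanding when and why such tiny perturbations succeed, and how to train models that resist them, is one of the open theoretical questions the workshop addressed.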

“The goal of this multidisciplinary workshop is to include not only well-defined strategies to tackle the challenges of deep learning, but also the exchange of mathematical techniques that may be the key to unlocking various problems,” says Madry.

The workshop also included a bootcamp showcasing the basic techniques, definitions, and goals of the several communities involved in the event.