Last week MIT’s Institute for Foundations of Data Science (MIFODS) held an interdisciplinary workshop aimed at tackling the theory underlying deep learning. Led by MIT professor Aleksander Madry, the event featured research discussions at the intersection of math, statistics, and theoretical computer science.
We focus on finding novel approaches to improve the performance of modern computer systems without unduly increasing the complexity faced by application developers, compiler writers, or computer architects.
Our interests span quantum complexity theory, barriers to resolving P versus NP, probabilistically checkable proofs (PCPs), pseudorandomness, coding theory, and algorithms.
We develop techniques for designing, implementing, and reasoning about multiprocessor algorithms, in particular concurrent data structures for multicore machines and the mathematical foundations of the computation models that govern their behavior.
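As a toy illustration of the kind of object involved (not the group's own code), the sketch below shows a Treiber-style stack, a classic lock-free concurrent data structure. Python lacks a hardware compare-and-swap on references, so the CAS is simulated with a lock here; on a real multicore it is a single atomic instruction, and the successful CAS is each operation's linearization point.

```python
import threading

class Node:
    __slots__ = ("value", "next")
    def __init__(self, value, next):
        self.value, self.next = value, next

class TreiberStack:
    def __init__(self):
        self._top = None
        self._cas_lock = threading.Lock()  # stands in for hardware CAS

    def _cas_top(self, expected, new):
        # Atomically: if top is still `expected`, swing it to `new`.
        with self._cas_lock:
            if self._top is expected:
                self._top = new
                return True
            return False

    def push(self, value):
        while True:                        # retry loop: lock-free, not wait-free
            old = self._top
            if self._cas_top(old, Node(value, old)):
                return

    def pop(self):
        while True:
            old = self._top
            if old is None:
                return None                # empty stack
            if self._cas_top(old, old.next):
                return old.value

s = TreiberStack()
s.push(1); s.push(2)
assert s.pop() == 2
```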
We work on a wide range of problems in distributed computing theory. We study algorithms and lower bounds for typical problems that arise in distributed systems, such as resource allocation, implementing shared memory abstractions, and reliable communication.
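As one hedged illustration of what implementing a shared memory abstraction can mean, here is an in-process toy of the majority-quorum idea behind ABD-style registers: any write quorum intersects any read quorum, so a read observes the latest completed write even when a minority of replicas is unreachable. The Replica class, random-quorum failure model, and single-writer tags are all simplifications invented for this sketch.

```python
import random

class Replica:
    def __init__(self):
        self.tag, self.value = 0, None     # logical timestamp and stored value

N = 5
MAJORITY = N // 2 + 1
replicas = [Replica() for _ in range(N)]

def quorum():
    # Model asynchrony and failures: each operation reaches *some* majority.
    return random.sample(replicas, MAJORITY)

def write(tag, value):
    for r in quorum():
        if tag > r.tag:
            r.tag, r.value = tag, value

def read():
    acks = [(r.tag, r.value) for r in quorum()]
    tag, value = max(acks)                 # highest tag wins
    write(tag, value)                      # write-back phase: later reads
    return value                           # cannot observe an older value

write(1, "a"); write(2, "b")
assert read() == "b"                       # quorum intersection guarantees this
```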
(This project is no longer active.) The T-1000, a prototype system of a thousand realistic processors embedded throughout an ensemble of interconnected FPGAs, seeks to demonstrate the scalability of timestamp-based cache coherence protocols on distributed shared memory systems.
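For flavor, here is a toy sketch of the general idea behind timestamp-based coherence (in the spirit of logical-lease protocols such as Tardis), not of the T-1000 design itself. Instead of broadcasting invalidations, each line carries a write timestamp and a read lease; a write simply advances logical time past all outstanding leases, so stale copies expire on their own. The LEASE constant and single-line directory are illustrative simplifications.

```python
LEASE = 10

class Line:
    def __init__(self):
        self.value, self.wts, self.rts = 0, 0, 0   # value, write ts, read lease

class Directory:
    def __init__(self):
        self.line = Line()

    def read(self, now):
        # Extend the read lease; a cached copy stays valid through line.rts.
        self.line.rts = max(self.line.rts, now + LEASE)
        return self.line.value, self.line.wts, self.line.rts

    def write(self, value, now):
        # A write must be ordered after every lease handed out so far, so it
        # jumps logical time past rts -- no invalidation traffic is needed.
        ts = max(now, self.line.rts + 1)
        self.line.value, self.line.wts, self.line.rts = value, ts, ts
        return ts

d = Directory()
v, wts, rts = d.read(now=1)    # reader holds a lease through logical time 11
wts2 = d.write(42, now=2)      # write is timestamped 12, after the lease
assert wts2 > rts              # the stale cached copy expires logically
```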
We propose a novel aspect-augmented adversarial network for cross-aspect and cross-domain adaptation tasks. The effectiveness of our approach suggests the potential application of adversarial networks to a broader range of NLP tasks for improved representation learning, such as machine translation and language generation.
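As a generic sketch of the adversarial ingredient (not a reconstruction of the paper's aspect-augmented architecture), the snippet below implements gradient reversal in PyTorch: the encoder is trained so that a domain discriminator cannot separate source from target features, encouraging domain-invariant representations. All layer sizes, batch sizes, and the reversal weight are made-up placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)                 # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None # negated gradient on the way back

encoder = nn.Sequential(nn.Linear(300, 64), nn.ReLU())  # hypothetical sizes
task_head = nn.Linear(64, 2)     # e.g. a label predictor on the source domain
domain_head = nn.Linear(64, 2)   # source-vs-target discriminator

x_src, y_src = torch.randn(8, 300), torch.randint(0, 2, (8,))
x_tgt = torch.randn(8, 300)

feats = encoder(torch.cat([x_src, x_tgt]))
task_loss = nn.functional.cross_entropy(task_head(feats[:8]), y_src)
dom_labels = torch.cat([torch.zeros(8, dtype=torch.long),
                        torch.ones(8, dtype=torch.long)])
dom_loss = nn.functional.cross_entropy(
    domain_head(GradReverse.apply(feats, 1.0)), dom_labels)
(task_loss + dom_loss).backward()  # encoder receives the *reversed* domain gradient
```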
We aim to base a variety of cryptographic primitives on complexity-theoretic assumptions. We focus on the assumption that there exist highly structured problems (admitting so-called "zero-knowledge" protocols) that are nevertheless hard to compute.
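A classic concrete instance of such a structured-yet-hard problem is the discrete logarithm, with its Schnorr sigma protocol. The toy below (deliberately tiny, insecure parameters) runs one honest round of that zero-knowledge-style proof of knowledge.

```python
import random

p = 467                       # toy prime; q divides p - 1 (466 = 2 * 233)
q = 233
g = pow(2, (p - 1) // q, p)   # generator of the order-q subgroup

x = random.randrange(1, q)    # prover's secret: the discrete log of y
y = pow(g, x, p)              # public value

# One round of the Schnorr protocol.
r = random.randrange(1, q)
t = pow(g, r, p)              # prover's commitment
c = random.randrange(0, q)    # verifier's random challenge
s = (r + c * x) % q           # prover's response

# Verifier checks g^s == t * y^c (mod p); an honest verifier learns nothing
# about x beyond the fact that the prover knows it.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```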
We study the fundamentals of Bayesian optimization and develop efficient Bayesian optimization methods for the global optimization of expensive black-box functions arising in a range of applications.
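As a minimal sketch of one such method, the snippet below runs a single Bayesian-optimization step with a Gaussian-process surrogate and the expected-improvement acquisition function. The RBF kernel, length scale, noise level, and candidate grid are illustrative choices, not the group's algorithms.

```python
import numpy as np
from scipy.stats import norm

def rbf(A, B, ls=0.3):
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression: posterior mean and std at candidate points Xs.
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # Minimization convention: improvement over the best value seen so far.
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

f = lambda x: np.sin(3 * x) + x ** 2    # stand-in "expensive" black box
X = np.array([-1.0, 0.0, 1.0])          # points evaluated so far
y = f(X)

Xs = np.linspace(-2, 2, 401)            # candidate next points
mu, sigma = gp_posterior(X, y, Xs)
x_next = Xs[np.argmax(expected_improvement(mu, sigma, y.min()))]
print("next evaluation at", x_next)
```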
BlueDBM is a cluster architecture combining fast distributed flash storage with in-storage accelerators; it often outperforms larger and more expensive clusters on applications such as graph analytics.
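A back-of-the-envelope model (all numbers invented for illustration) of why in-storage acceleration can win: pushing a selective filter into the storage device avoids moving the untouched bulk of the data over the network.

```python
DATA_GB = 1024
SELECTIVITY = 0.01                  # fraction of the data the query actually needs
NET_GBPS, FLASH_GBPS = 1.25, 2.4    # hypothetical link and flash bandwidths

# Ship everything to the host, then filter there:
host_side = DATA_GB / NET_GBPS
# Scan at flash bandwidth in storage, ship only the selected fraction:
in_storage = DATA_GB / FLASH_GBPS + (DATA_GB * SELECTIVITY) / NET_GBPS
print(f"host-side filter: {host_side:.0f}s, in-storage filter: {in_storage:.0f}s")
```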
To further parallelize co-prime-sampling-based sparse sensing, we introduce Diophantine equations over different algebraic structures to build generalized lattice arrays. Exploiting a close relationship to the generalized Chinese Remainder Theorem, we explore the geometric properties of the remainder code space, a special lattice space.
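As a small illustration of the arithmetic underneath co-prime sampling, the snippet below uses the Chinese Remainder Theorem to recover an integer from its residues modulo pairwise-coprime sample rates; the rates 7 and 16 are arbitrary examples, and the point is that two cheap samplers jointly resolve values up to the product of their rates.

```python
from math import prod

def crt(residues, moduli):
    # Moduli must be pairwise coprime; returns x with x % m_i == r_i.
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(., -1, m): modular inverse (Python 3.8+)
    return x % M

# A "frequency" of 103 observed only through residues modulo co-prime rates:
assert crt([103 % 7, 103 % 16], [7, 16]) == 103
```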
We are interested in applying insights from distributed computing theory to understand how ants and other social insects work together to perform complex tasks such as foraging for food, allocating tasks to workers, and choosing high-quality nest sites.
We aim to understand the theory and applications of diversity-inducing probability distributions (and, more generally, "negative dependence") in machine learning, and to develop fast algorithms that exploit their mathematical properties.
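As a small numeric sketch of what negative dependence means, consider a determinantal point process, one standard diversity-inducing distribution: with marginal kernel K, the probability that items i and j appear together is K_ii * K_jj - K_ij^2, which never exceeds the product of their individual probabilities, so similar items actively repel each other. The similarity matrix L below is a made-up example in which items 0 and 1 are alike.

```python
import numpy as np

L = np.array([[1.0, 0.8, 0.1],          # similarity kernel: 0 and 1 are alike
              [0.8, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
K = L @ np.linalg.inv(L + np.eye(3))    # marginal kernel of the DPP

for i, j in [(0, 1), (0, 2)]:
    p_both = K[i, i] * K[j, j] - K[i, j] ** 2   # P(i and j both sampled)
    assert p_both <= K[i, i] * K[j, j]          # negative dependence
    print(f"P(both {i},{j}) = {p_both:.3f} <= "
          f"P({i})P({j}) = {K[i, i] * K[j, j]:.3f}")
```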