The U.S. Department of Defense (DoD) recently announced the recipients of its 2023 Multidisciplinary University Research Initiative (MURI) awards. This year, MIT Department of Electrical Engineering and Computer Science (EECS) Assistant Professor and CSAIL member Pulkit Agrawal, MIT Department of Mechanical Engineering (MechE) professors George Barbastathis and John Hart, and MIT Department of Materials Science and Engineering Associate Professor Rob Macfarlane are principal investigators on projects selected for MURI awards. Agrawal will work alongside Aude Oliva, MIT director of the MIT-IBM Watson AI Lab, director of strategic industry engagement in the MIT Schwarzman College of Computing, and a senior research scientist at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
In addition, three MURI projects led by faculty at other institutions will include MIT researchers as collaborators. The 2023 MURI awards total $220 million and will fund 31 research projects at institutions across the country.
The MURI program is designed to support research in areas of critical importance to national defense, and brings together teams of researchers from multiple universities to collaborate on projects that are expected to lead to significant advances in science and technology. The program is highly competitive, with only a small fraction of proposals receiving funding each year, and it has a strong track record of supporting research that has led to breakthroughs in fields ranging from materials science to information technology.
Neuro-inspired distributed deep learning
Pulkit Agrawal, assistant professor in EECS and an affiliate of CSAIL and the MIT Laboratory for Information and Decision Systems (LIDS), leads a third MURI project. Agrawal's team, which includes Ila Fiete and Aude Oliva of MIT as well as researchers from Harvard University and the University of California at Berkeley, proposes an alternative to the mainstream machine-learning practice of condensing large datasets into the weights of a deep neural network and discarding the training data itself. That approach has fundamental limitations when it comes to lifelong learning and the associated questions of generalization, long-term reasoning, and catastrophic forgetting. The proposal instead suggests avoiding compressing data ahead of time and combining data on the fly for the environment or task the agent encounters, using memory retrieval to improve generalization.
The work aims to articulate a set of high-level computational principles for the design of memory systems, leveraging knowledge about how the brain encodes and retrieves information from memory. It aims to determine how these principles can be leveraged to tackle challenging machine learning tasks, understand how biological memory systems represent and retrieve naturalistic inputs, and help in the integration of AI into a wide variety of real-world systems. Ideally, the end result will yield practical algorithms for generalization to new tasks, lifelong learning without catastrophic forgetting, and transfer across sensory modalities.
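To make the contrast concrete, the retrieval idea can be illustrated with a toy episodic memory that stores experiences verbatim and answers new queries by looking up its nearest stored examples, rather than folding everything into fixed weights. This is a minimal, hypothetical sketch of the general technique, not the team's actual method; the class name and interface are invented for illustration.

```python
import numpy as np

class EpisodicMemory:
    """Toy retrieval-based learner: stores (embedding, label) pairs
    and predicts by majority vote over the k nearest stored examples."""

    def __init__(self):
        self.keys = []    # embeddings of stored experiences
        self.values = []  # associated labels

    def write(self, embedding, label):
        # New experiences are appended; nothing stored earlier is
        # overwritten, which sidesteps catastrophic forgetting here.
        self.keys.append(np.asarray(embedding, dtype=float))
        self.values.append(label)

    def read(self, query, k=3):
        # Retrieve the k nearest memories and vote on a prediction.
        keys = np.stack(self.keys)
        dists = np.linalg.norm(keys - np.asarray(query, dtype=float), axis=1)
        nearest = np.argsort(dists)[:k]
        labels = [self.values[i] for i in nearest]
        return max(set(labels), key=labels.count)

memory = EpisodicMemory()
memory.write([0.0, 0.0], "A")
memory.write([0.1, 0.1], "A")
memory.write([1.0, 1.0], "B")
memory.write([0.9, 1.1], "B")

print(memory.read([0.05, 0.05], k=3))  # → A
```

Because prediction happens at query time over raw stored data, adding a new task is just more `write` calls; the cost is shifted from retraining to retrieval, which is the trade-off the project's memory-system design principles would need to address at scale.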