This CoR aims to develop AI technology that synthesizes symbolic reasoning, probabilistic reasoning for dealing with uncertainty, and statistical methods for extracting and exploiting regularities in the world, into an integrated picture of intelligence informed both by computational insights and by cognitive science.
We focus on finding novel approaches to improve the performance of modern computer systems without unduly increasing the complexity faced by application developers, compiler writers, or computer architects.
The Systems CoR is focused on building and investigating large-scale software systems that power modern computers, phones, data centers, and networks, including operating systems, the Internet, wireless networks, databases, and other software infrastructure.
(This project is no longer active.) The T-1000, a prototype system of a thousand realistic processors embedded throughout an ensemble of interconnected FPGAs, sought to demonstrate the scalability of timestamp-based cache coherence protocols on distributed shared-memory systems.
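For readers unfamiliar with the approach, the sketch below illustrates the lease-based idea behind timestamp coherence (in the spirit of Tardis-style protocols): instead of broadcasting invalidations, each cache line carries a write timestamp and a read lease, and cores advance a logical clock to order accesses. The class names, lease length, and simplified logic are assumptions for exposition only, not the T-1000’s actual design.

```python
# Illustrative sketch of a timestamp-based (lease-style) coherence check,
# loosely in the spirit of Tardis-like protocols. All names and the
# simplified logic are exposition-only assumptions, not the T-1000 design.

class CacheLine:
    def __init__(self, data):
        self.data = data
        self.wts = 0  # write timestamp: logical time of the last write
        self.rts = 0  # read timestamp: end of the current read lease

class Core:
    def __init__(self, shared_memory):
        self.pts = 0              # this core's logical program timestamp
        self.mem = shared_memory  # address -> CacheLine, shared by cores

    def read(self, addr, lease=10):
        line = self.mem[addr]
        # A read is valid at any logical time in [wts, rts]; extending the
        # lease replaces the invalidation broadcasts of snooping protocols.
        self.pts = max(self.pts, line.wts)
        line.rts = max(line.rts, self.pts + lease)
        return line.data

    def write(self, addr, value):
        line = self.mem[addr]
        # A write must be ordered after all outstanding read leases, so the
        # core jumps its clock past rts instead of invalidating sharers.
        self.pts = max(self.pts, line.rts + 1)
        line.data = value
        line.wts = line.rts = self.pts
```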
Self-driving cars are likely to be safer, on average, than human-driven cars. But they may fail in new and catastrophic ways that a human driver could prevent. This project is designing a new architecture for a highly dependable self-driving car.
BlueDBM is a cluster architecture built around fast distributed flash storage and in-storage accelerators; it often outperforms larger, more expensive clusters on applications such as graph analytics.
Our goal is to develop collaborative agents (software or robots) that can efficiently communicate with their human teammates. Key threads involve designing algorithms for inferring human behavior and for decision-making under uncertainty.
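As a concrete illustration of one such thread, the sketch below shows a Bayesian belief update over a human teammate’s goal given an observed action, a common building block for inferring human behavior. The goals, likelihood table, and probabilities are hypothetical, not this project’s actual models.

```python
# A minimal sketch of a Bayesian belief update over a human teammate's
# goal. The goals, observation model, and numbers below are hypothetical.

def update_belief(belief, observation, likelihood):
    """belief: dict goal -> P(goal); likelihood(obs, goal) -> P(obs | goal)."""
    posterior = {g: p * likelihood(observation, g) for g, p in belief.items()}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

def likelihood(obs, goal):
    # Hypothetical observation model: heading toward the toolbox is far
    # more likely if the human's goal is to fetch a tool.
    table = {("move_to_toolbox", "fetch_tool"): 0.8,
             ("move_to_toolbox", "inspect_part"): 0.2}
    return table[(obs, goal)]

belief = {"fetch_tool": 0.5, "inspect_part": 0.5}
belief = update_belief(belief, "move_to_toolbox", likelihood)
print(belief)  # -> approximately {'fetch_tool': 0.8, 'inspect_part': 0.2}
```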
Our goal is to develop unsupervised or minimally supervised machine learning frameworks that allow autonomous underwater vehicles (AUVs) to explore unknown marine environments and communicate their findings in a semantically meaningful manner.
Our goal is to create an online, risk-aware planner for vehicle maneuvers that makes driving safer and less stressful: a “parallel” autonomous system assists the driver by watching for risky situations and by helping the driver take proactive, compensating actions before those situations become crises.
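One way such a parallel-autonomy loop might be structured is sketched below: the system scores the risk of the driver’s current command and blends in a compensating action only when risk crosses a threshold. The risk model, threshold, and blending rule are illustrative assumptions, not this project’s actual planner.

```python
# A minimal sketch of a parallel-autonomy step: keep the driver in control
# at low risk, and ramp in a corrective command as risk grows. The risk
# model, threshold, and blending rule are illustrative assumptions.

def parallel_autonomy_step(driver_cmd, state, assess_risk, plan_safe_cmd,
                           risk_threshold=0.7):
    """Return the command actually sent to the vehicle actuators."""
    risk = assess_risk(state, driver_cmd)  # e.g. collision probability in [0, 1]
    if risk < risk_threshold:
        return driver_cmd                  # low risk: driver stays in control
    safe_cmd = plan_safe_cmd(state)        # proactive, compensating action
    # Ramp the intervention up smoothly with risk rather than snatching
    # control away from the driver all at once.
    alpha = min(1.0, (risk - risk_threshold) / (1.0 - risk_threshold))
    return {k: (1 - alpha) * driver_cmd[k] + alpha * safe_cmd[k]
            for k in driver_cmd}
```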
A team of robots developed at MIT’s Computer Science and Artificial Intelligence Laboratory can self-assemble to form different structures, with applications in inspection, disaster response, and manufacturing.
For all the progress made in self-driving technologies, there still aren’t many places where autonomous cars can actually drive. Companies like Google test their fleets only in major cities where they’ve spent countless hours meticulously labeling the exact 3-D positions of lanes, curbs, off-ramps, and stop signs.
Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface. Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.
Most robots are programmed using one of two methods: learning from demonstration, in which they watch a task being done and then replicate it, or motion-planning techniques such as optimization or sampling, which require a programmer to explicitly specify a task’s goals and constraints.
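To make the second method concrete, here is a minimal sampling-based planner (an RRT-style tree grower) in which the goal test and the collision constraint are supplied explicitly by the programmer. The 2-D workspace and all parameters are illustrative assumptions.

```python
# A minimal RRT-style sampling planner. The programmer explicitly supplies
# the goal test (goal, goal_tol) and the constraint (collision_free); the
# 2-D workspace bounds and step sizes here are illustrative only.

import math
import random

def rrt(start, goal, collision_free, step=0.5, iters=5000, goal_tol=0.5):
    """Grow a tree from start toward random samples; stop near the goal."""
    tree = {start: None}  # node -> parent
    for _ in range(iters):
        sample = (random.uniform(0, 10), random.uniform(0, 10))
        nearest = min(tree, key=lambda n: math.dist(n, sample))
        d = math.dist(nearest, sample)
        if d == 0:
            continue
        adv = min(step, d)  # advance a bounded step toward the sample
        new = tuple(n + adv * (s - n) / d for n, s in zip(nearest, sample))
        if not collision_free(nearest, new):  # explicit constraint check
            continue
        tree[new] = nearest
        if math.dist(new, goal) < goal_tol:   # explicit goal test
            path = [new]
            while tree[path[-1]] is not None:
                path.append(tree[path[-1]])
            return path[::-1]
    return None

# Usage: an empty workspace, so every motion is collision-free.
path = rrt((0.0, 0.0), (9.0, 9.0), lambda a, b: True)
```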
In experiments involving a simulation of the human esophagus and stomach, researchers at CSAIL, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound.

The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science.
One reason we don’t yet have robot personal assistants buzzing around doing our chores is that making them is hard. Assembling robots by hand is time-consuming, while automation — robots building other robots — is not yet fine-tuned enough to make robots that can do complex tasks.

But if humans and robots can’t do the trick, what about 3-D printers?

In a new paper, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) present the first-ever technique for 3-D printing robots that involves printing solid and liquid materials at the same time. The new method allows the team to automatically 3-D print dynamic robots in a single step, with no assembly required, using a commercially available 3-D printer.
A team of CSAIL researchers has developed a printable origami robot that folds itself up from a flat sheet of plastic when heated and measures about a centimeter from front to back. Weighing only a third of a gram, the robot can swim, climb an incline, traverse rough terrain, and carry a load twice its weight. Other than the self-folding plastic sheet, the robot’s only component is a permanent magnet affixed to its back. Its motions are controlled by external magnetic fields.

“The entire walking motion is embedded into the mechanics of the robot body,” says Cynthia R. Sung, a CSAIL graduate student and one of the robot’s co-developers. “In previous [origami] robots, they had to design electronics and motors to actuate the body itself.”