Self-driving cars are likely to be safer, on average, than human-driven cars. But they may fail in new and catastrophic ways that a human driver could prevent. This project is designing a new architecture for a highly dependable self-driving car.
To design synthetic organs that function autonomously, we will need to engineer artificial tissue homeostasis. Controlling the size of these artificial tissues will require engineering two major mechanisms.
Our goal is to develop collaborative agents (software or robots) that can efficiently communicate with their human teammates. Key threads involve designing algorithms for inferring human behavior and for decision-making under uncertainty.
Our goal is to develop unsupervised or minimally supervised machine learning frameworks that allow autonomous underwater vehicles (AUVs) to explore unknown marine environments and communicate their findings in a semantically meaningful manner.
Existing methods for cloning and recombination of DNA enable construction of arbitrary sequences. However, the sequential nature of these techniques makes them time-consuming and expensive. Furthermore, while the transformation of an existing plasmid into a host strain can be reliable when a selection marker is used, several limitations remain: the number of different plasmids that can be co-transformed is limited by the choice of markers and compatible origins of replication; plasmids are less stable than chromosomal DNA and are difficult to maintain indefinitely without mutation; and cistronic interactions cannot be designed, since each new nucleotide sequence is added on an unconnected DNA molecule. To overcome these limitations, we are designing reconfigurable chromosomes consisting of both fixed and variable regions. While the fixed region is carefully optimized and tuned ahead of time, the variable region can be modified in the field, at the point of use, enabling rapid, on-demand realization of novel biocircuits with many different phenotypes.
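One way to picture the fixed/variable split is as a simple data structure. The sketch below is purely illustrative; the names, placeholder sequences, and assembly step are assumptions for the example, not part of the actual system.

```python
from dataclasses import dataclass
from typing import Dict, List

# Illustrative model: a reconfigurable chromosome is an ordered list of regions,
# some fixed (optimized ahead of time) and some variable (filled in at point of use).

@dataclass
class Region:
    name: str
    fixed: bool
    sequence: str = ""  # fixed regions carry a pre-optimized sequence

def assemble(design: List[Region], payloads: Dict[str, str]) -> str:
    """Realize one concrete chromosome by filling each variable region
    with a user-supplied payload sequence."""
    parts = []
    for region in design:
        if region.fixed:
            parts.append(region.sequence)
        else:
            parts.append(payloads[region.name])  # chosen in the field
    return "".join(parts)

# Example: one fixed backbone region plus one variable expression cassette.
design = [
    Region("backbone", fixed=True, sequence="ATGGCT"),
    Region("cassette", fixed=False),
]
variant = assemble(design, {"cassette": "TTGACA"})
```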
Our goal is to create an online, risk-aware planner for vehicle maneuvers that makes driving safer and less stressful. This “parallel” autonomous system assists the driver by watching for risky situations and by helping the driver take proactive, compensating actions before those situations become crises.
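A rough sketch of what such a parallel arrangement could look like in code; every function, threshold, and interface here is a placeholder invented for illustration, not the project's actual planner.

```python
# Toy parallel-autonomy loop: the human drives; the system only intervenes
# when the predicted risk of the current command exceeds a threshold.
# All names and values below are hypothetical.

RISK_THRESHOLD = 0.7  # assumed risk level above which the system assists

def predicted_risk(state, driver_command):
    """Estimate the risk of continuing with the driver's current command.
    Placeholder: a real system would roll the vehicle dynamics forward and
    score proximity to obstacles, lane departure, and so on."""
    return 0.0

def safe_alternative(state, driver_command):
    """Return a minimally invasive compensating command (e.g., gentle braking)."""
    return driver_command

def control_step(state, driver_command):
    risk = predicted_risk(state, driver_command)
    if risk > RISK_THRESHOLD:
        return safe_alternative(state, driver_command)  # proactive assist
    return driver_command  # otherwise stay hands-off
```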
For all the progress made in self-driving technologies, there still aren’t many places where self-driving cars can actually drive. Companies like Google only test their fleets in major cities where they’ve spent countless hours meticulously labeling the exact 3-D positions of lanes, curbs, off-ramps, and stop signs.
Despite what you might see in movies, today’s robots are still very limited in what they can do. They can be great for many repetitive tasks, but their inability to understand the nuances of human language makes them mostly useless for more complicated requests.
When organic chemists identify a useful chemical compound — a new drug, for instance — it’s up to chemical engineers to determine how to mass-produce it. There could be 100 different sequences of reactions that yield the same end product. But some of them use cheaper reagents and lower temperatures than others, and perhaps most importantly, some are much easier to run continuously, with technicians occasionally topping up reagents in different reaction chambers.
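As a toy illustration of that trade-off, one could score candidate routes on reagent cost, temperature, and whether they can run continuously; all of the field names and numbers below are made up for the example.

```python
from dataclasses import dataclass

# Hypothetical scoring of alternative synthesis routes for the same product.
@dataclass
class Route:
    name: str
    reagent_cost: float   # relative cost of reagents
    max_temp_c: float     # highest reaction temperature required
    continuous: bool      # can the route be run continuously?

def score(route: Route) -> float:
    """Lower is better: cheap reagents, mild temperatures, continuous operation."""
    penalty = route.reagent_cost + 0.01 * route.max_temp_c
    if not route.continuous:
        penalty += 10.0   # arbitrary penalty for batch-only routes
    return penalty

routes = [
    Route("route A", reagent_cost=5.0, max_temp_c=180, continuous=False),
    Route("route B", reagent_cost=8.0, max_temp_c=90, continuous=True),
]
best = min(routes, key=score)  # route B wins under these assumed weights
```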
Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface. Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.
In recent years, a host of Hollywood blockbusters — including “The Fast and the Furious 7,” “Jurassic World,” and “The Wolf of Wall Street” — have included aerial tracking shots provided by drone helicopters outfitted with cameras. Those shots required separate operators for the drones and the cameras, and careful planning to avoid collisions. But a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and ETH Zurich hopes to make drone cinematography more accessible, simple, and reliable.
Most robots are programmed using one of two methods: learning from demonstration, in which they watch a task being done and then replicate it, or motion-planning techniques such as optimization or sampling, which require a programmer to explicitly specify a task’s goals and constraints.
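To make the contrast concrete, here is a minimal, hypothetical example of the kind of explicit specification a motion planner needs; with learning from demonstration, the robot would instead infer this information from recorded examples of the task. The field names are illustrative, not any particular planner's API.

```python
# Hypothetical explicit task specification for a motion planner.
# With learning from demonstration, none of this is written by hand;
# the robot infers the goal and constraints from watching the task.

task_spec = {
    "goal": {"gripper_position": [0.4, 0.1, 0.3]},     # where the hand must end up
    "constraints": [
        {"type": "keep_upright", "object": "cup"},       # don't spill the contents
        {"type": "avoid_collision", "obstacles": ["table_edge"]},
        {"type": "joint_limits"},
    ],
    "cost": "minimize_path_length",
}
# A sampling- or optimization-based planner would then search for a trajectory
# that satisfies every constraint while optimizing the stated cost.
```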
Every other year, the International Conference on Automated Planning and Scheduling hosts a competition in which computer systems designed by conference participants try to find the best solution to a planning problem, such as scheduling flights or coordinating tasks for teams of autonomous satellites. On all but the most straightforward problems, however, even the best planning algorithms still aren’t as effective as human beings with a particular aptitude for problem-solving — such as MIT students.
In experiments involving a simulation of the human esophagus and stomach, researchers at CSAIL, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound. The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of CSAIL Director Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science.
One reason we don’t yet have robot personal assistants buzzing around doing our chores is that making them is hard. Assembling robots by hand is time-consuming, while automation — robots building other robots — is not yet fine-tuned enough to make robots that can do complex tasks. But if humans and robots can’t do the trick, what about 3-D printers? In a new paper, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) present the first-ever technique for 3-D printing robots that involves printing solid and liquid materials at the same time. The new method allows the team to automatically 3-D print dynamic robots in a single step, with no assembly required, using a commercially available 3-D printer.
Autonomous robots performing a joint task send each other continual updates: “I’ve passed through a door and am turning 90 degrees right.” “After advancing 2 feet I’ve encountered a wall. I’m turning 90 degrees right.” “After advancing 4 feet I’ve encountered a wall.” And so on. Computers, of course, have no trouble filing this information away until they need it. But such a barrage of data would drive a human being crazy.
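A small sketch of how a computer might "file away" such updates, folding each relative move into a running pose estimate; this is purely illustrative, not the researchers' system.

```python
import math

# Each update is a (distance_in_feet, right_turn_in_degrees) pair, mirroring
# messages like "After advancing 2 feet ... I'm turning 90 degrees right."
updates = [(0.0, 90.0), (2.0, 90.0), (4.0, 0.0)]

x, y, heading = 0.0, 0.0, 0.0  # start at the origin, facing along +x
for distance, turn in updates:
    x += distance * math.cos(math.radians(heading))
    y += distance * math.sin(math.radians(heading))
    heading = (heading - turn) % 360.0  # a right turn decreases the heading

print(f"Estimated pose: ({x:.1f}, {y:.1f}) ft, heading {heading:.0f} degrees")
```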
A team of CSAIL researchers has developed a printable origami robot that folds itself up from a flat sheet of plastic when heated and measures about a centimeter from front to back. Weighing only a third of a gram, the robot can swim, climb an incline, traverse rough terrain, and carry a load twice its weight. Other than the self-folding plastic sheet, the robot’s only component is a permanent magnet affixed to its back. Its motions are controlled by external magnetic fields. “The entire walking motion is embedded into the mechanics of the robot body,” says Cynthia R. Sung, a CSAIL graduate student and one of the robot’s co-developers. “In previous [origami] robots, they had to design electronics and motors to actuate the body itself.”