For all the progress made in self-driving technologies, there still aren’t many places where they can actually drive. Companies like Google only test their fleets in major cities where they’ve spent countless hours meticulously labeling the exact 3-D positions of lanes, curbs, off-ramps, and stop signs.
Light lets us see the things that surround us, but what if we could also use it to see things hidden around corners? It sounds like science fiction, but that’s the idea behind a new algorithm out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) — and it has implications for everything from emergency response to self-driving cars.
In recent years, a host of Hollywood blockbusters — including “Furious 7,” “Jurassic World,” and “The Wolf of Wall Street” — have included aerial tracking shots provided by drone helicopters outfitted with cameras. Those shots required separate operators for the drones and the cameras, as well as careful planning to avoid collisions. But a team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and ETH Zurich hopes to make drone cinematography more accessible, simple, and reliable.
Researchers at CSAIL, New York University, and the University of Toronto have developed a computer system whose ability to produce a variation of a character in an unfamiliar writing system, on the first try, is indistinguishable from that of humans. That means that the system in some sense discerns what’s essential to the character — its general structure — but also what’s inessential — the minor variations characteristic of any one instance of it.
Self-driving cars are likely to be safer, on average, than human-driven cars. But they may fail in new and catastrophic ways that a human driver could prevent. This project is designing a new architecture for a highly dependable self-driving car.
Arabic is spoken by more than 400 million people around the world, and it presents a variety of challenges for speech and language processing technologies. Our group pursues several research topics on Arabic, including dialect identification, speech recognition, machine translation, and language processing.
Automatic speech recognition (ASR) has been a grand-challenge problem in machine learning for decades. Our ongoing research in this area examines the use of deep learning models for distant and noisy recording conditions, as well as for multilingual and low-resource scenarios.
Our main goal is to automatically find relevant answers among the many responses provided for a given question (Answer Selection), and to find related questions so that their existing answers can be reused (Question Retrieval).
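As a concrete reference point, here is a minimal answer-selection baseline that ranks candidate answers by TF-IDF cosine similarity to the question. It is a stand-in for the learned models this work actually studies, and the question and candidate answers below are invented for illustration.

```python
# Minimal answer-selection baseline: rank candidate answers for a question
# by TF-IDF cosine similarity. The question and answers are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

question = "How do I reset my router password?"
candidates = [
    "Hold the reset button for ten seconds, then log in with the defaults.",
    "I had the same issue with my laptop battery.",
    "The admin password can be restored from the router's web interface.",
]

vectorizer = TfidfVectorizer()
# Fit on the question plus all candidates so they share one vocabulary.
matrix = vectorizer.fit_transform([question] + candidates)
scores = cosine_similarity(matrix[0], matrix[1:]).ravel()

# Print candidates from most to least similar to the question.
for score, answer in sorted(zip(scores, candidates), reverse=True):
    print(f"{score:.2f}  {answer}")
```

A real system would replace the TF-IDF vectors with learned semantic representations, but the ranking structure stays the same.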
Knitting is the new 3-D printing. It has become popular again thanks to the widespread availability of patterns and templates, together with the rise of the maker movement. Lower-cost industrial knitting machines are starting to emerge, but the corresponding design tools are still missing. Our goal is to fill this gap.
Our main goal is to develop fact-checking algorithms that can assess the credibility of claims made in textual statements and provide interpretable, valid evidence explaining why a given claim is judged to be factually true or false.
One of the challenges in processing real-world spoken content, for tasks such as automatic speech recognition, is the potential presence of different languages and dialects. Language and dialect identification is therefore a useful capability: it determines which language is being spoken in a given recording.
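To make the task concrete, here is a toy sketch of language identification framed as text classification with character n-grams. Spoken-language ID systems operate on audio rather than text, and the tiny training set below is invented; the sketch only illustrates the classification setup.

```python
# Toy language-identification sketch: a character n-gram Naive Bayes
# classifier over text. Real spoken-language ID operates on audio; this
# only illustrates the classification setup. The training set is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "good morning, how are you today",
    "the weather is nice this afternoon",
    "buenos dias, como estas hoy",
    "el tiempo esta muy agradable",
]
train_labels = ["en", "en", "es", "es"]

# Character 2- and 3-grams capture orthographic cues that differ by language.
model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),
    MultinomialNB(),
)
model.fit(train_texts, train_labels)

print(model.predict(["hasta luego, amigo"]))  # likely output: ['es']
```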
Our project focuses on developing a general human motion prediction framework that can be applied in a variety of domains, ranging from manufacturing to space robotics, in order to improve the safety and efficiency of human-robot interaction.
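For a sense of what even a naive predictor looks like, the sketch below implements a constant-velocity baseline: it simply extrapolates the last observed displacement. This is a generic reference point that learned models should outperform, not the project's framework, and the example trajectory is made up.

```python
# Constant-velocity baseline for motion prediction: extrapolate the last
# observed displacement. A naive reference that learned predictors should
# beat, not the project's framework. The trajectory below is made up.
import numpy as np

def predict_constant_velocity(observed: np.ndarray, horizon: int) -> np.ndarray:
    """Extrapolate `horizon` future positions from a (T, D) trajectory."""
    velocity = observed[-1] - observed[-2]      # last per-step displacement
    steps = np.arange(1, horizon + 1)[:, None]  # shape (horizon, 1)
    return observed[-1] + steps * velocity      # broadcasts over D dimensions

# Example: a hand moving roughly along x at 0.1 units per frame.
observed = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.01], [0.3, 0.01]])
print(predict_constant_velocity(observed, horizon=3))
```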
Our goal is to create a theoretical framework and effective machine learning algorithms for robust, reliable control of autonomous vehicles. Key threads include developing metrics of confidence and designing deep learning algorithms for parallel autonomy.
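As one generic example of a confidence metric, the sketch below computes the predictive entropy of a classifier's output distribution; low entropy indicates a confident prediction. This illustrates the general idea only and is not the project's specific metric, and the probabilities are made up.

```python
# One simple confidence metric: the predictive entropy of a classifier's
# softmax output. Lower entropy means a more confident prediction. This is
# a generic illustration, not the project's metric; the numbers are made up.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a class-probability vector, in nats."""
    probs = np.clip(probs, 1e-12, 1.0)  # guard against log(0)
    return float(-(probs * np.log(probs)).sum())

confident = np.array([0.97, 0.02, 0.01])  # e.g., an unambiguous scene
uncertain = np.array([0.40, 0.35, 0.25])  # an ambiguous scene

print(predictive_entropy(confident))  # ~0.15 nats
print(predictive_entropy(uncertain))  # ~1.08 nats
```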
A robot's physical form and its motion are innately coupled: to change a robot's physical design, one must often change the way it moves, and vice versa. Can computers automatically and simultaneously design robot structure and motion?
ACM, the Association for Computing Machinery, announced this week that MIT CSAIL alumnus Jiajun Wu PhD ’19 received an honorable mention for his dissertation, “Learning to See the Physical World.”
This week it was announced that MIT professor and CSAIL principal investigator Tomás Lozano-Pérez has been awarded the 2021 IEEE Robotics and Automation Award for his “foundational contributions to robot motion planning and visionary leadership in the field.”
A new MIT study finds “health knowledge graphs,” which show relationships between symptoms and diseases and are intended to help with clinical diagnosis, can fall short for certain conditions and patient populations. The results also suggest ways to boost their performance.
A team of robots developed at MIT’s Computer Science and Artificial Intelligence Laboratory can self-assemble to form different structures, with applications in inspection, disaster response, and manufacturing.