Neural network controllers provide complex robots with stability guarantees, paving the way for the safer deployment of autonomous vehicles and industrial machines.
LLMs trained primarily on text can generate complex visual concepts through code with self-correction. Researchers used these illustrations to train an image-free computer vision system to recognize real photos.
MIT CSAIL’s frugal deep-learning model infers the hidden physical properties of objects, then adapts to find the most stable grasps for robots in unstructured environments like homes and fulfillment centers.
The “Alchemist” system adjusts the material attributes of specific objects within images, with potential applications in adapting video game models to different environments, fine-tuning VFX, and diversifying robotic training data.
Three neurosymbolic methods help language models find better abstractions within natural language, then use those representations to execute complex tasks.
A CSAIL study highlights why it is so challenging to program a quantum computer to run a quantum algorithm, and offers a conceptual model for a more user-friendly quantum computer.
Adaptive smart glove from MIT CSAIL researchers can send tactile feedback to teach users new skills, guide robots with more precise manipulation, and help train surgeons and pilots.
The ambient light sensors that adjust smart devices’ screen brightness can be exploited by hackers to capture images of touch interactions such as swiping and tapping.
A multimodal system uses models trained on language, vision, and action data to help robots develop and execute plans for household, construction, and manufacturing tasks.
MIT CSAIL researchers established new connections between combinatorial and continuous optimization, which can find global solutions for complex motion planning puzzles.
By blending 2D images with foundation models to build 3D feature fields, a new MIT method helps robots understand and manipulate nearby objects with open-ended language prompts.
“Lightning” system connects photons to the electronic components of computers using a novel abstraction, creating the first photonic computing prototype to serve real-time machine-learning inference requests.
Developed by MIT researchers, BrightMarkers are invisible fluorescent tags embedded in physical objects to enhance motion tracking, virtual reality, and object detection.
FlexBoard is a flexible breadboard that enables rapid prototyping of objects with interactive sensors, actuators, and displays on curved and deformable surfaces.
The Association for Computing Machinery (ACM) recently awarded the 2022 ACM Prize in Computing to Yael Tauman Kalai, an MIT Department of Electrical Engineering and Computer Science (EECS) adjunct professor, CSAIL member, and senior principal researcher at Microsoft Research, for her cryptography research.
Passionate about creating educational opportunities in India, PhD student Siddhartha Jayanti recently explored multiprocessor speed limits in a paper written in the Indian language Telugu.