Sometimes it’s easy to forget how good we humans are at understanding our surroundings. Without much thought, we can describe objects and how they interact with each other.
EECS faculty head of artificial intelligence and decision making honored for significant and extended contributions to the field of AI.
A multimodal system uses models trained on language, vision, and action data to help robots develop and execute plans for household, construction, and manufacturing tasks.
MIT’s “PhysicsGen” helps robots handle items in homes and factories by tailoring training data to a particular machine. The system turns demonstrations captured in VR into thousands of simulations of the robot’s optimal movements.
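To make that idea concrete, here is a minimal sketch of this kind of data multiplication: a single recorded demonstration, represented as a sequence of 3-D end-effector waypoints, is perturbed into many training variants. The trajectory format, noise model, and counts are illustrative assumptions, not PhysicsGen's actual pipeline.

```python
import numpy as np

# Illustrative sketch only (not PhysicsGen's actual method): expand one VR
# demonstration into many jittered training trajectories.
rng = np.random.default_rng(0)

def augment_demo(demo: np.ndarray, n_variants: int = 1000,
                 noise_scale: float = 0.01) -> np.ndarray:
    """Return n_variants jittered copies of a (T, 3) waypoint trajectory."""
    noise = rng.normal(0.0, noise_scale, size=(n_variants, *demo.shape))
    return demo[None, :, :] + noise  # shape: (n_variants, T, 3)

demo = np.linspace([0.0, 0.0, 0.2], [0.4, 0.1, 0.0], num=50)  # toy reaching motion
variants = augment_demo(demo)
print(variants.shape)  # (1000, 50, 3)
```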
A new “common-sense” approach to computer vision enables artificial intelligence that interprets scenes more accurately than other systems do.
Yilun Du, a PhD student and MIT CSAIL affiliate, discusses the potential applications of generative art beyond the explosion of images that recently captivated the web.
Joining three teams backed by a total of $75 million, MIT researchers will tackle some of cancer’s toughest challenges.
Scientists employ an underused resource — radiology reports that accompany medical images — to improve the interpretive abilities of machine learning algorithms.
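One common way to exploit such paired data is contrastive image–text pretraining, in which each scan's embedding is pulled toward the embedding of its own report and away from the others. The sketch below illustrates that general idea with placeholder features; the encoders and objective used in the MIT work may differ.

```python
import torch
import torch.nn.functional as F

# Sketch of a symmetric contrastive loss over (scan, report) pairs.
# Random tensors stand in for encoder outputs.
def paired_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.T / temperature   # pairwise similarity matrix
    targets = torch.arange(len(img_emb))         # the i-th scan matches the i-th report
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2

img_emb = torch.randn(8, 128)  # placeholder X-ray features
txt_emb = torch.randn(8, 128)  # placeholder report features
print(paired_contrastive_loss(img_emb, txt_emb).item())
```

The appeal of this setup is that the report text acts as free supervision: no extra labeling is needed beyond what radiologists already write.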
New research reveals a scalable technique that uses synthetic data to improve the accuracy of AI models that recognize images.
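In its simplest form, synthetic-data augmentation just pads a labeled training set with generated samples. The toy sketch below uses noisy class prototypes as a stand-in generator; the research's actual generative pipeline is not reproduced here.

```python
import numpy as np

# Toy stand-in for a generative model: sample synthetic "images" around
# per-class prototypes and append them to the training set.
rng = np.random.default_rng(0)

def make_synthetic(prototypes: np.ndarray, per_class: int):
    """Sample `per_class` noisy images around each (H, W) class prototype."""
    images, labels = [], []
    for label, proto in enumerate(prototypes):
        images.append(proto + rng.normal(0.0, 0.1, size=(per_class, *proto.shape)))
        labels.append(np.full(per_class, label))
    return np.concatenate(images), np.concatenate(labels)

prototypes = rng.random((3, 32, 32))        # three toy "classes"
synth_x, synth_y = make_synthetic(prototypes, per_class=500)
print(synth_x.shape, synth_y.shape)         # (1500, 32, 32) (1500,)
```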
New CSAIL research shows that LLMs excel in familiar scenarios but struggle in novel ones, raising questions about whether they genuinely reason or merely rely on memorization.
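The underlying evaluation idea can be illustrated with a counterfactual task pair: base-10 addition as the familiar setting and base-9 addition as the novel variant. In this hedged sketch, `query_model` is a hypothetical stand-in for an LLM call that always answers in base 10, mimicking memorized behavior; the CSAIL study's actual tasks and models differ.

```python
# Counterfactual evaluation sketch: a model that only memorized the familiar
# setting aces base-10 addition but fails the base-9 variant.
def add_in_base(a: int, b: int, base: int) -> str:
    total, digits = a + b, []
    while total:
        total, r = divmod(total, base)
        digits.append(str(r))
    return "".join(reversed(digits)) or "0"

def query_model(a: int, b: int, base: int) -> str:
    # Hypothetical stand-in for prompting an LLM with
    # f"In base {base}, what is {a} + {b}?"
    return str(a + b)  # ignores the base: always base-10 arithmetic

def accuracy(base: int, pairs) -> float:
    hits = sum(query_model(a, b, base) == add_in_base(a, b, base)
               for a, b in pairs)
    return hits / len(pairs)

pairs = [(17, 25), (38, 46), (59, 64)]
print("base 10:", accuracy(10, pairs))  # 1.0 in this mock: the familiar task
print("base 9: ", accuracy(9, pairs))   # 0.0: the counterfactual exposes it
```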