The Tree-D Fusion system integrates generative AI and genus-conditioned algorithms to create precise simulation-ready models of 600,000 existing urban trees across North America.
CSAIL researchers used AI-generated images to train a robot dog in parkour without any real-world data. Their LucidSim system demonstrates GenAI's potential for creating relevant robotics training data, enabling expert-level performance in complex tasks like obstacle navigation and stair climbing.
New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.
CSAIL researchers introduce a novel approach allowing robots to be trained in simulations of scanned home environments, paving the way for customized household automation accessible to anyone.
DenseAV, developed at MIT, learns to parse and understand the meaning of language just by watching videos of people talking, with potential applications in multimedia search, language learning, and robotics.
MIT CSAIL researchers enhance robotic precision with sophisticated tactile sensors in the palm and agile fingers, setting the stage for improvements in human-robot interaction and prosthetic technology.
MIT CSAIL postdoc Nauman Dawalatabad explores ethical considerations, challenges in spear-phishing defense, and the optimistic future of AI-created voices across various sectors.
Designed to ensure safer skies, “Air-Guardian” blends human intuition with machine precision, creating a more symbiotic relationship between pilot and aircraft.
“PhotoGuard,” developed by MIT CSAIL researchers, prevents unauthorized image manipulation, safeguarding authenticity in the era of advanced generative models.