Scientists working at the intersection of AI and cancer care need to be more transparent about their methods and publish research that is reproducible, according to a new commentary co-authored by CSAIL's Tamara Broderick.
Twenty years ago, the SCIgen paper generator used a simple context-free grammar to churn out nonsense computer-science papers, one of which fooled the World Multiconference on Systemics, Cybernetics and Informatics into accepting it.
MosaicML, co-founded by an MIT alumnus and a professor, made deep-learning models faster and more efficient. Its acquisition by Databricks broadened that mission.
“PhotoGuard,” developed by MIT CSAIL researchers, prevents unauthorized image manipulation, safeguarding authenticity in the era of advanced generative models.
Ana Trišović, who studies the democratization of AI, reflects on a career path that she began as a student downloading free MIT resources in Serbia.
PhD students interning with the MIT-IBM Watson AI Lab look to improve how machines understand and use natural language.
Researchers create a curiosity-driven machine-learning model that finds a wider variety of prompts for training a chatbot to avoid hateful or harmful output.
The confluence of medicine and artificial intelligence stands to create truly high-performance, specialized care for patients, with enhanced precision diagnosis and personalized disease management. But to supercharge these systems we need massive amounts of personal health data, coupled with a delicate balance of privacy, transparency, and trust.
A new approach could streamline virtual training processes or aid clinicians in reviewing diagnostic videos.
A new technique can be used to predict the actions of human or AI agents who behave suboptimally while working toward unknown goals.