We’ve long known that blood pressure, breathing, body temperature and pulse provide an important window into the complexities of human health. But a growing body of research suggests that another vital sign – how fast you walk – could be a better predictor of health issues like cognitive decline, falls, and even certain cardiac or pulmonary diseases.
Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustrations with waiting a few extra seconds for our emails to push through don’t mean we have to simply stand by.
For robots to do what we want, they need to understand us. Too often, this means having to meet them halfway: teaching them the intricacies of human language, for example, or giving them explicit commands for very specific tasks. But what if we could develop robots that were a more natural extension of us and that could actually do whatever we are thinking?
It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?
Machines that predict the future, robots that patch wounds, and wireless emotion-detectors are just a few of the exciting projects that came out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) this year. Here’s a sampling of 16 highlights from 2016 that span the many computer science disciplines that make up CSAIL.

Robots for exploring Mars — and your stomach
This fall’s new FAA regulations have made drone flight easier than ever for both companies and consumers. But what if the drones out on the market aren’t exactly what you want? A new system from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is the first to allow users to design, simulate, and build their own custom drone. Users can change the size, shape, and structure of their drone based on the specific needs they have for payload, cost, flight time, battery usage, and other factors.
When the filmmaking pioneers Auguste and Louis Lumière screened their 1895 film, "The Arrival of a Train at La Ciotat," audiences were so frightened by the real appearance of the image that they screamed and got out of the way — or so a well-known anecdote goes. Today, as one enters a virtual reality (VR) space — such as that conjured by MIT Visiting Artist Karim Ben Khelifa in his vanguard project "The Enemy" — it is not uncommon for participants to experience a similar shock at the sounds of footsteps, then sudden presence of two soldiers in the room.
In recent years, computers have gotten remarkably good at recognizing speech and images: Think of the dictation software on most cellphones, or the algorithms that automatically identify people in photos posted to Facebook. But recognition of natural sounds — such as crowds cheering or waves crashing — has lagged behind. That’s because most automated recognition systems, whether they process audio or visual information, are the result of machine learning, in which computers search for patterns in huge compendia of training data. Usually, the training data first has to be annotated by hand, which is prohibitively expensive for all but the highest-demand applications.
MIT researchers and their colleagues have developed a new computational model of the human brain’s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed. The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face’s degree of rotation — say, 45 degrees from center — but not the direction — left or right.
Traffic is not just a nuisance for drivers: it’s also a public-health hazard and bad news for the economy. Transportation studies put the annual cost of congestion at $160 billion, which includes 7 billion hours of time lost to sitting in traffic and an extra 3 billion gallons of fuel burned. One way to improve traffic is through ride-sharing, and a new MIT study suggests that using carpooling options from companies like Uber and Lyft could reduce the number of taxis on the road by 75 percent without significantly impacting travel time.
Living in a dynamic physical world, we easily forget how effortlessly we understand our surroundings. With minimal thought, we can figure out how scenes change and objects interact. But what’s second nature for us is still a huge problem for machines. With the limitless number of ways that objects can move, teaching computers to predict future actions can be difficult. Recently, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have moved a step closer, developing a deep-learning algorithm that, given a still image from a scene, can create a brief video that simulates the future of that scene.
CoolThink@JC, a four-year initiative of The Hong Kong Jockey Club Charities Trust, was launched today to empower the city’s primary school teachers and students with computational thinking skills, including coding. The initiative, developed through a collaboration with MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the Education University of Hong Kong, and City University of Hong Kong, eventually aims to integrate computational thinking into all Hong Kong primary schools. Initially, CoolThink@JC will target over 16,500 students at 32 primary schools across the city.
Voters can then go to an online database that lists their encrypted receipt and shows that it matches up with the one they picked up at the ballot box. Watch Professor Rivest explain the concept on Numberphile:
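The receipt-checking step can be illustrated with a toy hash-based commitment, a simplified sketch and not Rivest's actual protocol; the function names and the SHA-256 design here are illustrative assumptions:

```python
import hashlib
import secrets

def make_receipt(ballot_choice: str) -> tuple[str, str]:
    """Commit to a ballot choice with a random nonce.

    The receipt alone reveals nothing about the vote, but the voter
    can later confirm that it appears in the public database.
    """
    nonce = secrets.token_hex(16)
    receipt = hashlib.sha256(f"{ballot_choice}|{nonce}".encode()).hexdigest()
    return receipt, nonce

# The election authority publishes only the receipts, not the votes.
receipt, nonce = make_receipt("candidate-A")
public_database = {receipt}

# At home, the voter checks that their receipt was recorded.
print(receipt in public_database)  # True
```

The point of the sketch is the verification property: anyone can confirm their receipt was counted, while the published list on its own says nothing about how anyone voted.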
At MIT’s 2016 Open House last spring, more than 100 visitors took rides on an autonomous mobility scooter in a trial of software designed by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the National University of Singapore, and the Singapore-MIT Alliance for Research and Technology (SMART).
Sarah Hensley is preparing an astronaut named Valkyrie for a mission to Mars. It is 6 feet tall, weighs 300 pounds, and is equipped with an extended chest cavity that makes it look distinctly female. Hensley spends much of her time this semester analyzing the movements of one of Valkyrie's arms. As a fourth-year electrical engineering student at MIT, Hensley is working with a team of CSAIL researchers to prepare Valkyrie, a humanoid robot also known as R5, for future space missions. As a teenager in New Jersey, Hensley loved to read in her downtime, particularly Isaac Asimov’s classic robot series. “I’m a huge science fiction nerd — and now I’m actually getting to work with a robot that’s real and not just in books. That’s like, wow.”
3-D printing has progressed over the last decade to include multi-material fabrication, enabling production of powerful, functional objects. While many advances have been made, it still has been difficult for non-programmers to create objects made of many materials (or mixtures of materials) without a more user-friendly interface. But this week, a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) will present “Foundry,” a system for custom-designing a variety of 3-D printed objects with multiple materials.
Anyone who’s watched drone videos or an episode of “BattleBots” knows that robots can break — and often it’s because they don’t have the proper padding to protect themselves. But this week researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new method for 3-D printing soft materials that make robots safer and more precise in their movements — and that could be used to improve the durability of drones, phones, shoes, helmets, and more. The team’s “programmable viscoelastic material” (PVM) technique allows users to program every single part of a 3-D printed object to the exact levels of stiffness and elasticity they want, depending on the task they need it for.
As many a relationship book can tell you, understanding someone else’s emotions can be a difficult task. Facial expressions aren’t always reliable: a smile can conceal frustration, while a poker face might mask a winning hand. But what if technology could tell us how someone is really feeling? Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed “EQ-Radio,” a device that can detect a person’s emotions using wireless signals.
In the spring CSAIL received a six-foot-tall, 300-pound humanoid robot that NASA hopes to have serve on future space missions to Mars and beyond. This week, NASA formally opened registration for its Space Robotics Challenge, which involves research teams programming Valkyrie for a variety of tasks in the hopes of winning a cut of NASA's $1 million prize. From NASA:
Computer simulations of physical systems are common in science, engineering, and entertainment, but they use several different types of tools. If, say, you want to explore how a crack forms in an airplane wing, you need a very precise physical model of the crack’s immediate vicinity. But if you want to simulate the flexion of an airplane wing under different flight conditions, it’s more practical to use a simpler, higher-level description of the wing. If, however, you want to model the effects of wing flexion on the crack’s propagation, or vice versa, you need to switch back and forth between these two levels of description, which is difficult not only for computer programmers but for computers, too.
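The idea of coupling two levels of description can be sketched with a toy one-dimensional model: a coarse bar model of the "wing" supplies boundary conditions to a fine patch containing a weakened "crack" spring. This is a minimal illustration with invented numbers and function names, not the actual MIT system:

```python
def coarse_displacements(n_nodes: int, load: float, stiffness: float) -> list[float]:
    """Coarse model: a uniform bar under an end load stretches linearly."""
    return [load / stiffness * i / (n_nodes - 1) for i in range(n_nodes)]

def fine_patch(u_left: float, u_right: float, n_sub: int,
               crack_index: int, stiffness: float, weak_factor: float) -> list[float]:
    """Fine model: re-solve one coarse element as n_sub springs in series,
    with one spring softened to mimic a crack. The coarse displacements
    serve as boundary conditions, so the two levels stay consistent."""
    ks = [stiffness * n_sub] * n_sub    # subdividing stiffens each spring
    ks[crack_index] *= weak_factor      # the crack softens one spring
    compliance = [1.0 / k for k in ks]
    total = sum(compliance)
    u = [u_left]
    for c in compliance:
        # Springs in series: each jump scales with that spring's compliance.
        u.append(u[-1] + (u_right - u_left) * c / total)
    return u

# Coarse pass over the whole bar, then a fine pass only near the crack.
coarse = coarse_displacements(5, load=100.0, stiffness=1000.0)
fine = fine_patch(coarse[2], coarse[3], n_sub=4, crack_index=1,
                  stiffness=1000.0, weak_factor=0.5)
```

The sketch captures the article's point: the expensive fine model exists only where precision matters, and the cheap coarse solution tells it what is happening at its boundary.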
September 26, 2018 - Vladimir Vapnik of the University of London and Columbia University gave a Dertouzos Distinguished Lecture titled "Learning Using Statistical Invariants (Revision of Machine Learning Problem)."