A new MIT study finds “health knowledge graphs,” which show relationships between symptoms and diseases and are intended to help with clinical diagnosis, can fall short for certain conditions and patient populations. The results also suggest ways to boost their performance.
Almost every object we use is developed with computer-aided design (CAD). Ironically, while CAD programs are good for creating designs, using them to improve an existing design and arrive at an optimal product is difficult and time-consuming.
Being able to both walk and take flight is typical in nature — many birds, insects, and other animals can do both. If we could program robots with similar versatility, it would open up many possibilities: Imagine machines that could fly into construction areas or disaster zones that aren’t near roads and then squeeze through tight spaces on the ground to transport objects or rescue people.
Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface. Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.
Computer scientists have been working for decades on automatic navigation systems to aid the visually impaired, but it’s been difficult to come up with anything as reliable and easy to use as the white cane, the type of metal-tipped cane that visually impaired people frequently use to identify clear walking paths. White canes have a few drawbacks, however. One is that the obstacles they come in contact with are sometimes other people. Another is that they can’t identify certain types of objects, such as tables or chairs, or determine whether a chair is already occupied.
A reaction to the 2008 financial crisis, Bitcoin is a digital-currency scheme designed to wrest control of the monetary system from central banks. With Bitcoin, anyone can mint money, provided he or she can complete a complex computation quickly enough. Through a set of clever protocols, that computational hurdle prevents the system from being co-opted by malicious hackers.
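The "complex computation" is a proof-of-work puzzle: search for a nonce whose hash, combined with the block data, meets a difficulty target. The sketch below is a minimal illustration of that idea only, using a single SHA-256 and a leading-zeros target; it is not Bitcoin's actual block-header format, which uses double SHA-256 and a numeric difficulty threshold.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce so that SHA-256(block_data + nonce) starts with
    `difficulty` hex zeros. Higher difficulty means exponentially more work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Mining is expensive; verifying the result takes one hash.
nonce = mine("example block", difficulty=4)
digest = hashlib.sha256(f"example block{nonce}".encode()).hexdigest()
print(nonce, digest)
```

The asymmetry is the point: finding the nonce takes many hash attempts on average, but anyone can check the answer with a single hash, which is what keeps cheaters out without a central authority.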
Most robots are programmed using one of two methods: learning from demonstration, in which they watch a task being done and then replicate it, or via motion-planning techniques such as optimization or sampling, which require a programmer to explicitly specify a task’s goals and constraints.
When Alphabet executive chairman Eric Schmidt started programming in 1969 at the age of 14, there was no explicit title for what he was doing. “I was just a nerd,” he says. But now computer science has fundamentally transformed fields like transportation, health care and education, and also provoked many new questions. What will artificial intelligence (AI) be like in 10 years? How will it impact tomorrow’s jobs? What’s next for autonomous cars?
We’ve long known that blood pressure, breathing, body temperature and pulse provide an important window into the complexities of human health. But a growing body of research suggests that another vital sign – how fast you walk – could be a better predictor of health issues like cognitive decline, falls, and even certain cardiac or pulmonary diseases.
Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustrations with waiting a few extra seconds for our emails to push through don’t mean we have to simply stand by.
For robots to do what we want, they need to understand us. Too often, this means having to meet them halfway: teaching them the intricacies of human language, for example, or giving them explicit commands for very specific tasks. But what if we could develop robots that were a more natural extension of us and that could actually do whatever we are thinking?
It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?
Machines that predict the future, robots that patch wounds, and wireless emotion-detectors are just a few of the exciting projects that came out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) this year. Here’s a sampling of 16 highlights from 2016 that span the many computer science disciplines that make up CSAIL.

Robots for exploring Mars — and your stomach
This fall’s new FAA regulations have made drone flight easier than ever for both companies and consumers. But what if the drones out on the market aren’t exactly what you want? A new system from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is the first to allow users to design, simulate, and build their own custom drone. Users can change the size, shape, and structure of their drone based on the specific needs they have for payload, cost, flight time, battery usage, and other factors.
When the filmmaking pioneers Auguste and Louis Lumière screened their 1895 film, "The Arrival of a Train at La Ciotat," audiences were so frightened by the real appearance of the image that they screamed and got out of the way — or so a well-known anecdote goes. Today, as one enters a virtual reality (VR) space — such as that conjured by MIT Visiting Artist Karim Ben Khelifa in his vanguard project "The Enemy" — it is not uncommon for participants to experience a similar shock at the sounds of footsteps, then the sudden presence of two soldiers in the room.
In recent years, computers have gotten remarkably good at recognizing speech and images: Think of the dictation software on most cellphones, or the algorithms that automatically identify people in photos posted to Facebook. But recognition of natural sounds — such as crowds cheering or waves crashing — has lagged behind. That’s because most automated recognition systems, whether they process audio or visual information, are the result of machine learning, in which computers search for patterns in huge compendia of training data. Usually, the training data first has to be annotated by hand, which is prohibitively expensive for all but the highest-demand applications.
MIT researchers and their colleagues have developed a new computational model of the human brain’s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed. The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face’s degree of rotation — say, 45 degrees from center — but not the direction — left or right.
Traffic is not just a nuisance for drivers: it’s also a public-health hazard and bad news for the economy. Transportation studies put the annual cost of congestion at $160 billion, which includes 7 billion hours of time lost to sitting in traffic and an extra 3 billion gallons of fuel burned. One way to improve traffic is through ride-sharing, and a new MIT study suggests that using carpooling options from companies like Uber and Lyft could reduce the number of taxis on the road by 75 percent without significantly impacting travel time.
Living in a dynamic physical world, it’s easy to forget how effortlessly we understand our surroundings. With minimal thought, we can figure out how scenes change and objects interact. But what’s second nature for us is still a huge problem for machines. With the limitless number of ways that objects can move, teaching computers to predict future actions can be difficult. Recently, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have moved a step closer, developing a deep-learning algorithm that, given a still image from a scene, can create a brief video that simulates the future of that scene.
CoolThink@JC, a four-year initiative of The Hong Kong Jockey Club Charities Trust, was launched today to empower the city’s primary school teachers and students with computational thinking skills, including coding. Developed through a collaboration among MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the Education University of Hong Kong, and City University of Hong Kong, the initiative’s eventual aim is to integrate computational thinking into all Hong Kong primary schools. Initially, CoolThink@JC will target over 16,500 students at 32 primary schools across the city.
Voters can then go to an online database that lists their encrypted receipt and shows that it matches up with the one they picked up at the ballot box. Watch Professor Rivest explain the concept on Numberphile:
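The check a voter performs amounts to membership on a public bulletin board: the encrypted ballots (or their digests) are posted, and a voter confirms the receipt in hand appears in the list. This is only a hypothetical illustration of that matching step — the names and the use of plain SHA-256 digests are assumptions, not the actual voting system's protocol:

```python
import hashlib

# Hypothetical public bulletin board: digests of the encrypted ballots cast.
posted_receipts = {
    hashlib.sha256(b"encrypted-ballot-1").hexdigest(),
    hashlib.sha256(b"encrypted-ballot-2").hexdigest(),
}

# The voter recomputes the digest of the receipt they took home
# and checks that it appears on the board.
my_receipt = hashlib.sha256(b"encrypted-ballot-2").hexdigest()
recorded = my_receipt in posted_receipts
print(recorded)
```

Because the receipt is encrypted, this check proves the ballot was recorded without revealing how the voter voted.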
On October 16, 2019, Prof. David Patterson, UC Berkeley professor emeritus, Google distinguished engineer, and RISC-V Foundation vice-chair, gave a Dertouzos Distinguished Lecture at CSAIL / MIT, entitled 'Domain Specific Architectures for Deep Neural Networks: Three Generations of Tensor Processing Units (TPUs).'
September 18, 2019 - Prof. Yoshua Bengio, professor at the University of Montreal and scientific director of Mila, gave a Dertouzos Distinguished Lecture at CSAIL entitled 'Learning High-Level Representations for Agents'
September 26, 2018 - Vladimir Vapnik of University of London and Columbia University gave a Dertouzos Distinguished Lecture titled "Learning Using Statistical Invariants (Revision of Machine Learning Problem)"