News

Wearable system helps visually impaired users navigate

Computer scientists have been working for decades on automatic navigation systems to aid the visually impaired, but it’s been difficult to come up with anything as reliable and easy to use as the white cane, the type of metal-tipped cane that visually impaired people frequently use to identify clear walking paths. White canes have a few drawbacks, however. One is that the obstacles they come in contact with are sometimes other people. Another is that they can’t identify certain types of objects, such as tables or chairs, or determine whether a chair is already occupied.

Using Bitcoin to prevent identity theft

A reaction to the 2008 financial crisis, Bitcoin is a digital-currency scheme designed to wrest control of the monetary system from central banks. With Bitcoin, anyone can mint money, provided he or she can complete a complex computation quickly enough. Through a set of clever protocols, that computational hurdle prevents the system from being co-opted by malicious hackers.
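The "complex computation" here is a proof-of-work puzzle: repeatedly hashing a block of data with a changing nonce until the hash meets a difficulty target. A minimal sketch of the idea (simplified for illustration — real Bitcoin hashes a binary block header with double SHA-256 and uses a numeric target, not a hex-prefix check):

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int) -> int:
    """Find a nonce such that SHA-256(block_data + nonce)
    starts with `difficulty` hex zeros (a simplified target)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Low difficulty so this runs in well under a second; Bitcoin's
# real difficulty makes the equivalent search astronomically harder.
nonce = proof_of_work("example-block", 4)
digest = hashlib.sha256(f"example-block{nonce}".encode()).hexdigest()
print(nonce, digest)
```

Because finding a valid nonce requires brute-force search while checking one takes a single hash, honest participants can cheaply verify each other's work — the asymmetry that keeps the system honest.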

Eric Schmidt visits MIT to discuss computing, artificial intelligence, and the future of technology

When Alphabet executive chairman Eric Schmidt started programming in 1969 at the age of 14, there was no explicit title for what he was doing. “I was just a nerd,” he says. But now computer science has fundamentally transformed fields like transportation, health care and education, and also provoked many new questions. What will artificial intelligence (AI) be like in 10 years? How will it impact tomorrow’s jobs? What’s next for autonomous cars?

Learn a language while you wait for WiFi

Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustrations with waiting a few extra seconds for our emails to push through don’t mean we have to simply stand by.

Brain-controlled robots

For robots to do what we want, they need to understand us. Too often, this means having to meet them halfway: teaching them the intricacies of human language, for example, or giving them explicit commands for very specific tasks. But what if we could develop robots that were a more natural extension of us and that could actually do whatever we are thinking?

Ingestible robots, glasses-free 3-D, and computers that explain themselves

Machines that predict the future, robots that patch wounds, and wireless emotion-detectors are just a few of the exciting projects that came out of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) this year. Here’s a sampling of 16 highlights from 2016 that span the many computer science disciplines that make up CSAIL.

Robots for exploring Mars — and your stomach

Design your own custom drone

This fall’s new FAA regulations have made drone flight easier than ever for both companies and consumers. But what if the drones out on the market aren’t exactly what you want? A new system from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is the first to allow users to design, simulate, and build their own custom drone. Users can change the size, shape, and structure of their drone based on the specific needs they have for payload, cost, flight time, battery usage, and other factors.

Face to face with "The Enemy"

When the filmmaking pioneers Auguste and Louis Lumière screened their 1895 film, "The Arrival of a Train at La Ciotat," audiences were so frightened by the realism of the image that they screamed and got out of the way — or so a well-known anecdote goes. Today, as one enters a virtual reality (VR) space — such as that conjured by MIT Visiting Artist Karim Ben Khelifa in his vanguard project "The Enemy" — it is not uncommon for participants to experience a similar shock at the sound of footsteps, and then the sudden presence of two soldiers in the room.

Computer learns to recognize sounds by watching video

In recent years, computers have gotten remarkably good at recognizing speech and images: Think of the dictation software on most cellphones, or the algorithms that automatically identify people in photos posted to Facebook. But recognition of natural sounds — such as crowds cheering or waves crashing — has lagged behind. That’s because most automated recognition systems, whether they process audio or visual information, are the result of machine learning, in which computers search for patterns in huge compendia of training data. Usually, the training data has to be first annotated by hand, which is prohibitively expensive for all but the highest-demand applications.

How the brain recognizes faces

MIT researchers and their colleagues have developed a new computational model of the human brain’s face-recognition mechanism that seems to capture aspects of human neurology that previous models have missed. The researchers designed a machine-learning system that implemented their model, and they trained it to recognize particular faces by feeding it a battery of sample images. They found that the trained system included an intermediate processing step that represented a face’s degree of rotation — say, 45 degrees from center — but not the direction — left or right.

Study: carpooling apps could reduce taxi traffic 75%

Traffic is not just a nuisance for drivers: it’s also a public-health hazard and bad news for the economy. Transportation studies put the annual cost of congestion at $160 billion, which includes 7 billion hours of time lost to sitting in traffic and an extra 3 billion gallons of fuel burned. One way to improve traffic is through ride-sharing, and a new MIT study suggests that using carpooling options from companies like Uber and Lyft could reduce the number of taxis on the road by 75 percent without significantly impacting travel time.

Creating videos of the future

Living in a dynamic physical world, it’s easy to forget how effortlessly we understand our surroundings. With minimal thought, we can figure out how scenes change and objects interact. But what’s second nature for us is still a huge problem for machines. With the limitless number of ways that objects can move, teaching computers to predict future actions can be difficult. Recently, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have moved a step closer, developing a deep-learning algorithm that, given a still image from a scene, can create a brief video that simulates the future of that scene.

Teaching Hong Kong students to embrace computational thinking

CoolThink@JC, a four-year initiative of The Hong Kong Jockey Club Charities Trust, was launched today to empower the city’s primary school teachers and students with computational thinking skills, including coding. Developed through a collaboration among MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), the Education University of Hong Kong, and City University of Hong Kong, the program's eventual aim is to integrate computational thinking into all Hong Kong primary schools. Initially, CoolThink@JC will target over 16,500 students at 32 primary schools across the city.

Prepping a robot for its journey to Mars

Sarah Hensley is preparing an astronaut named Valkyrie for a mission to Mars. It is 6 feet tall, weighs 300 pounds, and is equipped with an extended chest cavity that makes it look distinctly female. Hensley spends much of her time this semester analyzing the movements of one of Valkyrie's arms. As a fourth-year electrical engineering student at MIT, Hensley is working with a team of CSAIL researchers to prepare Valkyrie, a humanoid robot also known as R5, for future space missions. As a teenager in New Jersey, Hensley loved to read in her downtime, particularly Isaac Asimov’s classic robot series. “I’m a huge science fiction nerd — and now I’m actually getting to work with a robot that’s real and not just in books. That’s like, wow.”

Designing for 3-D printing

3-D printing has progressed over the last decade to include multi-material fabrication, enabling production of powerful, functional objects. While many advances have been made, it still has been difficult for non-programmers to create objects made of many materials (or mixtures of materials) without a more user-friendly interface. But this week, a team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) will present “Foundry,” a system for custom-designing a variety of 3-D printed objects with multiple materials.

3-D-printed robots with shock-absorbing skins

Anyone who’s watched drone videos or an episode of “BattleBots” knows that robots can break — and often it’s because they don’t have the proper padding to protect themselves. But this week researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) will present a new method for 3-D printing soft materials that make robots safer and more precise in their movements — and that could be used to improve the durability of drones, phones, shoes, helmets, and more. The team’s “programmable viscoelastic material” (PVM) technique allows users to program every single part of a 3-D-printed object to the exact levels of stiffness and elasticity they want, depending on the task they need it for.

Detecting emotions with wireless signals

As many a relationship book can tell you, understanding someone else’s emotions can be a difficult task. Facial expressions aren’t always reliable: a smile can conceal frustration, while a poker face might mask a winning hand. But what if technology could tell us how someone is really feeling? Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed “EQ-Radio,” a device that can detect a person’s emotions using wireless signals.
