This past year, MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) was at the forefront of diverse technological innovations, spanning topics from healthcare and cybersecurity to self-driving cars.
Babies as young as 10 months can assess how much someone values a particular goal by observing how hard they are willing to work to achieve it, according to a new study from MIT and Harvard University.
Artificial intelligence (AI) in the form of “neural networks” is increasingly used in technologies like self-driving cars to see and recognize objects. Such systems could even help with tasks like identifying explosives in airport security lines.
Even as robots become increasingly common, they remain incredibly difficult to make. From designing and modeling to fabricating and testing, the process is slow and costly: Even one small change can mean days or weeks of rethinking and revising important hardware. But what if there were a way to let non-experts craft different robotic designs — in one sitting?
We’ve all experienced two hugely frustrating things on YouTube: our video either suddenly gets pixelated, or it stops entirely to rebuffer. Both happen because of special algorithms that break videos into small chunks that load as you go. If your internet is slow, YouTube might make the next few seconds of video lower resolution to make sure you can still watch uninterrupted — hence, the pixelation. If you try to skip ahead to a part of the video that hasn’t loaded yet, your video has to stall in order to buffer that part.
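The trade-off described above — dropping quality to avoid a stall — is the core of adaptive bitrate selection. The sketch below illustrates that general idea; the function name, bitrate ladder, and thresholds are all illustrative assumptions, not YouTube’s actual algorithm.

```python
# Minimal sketch of buffer-based adaptive bitrate (ABR) selection.
# The bitrate ladder and thresholds below are made-up examples.

BITRATES_KBPS = [300, 750, 1200, 2400]  # available chunk qualities

def pick_bitrate(buffer_seconds, throughput_kbps):
    """Choose the next chunk's quality from buffer level and measured bandwidth."""
    if buffer_seconds < 5:
        # Buffer nearly empty: take the lowest quality to avoid a rebuffer stall.
        return BITRATES_KBPS[0]
    # Otherwise pick the highest bitrate the throughput supports, with some
    # headroom so the buffer keeps growing rather than draining.
    affordable = [b for b in BITRATES_KBPS if b <= 0.8 * throughput_kbps]
    return affordable[-1] if affordable else BITRATES_KBPS[0]

print(pick_bitrate(2, 5000))   # prints 300: low buffer forces the pixelated tier
print(pick_bitrate(20, 2000))  # prints 1200: healthy buffer, 2 Mbps link
```

Real players weigh many more signals (throughput history, chunk sizes, startup vs. steady state), but every heuristic ultimately makes this same quality-versus-stall choice per chunk.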
More than 50 million Americans suffer from sleep disorders, and diseases including Parkinson’s and Alzheimer’s can also disrupt sleep. Diagnosing and monitoring these conditions usually requires attaching electrodes and a variety of other sensors to patients, which can further disrupt their sleep.
Today’s 3-D printers have a resolution of 600 dots per inch, which means that they could pack a billion tiny cubes of different materials into a cube measuring just 1.67 inches on a side. Such precise control of printed objects’ microstructure gives designers commensurate control of the objects’ physical properties — such as their density or strength, or the way they deform when subjected to stresses. But evaluating the physical effects of every possible combination of even just two materials, for an object consisting of tens of billions of cubes, would be prohibitively time consuming.
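The voxel counts above follow directly from the stated resolution; a quick back-of-the-envelope check (assuming cubic voxels at 600 dpi):

```python
# Verify the voxel arithmetic: at 600 dpi, each voxel is 1/600 inch on a side.
dpi = 600
voxels_per_cubic_inch = dpi ** 3      # 600^3 = 216,000,000 voxels per cubic inch
side_for_billion = 1000 / dpi         # 1000^3 = 1e9 voxels -> cube side in inches

print(voxels_per_cubic_inch)          # prints 216000000
print(round(side_for_billion, 2))     # prints 1.67
```

So a billion voxels fit in a cube roughly 1.67 inches on a side, and an object of tens of billions of voxels is only a few hundred cubic inches — which is why exhaustively simulating every material assignment is infeasible.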
The data captured by today’s digital cameras is often treated as the raw material of a final image. Before uploading pictures to social networking sites, even casual cellphone photographers might spend a minute or two balancing color and tuning contrast, with one of the many popular image-processing programs now available.
In recent years, engineers have been developing new technologies to enable robots and humans to move faster and jump higher. Soft, elastic materials store energy in these devices and, if that energy is released carefully, enable elegant dynamic motions: robots leap over obstacles, and prosthetics empower sprinting. A fundamental challenge remains in developing these technologies. Scientists spend long hours building and testing prototypes that can reliably move in specific ways so that, for example, a robot lands right-side up after a jump.
Almost every object we use is developed with computer-aided design (CAD). Ironically, while CAD programs are good for creating designs, using them is actually very difficult and time-consuming if you’re trying to improve an existing design to make the best possible product.
Being able to both walk and take flight is typical in nature — many birds, insects, and other animals can do both. If we could program robots with similar versatility, it would open up many possibilities: Imagine machines that could fly into construction areas or disaster zones that aren’t near roads and then squeeze through tight spaces on the ground to transport objects or rescue people.
Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface. Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.
Computer scientists have been working for decades on automatic navigation systems to aid the visually impaired, but it’s been difficult to come up with anything as reliable and easy to use as the white cane, the type of metal-tipped cane that visually impaired people frequently use to identify clear walking paths. White canes have a few drawbacks, however. One is that the obstacles they come in contact with are sometimes other people. Another is that they can’t identify certain types of objects, such as tables or chairs, or determine whether a chair is already occupied.
A reaction to the 2008 financial crisis, Bitcoin is a digital-currency scheme designed to wrest control of the monetary system from central banks. With Bitcoin, anyone can mint money, provided he or she can complete a complex computation quickly enough. Through a set of clever protocols, that computational hurdle prevents the system from being co-opted by malicious hackers.
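The “complex computation” above is a proof of work: finding an input whose hash falls below a target. A toy hashcash-style sketch of the idea follows; the difficulty, encoding, and function names are illustrative and far easier than Bitcoin’s real target.

```python
import hashlib

# Toy proof of work: find a nonce so that SHA-256(data + nonce) starts
# with `difficulty` hex zeros. Illustrative only; Bitcoin uses a much
# harder target over double-SHA-256 of a structured block header.

def mine(block_data: str, difficulty: int = 3) -> int:
    """Brute-force a nonce meeting the difficulty target."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("example block", difficulty=3)
digest = hashlib.sha256(f"example block{nonce}".encode()).hexdigest()
print(digest.startswith("000"))  # prints True
```

The asymmetry is the point: finding the nonce takes thousands of hash attempts on average, but anyone can verify it with a single hash — which is what makes cheating the system computationally expensive.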
Most robots are programmed using one of two methods: learning from demonstration, in which they watch a task being done and then replicate it, or via motion-planning techniques such as optimization or sampling, which require a programmer to explicitly specify a task’s goals and constraints.
When Alphabet executive chairman Eric Schmidt started programming in 1969 at the age of 14, there was no explicit title for what he was doing. “I was just a nerd,” he says. But now computer science has fundamentally transformed fields like transportation, health care and education, and also provoked many new questions. What will artificial intelligence (AI) be like in 10 years? How will it impact tomorrow’s jobs? What’s next for autonomous cars?
September 26, 2018 - Vladimir Vapnik of the University of London and Columbia University gave a Dertouzos Distinguished Lecture titled "Learning Using Statistical Invariants (Revision of Machine Learning Problem)."