It can be hard to keep track of all the numbers, statistics, and charts swirling around the internet -- we’re inundated with information that can be rapidly disseminated and dissected. To cut through some of the sludge, here’s a selection of recent computer science-related efforts to fight COVID-19.
A new MIT study finds “health knowledge graphs,” which show relationships between symptoms and diseases and are intended to help with clinical diagnosis, can fall short for certain conditions and patient populations. The results also suggest ways to boost their performance.
Artificial intelligence (AI) in the form of “neural networks” is increasingly used in technologies like self-driving cars to see and recognize objects. Such systems could even help with tasks like identifying explosives in airport security lines.
Even as robots become increasingly common, they remain incredibly difficult to make. From designing and modeling to fabricating and testing, the process is slow and costly: Even one small change can mean days or weeks of rethinking and revising important hardware. But what if there were a way to let non-experts craft different robotic designs — in one sitting?
We’ve all experienced two hugely frustrating things on YouTube: our video either suddenly gets pixelated, or it stops entirely to rebuffer. Both happen because of special algorithms that break videos into small chunks that load as you go. If your internet is slow, YouTube might make the next few seconds of video lower resolution to make sure you can still watch uninterrupted — hence, the pixelation. If you try to skip ahead to a part of the video that hasn’t loaded yet, your video has to stall in order to buffer that part.
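As a rough illustration of how such an algorithm might choose quality, here is a minimal sketch of adaptive-bitrate chunk selection. The bitrate ladder, thresholds, and function name are invented for the example; this is the general idea, not YouTube’s actual logic.

```python
# Minimal sketch of adaptive-bitrate (ABR) chunk selection; the ladder
# and threshold values are illustrative, not any real player's algorithm.

BITRATE_LADDER_KBPS = [400, 1000, 2500, 5000]  # e.g., roughly 240p .. 1080p

def pick_next_chunk_bitrate(measured_throughput_kbps, buffer_seconds):
    """Choose the highest bitrate the connection can sustain.

    A safety margin avoids rebuffering when throughput dips, and a
    nearly empty buffer forces the cheapest rung of the ladder.
    """
    if buffer_seconds < 2.0:  # about to stall: grab the cheapest chunk
        return BITRATE_LADDER_KBPS[0]
    safe_rate = 0.8 * measured_throughput_kbps  # leave headroom for jitter
    candidates = [b for b in BITRATE_LADDER_KBPS if b <= safe_rate]
    return candidates[-1] if candidates else BITRATE_LADDER_KBPS[0]

print(pick_next_chunk_bitrate(measured_throughput_kbps=1800, buffer_seconds=12))  # 1000
```

When throughput drops, the function steps down the ladder (the pixelation you see); when the buffer is about to run dry, it falls back to the lowest bitrate rather than stalling.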
More than 50 million Americans suffer from sleep disorders, and diseases including Parkinson’s and Alzheimer’s can also disrupt sleep. Diagnosing and monitoring these conditions usually requires attaching electrodes and a variety of other sensors to patients, which can further disrupt their sleep.
Today’s 3-D printers have a resolution of 600 dots per inch, which means that they could pack a billion tiny cubes of different materials into a cube measuring just 1.67 inches on a side. Such precise control of printed objects’ microstructure gives designers commensurate control of the objects’ physical properties — such as their density or strength, or the way they deform when subjected to stresses. But evaluating the physical effects of every possible combination of even just two materials, for an object consisting of tens of billions of cubes, would be prohibitively time consuming.
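The figure is easy to sanity-check: at 600 voxels per inch, a cube holding a billion (1000³) voxels measures 1000/600 ≈ 1.67 inches on a side.

```python
# Back-of-the-envelope check of the voxel count above.
dpi = 600                      # printer resolution: 600 voxels per inch
side_in = 1000 / dpi           # cube side holding 1000^3 voxels
voxels = (dpi * side_in) ** 3  # total voxels in that cube
print(f"{side_in:.2f} in per side -> {voxels:.0e} voxels")  # 1.67 in -> 1e+09
```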
The data captured by today’s digital cameras is often treated as the raw material of a final image. Before uploading pictures to social networking sites, even casual cellphone photographers might spend a minute or two balancing color and tuning contrast, with one of the many popular image-processing programs now available.
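For a sense of what those minute-or-two adjustments amount to in code, here is a small sketch using the Pillow imaging library; the file names and enhancement factors are arbitrary placeholders, not any particular app’s algorithm.

```python
# Hedged illustration of basic color and contrast tuning with Pillow.
from PIL import Image, ImageEnhance

img = Image.open("photo.jpg")                   # hypothetical input file
img = ImageEnhance.Color(img).enhance(1.15)     # modest saturation boost
img = ImageEnhance.Contrast(img).enhance(1.10)  # gentle contrast lift
img.save("photo_tuned.jpg")
```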
In recent years, engineers have been developing new technologies to help robots and humans move faster and jump higher. Soft, elastic materials store energy in these devices and, when released carefully, enable elegant dynamic motions: robots leap over obstacles, and prosthetics empower sprinting. But a fundamental challenge remains in developing these technologies: scientists spend long hours building and testing prototypes that can reliably move in specific ways so that, for example, a robot lands right-side up after a jump.
Almost every object we use is developed with computer-aided design (CAD). Ironically, while CAD programs are good for creating designs, using them is actually very difficult and time-consuming if you’re trying to improve an existing design to make the best possible product.
Being able to both walk and take flight is typical in nature — many birds, insects, and other animals can do both. If we could program robots with similar versatility, it would open up many possibilities: Imagine machines that could fly into construction areas or disaster zones that aren’t near roads and then squeeze through tight spaces on the ground to transport objects or rescue people.
Eight years ago, Ted Adelson’s research group at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new sensor technology, called GelSight, that uses physical contact with an object to provide a remarkably detailed 3-D map of its surface. Now, by mounting GelSight sensors on the grippers of robotic arms, two MIT teams have given robots greater sensitivity and dexterity. The researchers presented their work in two papers at the International Conference on Robotics and Automation last week.
Computer scientists have been working for decades on automatic navigation systems to aid the visually impaired, but it’s been difficult to come up with anything as reliable and easy to use as the white cane, the type of metal-tipped cane that visually impaired people frequently use to identify clear walking paths. White canes have a few drawbacks, however. One is that the obstacles they come in contact with are sometimes other people. Another is that they can’t identify certain types of objects, such as tables or chairs, or determine whether a chair is already occupied.
A reaction to the 2008 financial crisis, Bitcoin is a digital-currency scheme designed to wrest control of the monetary system from central banks. With Bitcoin, anyone can mint money, provided he or she can complete a complex computation quickly enough. Through a set of clever protocols, that computational hurdle prevents the system from being co-opted by malicious hackers.
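That “complex computation” is a proof of work: searching for a value whose hash meets a target. Below is a toy sketch of the idea; Bitcoin’s production scheme actually double-hashes a block header against a dynamically adjusted difficulty target, so treat this as illustration only.

```python
# Toy proof-of-work: find a nonce whose SHA-256 hash starts with a given
# number of zero hex digits. Illustrates the idea, not Bitcoin's real scheme.
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):  # found a valid proof
            return nonce
        nonce += 1

nonce = mine("example-block", difficulty=4)
print("nonce:", nonce)  # anyone can verify the proof with a single hash
```

The asymmetry is the point: finding the nonce takes many hash attempts, but checking a claimed solution takes just one, so honest participants can cheaply verify each other’s work.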
Most robots are programmed using one of two methods: learning from demonstration, in which they watch a task being done and then replicate it, or via motion-planning techniques such as optimization or sampling, which require a programmer to explicitly specify a task’s goals and constraints.
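To make the second approach concrete, here is a deliberately tiny sketch of what “explicitly specifying a task’s goals and constraints” can look like: a goal position, one circular obstacle as the constraint, and random sampling to find a path around it. The numbers and setup are invented for the example; real planners (RRT, trajectory optimization) are far more sophisticated.

```python
# Toy sampling-based planner on a 2D plane with one circular obstacle.
import math
import random

GOAL = (9.0, 9.0)
OBSTACLE = ((5.0, 5.0), 1.5)  # center, radius: the constraint to respect

def collision_free(p):
    (cx, cy), r = OBSTACLE
    return math.dist(p, (cx, cy)) > r

def lerp(a, b, t):
    """Point a fraction t of the way from a to b."""
    return (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))

def plan(start, goal, tries=10_000):
    """Sample a waypoint so start -> waypoint -> goal avoids the obstacle."""
    for _ in range(tries):
        wp = (random.uniform(0, 10), random.uniform(0, 10))
        segments = ((start, wp), (wp, goal))
        # Coarsely check each segment at 11 sample points.
        if all(collision_free(lerp(a, b, t / 10))
               for a, b in segments for t in range(11)):
            return [start, wp, goal]
    return None  # no collision-free path found within the sample budget

print(plan((1.0, 1.0), GOAL))
```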
When Alphabet executive chairman Eric Schmidt started programming in 1969 at the age of 14, there was no explicit title for what he was doing. “I was just a nerd,” he says. But now computer science has fundamentally transformed fields like transportation, health care and education, and also provoked many new questions. What will artificial intelligence (AI) be like in 10 years? How will it impact tomorrow’s jobs? What’s next for autonomous cars?
We’ve long known that blood pressure, breathing, body temperature and pulse provide an important window into the complexities of human health. But a growing body of research suggests that another vital sign – how fast you walk – could be a better predictor of health issues like cognitive decline, falls, and even certain cardiac or pulmonary diseases.
Hyper-connectivity has changed the way we communicate, wait, and productively use our time. Even in a world of 5G wireless and “instant” messaging, there are countless moments throughout the day when we’re waiting for messages, texts, and Snapchats to refresh. But our frustration with waiting a few extra seconds for our emails to push through doesn’t mean we have to simply stand by.
For robots to do what we want, they need to understand us. Too often, this means having to meet them halfway: teaching them the intricacies of human language, for example, or giving them explicit commands for very specific tasks. But what if we could develop robots that were a more natural extension of us and that could actually do whatever we are thinking?
It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?
On March 20, 2020, Dr. Michael Z. Lin of Stanford University gave a virtual presentation on the basic biology of coronaviruses and the disease COVID-19, projections for the current epidemic, and a review of medications currently in clinical trials.
On October 16, 2019, Prof. David Patterson, UC Berkeley professor emeritus, Google distinguished engineer, and RISC-V Foundation vice-chair, gave a Dertouzos Distinguished Lecture at CSAIL / MIT, entitled "Domain Specific Architectures for Deep Neural Networks: Three Generations of Tensor Processing Units (TPUs)."
September 18, 2019 - Prof. Yoshua Bengio, professor at the University of Montreal and Scientific Director of Mila, gave a Dertouzos Distinguished Lecture at CSAIL entitled "Learning High-Level Representations for Agents."
September 26, 2018 - Vladimir Vapnik of the University of London and Columbia University gave a Dertouzos Distinguished Lecture titled "Learning Using Statistical Invariants (Revision of Machine Learning Problem)."