Google AI’s Jeff Dean discusses using deep learning to solve challenging problems

Google AI’s Jeff Dean has a seemingly straightforward objective: he wants to use a collection of trainable mathematical units organized in layers to solve complicated tasks that will ultimately benefit many parts of society.

While this is difficult in practice, Dean, who leads research on the Google Brain project, is making it a reality using a tool called “deep learning.” This was the topic of a seminar Dean led this week at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), where he spoke about how deep learning can be applied to many facets of our lives.
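
To make the idea of “trainable mathematical units organized in layers” concrete, here is a minimal sketch, not code from the talk, of how such a stack of layers might be written in TensorFlow, the framework mentioned later in the article; the layer sizes and task are purely illustrative.

```python
import tensorflow as tf

# Minimal sketch (illustrative only): "trainable mathematical units organized
# in layers" as a small feed-forward network. Each Dense layer is a set of
# learnable weights followed by a nonlinearity; stacking several of them is
# what makes the model "deep."
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(64,)),  # layer 1
    tf.keras.layers.Dense(64, activation="relu"),                      # layer 2
    tf.keras.layers.Dense(10, activation="softmax"),                   # output over 10 classes
])

# Training adjusts every weight in every layer to reduce prediction error.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```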

Over the past few years, Dean has overseen more than 1,000 deep learning projects underlying many of Google’s products, including YouTube, Maps, and Photos. During the discussion, Dean described the field’s recent and rapid advances, including new network architectures and training techniques, a tighter integration of computer vision and robotics, and a nearly 90 percent reduction in error rates.

These technical advances are now enabling broader applications: helping to tackle the grand engineering challenges of our time, including securing cyberspace, advancing personalized learning, restoring and improving urban infrastructure, advancing health informatics, and engineering better medicines.

Dean went on to describe how predictive models can improve patient care and urban infrastructure, and even make sense of genomic data. For instance, a doctor could use deep learning models trained on electronic health records to better predict dangerous, life-threatening outcomes in hospitals. A vision system could help a self-driving car predict how the objects around it will move. A chemist could use deep learning to predict the properties of molecules. And a geneticist could apply these tools to genomic data for drug development, identifying which genetic variants may be linked to certain illnesses.
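
As one hedged illustration of the kind of hospital-outcome prediction Dean describes, a sketch like the one below trains a small classifier on patient features; the feature set, synthetic data, and model shape are hypothetical, not drawn from any real health-record system or from Google’s work.

```python
import numpy as np
import tensorflow as tf

# Hypothetical sketch: predict an adverse outcome (1) vs. none (0) from a few
# numeric features that might come from an electronic health record.
# The features and labels below are synthetic and purely illustrative.
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 4)).astype("float32")   # e.g. age, heart rate, blood pressure, lab value
labels = (features[:, 1] + features[:, 3] > 1.0).astype("float32")  # made-up outcome rule

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # probability of the adverse outcome
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(features, labels, epochs=5, verbose=0)

# A new patient's feature vector yields a risk score a clinician could act on.
print(model.predict(features[:1]))
```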

He noted that despite all this progress, there is no shortage of work still to be done to ensure these models are fair and unbiased.

“While deep learning is transforming how we make computers, things like algorithmic fairness are still something that we need to be mindful of,” says Dean. “We get data from the world as it is, not as we’d like it to be.”

He also spoke about new computational hardware called Tensor Processing Units, which is specialized for accelerating machine learning computations. TensorFlow, Google’s machine learning framework, represents computation as a graph, and Dean noted it powers applications as varied as fitness sensors for cows, which give farmers real-time feedback about which animals need attention based on their physical data.
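
To make the “computation as a graph” idea concrete, a generic TensorFlow sketch is shown below; the sensor readings and the scoring function are made up for illustration and are not from Google’s cattle-monitoring work.

```python
import tensorflow as tf

# Illustrative sketch: tf.function traces this Python function into a
# TensorFlow dataflow graph, the representation that hardware such as TPUs
# can accelerate. The "sensor readings" here are invented numbers.
@tf.function
def activity_score(readings, weight, bias):
    # One graph node multiplies, one adds, one averages.
    return tf.reduce_mean(readings * weight + bias)

readings = tf.constant([0.2, 0.9, 0.4, 0.7])
print(activity_score(readings, tf.constant(2.0), tf.constant(0.1)))
```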

“Computers have gone from fuzzily seeing the world to now seeing it in more complex ways,” says Dean. “By improving our mathematical models, we’re able to make even faster systems with wider deployment, which will lead to many more breakthroughs across a wide range of domains.”