Human-in-the-Loop: Deep Learning for Shared Autonomy in Naturalistic Driving
Speaker
Lex Fridman
MIT
Abstract:
Localization, mapping, perception, control, and trajectory planning are components of autonomous vehicle design that have each seen considerable progress over the past three decades, especially since the first DARPA Grand Challenge. These are areas of robotics research focused on perceiving and interacting with the external world through outward-facing sensors and actuators. However, semi-autonomous driving is in many ways a human-centric activity in which the at-times distracted, irrational, or drowsy human may need to be kept in the loop of safe and intelligent autonomous vehicle operation through driver state sensing, communication, and shared control.
In this talk, I will present deep neural network approaches for various subtasks of supervised vehicle autonomy with a special focus on driver state sensing and how those approaches helped us in (1) the collection, analysis, and understanding of human behavior over 100,000 miles and 1 billion video frames of on-road semi-autonomous driving in Tesla vehicles and (2) the design of real-time driver assistance systems that bring the human back into the loop of safe shared autonomy.
Bio:
Lex Fridman is a postdoc at MIT, working on computer vision and deep learning approaches in the context of self-driving cars with a human in the loop. His work focuses on large-scale, real-world data, with the goal of building intelligent systems that have real-world impact. Lex received his BS, MS, and PhD from Drexel University, where he worked on applications of machine learning, computer vision, and decision fusion techniques in a number of fields, including robotics, active authentication, activity recognition, and optimal resource allocation on multi-commodity networks. Before joining MIT, Lex was at Google, working on deep learning and decision fusion methods for large-scale behavior-based authentication. Lex is a recipient of a CHI 2017 Best Paper Award.