Deep learning has been successfully applied to the perception aspects of the autonomous driving task, such as lane and vehicle detection, as well as to full end-to-end control. While appealingly straightforward, these techniques amount to a form of reactive control, and their output representation is not usable for decision making or autonomous navigation. Current approaches to autonomous navigation require either pre-computed HD maps or extensive sensor suites (e.g., LIDAR and radar). Addressing these challenges requires not only developing novel algorithms, but also formulating appropriate learning goals in the context of sparse and delayed reward signals, robust methods for system verification and safe operation, and metrics for uncertainty and confidence. We are currently focusing on novel deep learning based algorithms that predict a control probability distribution for an autonomous vehicle given only a single input image frame. We explore ways that this approach supports integration of parallel autonomous systems for shared human-robot navigation, path planning, and decision making. We rigorously evaluate our approach both in simulation and on a real autonomous vehicle across a wide variety of driving conditions.
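To make the output representation concrete: instead of regressing a single steering command, the network can emit a probability distribution over discretized control values. The following is a minimal illustrative sketch of that output form only, not our actual model; the bin range, shapes, and the toy linear "head" standing in for a CNN backbone are all assumptions for illustration.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

# Hypothetical discretization: steering angles binned over an illustrative range.
STEERING_BINS = np.linspace(-0.5, 0.5, 11)  # radians, assumed for this sketch

def predict_steering_distribution(image, weights, bias):
    """Toy linear head: flatten the frame, compute logits, return a softmax
    distribution over steering bins. A real model would use a CNN backbone;
    this only demonstrates the distributional output representation."""
    features = image.reshape(-1)
    logits = weights @ features + bias
    return softmax(logits)

# Stand-in for a single camera frame and randomly initialized head weights.
rng = np.random.default_rng(0)
frame = rng.random((8, 8))
W = rng.normal(size=(len(STEERING_BINS), frame.size)) * 0.1
b = np.zeros(len(STEERING_BINS))

p = predict_steering_distribution(frame, W, b)
```

Because the output is a full distribution rather than a point estimate, downstream planners can read off both a control choice (e.g., the argmax bin) and a confidence signal (e.g., the distribution's entropy), which is what makes the representation usable for decision making.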
If you would like to contact us about our work, please scroll down to the people section and click on one of the group leads' people pages, where you can reach out to them directly.