Characterizing the Sensitivity of Vision-in-the-Loop Driving Controllers Using CG
We work on understanding and improving the vision algorithms used in self-driving cars by introducing synthetic, computer-generated (CG) datasets into the loop.
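As a toy illustration of what "in the loop" means here, the sketch below closes a control loop around a synthetic camera: a procedurally rendered lane scanline feeds a trivial centroid-based lane detector and a proportional steering law, and sweeping the rendering's noise level measures how closed-loop tracking degrades. Everything in it, the scene model, the detector, and all numbers, is an illustrative assumption rather than our actual pipeline.

```python
# Hypothetical sketch: sweep a visual nuisance parameter (image noise) through a
# toy vision-in-the-loop lane follower and measure closed-loop tracking error.
import numpy as np

WIDTH = 101          # pixels per synthetic scanline
PX_PER_M = 25.0      # image scale: pixels per meter of lateral offset

def render_scanline(lateral_err_m, noise_std, rng):
    """Render a 1-D synthetic 'camera row': a bright lane marking plus noise."""
    xs = np.arange(WIDTH)
    center = WIDTH // 2 - lateral_err_m * PX_PER_M
    img = np.exp(-0.5 * ((xs - center) / 2.0) ** 2)    # Gaussian lane blob
    img += rng.normal(0.0, noise_std, WIDTH)           # sensor/scene noise
    return img

def detect_lane_offset(img):
    """Toy vision front end: intensity centroid -> lateral error estimate (m)."""
    w = np.clip(img, 0.0, None)
    if w.sum() < 1e-6:
        return 0.0                                     # detection failure
    centroid = (np.arange(WIDTH) * w).sum() / w.sum()
    return (WIDTH // 2 - centroid) / PX_PER_M

def closed_loop_rms(noise_std, steps=400, dt=0.05, gain=1.5, seed=0):
    """Run the vision-in-the-loop system; return RMS lateral error (m)."""
    rng = np.random.default_rng(seed)
    y, errs = 0.5, []                                  # start 0.5 m off-lane
    for _ in range(steps):
        y_hat = detect_lane_offset(render_scanline(y, noise_std, rng))
        y += dt * (-gain * y_hat)                      # P-control on the estimate
        errs.append(y)
    return float(np.sqrt(np.mean(np.square(errs))))

for sigma in [0.0, 0.05, 0.1, 0.2, 0.4]:
    print(f"noise_std={sigma:.2f}  rms_lateral_error={closed_loop_rms(sigma):.3f} m")
```

The same pattern scales to a full renderer: the only requirement is that the visual nuisance parameter (lighting, weather, texture) be a controllable input to the image generator, so that sensitivity can be measured on the closed loop rather than on the vision module in isolation.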
We aim to rigorously characterize the sensitivity of vision-in-the-loop driving controllers on increasingly complex visual tasks. While rooftop lidar provides a spectacular amount of high-rate geometric data about the environment, there are a number of tasks in an autonomous driving system where camera-based vision will inevitably play a dominant role: interpreting lane markings and road signs; coping with water, snow, and other inclement weather that can confuse a lidar; and handling construction zones (orange cones), police officers, pedestrians, and animals. Furthermore, vision sensors are often fused with depth returns from a laser and with other sensors as part of the vehicle and obstacle estimation algorithms.
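As a concrete instance of the fusion step mentioned above, the sketch below projects lidar returns into a pinhole camera image so that each depth measurement can be associated with a pixel. The calibration values K, R, and t are made-up placeholders, not any real vehicle's calibration.

```python
# Hypothetical sketch: associate lidar depth returns with camera pixels via a
# pinhole projection. Calibration (K, R, t) is illustrative, not real.
import numpy as np

K = np.array([[500.0,   0.0, 320.0],    # fx,  0, cx
              [  0.0, 500.0, 240.0],    #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # camera aligned with vehicle frame
t = np.array([0.0, 0.0, 0.0])            # camera at the vehicle origin

def project_lidar_to_image(points_vehicle):
    """Return (u, v, depth) for each lidar point in front of the camera."""
    pts_cam = (R @ points_vehicle.T).T + t      # vehicle frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]      # keep points ahead of the lens
    uvw = (K @ pts_cam.T).T                     # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]               # perspective divide
    return np.hstack([uv, pts_cam[:, 2:3]])     # pixel coords + depth (m)

# Example: three synthetic returns 10 m ahead, offset laterally by 1 m.
points = np.array([[-1.0, 0.0, 10.0],
                   [ 0.0, 0.0, 10.0],
                   [ 1.0, 0.0, 10.0]])
print(project_lidar_to_image(points))
```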