Topics in supervised and self-supervised monocular scene understanding (4pm 12/09/2019 in 32-G449)

Speaker

Allan Raventos
Toyota Research Institute

Host

John Leonard
MIT - CSAIL
Abstract: Robust and fast scene understanding is critical for the
deployment of autonomous cars. Furthermore, the need to generalize at
test-time compels us to leverage vast amounts of driving data to train
our models. Supervised learning drives most of the models in the car
today, but curating large labeled datasets doesn't scale with the
rate at which we collect raw data. The question is clear: to supervise
or not to supervise? The ML team at TRI is making progress on both
fronts, but we are prioritizing ways to learn more with less
supervision. I will be presenting some of our latest results on two
projects: (1) fast end-to-end panoptic segmentation and (2)
self-supervised depth and pose estimation from monocular imagery. I will also
give a preview of some of our work towards generating large-scale
multi-modal datasets for autonomous driving.


Bio: Allan Raventos has been a member of the Machine Learning team at
Toyota Research Institute (TRI) since 2017, performing research on
scene understanding for autonomous vehicles. At TRI he has
contributed to all stages of the ML model stack: from dataset
curation, to model implementation and tuning, to deployment in a real
car. His main expertise is in object detection and extensions to
panoptic segmentation, with recent work on self-supervised monocular
depth estimation. He holds BSEE and MSEE degrees from Stanford
University (2017).