EI Seminar - Beomjoon Kim - Making Robots See and Manipulate
Host
Terry Suh
Robot Locomotion Group, CSAIL
Abstract: Even with the recent advances in robot AI, we still do not see robots in our daily lives. Why is this? I argue it is because robots still lack the basic capabilities to see and manipulate a diverse set of objects. In this talk, I will introduce learning-based algorithms for manipulating diverse objects even when grasping is not an option. I will demonstrate that estimating the entire object shape is sufficient but not necessary, and introduce our contact-based object state representation, which affords both prehensile and non-prehensile motions, generalizes across diverse object shapes, and enables the exploitation of object geometries when needed. The key enabler is large-scale GPU-based simulation that can efficiently generate big data in a short amount of time. Unfortunately, when it comes to non-convex objects, these simulators slow down significantly. I will introduce a neural network-based contact detector that, unlike classical contact detection algorithms, leverages the parallel computation available on GPUs. This enables us to generate data 10x faster than state-of-the-art GPU simulators in contact-rich situations.
Zoom link: https://mit.zoom.us/j/7652207066.