The Dexterity Network: Deep Learning to Plan Robust Robot Grasps using Datasets of Synthetic Point Clouds, Analytic Grasp Metrics, and 3D Object Models
Speaker:
Jeff Mahler
UC Berkeley
Abstract:
Reliable robot grasping across a wide variety of objects is challenging due to imprecision in sensing, which leads to uncertainty about properties such as object shape, pose, mass, and friction. Recent results suggest that deep learning from millions of labeled grasps and images can be used to rapidly plan successful grasps across a diverse set of objects without explicit inference of physical properties, but training typically requires tedious hand-labeling or months of execution time. In this talk I present the Dexterity Network (Dex-Net), a framework to automatically synthesize training datasets containing millions of point clouds and robot grasps labeled with robustness to perturbations by analyzing contact models across thousands of 3D object CAD models. I will describe generative models for datasets of both parallel-jaw and suction-cup grasps. Experiments suggest that Convolutional Neural Networks trained from scratch on Dex-Net datasets can be used to plan grasps for novel objects in clutter with high precision on a physical robot.
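To make the dataset-labeling idea concrete, below is a minimal illustrative sketch (not the actual Dex-Net code; all names and parameters are hypothetical). It assigns a candidate parallel-jaw grasp a robustness score by Monte Carlo sampling perturbations of the contact locations and the friction coefficient, using a simple antipodal (friction-cone) check as a stand-in for the analytic grasp metrics analyzed in Dex-Net.

# Illustrative sketch only: Monte Carlo robustness labeling of a parallel-jaw
# grasp under perturbations in contact location (a proxy for pose/sensing error)
# and friction. Function and parameter names are hypothetical, not Dex-Net APIs.
import numpy as np

def antipodal_quality(c1, n1, c2, n2, mu):
    """Return True if the grasp axis lies inside both friction cones.

    c1, c2: contact points (3,); n1, n2: inward surface normals (3,); mu: friction coefficient.
    """
    axis = c2 - c1
    axis = axis / np.linalg.norm(axis)
    half_angle = np.arctan(mu)  # friction cone half-angle
    # The angle between the grasp axis and each inward contact normal must fall within the cone.
    in_cone_1 = np.arccos(np.clip(np.dot(axis, n1), -1.0, 1.0)) <= half_angle
    in_cone_2 = np.arccos(np.clip(np.dot(-axis, n2), -1.0, 1.0)) <= half_angle
    return in_cone_1 and in_cone_2

def robustness(c1, n1, c2, n2, mu=0.5, n_samples=100,
               pose_sigma=0.002, friction_sigma=0.1, rng=None):
    """Estimate robustness as the fraction of perturbed samples that still
    satisfy the antipodal condition (a simplified stand-in for
    probability-of-success labels computed from contact analysis)."""
    rng = rng or np.random.default_rng(0)
    successes = 0
    for _ in range(n_samples):
        # Perturb contact locations to model pose and sensing uncertainty.
        d1 = c1 + rng.normal(scale=pose_sigma, size=3)
        d2 = c2 + rng.normal(scale=pose_sigma, size=3)
        # Perturb the friction coefficient.
        mu_s = max(1e-3, mu + rng.normal(scale=friction_sigma))
        if antipodal_quality(d1, n1, d2, n2, mu_s):
            successes += 1
    return successes / n_samples

if __name__ == "__main__":
    # Two antipodal contacts on opposite faces of a small block.
    c1, n1 = np.array([-0.02, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
    c2, n2 = np.array([0.02, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
    print("robustness label:", robustness(c1, n1, c2, n2))

In a full pipeline of the kind the abstract describes, such robustness labels would be paired with rendered point clouds of the corresponding object and grasp to form the synthetic training examples for a Convolutional Neural Network.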
Bio:
Jeff Mahler is a Ph.D. student at the University of California at Berkeley advised by Prof. Ken Goldberg and a member of the AUTOLAB and Berkeley Artificial Intelligence Research Lab. His current research is on the Dexterity Network (Dex-Net), a project that aims to train robot grasping policies from massive synthetic datasets of labeled point clouds and grasps generated using stochastic contact analysis across thousands of 3D object CAD models. He has also studied deep learning from demonstration and control for surgical robots. He received the National Defense Science and Engineering Graduate Fellowship in 2015 and cofounded the 3D scanning startup Lynx Laboratories in 2012 as an undergraduate at the University of Texas at Austin.