For a mobile robot to operate autonomously in an unknown environment, it must actively construct a representation of space. To do so, the robot needs to position its sensors to gather information and reason about objects, while coping with occlusions and sensor noise. In this work, we propose a distributional spatial representation based on 3D geometric shapes (such as cylinders and cuboids), which compactly captures volumetric structure and provides a meaningful abstraction for reasoning about objects and viewpoints under uncertainty. We develop methods for inferring shape parameters from point clouds, for predicting viewpoint information over these shapes, and for robustly grasping objects.
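To make the idea of a distributional shape representation concrete, the sketch below shows one plausible way to fit a cylinder primitive to a point cloud and keep a Gaussian belief over its parameters. All names, the PCA-based fit, and the covariance model are illustrative assumptions, not the implementation developed in this work.

```python
# Illustrative sketch only: class/field names and the PCA-based fit are
# assumptions for exposition, not the authors' implementation.
import numpy as np
from dataclasses import dataclass

@dataclass
class DistributionalCylinder:
    """Gaussian belief over cylinder parameters (center, axis, radius, height)."""
    mean: np.ndarray   # 8-vector: center (3), unit axis (3), radius, height
    cov: np.ndarray    # 8x8 covariance capturing estimation uncertainty

def fit_cylinder(points: np.ndarray, noise_std: float = 0.005) -> DistributionalCylinder:
    """Crude cylinder fit to an (N, 3) point cloud, returning a parameter distribution."""
    center = points.mean(axis=0)
    centered = points - center
    # The principal direction of the cloud approximates the cylinder axis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0] / np.linalg.norm(vt[0])
    # Heights along the axis and radial distances from it.
    heights = centered @ axis
    radial = np.linalg.norm(centered - np.outer(heights, axis), axis=1)
    radius = radial.mean()
    height = heights.max() - heights.min()
    mean = np.concatenate([center, axis, [radius, height]])
    # Simple isotropic covariance from residual spread plus assumed sensor noise.
    resid_var = radial.var() + noise_std ** 2
    cov = np.eye(8) * resid_var
    return DistributionalCylinder(mean=mean, cov=cov)
```

Under this kind of representation, downstream viewpoint selection or grasp planning could, for example, sample parameter hypotheses from the covariance to account for occlusion and sensor noise when scoring candidate actions.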