Indoor Segmentation and Support Inference from RGBD Images
Speaker: Nathan Silberman, NYU
Date: Monday, February 11 2013
Time: 2:00PM to 3:00PM
Host: Antonio Torralba, Jianxiong Xiao, CSAIL
Contact: Jianxiong Xiao, 617-253-4143, firstname.lastname@example.org
We present an approach to interpret the major surfaces, objects, and support relations of an indoor scene from an RGBD image. Most existing work ignores physical interactions or is applied only to tidy rooms and hallways. Our goal is to parse typical, often messy, indoor scenes into floor, walls, supporting surfaces, and object regions, and to recover support relationships. One of our main interests is to better understand how 3D cues can best inform a structured 3D interpretation. We also contribute a novel integer programming formulation to infer physical support relations. We offer a new dataset of 1449 RGBD images, capturing 464 diverse indoor scenes, with detailed annotations. Our experiments demonstrate our ability to infer support relations in complex scenes and verify that our 3D scene cues and inferred support lead to better object segmentation.
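As a rough illustration of the support-inference problem the abstract describes, consider assigning each region in a scene exactly one supporting region so that the total compatibility is maximized. The region names, scores, and brute-force search below are illustrative assumptions only; the talk's actual integer programming formulation is not reproduced here.

```python
from itertools import product

# Toy scene: each region must be assigned exactly one supporter.
# Scores are hypothetical compatibility values (e.g., from 3D cues);
# a real formulation would solve this as an integer program rather
# than by exhaustive search.
regions = ["cup", "book", "table"]
supporters = ["table", "floor"]
score = {
    ("cup", "table"): 0.9, ("cup", "floor"): 0.1,
    ("book", "table"): 0.7, ("book", "floor"): 0.3,
    ("table", "table"): 0.0, ("table", "floor"): 1.0,
}

def infer_support(regions, supporters, score):
    """Pick the support assignment with the highest total score."""
    best, best_total = None, float("-inf")
    for assignment in product(supporters, repeat=len(regions)):
        # Constraint: a region cannot support itself.
        if any(r == s for r, s in zip(regions, assignment)):
            continue
        total = sum(score[(r, s)] for r, s in zip(regions, assignment))
        if total > best_total:
            best, best_total = dict(zip(regions, assignment)), total
    return best

print(infer_support(regions, supporters, score))
# → {'cup': 'table', 'book': 'table', 'table': 'floor'}
```

This brute force only scales to tiny scenes; an integer program expresses the same "one supporter per region" constraint with binary indicator variables and scales to full images.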
This is joint work with Derek Hoiem, Pushmeet Kohli and Rob Fergus.
Bio: Nathan Silberman is a 4th-year PhD student at NYU under the supervision of Rob Fergus. He received his undergraduate degree in Computer Science from NYU and worked for several years at DoubleClick and Google. His research interests are in Computer Vision and Machine Learning.