Thesis Defense: Labeling, Discovering, and Detecting Objects in Images
Speaker: Bryan Russell, MIT CSAIL
Date: October 29, 2007
Time: 2:45PM to 3:45PM
Location: 32-G449 (Kiva/Patil)
Host: William Freeman, MIT CSAIL
Contact: Bryan Russell, email@example.com
Recognizing the many objects that make up our visual world is a difficult task. Confounding factors, such as intra-class appearance variation, clutter, changes in pose, lighting, and scale, never-before-seen objects, and lack of visual experience, often fool existing recognition systems. In this thesis, we explore three issues that address a few of these factors: the importance of labeled image databases for recognition, the ability to discover object categories simply by looking at many images, and the use of large labeled image databases to efficiently detect objects embedded in scenes. For each of these issues, we will need to cope with large collections of images.
We begin by introducing LabelMe, a large labeled image database collected from users via a web annotation tool. The users of the annotation tool provided information about the identity, location, and extent of objects in images. Through this effort, we have collected about 160,000 images and 200,000 object labels to date. We show that the database spans more object categories and scenes and offers a wider range of appearance variation than most other labeled databases for object recognition. We also provide four useful extensions of the database: (i) resolving synonym ambiguities that arise in the object labels, (ii) recovering object-part relationships, (iii) extracting a depth ordering of the labeled objects in an image, and (iv) providing a semi-automatic process for the fast labeling of images.
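To make the annotation data concrete, the sketch below parses a LabelMe-style annotation file, in which each labeled object carries a name and a polygon outline. The XML layout shown here is a simplified illustration; field names and structure in actual annotation files may differ.

```python
# Sketch of reading a LabelMe-style annotation: each <object> element is
# assumed to hold a <name> and a <polygon> of <pt> vertices. This schema is
# a simplified illustration, not the exact database format.
import xml.etree.ElementTree as ET

SAMPLE = """
<annotation>
  <filename>street.jpg</filename>
  <object>
    <name>car</name>
    <polygon>
      <pt><x>10</x><y>20</y></pt>
      <pt><x>60</x><y>20</y></pt>
      <pt><x>60</x><y>45</y></pt>
      <pt><x>10</x><y>45</y></pt>
    </polygon>
  </object>
  <object>
    <name>building</name>
    <polygon>
      <pt><x>0</x><y>0</y></pt>
      <pt><x>100</x><y>0</y></pt>
      <pt><x>100</x><y>80</y></pt>
    </polygon>
  </object>
</annotation>
"""

def parse_annotation(xml_text):
    """Return a list of (object name, polygon vertex list) pairs."""
    root = ET.fromstring(xml_text)
    objects = []
    for obj in root.findall("object"):
        name = obj.findtext("name")
        pts = [(int(pt.findtext("x")), int(pt.findtext("y")))
               for pt in obj.find("polygon").findall("pt")]
        objects.append((name, pts))
    return objects

labels = parse_annotation(SAMPLE)
print(labels[0][0])  # car
```

Identity, location, and extent, as described above, correspond here to the object name and its polygon; extensions such as synonym resolution or depth ordering would operate on these parsed records.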
We then seek to learn models of objects in the extreme case when no supervision is provided. We draw inspiration from the success of unsupervised topic discovery in text. We apply the Latent Dirichlet Allocation model of Blei et al. to unlabeled images to automatically discover object categories. To achieve this, we employ the visual words representation of images, which is analogous to the words in text. We show that our unsupervised model achieves comparable classification performance to a model trained with supervision on an unseen image set depicting several object classes. We also successfully localize the discovered object classes in images.
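The visual-words representation mentioned above can be sketched as follows: local descriptors from all images are quantized against a k-means codebook, and each image becomes a histogram of codeword counts, the same word-count matrix a topic model such as LDA consumes. The descriptor data, dimensions, and vocabulary size below are arbitrary illustrative choices, not those used in the thesis.

```python
# Illustrative sketch of the bag-of-visual-words step. Descriptors are
# synthetic random vectors standing in for local image features; in practice
# these would be appearance descriptors extracted around interest points.
import numpy as np

rng = np.random.default_rng(0)

def kmeans(data, k, iters=10):
    """A few rounds of Lloyd's algorithm; returns the codebook (centroids)."""
    centroids = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest centroid
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = data[assign == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def bag_of_words(descriptors, codebook):
    """Histogram of nearest-codeword assignments for one image."""
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    return np.bincount(words, minlength=len(codebook))

# 5 "images", each with 200 random 8-D descriptors
images = [rng.normal(size=(200, 8)) for _ in range(5)]
codebook = kmeans(np.vstack(images), k=16)
counts = np.vstack([bag_of_words(im, codebook) for im in images])
print(counts.shape)  # (5, 16)
```

Each row of `counts` plays the role of a document's word counts in the text analogy; fitting LDA to this matrix yields topics that, in this setting, correspond to discovered object categories.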
While the image representation used for the object discovery process is simple to compute and can distinguish between different object categories, it does not capture explicit spatial information about regions in different parts of the image. We describe a procedure for combining image segmentation with the object discovery process to provide increased spatial support. We show that this procedure finds object models with improved performance on the categorization and localization task for images depicting scenes containing multiple objects.
We then return to the problem of learning about objects with supervision. We describe a system that makes efficient use of a large labeled database of images for detecting objects embedded in a scene. The system first aligns the components of a scene depicted in an input image to the images in the database. It then makes use of a simple model that combines an object detector with the object knowledge contained in the labels corresponding to the aligned images. The simplicity of the model allows learning for a large number of object classes embedded in many different scenes. We demonstrate improved classification and localization performance over a standard object detector. Furthermore, our system restricts the object search space and therefore greatly increases computational efficiency.