Calibrating Distributed Camera Networks
Speaker: Rich Radke, ECSE Department, Rensselaer Polytechnic Institute
Date: April 7, 2006
Time: 11:00AM to 12:00PM
Location: Conference Room D507
Host: C. Mario Christoudias, Gerald Dalley, MIT CSAIL
Contact: C. Mario Christoudias (3-4278, firstname.lastname@example.org), Gerald Dalley (3-6095, email@example.com)
We discuss how to obtain the accurate and globally consistent self-calibration of a distributed camera network, in which cameras and processing nodes may be spread over a wide geographical area, with no centralized processor and limited ability to communicate a large amount of information over long distances. First, we describe how to estimate the vision graph for the network, in which each camera is represented by a node, and an edge appears between two nodes if the two cameras jointly image a sufficiently large part of the environment. We propose an algorithm in which each camera independently composes a fixed-length message that is a lossy representation of a subset of detected features, and broadcasts this "feature digest" to the rest of the network. Each receiver camera decompresses the feature digest to recover approximate feature descriptors, robustly estimates the epipolar geometry to reject outliers and grow additional matches, and decides whether sufficient evidence exists to form a vision graph edge. Second, we present a distributed camera calibration algorithm based on belief propagation, in which each camera node communicates only with its neighbors in the vision graph. The natural geometry of the system and the formulation of the estimation problem give rise to statistical dependencies that can be efficiently leveraged in a probabilistic framework. The camera calibration problem poses several challenges to information fusion, including missing data, overdetermined parameterizations, and non-aligned coordinate systems. We demonstrate the accurate and consistent performance of the vision graph generation and camera calibration algorithms using a simulated 60-node outdoor camera network.
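As a rough illustration of the vision-graph edge test described in the abstract, the Python sketch below shows one way a receiving camera might handle a broadcast "feature digest": decompress it into approximate descriptors, match them against its own features, robustly fit an epipolar geometry, and declare an edge only if enough inlier matches survive. The digest format, the PCA-based compression, the NumPy/OpenCV calls, and the thresholds (16 components, 30 inliers) are illustrative assumptions, not details from the talk or the speaker's implementation.

```python
# Sketch of a vision-graph edge test; assumptions noted above, not the authors' method.
import numpy as np
import cv2


def make_digest(keypoints: np.ndarray, descriptors: np.ndarray, k: int = 16) -> dict:
    """Sender side: build a lossy, fixed-length digest holding keypoint
    locations plus PCA-compressed descriptors (k components each)."""
    mean = descriptors.mean(axis=0)
    # Top-k principal axes of the descriptor cloud (lossy compression).
    _, _, vt = np.linalg.svd(descriptors - mean, full_matrices=False)
    basis = vt[:k]
    codes = (descriptors - mean) @ basis.T
    return {"pts": keypoints, "mean": mean, "basis": basis, "codes": codes}


def has_vision_graph_edge(digest: dict, my_pts: np.ndarray, my_desc: np.ndarray,
                          min_inliers: int = 30) -> bool:
    """Receiver side: decompress the digest, match descriptors, verify the
    matches with a robust epipolar fit, and decide whether an edge exists."""
    # Recover approximate descriptors from the digest.
    approx = digest["codes"] @ digest["basis"] + digest["mean"]
    # Nearest-neighbour matching against the receiver's own descriptors.
    dists = np.linalg.norm(my_desc[:, None, :] - approx[None, :, :], axis=2)
    nn = dists.argmin(axis=1)
    p1 = np.asarray(my_pts, dtype=np.float64)
    p2 = np.asarray(digest["pts"], dtype=np.float64)[nn]
    if len(p1) < 8:  # need at least 8 correspondences for a fundamental matrix
        return False
    # RANSAC fit of the fundamental matrix rejects outlier matches.
    F, inlier_mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)
    return (F is not None and inlier_mask is not None
            and int(inlier_mask.sum()) >= min_inliers)
```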
Richard J. Radke received the B.A. degree in mathematics and the B.A. and M.A. degrees in computational and applied mathematics, all from Rice University, Houston, TX, in 1996, and the Ph.D. degree from the Electrical Engineering Department, Princeton University, Princeton, NJ, in 2001. For his Ph.D. research, he investigated several estimation problems in digital video, including the synthesis of photorealistic "virtual video", in collaboration with IBM's Tokyo Research Laboratory. He has also worked at The MathWorks, Inc., Natick, MA, developing numerical linear algebra and signal processing routines.
He joined the faculty of the Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY, in August 2001, where he is also associated with the National Science Foundation Engineering Research Center for Subsurface Sensing and Imaging Systems (CenSSIS). His current research interests include deformable registration and segmentation of three- and four-dimensional biomedical volumes, machine learning for radiotherapy applications, distributed computer vision problems on large camera networks, and modeling 3D environments with visual and range imagery. Dr. Radke received a National Science Foundation CAREER Award in 2003.
Please set up the room seminar-style for 26 people. We will be having a talk that will use the projector and we would like the tables removed from the center of the room. Please leave one thin table in the room for refreshments. Thanks.
See other events that are part of MIT Machine Vision Colloquium 2005/2006