Thesis Defense: Large-Scale Probabilistic Aerial Reconstruction
Speaker
Randi Cabezas
MIT CSAIL
Host
John W. Fisher
MIT CSAIL
Abstract:
While much emphasis has been placed on large-scale 3D scene reconstruction from a single data source such as images or distance sensors, models that jointly utilize multiple data types remain largely unexplored. In this work, we will present a Bayesian formulation of scene reconstruction from multi-modal data, as well as two critical components: adaptive resolution, which enables large-scale reconstructions, and meaningful prior-probability distributions, which enable high-level scene understanding.
Our first contribution is to formulate the 3D reconstruction problem within the Bayesian framework. We develop an integrated probabilistic model that allows us to naturally represent uncertainty and to fuse complementary information provided by different sensor modalities (imagery and LiDAR). Maximum-a-Posteriori inference within this model leverages GPGPUs for efficient likelihood evaluations. Our dense reconstructions (triangular meshes with texture information) remain feasible, without sacrificing quality, when a given modality is sparsely observed, by relying on the others.
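As a rough sketch of this fusion (the notation below is illustrative, not necessarily that of the thesis), the reconstruction can be read as MAP inference in a posterior whose likelihood factors over the two modalities, assuming the image and LiDAR observations are conditionally independent given the scene:

```latex
% Schematic posterior for a scene model S (mesh geometry plus texture),
% given images I and LiDAR returns D. Illustrative notation only; the
% conditional independence of I and D given S is an assumption.
\hat{S}_{\mathrm{MAP}}
  = \arg\max_{S} \, p(S \mid I, D)
  = \arg\max_{S} \, p(I \mid S)\, p(D \mid S)\, p(S)
```

Because the two likelihood terms can be evaluated separately (and in parallel on the GPGPU), the model can lean on whichever modality is well observed when the other is sparse.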
Secondly, to enable large-scale reconstructions, our formulation supports adaptive resolution in both appearance and geometry. This is motivated by the need for a representation that can adjust to wide variability in data quality and availability. By coupling edge transformations within a reversible-jump MCMC framework, we allow changes in the number of triangles and in mesh connectivity. We demonstrate that these data-driven updates lead to more accurate representations while reducing modeling assumptions and using fewer triangles.
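For intuition, a trans-dimensional move such as an edge split (adding triangles) or an edge collapse (removing them) is accepted with the standard reversible-jump ratio. The following minimal Python sketch is illustrative; the function and argument names are hypothetical and do not come from the thesis:

```python
import math
import random

def rjmcmc_accept(log_post_current, log_post_proposed,
                  log_q_forward, log_q_reverse, log_abs_jacobian):
    """Decide whether to accept a dimension-changing mesh move.

    log_post_*       -- unnormalized log-posterior of the current/proposed mesh
    log_q_forward    -- log-probability of proposing the move (e.g., edge split)
    log_q_reverse    -- log-probability of the reverse move (e.g., edge collapse)
    log_abs_jacobian -- log |Jacobian| of the dimension-matching transformation
    """
    log_alpha = (log_post_proposed - log_post_current
                 + log_q_reverse - log_q_forward
                 + log_abs_jacobian)
    return random.random() < math.exp(min(0.0, log_alpha))

# Toy usage: accept or reject a hypothetical edge split.
if rjmcmc_accept(log_post_current=-120.0, log_post_proposed=-118.5,
                 log_q_forward=math.log(0.5), log_q_reverse=math.log(0.5),
                 log_abs_jacobian=0.0):
    print("accept: the mesh gains triangles")
else:
    print("reject: keep the current mesh")
```

In the thesis' setting the proposals are data-driven, i.e., informed by the observations rather than drawn blindly.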
Lastly, to enable high-level scene understanding, we include a categorization of reconstruction elements in our formulation. This scene-specific classification of triangles is estimated from semantic annotations (which are noisy and incomplete) and from other scene features (e.g., geometry and appearance). The categorization provides class-specific prior-probability distributions that regularize the reconstruction, yielding more accurate and interpretable representations.
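Schematically (again with illustrative notation), each triangle t carries a latent class label c_t inferred from the noisy, incomplete annotations A and from scene features, and the prior factors per triangle conditioned on that label:

```latex
% Illustrative class-conditioned factorization; c_t is the latent category of
% triangle t, A_t are its (noisy, incomplete) semantic annotations.
p(S, c \mid I, D, A) \;\propto\;
  p(I \mid S)\, p(D \mid S) \prod_{t} p(A_t \mid c_t)\, p(S_t \mid c_t)\, p(c_t)
```

A triangle believed to belong to, say, ground or facade is then regularized by a class-specific prior rather than a single generic one.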
Collectively, these models enable complex reasoning about urban scenes by fusing all available data across modalities, a crucial necessity for future autonomous agents and large-scale augmented-reality applications.
Committee: John W. Fisher, Polina Golland and John J. Leonard