Towards Lifelong Visual Localization and Mapping (PhD Defense)
Speaker: Hordur Johannsson, MIT CSAIL
Date: Thursday, January 31, 2013
Time: 10:00 AM to 11:30 AM
Location: 32-G449 (Patil/Kiva)
Host: John Leonard, MIT CSAIL
Contact: John Leonard, email@example.com
Lifelong autonomy for mobile robotic systems requires algorithms that are robust and that scale efficiently over time as sensor information is continually collected. For mobile robots, one of the fundamental problems is navigation, which requires the robot to have a map of its environment so that it can plan and execute its paths. Having the robot use its perception sensors to perform simultaneous localization and mapping (SLAM) is beneficial for a fully autonomous system. Extending the time horizon of operations poses problems for current SLAM algorithms, both in terms of robustness and temporal scalability. To address this problem, we propose a reduced pose graph model that scales with the size of the environment instead of with time. Additionally, we develop SLAM systems using two different sensor modalities: imaging sonar for underwater navigation, and vision-based SLAM for terrestrial applications.
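To make the scalability issue concrete, the following minimal Python sketch (hypothetical and illustrative, not the thesis code) shows why the standard pose graph formulation scales with time: a new pose variable is appended at every time step, even when the robot revisits previously mapped space.

    from dataclasses import dataclass, field

    @dataclass
    class PoseGraph:
        # One (x, y, theta) pose per time step; grows without bound over time.
        poses: list = field(default_factory=list)
        # Relative-motion constraints: (i, j, dx, dy, dtheta).
        edges: list = field(default_factory=list)

        def add_pose(self, x, y, theta):
            self.poses.append((x, y, theta))
            return len(self.poses) - 1

        def add_constraint(self, i, j, dx, dy, dtheta):
            self.edges.append((i, j, dx, dy, dtheta))

    # After T time steps the optimizer must solve for T poses, regardless of
    # how small the environment is -- the temporal-scalability problem that
    # the reduced pose graph is designed to avoid.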
Underwater navigation, where access to a global positioning system (GPS) is not possible, is one application domain that benefits from SLAM. In this thesis we present SLAM systems for two underwater applications. First, we describe our implementation of real-time imaging-sonar-aided navigation applied to in-situ autonomous ship hull inspection using the hovering autonomous underwater vehicle (HAUV). In addition, we present an architecture that enables the fusion of information from a sonar and a camera system. The system is evaluated on data collected during experiments on the SS Curtiss and the USCGC Seneca. Second, we develop a feature-based navigation system supporting multi-session mapping, and we provide an algorithm for re-localizing the vehicle between missions. We also present a method for managing the complexity of the estimation problem as new information is received. This system is demonstrated using data collected with a REMUS vehicle equipped with a BlueView forward-looking sonar.
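The abstract does not detail the inter-session re-localization algorithm; as a rough illustration of the general idea, the hypothetical Python sketch below matches feature descriptors from the current sonar frame against a map stored from a previous mission, accepting the result only if enough correspondences pass a ratio test. All names and thresholds are assumptions for illustration, not the thesis's method.

    import numpy as np

    def match_to_prior_map(frame_desc, map_desc, ratio=0.8, min_matches=12):
        """Match current-frame descriptors (N x D) against a prior-session
        map (M x D); return correspondences, or None if re-localization
        is too weakly supported."""
        matches = []
        for qi, d in enumerate(frame_desc):
            dists = np.linalg.norm(map_desc - d, axis=1)  # distance to all map features
            best, second = np.partition(dists, 1)[:2]     # two nearest candidates
            if best < ratio * second:                     # keep only distinctive matches
                matches.append((qi, int(np.argmin(dists))))
        return matches if len(matches) >= min_matches else None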
The model we use for mapping builds on the pose graph representation, which has been shown to be an efficient and accurate approach to SLAM. One problem with the pose graph formulation is that the state space grows continuously as more information is acquired. To address this problem, we propose the reduced pose graph (RPG) model, which partitions the space to be mapped and uses these partitions to reduce the number of poses used for estimation. To evaluate our approach, we present results from an online binocular and RGB-D visual SLAM system that uses place recognition both for robustness and for multi-session operation. Additionally, to enable large-scale indoor mapping, our system automatically detects elevator rides based on accelerometer data. We demonstrate long-term mapping using approximately nine hours of data collected in the Stata Center over the course of six months. Ground truth, derived by aligning laser scans to existing floor plans, is used to evaluate the global accuracy of the system. Our results illustrate the capability of our visual SLAM system to scale with the size of the explored area rather than with the duration of exploration.
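As a rough illustration of the reduced pose graph idea described above, the hypothetical Python sketch below partitions the plane into a uniform grid and keeps one representative pose per visited cell; revisiting a cell adds a constraint to the existing pose rather than a new pose variable. The grid partitioning and all names are assumptions for illustration; the thesis's actual partitioning scheme is not specified in this abstract.

    class ReducedPoseGraph:
        def __init__(self, cell_size=1.0):
            self.cell_size = cell_size
            self.cell_to_pose = {}  # grid cell -> index of its representative pose
            self.poses = []         # one pose per visited partition, not per time step
            self.edges = []         # constraints between representative poses

        def _cell(self, x, y):
            return (int(x // self.cell_size), int(y // self.cell_size))

        def observe(self, x, y, theta, prev_idx=None, measurement=None):
            """Register the current pose estimate; return the active pose index."""
            cell = self._cell(x, y)
            if cell not in self.cell_to_pose:   # new territory: add a pose variable
                self.cell_to_pose[cell] = len(self.poses)
                self.poses.append((x, y, theta))
            idx = self.cell_to_pose[cell]
            if prev_idx is not None and prev_idx != idx:
                # Revisits add constraints between existing poses instead of
                # growing the state, so the graph scales with area, not time.
                self.edges.append((prev_idx, idx, measurement))
            return idx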