Copyright © 2012 Rodrigo Munguía and Antoni Grau. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
This paper describes in detail a method to implement a simultaneous localization and mapping (SLAM) system based on monocular vision for applications such as visual odometry, appearance-based sensing, and emulation of range-bearing measurements. SLAM techniques are required to operate mobile robots in a priori unknown environments, using only on-board sensors to simultaneously build a map of the surroundings and to track the robot's position within it. In this context, the 6-DOF (degrees of freedom) monocular camera case (monocular SLAM) is possibly the most difficult variant of SLAM: a single camera, moving freely through its environment, represents the sole sensory input to the system. The method proposed in this paper is based on a technique called delayed inverse-depth feature initialization, which is intended to initialize new visual features in the system. A detailed formulation, an extended discussion, and experiments with real data are presented in order to validate the proposal and to demonstrate its performance.
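As background for the inverse-depth parameterization named in the abstract, the following is a minimal sketch assuming the convention commonly used in monocular EKF-SLAM (a feature encoded by the anchor camera position, an azimuth and elevation angle, and an inverse depth); the function name and exact angle convention are illustrative assumptions, not taken from this paper.

```python
import numpy as np

def inverse_depth_to_point(x_c, y_c, z_c, theta, phi, rho):
    """Convert an inverse-depth feature (anchor camera position,
    azimuth theta, elevation phi, inverse depth rho) into a
    Euclidean 3D point, following the common monocular SLAM
    convention; the paper's own formulation may differ in detail."""
    # Unit ray direction defined by the azimuth and elevation angles
    m = np.array([
        np.cos(phi) * np.sin(theta),
        -np.sin(phi),
        np.cos(phi) * np.cos(theta),
    ])
    # The feature lies along this ray at depth 1/rho from the anchor
    return np.array([x_c, y_c, z_c]) + (1.0 / rho) * m

# Example: a feature first observed from the origin, straight ahead,
# with inverse depth 0.2 (i.e., roughly 5 m away)
print(inverse_depth_to_point(0.0, 0.0, 0.0, 0.0, 0.0, 0.2))  # -> [0. 0. 5.]
```

A key property of this representation, exploited by delayed initialization schemes, is that very distant (low-parallax) features can still be encoded consistently, since rho near zero simply pushes the point toward infinity along the observed ray.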