Vision-based topological mapping and localization by means of local invariant features and map refinement

  • Emilio Garcia-Fidalgo and Alberto Ortiz
Summary

We propose an appearance-based approach for topological visual mapping and localization using local invariant features. To reduce running times, matches between the current image and previously visited places are found through an index based on a set of randomized kd-trees. Loop-closure candidates are predicted with a discrete Bayes filter whose observation model relies on a novel, efficient feature-matching scheme. To avoid storing redundant information in the resulting maps, we also present a map refinement framework that exploits the visual information already stored in the map to refine the final topology of the environment. These refined maps save storage space and speed up localization tasks. The approach is validated on image sequences from several environments and compared with the state-of-the-art FAB-MAP 2.0 algorithm.
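The paper itself specifies the observation model and the refinement procedure; purely as a rough, non-authoritative illustration of two ingredients named in the summary (a randomized kd-tree index over local descriptors and a discrete Bayes filter over previously visited places), the Python/OpenCV sketch below scores loop-closure candidates by feature matching against an incrementally built index and filters the scores over a sequential topology. All class, function, and parameter names (TopologicalLocalizer, transition_spread, the 0.8 ratio-test threshold) are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (assumed names, not the authors' code): loop-closure
# candidate detection using a randomized kd-tree descriptor index (OpenCV
# FLANN) and a discrete Bayes filter over previously visited places.
import cv2
import numpy as np


class TopologicalLocalizer:
    def __init__(self, transition_spread=2):
        # FLANN algorithm=1 selects the randomized kd-tree index.
        self.matcher = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4),
                                             dict(checks=32))
        self.sift = cv2.SIFT_create()
        self.belief = np.array([])            # posterior over map nodes
        self.transition_spread = transition_spread

    def _likelihood(self, desc):
        """Vote for each stored place with the query descriptors whose
        nearest neighbour passes Lowe's ratio test (a stand-in for the
        paper's feature-matching observation model)."""
        n_places = len(self.matcher.getTrainDescriptors())
        votes = np.ones(n_places)             # uniform mass avoids zeros
        for pair in self.matcher.knnMatch(desc, k=2):
            if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
                votes[pair[0].imgIdx] += 1.0
        return votes / votes.sum()

    def _predict(self):
        """Diffuse the belief towards neighbouring nodes, assuming map
        nodes were created sequentially along the trajectory."""
        width = min(2 * self.transition_spread + 1, self.belief.size)
        prior = np.convolve(self.belief, np.ones(width) / width, mode='same')
        return prior / prior.sum()

    def process(self, image):
        """Return the most likely loop-closure node index, or -1."""
        _, desc = self.sift.detectAndCompute(image, None)
        if desc is None:                      # featureless frame: skip it
            return -1
        loop = -1
        if self.belief.size:                  # Bayes filter: predict, update
            posterior = self._predict() * self._likelihood(desc)
            self.belief = posterior / posterior.sum()
            loop = int(np.argmax(self.belief))
        self.matcher.add([desc])              # index the new place
        self.belief = np.append(self.belief, 1.0 / (self.belief.size + 1.0))
        self.belief /= self.belief.sum()
        return loop
```

In the paper, accepted candidates would additionally feed the map refinement stage that merges redundant nodes; that part, and any candidate-acceptance threshold on the filtered belief, is omitted from this sketch.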

Corresponding author
E-mail: emilio.garcia@uib.es
References
1. Durrant-Whyte, H. and Bailey, T., "Simultaneous Localisation and Mapping (SLAM): Part I, the essential algorithms," Robot. Autom. Mag. 13 (2), 99–110 (2006).
2. Sivic, J. and Zisserman, A., "Video Google: A Text Retrieval Approach to Object Matching in Videos," Proceedings of the International Conference on Computer Vision, Nice, France (2003) pp. 1470–1477.
3. Zhang, H., "BoRF: Loop-Closure Detection with Scale Invariant Visual Features," Proceedings of the International Conference on Robotics and Automation, Shanghai, China (2011) pp. 3125–3130.
4. Angeli, A., Doncieux, S., Meyer, J.-A. and Filliat, D., "Incremental Vision-Based Topological SLAM," Proceedings of the International Conference on Intelligent Robots and Systems, Nice, France (2008) pp. 22–26.
5. Booij, O., Terwijn, B., Zivkovic, Z. and Krose, B., "Navigation Using an Appearance Based Topological Map," Proceedings of the International Conference on Robotics and Automation, Roma, Italy (Apr. 2007) pp. 3927–3932.
6. Garcia-Fidalgo, E. and Ortiz, A., "Probabilistic Appearance-Based Mapping and Localization Using Visual Features," Proceedings of Pattern Recognition and Image Analysis, Lecture Notes in Computer Science, Funchal, Portugal, vol. 7887 (2013) pp. 277–285.
7. Zivkovic, Z., Bakker, B. and Krose, B., "Hierarchical Map Building Using Visual Landmarks and Geometric Constraints," Proceedings of the International Conference on Intelligent Robots and Systems, Edmonton, Canada (2005) pp. 2480–2485.
8. Ulrich, I. and Nourbakhsh, I., "Appearance-Based Place Recognition for Topological Localization," Proceedings of the International Conference on Robotics and Automation, San Francisco, USA (2000) pp. 1023–1029.
9. Goedemé, T., Nuttin, M., Tuytelaars, T. and Van Gool, L., "Markerless Computer Vision Based Localization Using Automatically Generated Topological Maps," Proceedings of the European Navigation Conference, Rotterdam, The Netherlands (2004) pp. 235–243.
10. Sabatta, D. G., "Vision-Based Topological Map Building and Localisation Using Persistent Features," Proceedings of the Robotics and Mechatronics Symposium, Pretoria, South Africa (2008) pp. 1–6.
11. Kawewong, A., Tongprasit, N., Tungruamsub, S. and Hasegawa, O., "Online and incremental appearance-based SLAM in highly dynamic environments," Int. J. Robot. Res. 30 (1), 33–55 (2011).
12. Singh, G. and Kosecka, J., "Visual Loop Closing Using Gist Descriptors in Manhattan World," Proceedings of the Omnidirectional Robot Vision Workshop, held with IEEE ICRA, Anchorage, USA (2010).
13. Murillo, A. C., Sagues, C. and Guerrero, J., "From omnidirectional images to hierarchical localization," Robot. Auton. Syst. 55, 372–382 (May 2007).
14. Cummins, M. and Newman, P., "FAB-MAP: Probabilistic localization and mapping in the space of appearance," Int. J. Robot. Res. 27 (6), 647–665 (2008).
15. Angeli, A., Doncieux, S., Meyer, J.-A. and Filliat, D., "Real-Time Visual Loop-Closure Detection," Proceedings of the International Conference on Robotics and Automation, Pasadena, USA (2008) pp. 1842–1847.
16. Angeli, A., Filliat, D., Doncieux, S. and Meyer, J.-A., "A fast and incremental method for loop-closure detection using bags of visual words," IEEE Trans. Robot. 24 (5), 1027–1037 (2008).
17. Zhang, H., "Indexing Visual Features: Real-Time Loop Closure Detection Using a Tree Structure," Proceedings of the International Conference on Robotics and Automation, Vilamoura, Portugal (2012) pp. 3613–3618.
18. Fraundorfer, F., Engels, C. and Nister, D., "Topological Mapping, Localization and Navigation Using Image Collections," Proceedings of the International Conference on Intelligent Robots and Systems, San Diego, USA (2007) pp. 3872–3877.
19. Johns, E. and Yang, G.-Z., "Feature Co-Occurrence Maps: Appearance-Based Localisation Throughout the Day," Proceedings of the International Conference on Robotics and Automation, Karlsruhe, Germany (2013) pp. 3212–3218.
20. Nister, D. and Stewenius, H., "Scalable Recognition with a Vocabulary Tree," Proceedings of the International Conference on Computer Vision and Pattern Recognition, New York, USA, vol. 2 (2006) pp. 2161–2168.
21. Cummins, M. and Newman, P., "Accelerating FAB-MAP with concentration inequalities," IEEE Trans. Robot. 26 (6), 1042–1050 (2010).
22. Cummins, M. and Newman, P. M., "Appearance-only SLAM at large scale with FAB-MAP 2.0," Int. J. Robot. Res. 30 (9), 1100–1123 (2011).
23. Glover, A. J., Maddern, W. P., Milford, M. and Wyeth, G. F., "FAB-MAP + RatSLAM: Appearance-Based SLAM for Multiple Times of Day," Proceedings of the International Conference on Robotics and Automation, Anchorage, USA (2010) pp. 3507–3512.
24. Filliat, D., "A Visual Bag of Words Method for Interactive Qualitative Localization and Mapping," Proceedings of the International Conference on Robotics and Automation, Roma, Italy (2007) pp. 3921–3926.
25. Angeli, A., Doncieux, S., Meyer, J.-A. and Filliat, D., "Visual Topological SLAM and Global Localization," Proceedings of the International Conference on Robotics and Automation, Kobe, Japan (2009) pp. 4300–4305.
26. Nicosevici, T. and García, R., "On-line Visual Vocabularies for Robot Navigation and Mapping," Proceedings of the International Conference on Intelligent Robots and Systems, St. Louis, USA (2009) pp. 205–212.
27. Oliva, A. and Torralba, A., "Modeling the shape of the scene: A holistic representation of the spatial envelope," Int. J. Comput. Vis. 42 (3), 145–175 (2001).
28. Liu, Y. and Zhang, H., "Visual Loop Closure Detection with a Compact Image Descriptor," Proceedings of the International Conference on Intelligent Robots and Systems, Vilamoura, Portugal (2012) pp. 1051–1056.
29. Siagian, C. and Itti, L., "Rapid biologically-inspired scene classification using features shared with visual attention," IEEE Trans. Pattern Anal. Mach. Intell. 29 (2), 300–312 (2007).
30. Sünderhauf, N. and Protzel, P., "BRIEF-Gist - Closing the Loop by Simple Means," Proceedings of the International Conference on Intelligent Robots and Systems, San Francisco, USA (2011) pp. 1234–1241.
31. Bradley, D. M., Patel, R., Vandapel, N. and Thayer, S., "Real-Time Image-Based Topological Localization in Large Outdoor Environments," Proceedings of the International Conference on Intelligent Robots and Systems, Edmonton, Canada (2005) pp. 3670–3677.
32. Weiss, C., Masselli, A., Tamimi, H. and Zell, A., "Fast Outdoor Robot Localization Using Integral Invariants," Proceedings of the International Conference on Computer Vision Systems, Bielefeld, Germany (2007) 24 pp.
33. Wang, J., Zha, H. and Cipolla, R., "Efficient Topological Localization Using Orientation Adjacency Coherence Histograms," Proceedings of the International Conference on Pattern Recognition, Hong Kong, China (2006) pp. 271–274.
34. Badino, H., Huber, D. F. and Kanade, T., "Real-Time Topometric Localization," Proceedings of the International Conference on Robotics and Automation, St. Paul, USA (2012) pp. 1635–1642.
35. Lowe, D. G., "Distinctive Image Features from Scale-Invariant Keypoints," Int. J. Comput. Vis. 60 (2), 91–110 (2004).
36. Valgren, C., Duckett, T. and Lilienthal, A., "Incremental Spectral Clustering and Its Application to Topological Mapping," Proceedings of the International Conference on Robotics and Automation, Roma, Italy (Apr. 2007) pp. 10–14.
37. Bacca, B., Salvi, J., Batlle, J. and Cufi, X., "Appearance-based mapping and localisation using feature stability histograms," Electron. Lett. 46 (16), 1120–1121 (2010).
38. Blanco, J. L., Fernandez-Madrigal, J. A. and Gonzalez, J., "Towards a unified Bayesian approach to hybrid metric-topological SLAM," IEEE Trans. Robot. 24 (2), 259–270 (2008).
39. Zhang, H., Li, B. and Yang, D., "Keyframe Detection for Appearance-Based Visual SLAM," Proceedings of the International Conference on Intelligent Robots and Systems, Taipei, Taiwan (2010) pp. 2071–2076.
40. Leutenegger, S., Chli, M. and Siegwart, R., "BRISK: Binary Robust Invariant Scalable Keypoints," Proceedings of the International Conference on Computer Vision, Barcelona, Spain (2011) pp. 2548–2555.
41. Rublee, E., Rabaud, V., Konolige, K. and Bradski, G., "ORB: An Efficient Alternative to SIFT or SURF," Proceedings of the International Conference on Computer Vision, Barcelona, Spain (2011) pp. 2564–2571.
42. Alahi, A., Ortiz, R. and Vandergheynst, P., "FREAK: Fast Retina Keypoint," Proceedings of the International Conference on Computer Vision and Pattern Recognition, Rhode Island, USA (2012) pp. 510–517.