  • Cited by 7
    This article has been cited by the following publications. This list is generated based on data provided by CrossRef.

    Liu, Zhi, Wang, Liyang, Zhang, Yun and Chen, C.L. Philip 2016. A SVM controller for the stable walking of biped robots based on small sample sizes. Applied Soft Computing, Vol. 38, p. 738.

    Fergani, K., Lui, D., Scharfenberger, C., Wong, A. and Clausi, D.A. 2014. Hybrid structural and texture distinctiveness vector field convolution for region segmentation. Computer Vision and Image Understanding, Vol. 125, p. 85.

    Lui, Dorothy, Scharfenberger, Christian, Fergani, Khalil, Wong, Alexander and Clausi, David A. 2014. Enhanced Decoupled Active Contour Using Structural and Textural Variation Energy Functionals. IEEE Transactions on Image Processing, Vol. 23, Issue 2, p. 855.

    Yen, Li-Yuan and Chang, Kuo-Hao 2012. Proceedings of the 2012 IEEE 16th International Conference on Computer Supported Cooperative Work in Design (CSCWD), p. 742.

    Zhao, Cancan, Zhang, Xiaodong and Qiu, Junjiang 2012. Proceedings of the 2012 IEEE 16th International Conference on Computer Supported Cooperative Work in Design (CSCWD), p. 730.

    Mishra, Akshaya Kumar, Fieguth, Paul W. and Clausi, David A. 2011. Decoupled Active Contour (DAC) for Boundary Detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 33, Issue 2, p. 310.

    Park, Chul-Ho, Yoo, Seong-Moo and Pan, W. David 2010. LSM: A layer subdivision method for deformable object matching. Information Sciences, Vol. 180, Issue 19, p. 3718.


Object learning and detection using evolutionary deformable models for mobile robot navigation

  • M. Mata, J. M. Armingol, J. Fernández and A. de la Escalera
  • Published online: 01 January 2008

Deformable models have been studied in image analysis over the last decade and used for recognition of flexible or rigid templates under diverse viewing conditions. This article addresses the question of how to define a deformable model for a real-time color vision system for mobile robot navigation. Instead of receiving the detailed model definition from the user, the algorithm extracts and learns the information from each object automatically. How well a model represents the template present in the image is measured by an energy function; its minimum corresponds to the model that best fits the image, and it is found by a genetic algorithm that handles the model deformation. At a later stage, if there is symbolic information inside the object, it is extracted and interpreted using a neural network. The resulting perception module has been integrated successfully into a complex navigation system. Various experimental results in real environments are presented, showing the effectiveness and capabilities of the system.
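The fitting step described in the abstract can be pictured with a small sketch. The following is a minimal illustration, not the authors' implementation: it assumes the deformation is parameterized by a hypothetical tuple (dx, dy, scale, angle), and the energy() function is only a placeholder standing in for whatever image-agreement measure the system uses (e.g. edge or color evidence). A simple generational genetic algorithm then searches for the parameter set with minimum energy.

```python
import random

def energy(image, template, params):
    """Placeholder energy: lower means the deformed template agrees better
    with the image. A real system would project the template with the given
    deformation parameters and score edge/color evidence."""
    dx, dy, scale, angle = params
    # Dummy score so the sketch runs; replace with real image measurements.
    return abs(dx) + abs(dy) + abs(scale - 1.0) + abs(angle)

def fit_template(image, template, pop_size=50, generations=100, sigma=0.2):
    """Evolve deformation parameters (dx, dy, scale, angle) that minimize
    the energy of the deformed template against the image."""
    def random_individual():
        return (random.uniform(-20, 20), random.uniform(-20, 20),
                random.uniform(0.5, 2.0), random.uniform(-0.5, 0.5))

    def crossover(a, b):
        # Uniform crossover: each gene comes from either parent.
        return tuple(random.choice(pair) for pair in zip(a, b))

    def mutate(ind):
        # Small Gaussian perturbation of every gene.
        return tuple(g + random.gauss(0.0, sigma) for g in ind)

    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda p: energy(image, template, p))
        parents = population[: pop_size // 2]          # keep the better half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return min(population, key=lambda p: energy(image, template, p))

# Example use with dummy stand-ins for the image and learned template.
best = fit_template(image=None, template=None)
print("best deformation (dx, dy, scale, angle):", best)
```

The point the abstract highlights, that the genetic algorithm handles the deformation while the energy function only scores it, is reflected in this sketch by keeping energy() completely separate from the evolutionary operators.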
