
Object learning and detection using evolutionary deformable models for mobile robot navigation

Published online by Cambridge University Press:  01 January 2008

M. Mata*
Affiliation:
Computer Architecture and Automation Department, Universidad Europea de Madrid, Villaviciosa de Odon, 28670 Madrid, Spain.
J. M. Armingol
Affiliation:
Intelligent Systems Laboratory, Universidad Carlos III de Madrid, Leganés, 28911 Madrid, Spain. E-mail: armingol@ing.uc3m.es
J. Fernández
Affiliation:
Computer Architecture and Automation Department, Universidad Europea de Madrid, Villaviciosa de Odon, 28670 Madrid, Spain.
A. de la Escalera
Affiliation:
Intelligent Systems Laboratory, Universidad Carlos III de Madrid, Leganés, 28911 Madrid, Spain. E-mail: armingol@ing.uc3m.es
*Corresponding author. E-mail: mmata@uem.es

Summary

Deformable models have been studied in image analysis over the last decade and used for the recognition of flexible or rigid templates under diverse viewing conditions. This article addresses the question of how to define a deformable model for a real-time color vision system for mobile robot navigation. Instead of receiving a detailed model definition from the user, the algorithm extracts and learns the information from each object automatically. How well a model represents the template present in the image is measured by an energy function. Its minimum corresponds to the model that best fits the image, and it is found by a genetic algorithm that drives the model deformation. At a later stage, if there is symbolic information inside the object, it is extracted and interpreted using a neural network. The resulting perception module has been integrated successfully into a complex navigation system. Experimental results in real environments are presented in this article, showing the effectiveness and capacity of the system.
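The core idea summarized above, minimizing an energy function over a model's deformation parameters with a genetic algorithm, can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the energy function here is a hypothetical quadratic stand-in for the real template-to-image matching score, and the parameter names (translation, scale) and GA settings are assumptions chosen for clarity.

```python
import random

def energy(params):
    # Hypothetical stand-in energy: distance of the deformation parameters
    # (tx, ty, scale) from an assumed best fit at (10, -5, 1.2). In the real
    # system this would score how well the deformed template matches the image.
    tx, ty, s = params
    return (tx - 10) ** 2 + (ty + 5) ** 2 + 100 * (s - 1.2) ** 2

def genetic_minimize(energy, bounds, pop_size=40, generations=60, mutation_rate=0.2):
    """Minimize `energy` over box-bounded parameters with a simple GA."""
    rng = random.Random(0)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)                  # rank by fitness (lower is better)
        elite = pop[: pop_size // 2]          # keep the fitter half unchanged
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            if rng.random() < mutation_rate:              # Gaussian mutation
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] += rng.gauss(0, 0.1 * (hi - lo))
                child[i] = min(max(child[i], lo), hi)     # clamp to bounds
            children.append(child)
        pop = elite + children
    return min(pop, key=energy)

best = genetic_minimize(energy, bounds=[(-50, 50), (-50, 50), (0.5, 2.0)])
```

Because the elite survives every generation, the best energy found never increases, and the mutation term keeps the search from collapsing prematurely onto the crossover centroid.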

Information

Type: Article
Copyright © Cambridge University Press 2007
