The eyes see only what the mind is prepared to comprehend.
Anyone who has had to move about in the dark recognizes the importance of vision to human navigation. Tasks that are fraught with difficulty and danger in the dark become straightforward when the lights are on. Given that we seem to navigate effortlessly with vision, it seems natural to consider vision as a sensor for mobile robots. Visual sensing has many desirable potential features, including the facts that it is passive, has high resolution, and is long range.
Human vision relies on two eyes to transform information encoded in light into electrical signals transmitted by neurons. In the biological sciences, the fields of perception and cognition investigate how this neural information is processed to build internal representations of the environment and how humans reason about their environment using these representations. From a robotics point of view, the fields of computer vision and robot vision examine the task of building computer representations of the environment from light, and the study of artificial intelligence deals in part with the task of reasoning or planning based on the resulting environmental representation. Like the previous chapter, this chapter continues the exploration of how to build descriptions of the world from sensor data. In this chapter, we consider the issues involved in sensing using light and related media.
Vision is both strikingly powerful as a sensory medium and strikingly difficult to use in a robotics context.