The subject, Computer Vision, deals with the science of imparting to a machine or a computer the capability of seeing and understanding the environment as we humans are able to do, and seeks to apply its theories and models in various applications of our life and society. From the late sixties of the last century, there have been efforts in analyzing digital images captured by a scanner or a camera. Initially, it was 2-D digital geometry in a discrete grid of integral coordinate space which drew the primary attention of researchers. In particular, Prof. Azriel Rosenfeld (1931–2004) of the University of Maryland, USA, took a leading and pioneering role in developing theories of digital picture processing. Subsequently, the area was strengthened by the development and application of theories of mathematical morphology, texture processing, pattern recognition techniques, etc. However, the major development in the theory of computer vision, following psycho-physiological models of human vision, happened in the seventies of the last century, when Prof. David Marr (1945–1980) of the Massachusetts Institute of Technology (MIT), Cambridge, USA, hypothesized three stages of processing and representation of images: primal sketches consisting of edges and regions, 2.5-D sketches of the scene, and finally 3-D models.
Over the years, theories of computer vision have been developed from different areas of the mathematical and physical sciences, such as digital geometry, projective geometry, differential geometry, linear and nonlinear systems, human cognition and psycho-visual perception, color representation and processing, computational learning, pattern recognition, etc. As we see, the theoretical foundation of the subject has been built from different domains, and one needs to learn the fundamentals across these disciplines in a systematic and organized manner in the context of the core agenda of computer vision, which is to solve problems related to the understanding of a 3-D scene, static or dynamic, given visual inputs from imaging systems.