Following the SPOT-5 launch, Spot Image and the French National Geographic Institute (IGN) designed a high-accuracy worldwide database called Reference3D™ using data from the High Resolution Stereoscopic (HRS) SPOT-5 instrument. This database consists of three information layers: A Digital Elevation Model (DEM) at 50-m resolution, ortho-images at 5-m resolution, and quality masks, with a circular horizontal accuracy better than 16 m for 90% of the points and an elevation accuracy better than 10 m for 90% of the points. A new system (called ANDORRE) was also developed to archive the Reference3D™ database and to automatically produce orthorectified images using Reference3D™ data. ANDORRE takes advantage of Reference3D™ horizontal and vertical accuracies to automatically register and rectify an image from any SPOT satellite in every area where Reference3D™ data are available. This chapter presents the automatic orthorectification algorithm, developed under the leadership of CNES (the French Space Agency), which is composed of three main steps: (1) Generation of a reference image (in focal-plane geometry) using the Reference3D™ ortho-image and DEM layers, (2) modeling of the misregistration between the reference image and the SPOT image to be processed, and (3) resampling of the image into a cartographic reference frame. It also describes the geometric performance measured on operational cases involving different landscapes, DEM data, and image resolutions. Timing measurements show that the rectification of a 24,000 × 24,000 pixel image can be performed in less than an hour.
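As a rough illustration of this three-step flow, the toy sketch below uses NumPy/SciPy stand-ins: a synthetic scene plays the role of the reference image generated in step 1, a brute-force search for a single global translation stands in for the misregistration model of step 2, and spline interpolation stands in for the cartographic resampling of step 3. None of this is the operational CNES code; all names and parameters are illustrative.

```python
# Toy sketch of the three-step ANDORRE-style flow (illustrative only).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Step 1 (stand-in): a reference image simulated in the sensor geometry.
reference = ndimage.gaussian_filter(rng.standard_normal((200, 200)), 2)

# The image to rectify: here, the reference translated by an unknown offset.
image = ndimage.shift(reference, (3, -5), mode="nearest")

# Step 2: model the misregistration (here a single global translation,
# found by maximizing correlation over a small window of integer offsets).
def best_shift(ref, img, radius=8):
    crop = (slice(radius, -radius),) * 2
    scores = {}
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            rolled = np.roll(img, (-dy, -dx), axis=(0, 1))
            scores[(dy, dx)] = float(np.sum(ref[crop] * rolled[crop]))
    return max(scores, key=scores.get)

dy, dx = best_shift(reference, image)

# Step 3: resample the image onto the reference (map) grid.
rectified = ndimage.shift(image, (-dy, -dx), mode="nearest")
print("estimated shift:", (dy, dx))  # ~ (3, -5)
```

In the operational system the misregistration is, of course, modeled locally and combined with the sensor model and the DEM rather than reduced to one global shift.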
Despite the importance of image registration to data integration and fusion in many fields, only a few books are dedicated to the topic, and none of the currently available books deals exclusively with image registration of Earth (or space) satellite imagery. This is the first book dedicated fully to this discipline. The book surveys and presents various algorithmic approaches and applications of image registration in remote sensing. Although there are numerous approaches to the problem of registration, no single, clear solution stands out as a standard in the field of remote sensing, and the problem remains open for new, innovative approaches, as well as for careful, systematic integration of existing methods. This book is intended to bring together a number of image registration approaches for study and comparison, so that remote sensing scientists can review existing methods for application to their problems, and researchers in image registration can review remote sensing applications to understand how to improve their algorithms. The book contains invited contributions by many of the best researchers in the field, including chapters relating the experiences of several Earth science research teams working with operational software on imagery from major Earth satellite systems. Such systems include the Advanced Very High Resolution Radiometer (AVHRR), Landsat, the Moderate Resolution Imaging Spectroradiometer (MODIS), Satellite Pour l'Observation de la Terre (SPOT), VEGETATION, the Multi-angle Imaging SpectroRadiometer (MISR), METEOSAT, and the Sea-viewing Wide Field-of-view Sensor (SeaWiFS).
An automated method has been developed for performing geolocation assessment on global satellite-based Earth remote sensing data. The method utilizes islands as targets that can be readily located in the sensor data and identified with reference locations. The essential elements are an algorithm for classifying the sensor data according to the source, a reference catalog of island locations, and a robust algorithm for matching viewed islands with the catalog locations. This method was originally developed and tested for the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) before its launch in 1997, and was refined using the flight data after launch. The results have been used for both ongoing assessment of geolocation accuracy and development of improvements to the geolocation processing algorithms. The method has also been applied to other moderate-resolution satellite sensors.
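As a toy sketch of the matching step, the snippet below (illustrative names and numbers only, not the operational SeaWiFS algorithm) matches observed island centroids to a small reference catalog by nearest neighbor and takes the median residual as a robust estimate of the systematic geolocation error.

```python
# Toy island-matching sketch: nearest-neighbor matching plus a robust
# (median) estimate of the systematic geolocation error.
import numpy as np

catalog = np.array([[10.2, 142.1], [11.5, 143.0], [9.8, 140.7],
                    [12.3, 141.4], [10.9, 144.2]])   # island lat, lon (deg)

# Observed centroids: a subset of the catalog plus a systematic bias
# (e.g., an uncorrected attitude error) and small per-island noise.
bias = np.array([0.02, -0.03])
rng = np.random.default_rng(1)
observed = catalog[[0, 2, 3]] + bias + 0.002 * rng.standard_normal((3, 2))

# Match each observed island to its nearest catalog island.
d = np.linalg.norm(observed[:, None, :] - catalog[None, :, :], axis=2)
nearest = d.argmin(axis=1)

# Robust geolocation-error estimate: median of the matched residuals.
residuals = observed - catalog[nearest]
print("estimated bias (deg):", np.median(residuals, axis=0))  # ~ (0.02, -0.03)
```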
Introduction
The determination of geolocation for global Earth imaging sensors is generally performed in-line with initial (e.g., Level 0–1) data processing, using parametric algorithms based on navigation (satellite orbit and attitude) and telemetry data. The accuracy of the satellite navigation data may be adequate to meet geolocation requirements at the resolution of global sensors (250 to 1000 m) without the need for manual intervention or postprocessing corrections. However, this approach does need verification and feedback using the sensed image data. Both navigation data and sensor geometric models are subject to systematic and time-varying errors, and a means of ongoing assessment of geolocation accuracy is needed over the full range (temporal and geographic) of data collection.
Wavelets provide a multiresolution description of images according to a well-chosen division of the space-frequency plane. This description provides information about various features present in the images that can be utilized to perform registration of remotely sensed images. In the last few years, many wavelet filters have been proposed for applications such as compression; in this chapter, we review the general principle of wavelet decomposition and the many filters that have been proposed for wavelet transforms, as they apply to image registration. In particular, we consider orthogonal wavelets, spline wavelets, and two pyramids obtained from a steerable decomposition. These different filters are studied and compared using synthetic datasets generated from a Landsat-Thematic Mapper (TM) scene.
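As a brief illustration of such decompositions, the snippet below uses the PyWavelets package (assumed available) to build three-level decompositions with an orthogonal Daubechies filter and a biorthogonal spline filter; either pyramid of approximation and detail subbands can supply features for registration.

```python
# Multiresolution wavelet decompositions with two filter families.
import numpy as np
import pywt

image = np.random.default_rng(0).standard_normal((256, 256))

for name in ("db4", "bior2.2"):          # orthogonal vs. biorthogonal spline
    coeffs = pywt.wavedec2(image, name, level=3)
    approx, details = coeffs[0], coeffs[1:]   # coarsest first
    print(name, "coarsest approximation:", approx.shape,
          "detail levels:", [d[0].shape for d in details])
```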
Introduction
The main thrust of this chapter is to describe image registration methods that focus on computational speed and on the ability to handle multisensor data. As was described in Chapter 1 and in Brown (1992), any image registration method can be described by a feature space, a search space, a search strategy, and a similarity metric. Utilizing wavelets for image registration not only defines the type of features that will be matched, but also enables the matching process to follow a multiresolution search strategy. Such iterative matching at multiple scales is one of the main factors determining the accuracy of these methods.
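The sketch below illustrates this multiresolution search strategy in its simplest form, assuming a pure integer translation and a smooth-and-subsample pyramid (the wavelet approximation subbands discussed in this chapter would play the same role); all sizes and search radii are arbitrary choices for the example.

```python
# Coarse-to-fine translation search over an image pyramid (toy example).
import numpy as np
from scipy import ndimage

def pyramid(img, levels):
    out = [img]
    for _ in range(levels - 1):
        out.append(ndimage.zoom(ndimage.gaussian_filter(out[-1], 1), 0.5))
    return out[::-1]                       # coarsest level first

def refine(ref, img, guess, radius=3):
    best, (gy, gx) = None, guess
    for dy in range(gy - radius, gy + radius + 1):
        for dx in range(gx - radius, gx + radius + 1):
            s = np.sum(ref * np.roll(img, (-dy, -dx), axis=(0, 1)))
            if best is None or s > best[0]:
                best = (s, (dy, dx))
    return best[1]

rng = np.random.default_rng(2)
ref = ndimage.gaussian_filter(rng.standard_normal((256, 256)), 3)
img = np.roll(ref, (17, -9), axis=(0, 1))   # known integer shift

shift = (0, 0)
for r, i in zip(pyramid(ref, 4), pyramid(img, 4)):
    # Double the previous estimate when moving to the next finer level,
    # then refine it within a small search window.
    shift = refine(r, i, (2 * shift[0], 2 * shift[1]))
print("estimated shift:", shift)            # ~ (17, -9)
```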
The field of satellite sensing and imaging is still expanding rapidly. The new millennium will see developments related not only to scientific missions but also to an explosion of commercial satellite systems providing data with economic and sociopolitical implications, with telecommunications taking a large place in the space market. In space, the return to the Moon and Mars, as well as the exploration of more distant planets, will bring a growing number of remote satellite systems providing unprecedented amounts of data to analyze. The Lunar Reconnaissance Orbiter (LRO) is merely one recent example of such systems. Automatic, accurate, fast, and reliable image registration will increase the success of these future endeavors by providing data products that foster interdisciplinary research and fast turnaround of information for applications with societal benefits.
This book has brought together invited contributions by 36 distinguished researchers in the field to present a coherent and detailed overview of current research and practice in the application of image registration to satellite imagery. The contributions cover the definition of the problem, theoretical issues in accuracy and efficiency, fundamental algorithms used in its solution, and real world case studies of image registration software applied to imagery from operational satellite systems.
As the field keeps evolving, we anticipate that new research will deal with combining multiple band-to-band registrations, extending 3D medical registration methodologies (Goshtasby, 2005; Hajnal et al., 2001) to the registration of hyperspectral data, and automatically extracting windows of interest (Plaza et al., 2007) to guide more refined registration techniques.
This chapter covers a general class of image registration algorithms that apply numerical optimization to similarity measures relating to cumulative functions of image intensities. An example is an algorithm that minimizes the least-squares difference in image intensities via an iterative gradient-descent approach. Algorithms in this class, which work well in 2D and 3D, can be applied simultaneously to multiple bands in an image pair and to images with significant radiometric differences, and can accurately recover subpixel transformations. The algorithms discussed differ in the specific similarity measure, the numerical method used for optimization, and the actual computation used. The similarity measure can vary from a measure that uses a radiometric function to account for nonlinear image intensity differences in the least-squares equations, to one based on mutual information, which accounts for image intensity differences not captured by a standard functional model. The numerical methods considered are basic gradient descent, a method based on the Levenberg-Marquardt technique, and Spall's algorithm. This chapter reviews the above registration algorithms and classifies them by their various elements. It also analyzes the image classes to which variants of these algorithms are best suited.
Introduction
We consider in this chapter a class of image registration algorithms that apply numerical techniques for optimizing some similarity measures that relate only to the image intensities (or a function of the image intensities) of an image pair.
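As one concrete, simplified instance of this class, the sketch below minimizes the least-squares (SSD) intensity difference over a translation using iterative Gauss-Newton steps (a close relative of plain gradient descent); the synthetic images, noise parameters, and iteration count are illustrative choices, not this chapter's exact formulation.

```python
# SSD minimization over a subpixel translation via Gauss-Newton steps.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
ref = ndimage.gaussian_filter(rng.standard_normal((128, 128)), 4)
img = ndimage.shift(ref, (0.6, -1.3), mode="nearest")  # subpixel ground truth

t = np.zeros(2)                       # current translation estimate
for _ in range(20):
    warped = ndimage.shift(img, -t, mode="nearest")    # undo current guess
    err = (warped - ref).ravel()      # SSD residual
    gy, gx = np.gradient(warped)      # Jacobian of the warp w.r.t. t
    J = np.stack([gy.ravel(), gx.ravel()], axis=1)
    step, *_ = np.linalg.lstsq(J, -err, rcond=None)    # normal equations
    t += step
print("estimated shift:", t)          # ~ (0.6, -1.3)
```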
Correlation is an extremely powerful technique for finding similarities between two images. This chapter describes why correlation has proved to be a valuable tool, how to implement correlation to achieve extremely high performance processing, and indicates the limits of correlation so that it can be used where it is appropriate. Section 4.1 gives the underlying theory for fast correlation, which is the well-known convolution theorem. It is this theory that gives correlation a huge processing advantage in many applications. Also covered is normalized correlation, which is a form of correlation that allows images to be matched in spite of differences in the images due to uniform changes of intensity. Section 4.2 treats the practical implementation of correlation, including the use of masks to eliminate irrelevant or obscured portions of images. The implementations described in this section treat images that differ only by translation, and otherwise have the same orientation and scale. Section 4.3 introduces an extension of the basic algorithm to allow for small differences of scale and orientation as well as translation. Section 4.4 presents very high precision registration. This section shows how to make use of Fourier phase to determine translational differences down to a few hundredths of a pixel. The images must be oriented and scaled identically and have a translational difference that does not exceed half of a pixel. Section 4.5 deals with fast rotational registration that uses phase correlation to discover the rotational difference between two images.
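To make the idea of Section 4.1 concrete, the snippet below computes a circular cross-correlation with FFTs, per the convolution theorem, to locate a small template in a larger image; it omits the normalization and masking machinery of Sections 4.1 and 4.2 and uses arbitrary synthetic data.

```python
# Fast (FFT-based) cross-correlation to locate a template in an image.
import numpy as np

rng = np.random.default_rng(4)
image = rng.standard_normal((512, 512))
template = image[200:232, 330:362].copy()   # 32x32 patch taken at (200, 330)

# Zero-mean the template and embed it at the origin of a full-size array.
kernel = np.zeros_like(image)
kernel[:32, :32] = template - template.mean()

# Convolution theorem: corr = IFFT( FFT(image) * conj(FFT(kernel)) ).
corr = np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(kernel))).real
print("peak at:", np.unravel_index(np.argmax(corr), corr.shape))  # (200, 330)
```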
To enable automated (without human intervention) AVHRR (Advanced Very High Resolution Radiometer) image navigation, a base image is defined and the maximum cross-correlation (MCC) method is used to automatically compute the satellite attitude parameters required to geometrically correct images to this base image. The automated attitude corrections are shown to be more accurate than the traditional linear translation methods and to provide a significant improvement in geolocation accuracy over two other AVHRR image navigation methods. Geolocation accuracies are given for near-real-time use of this method for operational applications using daily imagery off the U.S. East and West Coasts. A further application of the attitude corrections is demonstrated whereby attitude corrections computed over land can be carried forward in the satellite's orbit to accurately navigate imagery over the open ocean, where map reference points are not available.
Introduction
The accurate georegistration of satellite imagery typically requires the application of an orbital model to predict the location of the spacecraft, as well as an instrument pointing model to determine the geolocation of the sensor field of view (FOV) (Rosborough et al., 1994). The implementation of these two models is straightforward and easily automated. However, the obtained registration accuracy is dependent on the accuracies of the timing of the data and the spacecraft attitude (roll, pitch, and yaw) (Rosborough et al., 1994; Baldwin and Emery, 1994).
Currently, several high-level satellite-based data products relevant to Earth system science and global change research are derived from medium spatial resolution remote sensing observations. Because of their high temporal frequency and broad spatial coverage, these products, also called biophysical variables, are very useful for describing the mass and energy fluxes between the Earth's surface and the atmosphere. In order to check the relevance of using such products as inputs to land surface models, it is important to assess their accuracy. Validation consists of evaluating, by independent means, the quality of these biophysical variables. Currently, these variables are derived from coarse-resolution remote sensing observations. Validation methods consist of generating a ground truth map of these products at high spatial resolution, produced using ground measurements of the biophysical variable and radiometric data from a high spatial resolution sensor. The relationship between the biophysical variable and the radiometric data, called the transfer function, makes it possible to extend the local ground measurements to the whole high spatial resolution image. The resulting biophysical variable map is then aggregated for comparison with the medium spatial resolution satellite products. Three important geometrical issues influence the validation results: (1) The localization error of the local ground measurements, (2) the localization error between high- and medium-resolution images, and (3) the point spread function (PSF) associated with the medium spatial resolution remote sensing observations. This work analyzes the influence of these issues on the validation process.
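As a toy illustration of the aggregation step, the snippet below blurs a synthetic high-resolution biophysical map with an assumed Gaussian PSF and samples it on the medium-resolution grid; the 20:1 resolution ratio and PSF width are arbitrary choices for the example, not values from this chapter.

```python
# Aggregating a high-resolution "ground truth" map to a coarse grid.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(5)
ratio = 20                                      # e.g., 50-m -> 1000-m pixels
high_res = ndimage.gaussian_filter(rng.random((400, 400)), 10)  # e.g., LAI map

# Apply the medium-resolution sensor's PSF (assumed Gaussian here),
# then sample at the centers of the coarse pixels.
psf_sigma = 0.5 * ratio
blurred = ndimage.gaussian_filter(high_res, psf_sigma)
aggregated = blurred[ratio // 2 :: ratio, ratio // 2 :: ratio]
print("aggregated map:", aggregated.shape)      # (20, 20) coarse pixels
```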
Registration of multiple-source imagery is one of the most important issues in dealing with Earth science remote sensing data, where information from multiple sensors exhibiting various resolutions must be integrated. Issues ranging from different sensor geometries and spectral responses to varying illumination conditions, seasons, and amounts of noise need to be dealt with when designing a new image registration algorithm. This chapter represents a first attempt at characterizing a framework that addresses these issues, in which possible choices for the three components of any registration algorithm are validated and combined to provide different registration algorithms. A few of these algorithms were tested on three different types of datasets – synthetic, multitemporal, and multispectral. This chapter presents the results of these experiments and introduces a prototype registration toolbox.
Introduction
In Chapter 1, we showed how the analysis of Earth science data for applications, such as the study of global environmental changes, involves the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, spectral, and spatial resolutions. For such applications, the first required step is fast and automatic image registration which can provide precision correction of satellite imagery, band-to-band calibration, and data reduction for ease of transmission. Furthermore, future decision support systems, intelligent sensors and adaptive constellations will rely on real- or near-real-time interpretation of Earth observation data, performed both onboard and at ground-based stations.
Meteorological images are acquired routinely from Sun-synchronous and geostationary platforms to meet the needs of operational forecasting and climate researchers. In the future, polar regions may be covered from Molniya orbits as well. The use of geostationary and Molniya orbits permits rapid revisits to the same site for the purpose of observing the evolution of weather systems. The large orbital altitudes for these systems introduce special problems for controlling georegistration error. This chapter reviews the subject of georegistration for meteorological data, emphasizing approaches for measuring and controlling georegistration error.
Introduction to meteorological satellites
Meteorological satellite images are captured from Low Earth Orbit (LEO) or Geostationary Earth Orbit (GEO) using visible, infrared (IR), and microwave bands. The U.S. Polar Operational Environmental Satellite (POES) and the Defense Meteorological Satellite Program (DMSP) are two such LEO systems with operational histories reaching back into the 1960s. Future LEO meteorological remote sensing needs will be fulfilled by the U.S. National Polar-orbiting Operational Environmental Satellite System (NPOESS) and by the European MetOp satellites (a series of polar orbiting meteorological satellites operated by the European Organization for the Exploitation of Meteorological Satellites), using instruments from the USA and Europe. The U.S. Geostationary Operational Environmental Satellite (GOES) program and the European METEOSAT program are two such GEO programs with similarly long operational histories. Japan and India have also operated GEO weather satellites and will continue to do so in the future.
A matched filter is a linear filtering device developed for signal detection in a noisy environment. The matched filtering technique can be directly employed in image registration for the estimation of image translation. The transfer function of a classic matched filter is the Fourier conjugate of the reference image, divided by the power spectrum of the system noise. If only the phase term of the reference image is used in the construction of the transfer function, the so-called phase-only matched filter generates a sharper peak at the maximum output than that obtained by a classic matched filter. Thus, the image translation can be detected more reliably. If rotation and scale change are also involved, the images to be registered can first be transformed into the Fourier-Mellin domain. The Fourier-Mellin transform of an image is translation invariant and represents rotation and scale changes as translations in the angular and radial coordinates. Employing matched filtering or phase-only matched filtering on the Fourier-Mellin transforms of the images yields the estimation of the image rotation and scaling. After correcting for rotation and scaling, the image translation can be determined by using the matched filtering or phase-only matched filtering method.
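The snippet below sketches phase-only matched filtering for the pure-translation case: keeping only the phase of the cross spectrum produces a sharp, delta-like correlation peak at the displacement. Handling rotation and scale would require the additional log-polar (Fourier-Mellin) resampling step described above; the data here are synthetic.

```python
# Phase-only matched filtering for translation estimation (toy example).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(6)
ref = ndimage.gaussian_filter(rng.standard_normal((256, 256)), 2)
img = np.roll(ref, (23, -41), axis=(0, 1))      # translated input image

R, G = np.fft.fft2(ref), np.fft.fft2(img)
cross = np.conj(R) * G                          # cross-power spectrum
corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real  # phase only
peak = np.unravel_index(np.argmax(corr), corr.shape)
shift = [p if p < n // 2 else p - n for p, n in zip(peak, corr.shape)]
print("estimated translation:", shift)          # ~ (23, -41)
```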
Introduction
Matched filtering is a classic signal processing technique widely used in signal detection (Turin, 1960; Vander Lugt, 1969; Whalen, 1971; Kumar and Pochapsky, 1986). A matched filter is a linear filter with a transfer function that maximizes the output signal-to-noise ratio (SNR) for an input signal with known properties.
As the primary archive for the data acquired by the Landsat series of spacecraft, the U.S. Geological Survey Center for Earth Resources Observation and Science (EROS) is responsible for the processing systems that capture, correct, and distribute Landsat image data products. The Landsat ground system includes product generation components that apply the radiometric and geometric processing necessary to convert the digital detector samples acquired by the sensor to top-of-atmosphere radiance measurements referenced to an Earth-fixed coordinate system. These product generation systems implement geometric correction algorithms that use the instrument and spacecraft support data, provided in the Landsat data stream, to construct a model of the geometric relationship between the acquired image data samples and an Earth-fixed ground reference system. These support data include onboard measurements of instrument timing, spacecraft attitude (orientation) and jitter, and estimates of the spacecraft position and velocity derived from ground tracking data. The accuracy of the basic systematic geometric registration model is limited by uncertainties in these supporting data. In particular, the spacecraft position and attitude data typically contain residual biases on the order of tens to hundreds of meters. Fortunately, these errors are usually slowly varying, allowing for the registration of multiple Landsat images using simple, low-order techniques. When higher-accuracy products are required, additional capabilities of the Landsat ground system are employed to perform more precise geometric correction.
Multiview images of a 3D scene contain sharp local geometric differences. To register such images, a transformation function is needed that can accurately model local geometric differences between the images. A weighted linear transformation function for the registration of multiview images is proposed. Properties of this transformation function are explored and its accuracy in image registration is compared with accuracies of other transformation functions.
Introduction
Because satellite images are acquired from high altitudes at relatively low resolution, overlapping images exhibit little to no local geometric difference, although global geometric differences may exist between them. Surface spline (Goshtasby, 1988) and multiquadric (Zagorchev and Goshtasby, 2006) transformation functions have been found to model such global geometric differences between overlapping satellite images effectively.
High-resolution multiview images of a scene captured by low-flying aircraft contain considerable local geometric differences. Local neighborhoods in the scene may appear different across multiview images due to variation in local scene relief and differences in imaging view angle. Global transformation functions that successfully register satellite images might not be able to satisfactorily register multiview aerial images.
A new transformation function for the registration of multiview aerial images is proposed. The transformation function is defined by a weighted sum of linear functions, each containing information about the geometric difference between corresponding local neighborhoods in the images.
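A rough sketch of such a weighted sum of linear functions is given below: each control point carries a local affine map fitted to its nearest neighbors, and an arbitrary point is mapped by a distance-weighted blend of these local maps. The Gaussian weights, neighborhood size, and test data are illustrative assumptions, not the chapter's exact formulation.

```python
# Weighted-linear transformation sketch: blend of local affine maps.
import numpy as np

def fit_affine(src, dst):
    A = np.hstack([src, np.ones((len(src), 1))])    # rows [x y 1]
    coef, *_ = np.linalg.lstsq(A, dst, rcond=None)  # 3x2 affine coefficients
    return coef

def weighted_linear(p, src_pts, dst_pts, k=4, sigma=0.3):
    # Gaussian weights based on the distance from p to each control point.
    d = np.linalg.norm(src_pts - p, axis=1)
    w = np.exp(-(d / sigma) ** 2)
    out = np.zeros(2)
    for i, s in enumerate(src_pts):
        # Local affine fitted to this control point's k nearest neighbors.
        nn = np.argsort(np.linalg.norm(src_pts - s, axis=1))[:k]
        coef = fit_affine(src_pts[nn], dst_pts[nn])
        out += w[i] * (np.array([*p, 1.0]) @ coef)
    return out / w.sum()

rng = np.random.default_rng(7)
src = rng.random((12, 2))
dst = src @ np.array([[1.0, 0.1], [-0.1, 1.0]]) + 0.05  # mild test warp
print(weighted_linear(np.array([0.5, 0.5]), src, dst))  # ~ (0.5, 0.6)
```

Because each local map absorbs the geometry of its own neighborhood, the blended transformation can follow local relief-induced distortions that a single global function would smooth over.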