Highly pathogenic avian influenza causes mass mortality in Sandwich Tern Thalasseus sandvicensis breeding colonies across north-western Europe
- Ulrich Knief, Thomas Bregnballe, Ibrahim Alfarwi, Mónika Z. Ballmann, Allix Brenninkmeijer, Szymon Bzoma, Antoine Chabrolle, Jannis Dimmlich, Elias Engel, Ruben Fijn, Kim Fischer, Bernd Hälterlein, Matthias Haupt, Veit Hennig, Christof Herrmann, Ronald in ‘t Veld, Elisabeth Kirchhoff, Mikael Kristersson, Susanne Kühn, Kjell Larsson, Rolf Larsson, Neil Lawton, Mardik Leopold, Sander Lilipaly, Leigh Lock, Régis Marty, Hans Matheve, Włodzimierz Meissner, Paul Morrison, Stephen Newton, Patrik Olofsson, Florian Packmor, Kjeld T. Pedersen, Chris Redfern, Francesco Scarton, Fred Schenk, Olivier Scher, Lorenzo Serra, Alexandre Sibille, Julian Smith, Wez Smith, Jacob Sterup, Eric Stienen, Viola Strassner, Roberto G. Valle, Rob S. A. van Bemmelen, Jan Veen, Muriel Vervaeke, Ewan Weston, Monika Wojcieszek, Wouter Courtens
-
- Journal:
- Bird Conservation International / Volume 34 / 2024
- Published online by Cambridge University Press:
- 02 February 2024, e6
-
- Article
-
- Open access
-
In 2022, highly pathogenic avian influenza (HPAI) A(H5N1) virus clade 2.3.4.4b became enzootic and caused mass mortality in Sandwich Tern Thalasseus sandvicensis and other seabird species across north-western Europe. We present data on the characteristics of the spread of the virus between and within breeding colonies and the number of dead adult Sandwich Terns recorded at breeding sites throughout north-western Europe. Within two months of the first reported mortalities, 20,531 adult Sandwich Terns were found dead, which is >17% of the total north-western European breeding population. This is probably an underestimate of total mortality, as many carcasses are likely to have gone unnoticed and unreported. Within affected colonies, almost all chicks died. After the peak of the outbreak, in a colony established by late breeders, 25.7% of tested adults showed immunity to HPAI subtype H5. Removal of carcasses was associated with lower levels of mortality at affected colonies. More research on the sources and modes of transmission, incubation times, effective containment, and immunity is urgently needed to combat this major threat to colonial seabirds.
Evaluating Plasma GFAP for the Detection of Alzheimer’s Disease Dementia
- Madeline Ally, Henrik Zetterberg, Kaj Blennow, Nicholas J. Ashton, Thomas K. Karikari, Hugo Aparicio, Michael A. Sugarman, Brandon Frank, Yorghos Tripodis, Ann C. McKee, Thor D. Stein, Brett Martin, Joseph N. Palmisano, Eric G. Steinberg, Irene Simkina, Lindsay Farrer, Gyungah Jun, Katherine W. Turk, Andrew E. Budson, Maureen K. O’Connor, Rhoda Au, Wei Qiao Qiu, Lee E. Goldstein, Ronald Killiany, Neil W. Kowall, Robert A. Stern, Jesse Mez, Michael L. Alosco
-
- Journal:
- Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press:
- 21 December 2023, pp. 408-409
-
- Article
-
Objective:
Blood-based biomarkers represent a scalable and accessible approach for the detection and monitoring of Alzheimer’s disease (AD). Plasma phosphorylated tau (p-tau) and neurofilament light (NfL) are validated biomarkers for the detection of tau and neurodegenerative brain changes in AD, respectively. There is now emphasis to expand beyond these markers to detect and provide insight into the pathophysiological processes of AD. To this end, a reactive astrocytic marker, namely plasma glial fibrillary acidic protein (GFAP), has been of interest. Yet, little is known about the relationship between plasma GFAP and AD. Here, we examined the association between plasma GFAP, diagnostic status, and neuropsychological test performance. Diagnostic accuracy of plasma GFAP was compared with plasma measures of p-tau181 and NfL.
Participants and Methods: This sample included 567 participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC) Longitudinal Clinical Core Registry, including individuals with normal cognition (n=234), mild cognitive impairment (MCI) (n=180), and AD dementia (n=153). The sample included all participants who had a blood draw. Participants completed a comprehensive neuropsychological battery (sample sizes across tests varied due to missingness). Diagnoses were adjudicated during multidisciplinary diagnostic consensus conferences. Plasma samples were analyzed using the Simoa platform. Binary logistic regression analyses tested the association between GFAP levels and diagnostic status (i.e., cognitively impaired due to AD versus unimpaired), controlling for age, sex, race, education, and APOE e4 status. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate diagnostic groups compared with plasma p-tau181 and NfL. Linear regression models tested the association between plasma GFAP and neuropsychological test performance, accounting for the above covariates.
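The analysis pattern described above (a binary logistic regression on a plasma marker plus covariates, with AUC computed from the predicted probabilities) can be sketched in a few lines of NumPy. The data below are synthetic stand-ins, not the BU ADRC sample, and the variable names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the study data (NOT the BU ADRC sample): one
# z-scored biomarker plus one covariate; impaired cases get higher levels.
n = 400
y = rng.integers(0, 2, n)                    # 0 = unimpaired, 1 = impaired
gfap_z = rng.normal(loc=0.8 * y)             # biomarker shifted in cases
age_z = rng.normal(size=n)                   # illustrative covariate
X = np.column_stack([np.ones(n), gfap_z, age_z])

def fit_logistic(X, y, lr=0.1, steps=3000):
    """Binary logistic regression fit by plain gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w = fit_logistic(X, y)
prob = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities

def auc(scores, labels):
    """AUC via the rank identity: P(case score > control score)."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

odds_ratio = float(np.exp(w[1]))             # per-SD odds ratio for the marker
print(round(auc(prob, y), 2), round(odds_ratio, 2))
```

With the case/control shift built into the synthetic data, the fitted odds ratio exceeds 1 and the AUC lands in the moderate-discrimination range, mirroring the shape (not the values) of the reported results.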
Results: The mean (SD) age of the sample was 74.34 (7.54), 319 (56.3%) were female, 75 (13.2%) were Black, and 223 (39.3%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having cognitive impairment (GFAP z-score transformed: OR=2.233, 95% CI [1.609, 3.099], p<0.001; non-z-transformed: OR=1.004, 95% CI [1.002, 1.006], p<0.001). ROC analyses, comprising GFAP and the above covariates, showed plasma GFAP discriminated the cognitively impaired from the unimpaired (AUC=0.75) and was similar, but slightly superior, to plasma p-tau181 (AUC=0.74) and plasma NfL (AUC=0.74). A joint panel of the plasma markers had the greatest discrimination accuracy (AUC=0.76). Linear regression analyses showed that higher GFAP levels were associated with worse performance on neuropsychological tests assessing global cognition, attention, executive functioning, episodic memory, and language abilities (ps<0.001), as well as higher CDR Sum of Boxes scores (p<0.001).
Conclusions: Higher plasma GFAP levels differentiated participants with cognitive impairment from those with normal cognition and were associated with worse performance on all neuropsychological tests assessed. GFAP had similar accuracy in detecting those with cognitive impairment compared with p-tau181 and NfL; however, a panel of all three biomarkers was optimal. These results support the utility of plasma GFAP in AD detection and suggest that the pathological processes it represents might play an integral role in the pathogenesis of AD.
Antemortem Plasma GFAP Predicts Alzheimer’s Disease Neuropathological Changes
- Madeline Ally, Henrik Zetterberg, Kaj Blennow, Nicholas J. Ashton, Thomas K. Karikari, Hugo Aparicio, Michael A. Sugarman, Brandon Frank, Yorghos Tripodis, Brett Martin, Joseph N. Palmisano, Eric G. Steinberg, Irene Simkina, Lindsay Farrer, Gyungah Jun, Katherine W. Turk, Andrew E. Budson, Maureen K. O’Connor, Rhoda Au, Wei Qiao Qiu, Lee E. Goldstein, Ronald Killiany, Neil W. Kowall, Robert A. Stern, Jesse Mez, Bertran R. Huber, Ann C. McKee, Thor D. Stein, Michael L. Alosco
-
- Journal:
- Journal of the International Neuropsychological Society / Volume 29 / Issue s1 / November 2023
- Published online by Cambridge University Press:
- 21 December 2023, pp. 409-410
-
- Article
-
Objective:
Blood-based biomarkers offer a more feasible alternative to current in vivo measures for the detection and management of Alzheimer’s disease (AD) and the study of its mechanisms. Given their novelty, these plasma biomarkers must be assessed against postmortem neuropathological outcomes for validation. Research has shown utility in plasma markers of the proposed AT(N) framework; however, recent studies have stressed the importance of expanding this framework to include other pathways. There are promising data supporting the usefulness of plasma glial fibrillary acidic protein (GFAP) in AD, but GFAP-to-autopsy studies are limited. Here, we tested the association between plasma GFAP and AD-related neuropathological outcomes in participants from the Boston University (BU) Alzheimer’s Disease Research Center (ADRC).
Participants and Methods: This sample included 45 participants from the BU ADRC who had a plasma sample within 5 years of death and donated their brain for neuropathological examination. The most recent plasma samples were analyzed using the Simoa platform. Neuropathological examinations followed the National Alzheimer’s Coordinating Center procedures and diagnostic criteria. The NIA-Reagan Institute criteria were used for the neuropathological diagnosis of AD. Measures of GFAP were log-transformed. Binary logistic regression analyses tested the association between GFAP and autopsy-confirmed AD status, as well as with semi-quantitative ratings of regional atrophy (none/mild versus moderate/severe). Ordinal logistic regression analyses tested the association between plasma GFAP and Braak stage and CERAD neuritic plaque score. Area under the curve (AUC) statistics from receiver operating characteristics (ROC) using predicted probabilities from binary logistic regression examined the ability of plasma GFAP to discriminate autopsy-confirmed AD status. All analyses controlled for sex, age at death, years between last blood draw and death, and APOE e4 status.
Results: Of the 45 brain donors, 29 (64.4%) had autopsy-confirmed AD. The mean (SD) age of the sample at the time of blood draw was 80.76 (8.58) and there were 2.80 (1.16) years between the last blood draw and death. The sample included 20 (44.4%) females, 41 (91.1%) were White, and 20 (44.4%) were APOE e4 carriers. Higher GFAP concentrations were associated with increased odds for having autopsy-confirmed AD (OR=14.12, 95% CI [2.00, 99.88], p=0.008). ROC analysis showed plasma GFAP accurately discriminated those with and without autopsy-confirmed AD on its own (AUC=0.75), and discrimination strengthened as the above covariates were added to the model (AUC=0.81). Increases in GFAP levels corresponded to increases in Braak stage (OR=2.39, 95% CI [0.71, 4.07], p=0.005), but not CERAD ratings (OR=1.24, 95% CI [0.004, 2.49], p=0.051). Higher GFAP levels were associated with greater temporal lobe atrophy (OR=10.27, 95% CI [1.53, 69.15], p=0.017), but this was not observed for any other region.
Conclusions: The current results show that antemortem plasma GFAP is associated with non-specific AD neuropathological changes at autopsy. Plasma GFAP could be a useful and practical biomarker for assisting in the detection of AD-related changes, as well as for the study of disease mechanisms.
Cereal and oil seed crops response to organic nitrogen when grown in rotation with annual aerial-seeded pasture legumes
- Angelo Loi, Dean T. Thomas, Ronald J. Yates, Robert J. Harrison, Mario D'Antuono, Giovanni A. Re, Hayley C. Norman, John G. Howieson
-
- Journal:
- The Journal of Agricultural Science / Volume 160 / Issue 3-4 / June 2022
- Published online by Cambridge University Press:
- 29 June 2022, pp. 207-219
-
- Article
-
Nitrogen fixation from pasture legumes is a fundamental process that contributes to the profitability and sustainability of dryland agricultural systems. The aim of this research was to determine whether well-managed pastures, based on aerial-seeded pasture legumes, could partially or wholly meet the nitrogen (N) requirements of subsequent grain crops in an annual rotation. Fifteen experiments were conducted in Western Australia with wheat, barley or canola crops grown in a rotation that included the pasture legume species French serradella (Ornithopus sativus), biserrula (Biserrula pelecinus), bladder clover (Trifolium spumosum), annual medics (Medicago spp.) and the non-aerial seeded subterranean clover (Trifolium subterraneum). After the pasture phase, five rates of inorganic N fertilizer (urea, applied at 0, 23, 46, 69 and 92 kg/ha) were applied to subsequent cereal and oil seed crops. The yields of wheat grown after serradella, biserrula and bladder clover, without the use of applied N fertilizer, were consistent with the target yields for growing conditions of the trials (2.3 to 5.4 t/ha). Crop yields after phases of these pasture legume species were similar to, or higher than, those following subterranean clover or annual medics. The results of this study suggest a single season of a legume-dominant pasture may provide sufficient organic N in the soil to grow at least one crop, without the need for inorganic N fertilizer application. This has implications for reducing inorganic N requirements and the carbon footprint of cropping in dryland agricultural systems.
Epigenetic correlates of neonatal contact in humans
- Sarah R. Moore, Lisa M. McEwen, Jill Quirt, Alex Morin, Sarah M. Mah, Ronald G. Barr, W. Thomas Boyce, Michael S. Kobor
-
- Journal:
- Development and Psychopathology / Volume 29 / Issue 5 / December 2017
- Published online by Cambridge University Press:
- 22 November 2017, pp. 1517-1538
-
- Article
-
Animal models of early postnatal mother–infant interactions have highlighted the importance of tactile contact for biobehavioral outcomes via the modification of DNA methylation (DNAm). The role of normative variation in contact in early human development has yet to be explored. In an effort to translate the animal work on tactile contact to humans, we applied a naturalistic daily diary strategy to assess the link between maternal contact with infants and epigenetic signatures in children 4–5 years later, with respect to multiple levels of child-level factors, including genetic variation and infant distress. We first investigated DNAm at four candidate genes: the glucocorticoid receptor gene, nuclear receptor subfamily 3, group C, member 1 (NR3C1), μ-opioid receptor M1 (OPRM1) and oxytocin receptor (OXTR; related to the neurobiology of social bonds), and brain-derived neurotrophic factor (BDNF; involved in postnatal plasticity). Although no candidate gene DNAm sites significantly associated with early postnatal contact, when we next examined DNAm across the genome, differentially methylated regions were identified between high and low contact groups. Using a different application of epigenomic information, we also quantified epigenetic age, and report that for infants who received low contact from caregivers, greater infant distress was associated with younger epigenetic age. These results suggested that early postnatal contact has lasting associations with child biology.
A proposal to standardize soil/solution herbicide distribution coefficients
- Jerome B. Weber, Gail G. Wilkerson, H. Michael Linker, John W. Wilcut, Ross B. Leidy, Scott Senseman, William W. Witt, Michael Barrett, William K. Vencill, David R. Shaw, Thomas C. Mueller, Donnie K. Miller, Barry J. Brecke, Ronald E. Talbert, Thomas F. Peeper
-
- Journal:
- Weed Science / Volume 48 / Issue 1 / February 2000
- Published online by Cambridge University Press:
- 20 January 2017, pp. 75-88
-
- Article
-
Herbicide soil/solution distribution coefficients (Kd) are used in mathematical models to predict the movement of herbicides in soil and groundwater. Herbicides bind to various soil constituents to differing degrees. The universal soil colloid that binds most herbicides is organic matter (OM); however, clay minerals (CM) and metallic hydrous oxides are more retentive for cationic, phosphoric, and arsenic acid compounds. Weakly basic herbicides bind to both organic and inorganic soil colloids. The soil organic carbon (OC) affinity coefficient (Koc) has become a common parameter for comparing herbicide binding in soil; however, because OM and OC determinations vary greatly between methods and laboratories, reported Koc values can differ substantially. This proposal discusses this issue and offers suggestions for obtaining the most accurate Kd, Freundlich constant (Kf), and Koc values for herbicides listed in the WSSA Herbicide Handbook and Supplement.
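The Koc normalization the proposal addresses is conventionally the distribution coefficient divided by the soil's organic-carbon fraction, so any error in the OC determination propagates directly into Koc. A minimal sketch of that relation (the numbers are illustrative, not taken from the paper):

```python
def koc_from_kd(kd_ml_per_g, percent_oc):
    """Normalize a soil/solution distribution coefficient Kd (mL/g) by the
    soil's organic-carbon content: Koc = Kd / f_oc, with f_oc = %OC / 100.
    Any error in the %OC determination propagates directly into Koc."""
    if percent_oc <= 0:
        raise ValueError("organic-carbon content must be positive")
    return kd_ml_per_g * 100.0 / percent_oc

# Illustrative (invented) numbers: Kd = 2.0 mL/g in a soil with 1% OC.
print(koc_from_kd(2.0, 1.0))   # → 200.0
```

The same Kd measured in a lab that reports 2% OC for the same soil would halve the computed Koc, which is exactly the between-laboratory variability the proposal seeks to standardize away.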
10 - Signal Detection Theory
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp 494-550
-
- Chapter
6 - Radiative Transfer and Atmospheric Compensation
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp 295-359
-
- Chapter
-
Summary
The next processing step for calibrated imaging spectrometer data is the conversion from the at-aperture radiance to a surface reflectance signature, for the VNIR/SWIR spectral range, or to a surface emissivity signature and temperature, for the LWIR spectral range. This requires that the transmission and emission of the atmosphere be quantified from the aggregated information that is present in the at-aperture radiance. In the solar reflective range this includes estimates of the amount of water in the scene, which is highly variable, and of the aerosol loading in order to calculate both the diffuse radiance and the contribution from light that is scattered and reflected from the directly viewed pixel and the surrounding area. Similarly, for a sensor operating in the emissive regime, the water and the atmospheric thermal emission are quantified in order to retrieve the ground-leaving radiance that is used to estimate the temperature and emissivity of the surface.
In this chapter, the physics of radiative transfer will be developed first, in order to establish a basis for the discussion of the particular techniques that are applied in the reflective and emissive regimes. The modeling tools that are used to quantitatively describe the processes of absorption, transmission, and emission in a forward sense, i.e. in the direction of light propagation, are introduced prior to delving into the problem of retrieving the surface properties of interest. The reflectance retrieval is treated in detail and includes the derivation of the inverse radiative transfer model, the estimation of the quantities that are required to apply the inverse model, and the algorithms that are utilized, using both physics-based modeling and empirical techniques. The final sections address the problem of atmospheric compensation in the longwave infrared.
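For orientation, one widely used form of the solar-reflective inverse model described above (this is a standard atmospheric-compensation formulation, not an equation reproduced from the chapter) writes the at-aperture radiance for a pixel of reflectance $\rho$ as

```latex
L \;=\; \frac{A\,\rho}{1-\rho_e S} \;+\; \frac{B\,\rho_e}{1-\rho_e S} \;+\; L_a
```

where $\rho_e$ is the spatially averaged background reflectance, $S$ is the spherical albedo of the atmosphere (which produces the multiple-scattering coupling to the surroundings), $L_a$ is the path radiance, and $A$ and $B$ are coefficients fixed by the atmospheric and viewing conditions. Retrieving $\rho$ from $L$ requires estimating these atmospheric quantities, which is exactly the water-vapour and aerosol estimation problem the text describes.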
Radiative Transfer
The propagation of radiation through the atmosphere is described by the theory of radiative transfer. A complete description of radiative transfer is beyond our scope; however, the critical concepts required to understand the processes of atmospheric compensation are introduced within the limitations of a book devoted to remote sensing using an imaging spectrometer. Radiative transfer as a discipline was established by Arthur Schuster's paper in 1905, where he recognized the importance of multiple scattering (Schuster, 1905).
Plate section
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp -
-
- Chapter
Hyperspectral Imaging Remote Sensing
- Physics, Sensors, and Algorithms
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016
-
A practical and self-contained guide to the principles, techniques, models and tools of imaging spectroscopy. Bringing together material from essential physics and digital signal processing, it covers key topics such as sensor design and calibration, atmospheric inversion and model techniques, and processing and exploitation algorithms. Readers will learn how to apply the main algorithms to practical problems, how to choose the best algorithm for a particular application, and how to process and interpret hyperspectral imaging data. A wealth of additional materials accompany the book online, including example projects and data for students, and problem solutions and viewgraphs for instructors. This is an essential text for senior undergraduate and graduate students looking to learn the fundamentals of imaging spectroscopy, and an invaluable reference for scientists and engineers working in the field.
11 - Hyperspectral Data Exploitation
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp 551-620
-
- Chapter
-
Summary
The main objective of hyperspectral imaging remote sensing is the identification of materials or phenomena from their reflectance or emissivity spectra to serve the needs of different applications. In this chapter, building on the understanding of the phenomenology of spectral remote sensing and the introduced signal processing methods, we develop algorithms for some unique hyperspectral imaging applications: detection of hard targets, gas detection, change detection, and image classification. The emphasis is on algorithms developed based on phenomenologically sound signal models, realistic application-driven requirements, and rigorous signal processing procedures, rather than ad hoc algorithms or trendy theoretical algorithms based on unrealistic assumptions.
Target Detection in the Reflective Infrared
The objective of hyperspectral target detection is to find objects of interest (called “hard targets” or simply “targets”) within a hyperspectral image and to discriminate between various target types on the basis of their spectral characteristics. The advantages are automated signal processing and lower spatial resolution requirements for the sensor. In this section we discuss the defining features of the target detection problem, we explain how to choose target detection algorithms, we investigate the consequences of practical limitations, and we evaluate performance using field data. Predicting detection performance using theoretical models is discussed in Section 11.2.
Definition of the Target Detection Problem
Hyperspectral target detection algorithms search for targets by exploiting the spectral characteristics of the target's surface material by looking at the spectrum of each pixel. Depending on the spatial resolution of the sensor, targets of interest may not be clearly resolved, and hence may appear in only a few pixels or even as part of a single pixel (subpixel target). Thus, the first key attribute of the hyperspectral target detection problem is that a “target present” versus “target absent” decision must be made individually for every pixel of a hyperspectral image. In most applications each target is characterized by its spectral signature and detection algorithms make decisions using the target signature and the data cube of the imaged scene.
Typical “search-and-detection” applications include the detection of man-made materials in natural backgrounds for the purpose of search and rescue, and the detection of military vehicles for purposes of defense and intelligence.
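As an illustration of the per-pixel decision rule described above, the following sketch implants a subpixel target in a synthetic data cube and scores every pixel with a matched filter (a background-whitened correlation with the known target signature). The cube, the signature, and the noise levels are all invented for illustration; the matched filter is one standard detector of the family this chapter develops, not necessarily the specific algorithm evaluated there.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic cube: 1000 background pixels in 20 bands, with an invented
# target signature implanted at 30% fill in pixel 0 (a subpixel target).
bands, n_pix = 20, 1000
base = rng.normal(size=bands)
background = base + 0.1 * rng.normal(size=(n_pix, bands))
target_sig = base + np.linspace(0.0, 1.0, bands)   # hypothetical signature
cube = background.copy()
cube[0] = 0.7 * background[0] + 0.3 * target_sig   # linear subpixel mixing

# Matched filter: score every pixel against the target signature,
# whitened by the background covariance -- one decision per pixel.
mu = cube.mean(axis=0)
cov = np.cov(cube.T) + 1e-6 * np.eye(bands)
cov_inv = np.linalg.inv(cov)
d = target_sig - mu
scores = (cube - mu) @ cov_inv @ d / (d @ cov_inv @ d)

print(int(scores.argmax()))   # the implanted target pixel scores highest
```

Thresholding `scores` turns the ranking into the "target present" versus "target absent" decision made individually for every pixel, as the text describes.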
1 - Introduction
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp 1-35
-
- Chapter
-
Summary
This chapter provides an introduction to the basic principles of hyperspectral remote sensing. The main objective is to explain how information about the earth’s surface is conveyed to a remote hyperspectral imaging sensor, which key factors determine the nature and quality of the acquired data, and how the data should be processed to extract meaningful information for practical applications. By definition, hyperspectral imaging systems collect co-aligned images in many relatively narrow bands throughout the ultraviolet, visible, and infrared regions of the electromagnetic spectrum.
Introduction
The term “remote sensing” has several valid definitions. In the broadest sense, according to Webster's dictionary, remote sensing is “the acquisition of information about a distant object without coming into physical contact with it.” For our purposes, remote sensing deals with the acquisition, processing, and interpretation of images, and related data, obtained from aircraft and satellites that record the interaction between matter and electromagnetic radiation.
The detection of electromagnetic radiation via remote sensing has four broad components: a source of radiation, interaction with the atmosphere, interaction with the earth's surface, and a sensor (see Figure 1.1). The link between the components of the system is electromagnetic energy transferred by means of radiation.
Source The source of electromagnetic radiation may be natural, like the sun's reflected light or the earth's emitted heat, or man-made, like microwave radar. This leads to a classification of remote sensing systems into active and passive types. Active systems emit radiation and analyze the returned signal. Passive systems detect naturally occurring radiation either emitted by the sun or thermal radiation emitted by all objects with temperatures above absolute zero. With active systems, like microwave radar, it is possible to determine the distance of a target from the sensor (range); passive systems cannot provide range information.
Atmospheric interaction The characteristics of the electromagnetic radiation propagating through the atmosphere are modified by various processes, including absorption and scattering. This distortion is undesirable and requires correction if we wish to study the earth's surface, or desirable if we wish to study the atmosphere itself.
Appendix Introduction to Gaussian Optics
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp 621-653
-
- Chapter
-
Summary
Understanding both the spectral discrimination and the image production of an imaging spectrometer requires Gaussian, or geometrical, optics as well as the physical optics that govern the processes of diffraction and interference. This appendix provides the barest introduction to Gaussian optics and a qualitative description of aberration theory. Physical optics is introduced in Chapter 4 when it is required, as in, for example, the action of a grating. The analysis presented is for centered, or axially symmetric, systems about an axis that passes through the centers of the sequential optical elements, known as the optical axis. The concept of paraxial ray tracing, where the rays travel infinitesimally close to the optical axis, is also introduced, providing the location and size of the image. The brightness of an image is developed through the definition and placement of pupils and stops. Finally, the quality of an image is determined through aberration theory.
The Optical Path
In Chapter 2 we introduced the concept of a wavefront as the surface of constant phase for an electromagnetic wave. This surface is perpendicular to the wave vector k, which is in the direction of propagation of the wave. A ray of light is defined to be parallel to the wave vector and is in the direction of energy propagation as described by the Poynting vector. The ray, an infinitely narrow beam of light, is the construct used in geometrical optics but has no precise physical meaning. For example, if we attempt to create a narrow beam of light by illuminating a pinhole we are defeated by diffraction effects that spread the beam out, with the effect becoming larger as the pinhole diameter is reduced. Nevertheless the ray is an extremely useful and practical concept that is universally used in Gaussian optics and ray tracing. The wavefront, on the other hand, has a precise physical meaning and, for the isotropic materials used in imaging spectrometers, the direction of energy propagation is always along the wavefront normal.
In Gaussian optics, light is assumed to propagate rectilinearly along the rays with their directions changed only by the processes of reflection and refraction. The optical systems considered here are composed of a series of surfaces that can be either refracting or reflecting and have a common axis of rotational symmetry known as the optical axis.
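As a concrete instance of the paraxial machinery this appendix builds (the relation below is the textbook paraxial refraction equation, stated here for orientation rather than quoted from the book), refraction at a single spherical surface of radius $R$ separating media of indices $n$ and $n'$ relates the conjugate object and image distances $s$ and $s'$ by

```latex
\frac{n'}{s'} - \frac{n}{s} = \frac{n' - n}{R}
```

under one common sign convention of Gaussian optics. Applying this relation surface by surface through a centered optical system is precisely the paraxial ray tracing that yields the location and size of the image.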
Contents
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp vii-x
-
- Chapter
9 - Spectral Mixture Analysis
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp 443-493
-
- Chapter
-
Summary
Analysis of spectral mixtures is important in remote sensing imaging spectroscopy, because essentially the spectrum of any pixel of a natural scene is a mixture. The analysis of mixed spectra, known as Spectral Mixture Analysis (SMA), is the subject of this chapter. SMA attempts to answer two questions: (a) What are the spectra of the individual materials? (b) What are the proportions of the individual materials? We focus on linear mixing because of its relative analytical and computational simplicity and because it works satisfactorily in many practical applications. We discuss the physical aspects of the linear mixing model, geometrical interpretations and algorithms, and statistical analysis using the theory of least squares estimation. The main applications of SMA are in the areas of hyperspectral image interpretation and subpixel target detection.
Spectral Mixing
When a ground resolution element contains several materials, all these materials contribute to the individual pixel spectrum measured by the sensor. The result is a composite or mixed spectrum, and the “pure” spectra that contribute to the mixture are called endmember spectra. Spectral mixtures can be macroscopic or intimate depending on what scale the mixing is taking place (see Figure 9.1).
In a macroscopic mixture the materials in the field of view are optically separated in patches so there is no multiple scattering between components (each reflected photon interacts with only one surface material). Such mixtures are linear: that is, the combined spectrum is simply the sum of the fractional area times the spectrum of each component. Linear mixing is possible as long as the radiation from component patches remains separate until it reaches the sensor.
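The linear (macroscopic) mixing model just described, together with the least-squares inversion mentioned in the chapter summary, can be sketched directly. The two endmember spectra below are invented five-band illustrations (loosely echoing the featureless-quartz versus absorbing-alunite example), not real library spectra.

```python
import numpy as np

# Two invented five-band endmember spectra: a featureless spectrum
# versus one with a diagnostic absorption band.
quartz  = np.array([0.9, 0.9, 0.9, 0.9, 0.9])
alunite = np.array([0.8, 0.6, 0.3, 0.6, 0.8])
E = np.column_stack([quartz, alunite])     # endmember matrix

# Macroscopic (linear) mixing: area-weighted sum of the endmember spectra.
fractions = np.array([0.4, 0.6])           # 40% quartz, 60% alunite patches
mixed = E @ fractions

# Least-squares unmixing recovers the fractional abundances exactly here,
# since the mixture is noiseless and the endmembers are independent.
est, *_ = np.linalg.lstsq(E, mixed, rcond=None)
print(np.round(est, 2))                    # → [0.4 0.6]
```

With sensor noise, shading, or intimate (nonlinear) mixing, the inversion is no longer exact, which is why the chapter treats the statistical least-squares theory in detail.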
In an intimate mixture, such as the microscopic mixture of mineral grains in a soil or rock, a single photon interacts with more than one material. In this case, mixing occurs when radiation from several surfaces combines before it reaches the sensor. These types of mixtures are nonlinear in nature and therefore more difficult to analyze and use.
To illustrate some of the issues involved in SMA, we discuss some examples of mixed spectra provided by Adams and Gillespie (2006). Figure 9.2(a) shows spectra for mixtures of a material having a featureless spectrum (quartz) with a material having a spectrum with diagnostic absorption bands (alunite).
Frontmatter
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp i-iv
-
- Chapter
7 - Statistical Models for Spectral Data
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp 360-405
-
- Chapter
- Export citation
Dedication
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp v-vi
-
- Chapter
- Export citation
4 - Imaging Spectrometers
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp 154-227
-
- Chapter
- Export citation
-
Summary
This chapter will address the optical principles that underpin any imaging spectrometer and then delve into the details of the dominant optical designs. A complete imaging spectrometer typically includes a scan mechanism, such as a rotating mirror; a telescope fore optic that images the scene at the input of the spectrometer; the spectrometer itself, which separates the radiance into different wavelength bins by either dispersive or interferometric means; and the focal plane array, where the signal is converted to digital numbers. The optical flow from the scene to the resulting digital signals is depicted in Figure 4.1. The details of the optical system are presented, and the description focuses on the concepts required for the reader to understand, at an introductory level, how these complex systems work.
This chapter will also introduce the unifying concept of the measurement equation. All imaging spectrometers share spatial, spectral, and radiometric properties that can be described mathematically in a general way, and the concepts common to all of the optical forms presented are captured succinctly by this equation. It is the measurement equation that provides the explanatory framework upon which the optical details will be built. An understanding of Gaussian or geometrical optics is required, and an overview that includes the basics of image formation and the concepts of pupils and stops is presented as an appendix. The appendix also includes an introduction to optical aberrations, and some of the equations developed there are referenced in this chapter.
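As a rough indication of what such a measurement equation looks like (a generic radiometric form, not the book's exact notation), the mean signal in photoelectrons for one spectral channel can be written as an integral of the at-aperture spectral radiance over the system's response:

```latex
% Generic photon-count measurement equation (illustrative form):
% mean signal S_k in photoelectrons for spectral channel k.
S_k \approx \frac{A\,\Omega\,t_{\mathrm{int}}}{hc}
      \int \lambda \, L(\lambda)\, \tau_o(\lambda)\, \eta(\lambda)\, R_k(\lambda)\, d\lambda
```

Here \(A\) is the aperture area, \(\Omega\) the solid angle of the instantaneous field of view, \(t_{\mathrm{int}}\) the integration time, \(L(\lambda)\) the at-aperture spectral radiance, \(\tau_o(\lambda)\) the optical transmission, \(\eta(\lambda)\) the detector quantum efficiency, and \(R_k(\lambda)\) the normalized spectral response of channel \(k\); the factor \(\lambda/hc\) converts radiant energy to photon counts.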
Optically, imaging spectrometers are properly thought of as integrated systems rather than being comprised of the individual subsystems of the fore optics, the spectrometer, and the detector or detector array. The optical design engineer considers the system in its entirety during the design phase to ensure that the spatial, spectral, radiometric, and signal-to-noise ratio goals are accomplished and that the imaging spectrometer is manufacturable. This is a rather obvious observation, but merits emphasis. An imaging spectrometer is a system and not the sum of its parts, even though they will be addressed here through a subsystem analysis.
5 - Imaging Spectrometer Characterization and Data Calibration
- Dimitris G. Manolakis, Ronald B. Lockwood, Thomas W. Cooley
-
- Book:
- Hyperspectral Imaging Remote Sensing
- Published online:
- 10 November 2016
- Print publication:
- 20 October 2016, pp 228-294
-
- Chapter
- Export citation
-
Summary
The utility of the data from an imaging spectrometer critically depends upon the quantitative relationship between the scene in nature and the scene as captured by the sensor. As was shown in Chapter 4, the raw data will only somewhat resemble the at-aperture spectral radiance from the surface, owing to the optical characteristics of the fore optic and the spectrometer. The data acquisition by the focal plane array further modifies the image-forming irradiance through the spectral dependence of the detector material's quantum efficiency, and adds noise terms that, if large enough, further complicate the relationship between the scene and its raw image. Calibration of the data from an imaging spectrometer is the crucial data processing step that transforms the raw imagery into radiance that can be physically modeled. The science of radiometry provides the theoretical framework and the measurement processes that enable a sensor to be characterized and the data to be converted to physical units tied to reference standards. This chapter describes the sensor characterization process that yields the calibration products applied to the raw data, and introduces some of the techniques used to evaluate the accuracy and precision of the calibrated data. An overview of the important measurement and data reduction processes for vicarious calibration, which is critical for space-based systems, will also be presented.
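The basic step this chapter formalizes, converting raw digital numbers to physical radiance, can be sketched with a simple linear model. The per-band gains and dark offsets below are illustrative assumptions, not actual calibration products; real calibrations also handle nonlinearity, bad pixels, and spectral/spatial response corrections.

```python
import numpy as np

# Raw digital numbers for one pixel across 4 spectral bands (made-up values).
dn = np.array([812.0, 1650.0, 2404.0, 905.0])

# Dark-frame offset and radiometric gain per band, as would be derived
# from laboratory characterization (illustrative values only).
dark = np.array([100.0, 120.0, 110.0, 105.0])
gain = np.array([0.012, 0.015, 0.011, 0.009])  # (W m^-2 sr^-1 um^-1) per DN

# Linear calibration: radiance = gain * (DN - dark offset)
radiance = gain * (dn - dark)
print(radiance)  # calibrated spectral radiance, band by band
```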
Introduction
The characterization of an imaging spectrometer is challenging due to the spatial extent of the collected scene and the large spectral range that is relatively finely sampled, at least for an imager. For example, an Offner–Chrisp imaging spectrometer often has between 200 and 400 spectral samples and about 1000 spatial samples, or up to about 400,000 individual measurements in a single readout of the focal plane array. To collect a scene, the FPA is read out thousands of times. All of the data must be calibrated in order to be used to greatest effect. The overarching goal is that the result should not depend upon the time or location of the collected scene, should have no field-of-view dependence, and should be immune, within reasonable limits, to the illumination and atmospheric conditions.