Detection of signals in interference is a key area in signal processing applications such as radar, sonar and telecommunications. The theory of signal detection has been extensively covered in the literature. Many textbooks exist, including the classic by Van Trees (2001a) and his later additions to the series (Van Trees, 2001b, 2002a,b), the text on radar detection by DiFranco and Rubin (1980), and several texts on estimation and detection (Scharf, 1991; Poor, 1994; Kay, 1993, 1998). Signal detection theory is well established when the interference is Gaussian. However, methods for detection in the non-Gaussian case are often cumbersome and in many cases non-optimal.
Signal detection is formulated as a test of a hypothesis (Lehmann, 1991). To cover signal detection, we first need to introduce some concepts of hypothesis testing. This is followed by an exposition of bootstrap-based hypothesis testing. In the second part of the chapter, we provide details on bootstrap detection of signals in Gaussian and non-Gaussian noise and show how bootstrap detection alleviates the restrictions imposed by classical detectors.
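As a preview of the bootstrap approach (a minimal sketch under assumed signal and noise models, not one of the detectors developed in this chapter), the code below estimates the null distribution of a simple test statistic by resampling the recentred data and declares a detection when the observed statistic exceeds the resulting threshold.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_detector(x, alpha=0.05, B=1000):
    """Decide 'signal present' if the sample mean of x exceeds a
    bootstrap threshold derived from the recentred data (a surrogate
    for the noise-only hypothesis)."""
    t_obs = x.mean()                      # observed test statistic
    x0 = x - x.mean()                     # recentre to mimic the null
    t_star = np.empty(B)
    for b in range(B):
        resample = rng.choice(x0, size=x.size, replace=True)
        t_star[b] = resample.mean()
    threshold = np.quantile(t_star, 1 - alpha)
    return t_obs > threshold

# Illustrative data: a weak constant signal in non-Gaussian (Laplace) noise
noise = rng.laplace(scale=1.0, size=100)
print(bootstrap_detector(noise))          # likely False (no signal)
print(bootstrap_detector(noise + 0.5))    # likely True (signal present)
```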
Principles of hypothesis testing
As the term suggests, in hypothesis testing one wishes to decide whether or not some formulated hypothesis is correct. The choice is between two decisions: accepting or rejecting the hypothesis.
Chapters 3 and 4 dealt with fundamentals of bootstrap-based detection and model selection, respectively. In this chapter, we provide some interesting applications of the theory covered in the former chapters to real-world problems. We report only on a selection of problems we have worked on in recent years. These problems could be solved with classical techniques only under strong assumptions that may not be valid; moreover, they are analytically intractable.
The applications include a wide range of signal processing problems. We first report on results for optimal vibration sensor placement on spark ignition engines to detect knock. We show how the bootstrap can be used to estimate distributions of complicated statistics. Then we discuss a passive acoustic emission problem where we estimate confidence intervals for an aircraft's flight parameters. This is followed by the important problem of civilian landmine detection. We suggest an approach to detect buried landmines using a ground penetrating radar. We continue with another radar application concerning noise floor estimation in high frequency over-the-horizon radar. The chapter concludes with the estimation of the optimal model for corneal elevation in the human eye.
Optimal sensor placement for knock detection
This application illustrates the concepts of hypothesis testing and variance stabilisation, discussed in Sections 3.2 and 2.2, respectively.
Signal processing has become a core discipline in engineering research and education. Many modern engineering problems rely on signal processing tools. This could be either for filtering the acquired measurements in order to extract and interpret information or for making a decision as to the presence or absence of a signal of interest. Generally speaking, statistical signal processing is the area of signal processing where mathematical statistics is used to solve signal processing problems. Nowadays, however, it is difficult to find an application of signal processing where tools from statistics are not used. A statistician would call the area of statistical signal processing time series analysis.
In most statistical signal processing applications where a certain parameter is of interest, there is a need to provide a rigorous statistical performance analysis for parameter estimators. An example is determining the accuracy of an estimator of the range of a flying aircraft in radar. These estimators are usually computed from a finite number of measurements, also called a sample. Consider, for example, a typical radar scenario in which we aim to ascertain whether the received signal contains information about a possible target or is merely interference. The decision in this case, based on calculating a so-called test statistic, has to be supported with statistical measures, namely the probability of detection and the probability of false alarm.
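To make the probability of detection and the probability of false alarm concrete, the following sketch estimates both by Monte Carlo simulation for a simple energy detector; the signal, noise model, and threshold are illustrative assumptions rather than values from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def energy_statistic(x):
    """Energy detector: sum of squared samples."""
    return np.sum(x ** 2)

n, trials, threshold = 64, 5000, 80.0
signal = 0.5 * np.ones(n)                 # hypothetical target return

# Probability of false alarm: fraction of noise-only trials above the threshold
pfa = np.mean([energy_statistic(rng.normal(size=n)) > threshold
               for _ in range(trials)])

# Probability of detection: fraction of signal-plus-noise trials above the threshold
pd = np.mean([energy_statistic(signal + rng.normal(size=n)) > threshold
              for _ in range(trials)])

print(f"estimated P_fa = {pfa:.3f}, estimated P_d = {pd:.3f}")
```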
Many engineering applications require parametric modelling. On one hand, there exist statistical models based on real observations of our physical environment (Hahn and Shapiro, 1967); an example of this is statistical modelling of interference (Jakeman and Pusey, 1976; Middleton, 1999). On the other hand, there exist generic models to describe data, such as autoregressive models, commonly used in both signal processing and time series analysis (see Section 2.1.4). In previous chapters, we focused our attention on estimation problems. Given measurements, it is also of importance to decide which model best fits the data. More often we are required to select a model and perform a conditional estimation of the parameters of interest. When we say select a model we mean choose a particular set of parameters in the given model. The conditional estimation refers then to the estimation of those parameters conditioned on the chosen model.
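Since autoregressive models and model selection both come up here, a minimal sketch follows: it fits AR(p) models of increasing order by least squares and chooses the order giving the smallest one-step prediction error on a held-out segment. The simulated process, the data split, and the simple holdout criterion are illustrative assumptions, not the procedures developed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ar(x, p):
    """Least-squares fit of an AR(p) model; returns the coefficient vector."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def prediction_error(x, coeffs):
    """Mean squared one-step prediction error of an AR fit on data x."""
    p = len(coeffs)
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    return np.mean((x[p:] - X @ coeffs) ** 2)

# Simulate an AR(2) process, fit on the first half, validate on the second
n = 400
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.normal()
train, test = x[:200], x[200:]

errors = {p: prediction_error(test, fit_ar(train, p)) for p in range(1, 6)}
print("selected order:", min(errors, key=errors.get))
```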
Bootstrap methods based on residuals can be used to select the best model according to a certain prediction criterion. In this chapter, we consider the application of bootstrap model selection methods to both linear and nonlinear models. The methods presented are consistent and in most cases they outperform classical techniques of model selection. We also report on how the methods apply to dependent data models such as autoregressive models.
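To illustrate the mechanics of a residual-based bootstrap (only the resampling step, not the chapter's selection procedure), the sketch below fits a simple linear model to hypothetical data, resamples the centred residuals, and refits to gauge the variability of the estimated coefficients; all data and model choices are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data from a linear model y = 2 + 1.5 x + noise
n = 50
x = rng.uniform(0, 1, n)
y = 2.0 + 1.5 * x + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta_hat
residuals -= residuals.mean()             # centre the residuals

# Residual bootstrap: rebuild pseudo-data from fitted values plus resampled residuals
B = 1000
beta_star = np.empty((B, 2))
for b in range(B):
    y_star = X @ beta_hat + rng.choice(residuals, size=n, replace=True)
    beta_star[b], *_ = np.linalg.lstsq(X, y_star, rcond=None)

print("bootstrap standard errors of the coefficients:", beta_star.std(axis=0))
```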
We review a number of engineering problems that can be posed or solved using Fourier transforms for the groups of rigid-body motions of the plane or three-dimensional space. Mathematically and computationally these problems can be divided into two classes: (1) physical problems that are described as degenerate diffusions on motion groups; (2) enumeration problems in which fast Fourier transforms are used to efficiently compute motion-group convolutions. We examine engineering problems including the analysis of noise in optical communication systems, the allowable positions and orientations reachable with a robot arm, and the statistical mechanics of polymer chains. In all of these cases, concepts from noncommutative harmonic analysis are put to use in addressing real-world problems, thus rendering them tractable.
1. Introduction
Noncommutative harmonic analysis is a beautiful and powerful area of pure mathematics that has connections to analysis, algebra, geometry, and the theory of algorithms. Unfortunately, it is also an area that is almost unknown to engineers. In our research group, we have addressed a number of seemingly intractable “real-world” engineering problems that are easily modeled and/or solved using techniques of noncommutative harmonic analysis. In particular, we have addressed physical/mechanical problems that are described well as functions or processes on the rotation and rigid-body-motion groups. The interactions and evolution of these functions are described using group-theoretic convolutions and diffusion equations, respectively. In this paper we provide a survey of some of these applications and show how computational harmonic analysis on motion groups is used.
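To give a flavour of the group-theoretic convolutions mentioned above, here is a small, self-contained sketch, an illustration from the definition only and not the computational machinery surveyed in the paper: it convolves two functions on the noncommutative symmetric group of three elements, represented by permutation tuples, using the definition (f * g)(x) = sum over y of f(y) g(y^-1 x).

```python
from itertools import permutations
import random

# Elements of S3 as permutation tuples; compose(a, b) applies b first, then a
elements = list(permutations(range(3)))

def compose(a, b):
    return tuple(a[b[i]] for i in range(3))

def inverse(a):
    inv = [0] * 3
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def convolve(f, g):
    """Group convolution (f * g)(x) = sum_y f(y) g(y^-1 x)."""
    return {x: sum(f[y] * g[compose(inverse(y), x)] for y in elements)
            for x in elements}

random.seed(0)
f = {x: random.random() for x in elements}
g = {x: random.random() for x in elements}

# On a noncommutative group, f * g and g * f differ in general
print("f*g == g*f ?", convolve(f, g) == convolve(g, f))
```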
We give an overview of phylogenetic invariants: a technique for reconstructing evolutionary family trees from DNA sequence data. This method is useful in practice and is based on a number of simple ideas from elementary group theory, probability, linear algebra, and commutative algebra.
1. Introduction
Phylogeny is the branch of biology that seeks to reconstruct evolutionary family trees. Such reconstruction can take place at various scales. For example, we could attempt to build the family tree for various present day indigenous populations in the Americas and Asia in order to glean information about the possible course of migration of humans into the Americas. At the level of species, we could seek to determine whether modern humans are more closely related to chimpanzees or to gorillas. Ultimately, we would like to be able to reconstruct the entire “tree of life” that describes the course of evolution leading to all present day species. Because the status of the “leaves” on which we wish to build a tree differs from instance to instance, biologists use the general term taxa (singular taxon) for the leaves in a general phylogenetic problem.
For example, for 4 taxa, we might seek to decide whether a given tree describes the course of evolution; one such tree is encoded in the sketch following the list below. In such trees:
• the arrow of time is down the page,
• paths down through the tree represent lineages (lines of descent),
• any point on a lineage corresponds to a point of time in the life of some ancestor of a taxon,
• vertices other than leaves represent times at which lineages diverge,
• the root corresponds to the most recent common ancestor of all the taxa.
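As a concrete (and entirely hypothetical) companion to these conventions, the short sketch below encodes one possible rooted tree on four taxa as a nested dictionary and prints each taxon's lineage back to the root; the taxon names and the particular tree are made up for illustration.

```python
# One possible rooted tree on four hypothetical taxa A, B, C, D:
# the root splits into (A, B) and (C, D); internal vertices mark divergences.
tree = {"root": [{"v1": ["A", "B"]}, {"v2": ["C", "D"]}]}

def lineages(node, path=()):
    """Yield (taxon, lineage) pairs; a lineage is the path of internal
    vertices from the root down to the leaf (a line of descent)."""
    if isinstance(node, str):              # a leaf, i.e. a taxon
        yield node, path
    else:                                  # an internal vertex: a divergence
        for name, children in node.items():
            for child in children:
                yield from lineages(child, path + (name,))

for taxon, path in lineages(tree):
    print(taxon, "<-", " <- ".join(reversed(path)))
```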
In many ways, the late 1950s marked the beginning of the digital age, and with it, the beginning of a new age for the mathematics of signal processing. High-speed analog-to-digital converters had just been invented. These devices were capable of taking analog signals like time series (think of continuous functions of time like seismograms which measure the seismic activity — the amount of bouncing — at a fixed location, or an EEG, or an EKG) and converting them to lists of numbers. These numbers were obtained by sampling the time series, that is, recording the value of the function at regular intervals, which at that time could be as fast as 300,000 times every second. (Current technology permits sampling at much higher rates where necessary.) Suddenly, reams and reams of data were being generated and new mathematics was needed for their analysis, manipulation and management.
So was born the discipline of Digital Signal Processing (DSP), and it is no exaggeration to say that the world has not been the same. In the mathematical sciences the DSP revolution has, among other things, helped drive the development of disciplines like algorithmic analysis (which was the impetus behind the creation of computer science departments), communication and information theory, linear algebra, computational statistics, combinatorics, and discrete mathematics. DSP tools have changed the face of the arts (electroacoustic music and image processing), health care (medical imaging and computed imaging), and, of course, both social and economic commerce (i.e., the internet). Suffice it to say that the mathematics of DSP is one of the pillars supporting the amazing technological revolution that we are experiencing today.
To register two images means to align them so that common features overlap and differences, for example a tumor that has grown, are readily apparent. Being able to easily spot differences between two images is obviously very important in applications. This paper is an introduction to image registration as applied to medical imaging. We first define image registration, breaking the problem down into its constituent components. We then discuss various techniques, reflecting different choices that can be made in developing an image registration technique. We conclude with a brief discussion.
1. Introduction
1.1. Background. To register two images means to align them so that common features overlap and differences, should there be any, between the two are emphasized and readily visible to the naked eye. We refer to the process of aligning two images as image registration.
There are a host of clinical applications requiring image registration. For example, one would like to compare two Computed Tomography (CT) scans of a patient, taken say six months ago and yesterday, and identify differences between the two, e.g., the growth of a tumor during the intervening six months (Figure 1). One might also want to align Positron Emission Tomography (PET) data with an MR image, so as to help identify the anatomic location of certain mental activation [43]. And one may want to register lung surfaces in chest CT scans for lung cancer screening [7]. While all of these identifications can be done in the radiologist's head, the possibility always exists that small, but critical, features could be missed. Also, beyond identification itself, the extent of alignment required could provide important quantitative information, e.g., how much a tumor's volume has changed.
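As a toy illustration of what alignment means in the simplest case (a sketch under strong assumptions, not one of the techniques discussed later), the following code recovers a purely translational shift between two images by locating the peak of their circular cross-correlation computed with FFTs; the synthetic images and the restriction to integer translations are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(4)

def estimate_translation(fixed, moving):
    """Estimate the integer (row, col) shift aligning `moving` to `fixed`
    from the peak of their circular cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative values
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic example: `moving` is `fixed` circularly shifted;
# rolling it by (5, -3) would realign it with `fixed`.
fixed = rng.random((64, 64))
moving = np.roll(fixed, shift=(-5, 3), axis=(0, 1))

print(estimate_translation(fixed, moving))   # expected: (5, -3)
```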
Diffuse tomography refers to the use of probes in the infrared part of the energy spectrum to obtain images of highly scattering media. There are important potential medical applications and a host of difficult mathematical issues in connection with this highly nonlinear inverse problem. Taking into account scattering gives a problem with many more unknowns, as well as pieces of data, than in the simpler linearized situation. The aim of this paper is to show that in some very simplified discrete model, reckoning with scattering gives an inversion problem whose solution can be reduced to that of a finite number of linear inversion problems. We see here that at least for the model in question, the proportion of variables that can be solved for is higher in the nonlinear case than in the linear one. We also notice that this gives a highly nontrivial problem in what can be called network tomography.
1. Introduction
Optical, or diffuse, tomography refers to the use of low-energy probes to obtain images of highly scattering media.
The main motivation for this line of work is, at present, the use of an infrared laser to obtain images of diagnostic value. There is a proposal to use this in neonatal clinics to measure oxygen content in the brains of premature babies, as well as for repeated mammography. With the discovery of highly specific markers that respond well in the optical or infrared region, there are many potential applications of this emerging area; see [A1; A2].
In 1965 J. Cooley and J. Tukey published an article detailing an efficient algorithm to compute the Discrete Fourier Transform, necessary for processing the newly available reams of digital time series produced by recently invented analog-to-digital converters. Since then, the Cooley-Tukey Fast Fourier Transform and its variants have been a staple of digital signal processing.
Among the many casts of the algorithm, a natural one is as an efficient algorithm for computing the Fourier expansion of a function on a finite abelian group. In this paper we survey some of our recent work on the “separation of variables” approach to computing a Fourier transform on an arbitrary finite group. This is a natural generalization of the Cooley-Tukey algorithm. In addition we touch on extensions of this idea to compact and noncompact groups.
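For readers who have not seen the algorithm itself, a minimal recursive radix-2 form of the Cooley-Tukey FFT is sketched below and checked against the direct O(n^2) definition of the DFT; this is the textbook power-of-two case, not the paper's generalization to arbitrary finite groups.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddle[k] for k in range(n // 2)] +
            [even[k] - twiddle[k] for k in range(n // 2)])

def dft(x):
    """Direct O(n^2) discrete Fourier transform, for comparison."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * m * k / n) for m in range(n))
            for k in range(n)]

x = [1.0, 2.0, 0.0, -1.0, 3.0, 0.5, -2.0, 1.5]
print(all(abs(a - b) < 1e-9 for a, b in zip(fft(x), dft(x))))   # True
```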
Pure and Applied Mathematics: Two Sides of a Coin
The Bulletin of the AMS for November 1979 had a paper by L. Auslander and R. Tolimieri [3] with the delightful title “Is computing with the Finite Fourier Transform pure or applied mathematics?” This rhetorical question was answered by showing that in fact, the finite Fourier transform, and the family of efficient algorithms used to compute it, the Fast Fourier Transform (FFT), a pillar of the world of digital signal processing, were of interest to both pure and applied mathematicians.
Auslander had come of age as an applied mathematician at a time when pure and applied mathematicians still received much of the same training. The ends towards which these skills were then directed became a matter of taste.
We consider the best sparsifying basis (BSB) and the kurtosis maximizing basis (KMB) of a particularly simple stochastic process called the “generalized spike process”. The BSB is a basis for which a given set of realizations of a stochastic process can be represented most sparsely, whereas the KMB is an approximation to the least statistically-dependent basis (LSDB) for which the data representation has minimal statistical dependence. In each realization, the generalized spike process puts a single spike with amplitude sampled from the standard normal distribution at a random location in an otherwise zero vector of length n.
We prove that both the BSB and the KMB select the standard basis, if we restrict our basis search to all possible orthonormal bases in ℝn. If we extend our basis search to all possible volume-preserving invertible linear transformations, we prove the BSB exists and is again the standard basis, whereas the KMB does not exist. Thus, the KMB is rather sensitive to the orthonormality of the transformations, while the BSB seems insensitive. Our results provide new additional support for the preference of the BSB over the LSDB/KMB for data compression. We include an explicit computation of the BSB for Meyer's discretized ramp process.
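As a reading aid, the following sketch (our own illustration, with arbitrary dimension and sample size) generates realizations of the generalized spike process as described above and checks the intuition behind the result: in the standard basis each realization has a single nonzero coefficient, while a random orthonormal change of basis spreads it over many coefficients.

```python
import numpy as np

rng = np.random.default_rng(5)

def generalized_spike(n):
    """One realization: a standard-normal amplitude at a random position
    of an otherwise zero vector of length n."""
    x = np.zeros(n)
    x[rng.integers(n)] = rng.normal()
    return x

n, samples = 16, 1000
X = np.array([generalized_spike(n) for _ in range(samples)])

# A random orthonormal basis (Q from the QR decomposition of a Gaussian matrix)
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))

nonzero_std = np.count_nonzero(X, axis=1).mean()                  # standard basis
nonzero_rot = (np.abs(X @ Q) > 1e-12).sum(axis=1).mean()          # rotated basis
print("average nonzeros per realization:", nonzero_std, "vs", nonzero_rot)
```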
1. Introduction
This paper is a sequel to our previous paper [3], where we considered the best sparsifying basis (BSB) and the least statistically-dependent basis (LSDB) for input data assumed to be realizations of a very simple stochastic process called the “spike process.” This process, which we will refer to as the “simple” spike process for convenience, puts a unit impulse (i.e., constant amplitude of 1) at a random location in a zero vector of length n.