We briefly review the mathematics in the coding engine of JPEG 2000, a state-of-the-art image compression system. We focus in depth on the transform, entropy coding, and bitstream assembler modules. Our goal is to present a general overview of the mathematics underlying a state-of-the-art scalable image compression technology.
1. Introduction
Data compression is a process that creates a compact data representation from a raw data source, usually with an end goal of facilitating storage or transmission. Broadly speaking, compression takes two forms, either lossless or lossy, depending on whether or not it is possible to reconstruct exactly the original data stream from its compressed version. For example, a data stream that consists of long runs of 0s and 1s (such as that generated by a black-and-white fax) would possibly benefit from simple run-length encoding, a lossless technique replacing the original data stream by a sequence of counts of the lengths of the alternating substrings of 0s and 1s. Lossless compression is necessary for situations in which changing a single bit can have catastrophic effects, such as in the machine code of a computer program.
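As a concrete illustration of the run-length idea just described, here is a minimal sketch; the bit string, function names, and pair-based output format are purely illustrative and do not correspond to any particular fax standard.

```python
from itertools import groupby

def rle_encode(bits: str) -> list[tuple[str, int]]:
    """Replace each maximal run of identical symbols by a (symbol, run length) pair."""
    return [(symbol, len(list(run))) for symbol, run in groupby(bits)]

def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Expand the (symbol, run length) pairs back into the original string."""
    return "".join(symbol * count for symbol, count in runs)

line = "000000001111000000000011"      # a hypothetical scan line of a black-and-white fax
runs = rle_encode(line)
print(runs)                            # [('0', 8), ('1', 4), ('0', 10), ('1', 2)]
assert rle_decode(runs) == line        # lossless: the original data stream is recovered exactly
```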
While it might seem as though we should always demand lossless compression, there are in fact many venues where exact reproduction is unnecessary. In particular, media compression, which we define to be the compression of image, audio, or video files, presents an excellent opportunity for lossy techniques. For example, not one among us would be able to distinguish between two images which differ in only one of the 2^29 bits in a typical 1024 × 1024 color image.
In this paper we investigate quadrature rules for functions on compact Lie groups and sections of homogeneous vector bundles associated with these groups. First, a general notion of band-limitedness is introduced which generalizes the usual notion on the torus or translation groups. We develop a sampling theorem that allows exact computation of the Fourier expansion of a band-limited function or section from sample values and quantifies the error in the expansion when the function or section is not band-limited. We then construct specific finitely supported distributions on the classical groups which have nice error properties and can also be used to develop efficient algorithms for the computation of Fourier transforms on these groups.
1. Introduction
The Fourier transform of a function on a compact Lie group computes the coefficients (Fourier coefficients) that enable its expression as a linear combination of the matrix elements from a complete set of irreducible representations of the group. In the case of abelian groups, especially the circle and its low-dimensional products (tori), this is precisely the expansion of a function on these domains in terms of complex exponentials. This representation is at the heart of classical signal and image processing (see [25; 26], for example).
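In the abelian case the sampling theorem mentioned in the abstract reduces to a familiar fact: a function on the circle whose frequencies are confined to |k| ≤ B is determined by N ≥ 2B + 1 equispaced samples, and its Fourier coefficients can be read off exactly from the discrete Fourier transform of those samples. A minimal numerical sketch (the band limit and coefficients below are arbitrary):

```python
import numpy as np

B = 4                                # band limit: frequencies -B, ..., B
N = 2 * B + 1                        # minimal number of equispaced samples

rng = np.random.default_rng(0)
c = rng.standard_normal(2 * B + 1) + 1j * rng.standard_normal(2 * B + 1)  # coefficients for k = -B..B

theta = 2 * np.pi * np.arange(N) / N                       # equispaced sample points on the circle
k = np.arange(-B, B + 1)
f = (c[None, :] * np.exp(1j * np.outer(theta, k))).sum(axis=1)   # samples of the band-limited function

F = np.fft.fft(f) / N                # discrete Fourier transform of the samples
c_recovered = F[k % N]               # coefficient k sits in DFT bin k mod N
print(np.max(np.abs(c_recovered - c)))   # ~1e-15: exact recovery up to round-off
```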
The successes of abelian Fourier analysis are many, ranging from national defense to personal entertainment, from medicine to finance. The record of achievements is so impressive that it has perhaps sometimes led scientists astray, seducing them to look for ways to use these tools in situations where they are less than appropriate: for example, pretending that a sphere is a torus so as to avoid the use of spherical harmonics in favor of Fourier series - a favored mathematical hammer casting the multitudinous problems of science as a box of nails.
This article presents a simple version of Integrated Sensing and Processing (ISP) for statistical pattern recognition wherein the sensor measurements to be taken are adaptively selected based on task-specific metrics. Thus the measurement space in which the pattern recognition task is ultimately addressed integrates adaptive sensor technology with the specific task for which the sensor is employed. This end-to-end optimization of sensor/processor/exploitation subsystems is a theme of the DARPA Defense Sciences Office Applied and Computational Mathematics Program's ISP program. We illustrate the idea with a pedagogical example and application to the HyMap hyperspectral sensor and the Tufts University “artificial nose” chemical sensor.
1. Introduction
An important activity, common to many fields of endeavor, is the act of refining high order information (detections of events, classification of objects, identification of activities, etc.) from large volumes of diverse data which is increasingly available through modern means of measurement, communication, and processing. This exploitation function winnows the available data concerning an object or situation in order to extract useful and actionable information, quite often through the application of techniques from statistical pattern recognition to the data. This may involve activities like detection, identification, and classification which are applied to the raw measured data, or possibly to partially processed information derived from it.
When new data are sought in order to obtain information about a specific situation, it is now increasingly common to have many different measurement degrees of freedom potentially available for the task.
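To make the idea of choosing among many measurement degrees of freedom by a task-specific metric concrete, here is a toy sketch (not the ISP method of this article): candidate measurements are scored by a two-class Fisher separability ratio and only the best few are retained. The data sizes, function names, and the choice of metric are illustrative assumptions.

```python
import numpy as np

def fisher_ratio(x0: np.ndarray, x1: np.ndarray) -> float:
    """Task-specific score for one measurement: squared mean gap over pooled variance."""
    return (x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var() + 1e-12)

def select_measurements(X0: np.ndarray, X1: np.ndarray, budget: int) -> list[int]:
    """Score every candidate measurement and keep the `budget` highest-scoring ones."""
    scores = [fisher_ratio(X0[:, j], X1[:, j]) for j in range(X0.shape[1])]
    return sorted(np.argsort(scores)[-budget:].tolist())

# Toy data: 10 candidate measurements, of which only two separate the classes.
rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, size=(200, 10))       # class 0 samples
X1 = rng.normal(0.0, 1.0, size=(200, 10))       # class 1 samples
X1[:, [3, 7]] += 2.5                            # the informative measurements
print(select_measurements(X0, X1, budget=2))    # -> [3, 7]
```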
The classical (scalar-valued) theory of spherical functions, put forward by Cartan and others, unifies under one roof a number of examples that were very well known before the theory was formulated. These examples include special functions such as Jacobi polynomials, Bessel functions, Laguerre polynomials, Hermite polynomials, and Legendre functions, which had been workhorses in many areas of mathematical physics before the appearance of a unifying theory. These and other functions have found interesting applications in signal processing, including specific areas such as medical imaging.
The theory of matrix-valued spherical functions is a natural extension of the well-known scalar-valued theory. Its historical development, however, is different: in this case the theory has gone ahead of the examples. The purpose of this article is to point to some examples and to interest readers in this new aspect in the world of special functions.
We close with a remark connecting the functions described here with the theory of matrix-valued orthogonal polynomials.
1. Introduction and Statement of Results
The theory of matrix-valued spherical functions (see [GV; T]) gives a natural extension of the well-known theory for the scalar-valued case, see [He]. We start with a few remarks about the scalar-valued case.
The classical (scalar-valued) theory of spherical functions (put forward by Cartan and others after him) allows one to unify under one roof a number of examples that were very well known before the theory was formulated. These examples include many special functions like Jacobi polynomials, Bessel functions, Laguerre polynomials, Hermite polynomials, Legendre functions, etc.
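For the simplest nontrivial example, the zonal spherical functions of the two-sphere are (up to normalization) the Legendre polynomials P_n(cos θ), and these are most conveniently evaluated by the classical three-term recurrence. A minimal sketch (the degree and evaluation point are arbitrary):

```python
def legendre(n: int, x: float) -> float:
    """Evaluate the Legendre polynomial P_n(x) via the three-term recurrence
    (k + 1) P_{k+1}(x) = (2k + 1) x P_k(x) - k P_{k-1}(x), with P_0 = 1 and P_1 = x."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

print(legendre(2, 0.5))   # P_2(x) = (3x^2 - 1)/2, so P_2(0.5) = -0.125
```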
This paper addresses some of the fundamental problems which have to be solved in order for optical networks to utilize the full bandwidth of optical fibers. It discusses some of the premises for signal processing in optical fibers. It gives a short historical comparison between the development of transmission techniques for radio and microwaves and that for optical fibers. There is also a discussion of bandwidth with a particular emphasis on what physical interactions limit the speed in optical fibers. Finally, there is a section on line codes and some recent developments in optical encoding of wavelets.
1. Introduction
When Claude Shannon developed the mathematical theory of communication [1] he knew nothing about lasers and optical fibers. What he was mostly concerned with were communication channels using radio- and microwaves. Inherently, these channels have a narrower bandwidth than do optical fibers because of the lower carrier frequency (longer wavelength). More serious than this theoretical limitation are the practical bandwidth limitations imposed by weather and other environmental hazards. In contrast, optical fibers are a marvellously stable and predictable medium for transporting information and the influence of noise from the fiber itself can to a large degree be neglected. So, until recently there was no real need for any advanced signal processing in optical fiber communications systems. This has all changed over the last few years with the development of the internet.
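Shannon's result makes the bandwidth argument quantitative: for an ideal band-limited channel with additive white Gaussian noise, capacity grows linearly with bandwidth, C = B log2(1 + S/N). The sketch below only illustrates this scaling; the bandwidth and signal-to-noise figures are invented, not measurements of any actual radio or fiber link.

```python
from math import log2

def capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity of an ideal band-limited AWGN channel: C = B * log2(1 + S/N)."""
    return bandwidth_hz * log2(1 + snr_linear)

# Capacity scales linearly with bandwidth, which is why the enormous spectrum around
# an optical carrier (roughly 200 THz at 1550 nm) dwarfs what any radio channel offers.
print(capacity_bps(10e6, 100.0))   # 10 MHz radio channel at 20 dB SNR  -> about 6.7e7 bit/s
print(capacity_bps(10e9, 100.0))   # 10 GHz slice of optical spectrum   -> about 6.7e10 bit/s
```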
Optical fiber communication became an economic reality in the early 1970s when absorption of less than 20 dB/km was achieved in optical fibers and lifetimes of more than 1 million hours for semiconductor lasers were accomplished.
Underlying many of the current mathematical opportunities in digital signal processing are unsolved analog signal processing problems. For instance, digital signals for communication or sensing must map into an analog format for transmission through a physical layer. In this layer we meet a canonical example of analog signal processing: the electrical engineer's impedance matching problem. Impedance matching is the design of analog signal processing circuits to minimize loss and distortion as the signal moves from its source into the propagation medium. This paper works the matching problem from theory to sampled data, exploiting links between H∞ theory, hyperbolic geometry, and matching circuits. We apply J. W. Helton's significant extensions of operator theory, convex analysis, and optimization theory to demonstrate new approaches and research opportunities in this fundamental problem.
1. The Impedance Matching Problem
Figure 1 shows a twin-whip HF (high-frequency) antenna mounted on a superstructure representative of a shipboard environment. If a signal generator is connected directly to this antenna, not all the power delivered to the antenna can be radiated. If an impedance mismatch exists between the signal generator and the antenna, some of the signal power is reflected from the antenna back to the generator. To use this antenna effectively, a matching circuit must be inserted between the signal generator and antenna to minimize this wasted power.
Figure 2 shows the matching circuit connecting the generator to the antenna. Port 1 is the input from the generator. Port 2 is the output that feeds the antenna.
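The amount of wasted power is governed by the voltage reflection coefficient Γ = (Z_L − Z_0)/(Z_L + Z_0): a fraction |Γ|² of the incident power is reflected back toward the generator. A minimal sketch of this bookkeeping; the antenna impedance and the 50 Ω reference are illustrative values, not data for the antenna of Figure 1.

```python
def reflection_coefficient(z_load: complex, z_ref: float = 50.0) -> complex:
    """Voltage reflection coefficient at the load: Gamma = (Z_L - Z_0) / (Z_L + Z_0)."""
    return (z_load - z_ref) / (z_load + z_ref)

# A hypothetical antenna impedance at one frequency, seen from a 50-ohm source.
gamma = reflection_coefficient(complex(20.0, -35.0))
reflected = abs(gamma) ** 2
print(f"reflected fraction of power: {reflected:.2f}")      # power bounced back to the generator
print(f"delivered fraction of power: {1 - reflected:.2f}")  # what the antenna can actually radiate
```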
Three-dimensional volumetric data are becoming increasingly available in a wide range of scientific and technical disciplines. With the right tools, we can expect such data to yield valuable insights about many important phenomena in our three-dimensional world.
In this paper, we develop tools for the analysis of 3-D data which may contain structures built from lines, line segments, and filaments. These tools come in two main forms: (a) Monoscale: the X-ray transform, offering the collection of line integrals along a wide range of lines running through the image, at all different orientations and positions; and (b) Multiscale: the (3-D) beamlet transform, offering the collection of line integrals along line segments which, in addition to ranging through a wide collection of locations and orientations, also occupy a wide range of scales.
We describe different strategies for computing these transforms and several basic applications, for example in finding faint structures buried in noisy data.
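As a toy version of the monoscale transform, the sketch below sums voxel values along the three axis-aligned families of lines through a volume, the simplest restriction of the discrete X-ray transform. It also hints at why such transforms help find faint structures: integrating along the length of a filament boosts its contrast against voxel-level noise. The volume, the planted filament, and the noise level are all invented for illustration.

```python
import numpy as np

def axis_aligned_xray(volume: np.ndarray) -> dict[str, np.ndarray]:
    """Line integrals (here, sums of voxel values) along the three axis-aligned
    families of lines through a 3-D volume: a restricted discrete X-ray transform."""
    return {
        "x": volume.sum(axis=0),   # one value per (y, z) line position
        "y": volume.sum(axis=1),   # one value per (x, z) line position
        "z": volume.sum(axis=2),   # one value per (x, y) line position
    }

# Toy volume: unit-variance noise plus a faint filament running along the x-axis.
rng = np.random.default_rng(2)
vol = rng.normal(0.0, 1.0, size=(64, 64, 64))
vol[:, 30, 40] += 0.5              # amplitude 0.5: essentially invisible in any single voxel

proj = axis_aligned_xray(vol)
print(proj["x"][30, 40], proj["x"].std())   # the filament's line integral stands well above the noise level
```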
1. Introduction
In field after field, we are currently seeing new initiatives aimed at gathering large high-resolution three-dimensional datasets. While three-dimensional data have always been crucial to understanding the physical world we live in, this transition to ubiquitous 3-D data gathering seems novel. The driving force is undoubtedly the pervasive influence of increasing storage capacity and computer processing power, which affects our ability to create new 3-D measurement instruments, but which also makes it possible to analyze the massive volumes of data that inevitably result when 3-D data are being gathered.
Since the discovery of codes using algebraic geometry by V. D. Goppa in 1977, there has been a great deal of research on these codes. Their importance was realized when in 1982 Tsfasman, Vlăduţ, and Zink proved that certain algebraic geometry codes exceeded the Asymptotic Gilbert–Varshamov Bound, a feat many coding theorists felt could never be achieved. Algebraic geometry codes, now often called geometric Goppa codes, were originally developed using many extensive and deep results from algebraic geometry. These codes are defined using algebraic curves. They can also be defined using algebraic function fields as there is a one-to-one correspondence between “nice” algebraic curves and these function fields. The reader interested in the connection between these two theories can consult. Another approach appeared in the 1998 publication by Høholdt, van Lint, and Pellikaan, where the theory of order and weight functions was used to describe a certain class of geometric Goppa codes.
In this chapter we choose to introduce a small portion of the theory of algebraic curves, enough to allow us to define algebraic geometry codes and present some simple examples. We will follow a very readable treatment of the subject by J. L. Walker. Her monograph would make an excellent companion to this chapter. For those who want to learn more about the codes and their decoding but have a limited understanding of algebraic geometry, the Høholdt, van Lint, and Pellikaan chapter in the Handbook of Coding Theory can be examined.
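The construction developed in these references is an evaluation code: a space of functions on a curve is evaluated at a set of rational points, and the resulting vectors form the code. On the simplest curve, the projective line, this recipe already gives the familiar Reed–Solomon codes. The field size, length, and dimension in the sketch below are arbitrary choices for illustration, not an example taken from this chapter.

```python
# Evaluation code on the projective line over GF(7): evaluate every polynomial of
# degree < k at n distinct nonzero field elements.  This is a Reed-Solomon code,
# the simplest instance of the algebraic geometry (geometric Goppa) construction.
q, n, k = 7, 6, 3
points = list(range(1, n + 1))     # six distinct nonzero elements of GF(7)

def encode(message):               # message = coefficients of a polynomial of degree < k
    return [sum(m * pow(x, i, q) for i, m in enumerate(message)) % q for x in points]

print(encode([2, 0, 5]))           # the polynomial 2 + 5x^2 evaluated at 1, ..., 6 -> [0, 1, 5, 5, 1, 0]
# Two distinct polynomials of degree < 3 agree in at most 2 of the 6 points, so any
# two codewords differ in at least n - k + 1 = 4 positions (the minimum distance).
```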
The decoding algorithms that we have considered to this point have all been hard decision algorithms. A hard decision decoder is one which accepts hard values (for example 0s or 1s if the data is binary) from the channel that are used to create what is hopefully the original codeword. Thus a hard decision decoder is characterized by “hard input” and “hard output.” In contrast, a soft decision decoder will generally accept “soft input” from the channel while producing “hard output” estimates of the correct symbols. As we will see later, the “soft input” can be estimates, based on probabilities, of the received symbols. In our later discussion of turbo codes, we will see that turbo decoding uses two “soft input, soft output” decoders that pass “soft” information back and forth in an iterative manner between themselves. After a certain number of iterations, the turbo decoder produces a “hard estimate” of the correct transmitted symbols.
Additive white Gaussian noise
In order to understand soft decision decoding, it is helpful first to take a closer look at the communication channel presented in Figure 1.1. Our description relies heavily on the presentation in. The box in that figure labeled “Channel” is more accurately described as consisting of three components: a modulator, a waveform channel, and a demodulator; see Figure 15.1. For simplicity we restrict ourselves to binary data. Suppose that we transmit the binary codeword c = c1 … cn.
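To make “soft input” concrete, consider the common textbook setting of binary antipodal (BPSK) signaling over an additive white Gaussian noise channel: the demodulator can hand the decoder either a hard 0/1 decision for each symbol or a log-likelihood ratio expressing how reliable that decision is. The sketch below assumes the mapping 0 ↦ +1, 1 ↦ −1 and a noise level chosen only for illustration; it is a generic picture, not the specific channel model of Figure 15.1.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 0.8                                   # noise standard deviation (illustrative)

c = np.array([0, 1, 1, 0, 1])                 # transmitted binary codeword c1 ... cn
s = 1.0 - 2.0 * c                             # BPSK mapping assumed here: 0 -> +1, 1 -> -1
r = s + sigma * rng.standard_normal(c.size)   # received values after the AWGN waveform channel

hard = (r < 0).astype(int)                    # hard output: only the sign of each received value
llr = 2.0 * r / sigma**2                      # soft output: log P(c_i = 0 | r_i) / P(c_i = 1 | r_i)
print(hard)                                   # what a hard decision decoder is given
print(np.round(llr, 2))                       # what a soft decision decoder is given
```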
In this chapter we discuss some basic properties of combinatorial designs and their relationship to codes. In Section 6.5, we showed how duadic codes can lead to projective planes. Projective planes are a special case of t-designs, also called block designs, which are the main focus of this chapter. As with duadic codes and projective planes, most designs we study arise as the supports of codewords of a given weight in a code.
t-designs
A t-(v, k, λ) design, or briefly a t-design, is a pair (P, B) where P is a set of v elements, called points, and B is a collection of distinct subsets of P of size k, called blocks, such that every subset of points of size t is contained in precisely λ blocks. (Sometimes one considers t-designs in which the collection of blocks is a multiset, that is, blocks may be repeated. In such a case, a t-design without repeated blocks is called simple. We will generally only consider simple t-designs and hence, unless otherwise stated, the expression “t-design” will mean “simple t-design.”) The number of blocks in B is denoted by b, and, as we will see shortly, is determined by the parameters t, v, k, and λ.
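A standard small example is the Fano plane, the projective plane of order 2, viewed as a 2-(7, 3, 1) design. The sketch below checks the defining property and the block count b = λ·C(v, t)/C(k, t), which, as stated above, is forced by the parameters.

```python
from itertools import combinations
from math import comb

# The Fano plane: points 0..6, blocks obtained by cycling the difference set {0, 1, 3} mod 7.
points = range(7)
blocks = [{(0 + i) % 7, (1 + i) % 7, (3 + i) % 7} for i in range(7)]

# 2-(7, 3, 1) design: every set of t = 2 points lies in exactly lambda = 1 block.
t, v, k, lam = 2, 7, 3, 1
assert all(sum(set(pair) <= block for block in blocks) == lam
           for pair in combinations(points, t))

# The number of blocks b is determined by the parameters: b = lambda * C(v, t) / C(k, t).
assert len(blocks) == lam * comb(v, t) // comb(k, t) == 7
print("2-(7, 3, 1) design verified; b =", len(blocks))
```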
The [n, k] codes that we have studied to this point are called block codes because we encode a message of k information symbols into a block of length n. On the other hand, convolutional codes use an encoding scheme that depends not only upon the current message being transmitted but upon a certain number of preceding messages. Thus “memory” is an important feature of an encoder of a convolutional code. For example, if x(1), x(2), … is a sequence of messages, each a block of k information symbols, to be transmitted at times 1, 2, …, then an (n, k) convolutional code with memory M will transmit codewords c(1), c(2), …, where c(i) depends upon x(i), x(i − 1), …, x(i − M). In our study of linear block codes we have discovered that it is not unusual to consider codes of fairly high lengths n and dimensions k. In contrast, the study and application of convolutional codes has dealt primarily with (n, k) codes with n and k very small and a variety of values of M.
Convolutional codes were developed by Elias in 1955. In this chapter we will only introduce the subject and restrict ourselves to binary codes. While there are a number of decoding algorithms for convolutional codes, the main one is due to Viterbi; we will examine his algorithm in Section 14.2.
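As a small illustration of the role of memory, the sketch below implements a rate-1/2 binary convolutional encoder with memory M = 2 and the classical generator polynomials 1 + D + D² and 1 + D². This widely used textbook encoder is chosen here only for concreteness and is not necessarily the one examined in Section 14.2.

```python
def conv_encode(bits, memory=2, taps=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 binary convolutional encoder with memory M = 2 and generator
    polynomials 1 + D + D^2 and 1 + D^2 (the taps); each input bit produces two
    output bits that depend on the current bit and the previous `memory` bits."""
    state = [0] * memory                      # the encoder's memory of past input bits
    out = []
    for b in bits:
        window = [b] + state                  # x(i), x(i - 1), ..., x(i - M)
        out += [sum(t * w for t, w in zip(tap, window)) % 2 for tap in taps]
        state = window[:-1]                   # shift the newest bit into the memory
    return out

print(conv_encode([1, 0, 1, 1]))              # -> [1, 1, 1, 0, 0, 0, 0, 1]
```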