This paper addresses some of the fundamental problems that must be solved for optical networks to utilize the full bandwidth of optical fibers. It discusses some of the premises for signal processing in optical fibers. It gives a short historical comparison of the development of transmission techniques for radio and microwaves with that of optical fibers. There is also a discussion of bandwidth, with particular emphasis on the physical interactions that limit the speed achievable in optical fibers. Finally, there is a section on line codes and some recent developments in the optical encoding of wavelets.
1. Introduction
When Claude Shannon developed the mathematical theory of communication [1] he knew nothing about lasers and optical fibers. He was mostly concerned with communication channels using radio waves and microwaves. Inherently, these channels have a narrower bandwidth than do optical fibers because of their lower carrier frequency (longer wavelength). More serious than this theoretical limitation are the practical bandwidth limitations imposed by weather and other environmental hazards. In contrast, optical fibers are a marvellously stable and predictable medium for transporting information, and the influence of noise from the fiber itself can to a large degree be neglected. So, until recently there was no real need for any advanced signal processing in optical fiber communications systems. This has all changed over the last few years with the development of the internet.
Optical fiber communication became an economic reality in the early 1970s, when attenuation of less than 20 dB/km was achieved in optical fibers and lifetimes of more than one million hours were demonstrated for semiconductor lasers.
Underlying many of the current mathematical opportunities in digital signal processing are unsolved analog signal processing problems. For instance, digital signals for communication or sensing must map into an analog format for transmission through a physical layer. In this layer we meet a canonical example of analog signal processing: the electrical engineer's impedance matching problem. Impedance matching is the design of analog signal processing circuits to minimize loss and distortion as the signal moves from its source into the propagation medium. This paper works the matching problem from theory to sampled data, exploiting links between H∞ theory, hyperbolic geometry, and matching circuits. We apply J. W. Helton's significant extensions of operator theory, convex analysis, and optimization theory to demonstrate new approaches and research opportunities in this fundamental problem.
1. The Impedance Matching Problem
Figure 1 shows a twin-whip HF (high-frequency) antenna mounted on a superstructure representative of a shipboard environment. If a signal generator is connected directly to this antenna, not all of the power delivered to the antenna can be radiated. When an impedance mismatch exists between the signal generator and the antenna, some of the signal power is reflected from the antenna back to the generator. To use this antenna effectively, a matching circuit must be inserted between the signal generator and the antenna to minimize this wasted power.
Figure 2 shows the matching circuit connecting the generator to the antenna. Port 1 is the input from the generator. Port 2 is the output that feeds the antenna.
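The wasted power described above can be quantified with the standard reflection coefficient. The sketch below is illustrative only: the 50-ohm source and the particular antenna impedance are hypothetical values, not taken from the figures.

```python
# Reflection coefficient at the generator/antenna interface:
# Gamma = (Z_L - Z_0) / (Z_L + Z_0); the fraction of incident
# power reflected back toward the generator is |Gamma|**2.

def reflection_coefficient(z_load, z_source):
    return (z_load - z_source) / (z_load + z_source)

def reflected_power_fraction(z_load, z_source):
    return abs(reflection_coefficient(z_load, z_source)) ** 2

# A matched load reflects nothing:
assert reflection_coefficient(50 + 0j, 50 + 0j) == 0

# A hypothetical antenna impedance of 20 - 30j ohms against a
# 50-ohm source reflects a substantial share of the power:
frac = reflected_power_fraction(20 - 30j, 50 + 0j)
print(f"reflected power fraction: {frac:.3f}")  # -> 0.310
```

A matching circuit inserted between the two ports is designed to drive this reflected fraction toward zero across the band of interest.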
Three-dimensional volumetric data are becoming increasingly available in a wide range of scientific and technical disciplines. With the right tools, we can expect such data to yield valuable insights about many important phenomena in our three-dimensional world.
In this paper, we develop tools for the analysis of 3-D data which may contain structures built from lines, line segments, and filaments. These tools come in two main forms: (a) Monoscale: the X-ray transform, offering the collection of line integrals along a wide range of lines running through the image, at all different orientations and positions; and (b) Multiscale: the (3-D) beamlet transform, offering the collection of line integrals along line segments which, in addition to ranging through a wide collection of locations and positions, also occupy a wide range of scales.
We describe different strategies for computing these transforms and several basic applications, for example in finding faint structures buried in noisy data.
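To make the monoscale transform concrete, here is a minimal sketch restricted to lines parallel to one axis; the full X-ray transform described above also ranges over all other orientations and positions. The toy volume and its "filament" are invented for illustration.

```python
# Minimal X-ray transform along the z-axis: each output entry
# is the line integral (here: a sum over voxels) along one line
# parallel to z through the volume.

def xray_z(volume):
    """volume[x][y][z] -> projection[x][y] = sum over z."""
    return [[sum(col) for col in plane] for plane in volume]

# Tiny 2x2x3 volume with a bright "filament" along z at
# (x, y) = (0, 1); the line integral makes it stand out.
vol = [
    [[0, 0, 0], [5, 5, 5]],
    [[0, 1, 0], [0, 0, 0]],
]
proj = xray_z(vol)
print(proj)  # [[0, 15], [1, 0]]
```

Integrating along the filament's own direction concentrates its energy into a single projection entry, which is exactly why these transforms help find faint structures buried in noise.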
1. Introduction
In field after field, we are currently seeing new initiatives aimed at gathering large high-resolution three-dimensional datasets. While three-dimensional data have always been crucial to understanding the physical world we live in, this transition to ubiquitous 3-D data gathering seems novel. The driving force is undoubtedly the pervasive influence of increasing storage capacity and computer processing power, which affects our ability to create new 3-D measurement instruments, but which also makes it possible to analyze the massive volumes of data that inevitably result when 3-D data are being gathered.
Since the discovery of codes using algebraic geometry by V. D. Goppa in 1977, there has been a great deal of research on these codes. Their importance was realized when in 1982 Tsfasman, Vlăduţ, and Zink proved that certain algebraic geometry codes exceeded the Asymptotic Gilbert–Varshamov Bound, a feat many coding theorists felt could never be achieved. Algebraic geometry codes, now often called geometric Goppa codes, were originally developed using many extensive and deep results from algebraic geometry. These codes are defined using algebraic curves. They can also be defined using algebraic function fields, as there is a one-to-one correspondence between "nice" algebraic curves and these function fields. The reader interested in the connection between these two theories can consult the literature. Another approach appeared in the 1998 publication by Høholdt, van Lint, and Pellikaan, where the theory of order and weight functions was used to describe a certain class of geometric Goppa codes.
In this chapter we choose to introduce a small portion of the theory of algebraic curves, enough to allow us to define algebraic geometry codes and present some simple examples. We will follow a very readable treatment of the subject by J. L. Walker. Her monograph would make an excellent companion to this chapter. For those who want to learn more about the codes and their decoding but have a limited understanding of algebraic geometry, the Høholdt, van Lint, and Pellikaan chapter in the Handbook of Coding Theory can be examined.
The decoding algorithms that we have considered to this point have all been hard decision algorithms. A hard decision decoder is one which accepts hard values from the channel (for example, 0s or 1s if the data are binary) and uses them to create what is hopefully the original codeword. Thus a hard decision decoder is characterized by "hard input" and "hard output." In contrast, a soft decision decoder will generally accept "soft input" from the channel while producing "hard output" estimates of the correct symbols. As we will see later, the "soft input" can be estimates, based on probabilities, of the received symbols. In our later discussion of turbo codes, we will see that turbo decoding uses two "soft input, soft output" decoders that pass "soft" information back and forth in an iterative manner between themselves. After a certain number of iterations, the turbo decoder produces a "hard estimate" of the correct transmitted symbols.
Additive white Gaussian noise
In order to understand soft decision decoding, it is helpful first to take a closer look at the communication channel presented in Figure 1.1. Our description follows a standard presentation. The box in that figure labeled "Channel" is more accurately described as consisting of three components: a modulator, a waveform channel, and a demodulator; see Figure 15.1. For simplicity we restrict ourselves to binary data. Suppose that we transmit the binary codeword c = c1 … cn.
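The hard/soft distinction can be sketched for BPSK over an additive white Gaussian noise channel. The antipodal mapping and the log-likelihood ratio formula below are standard; the particular received samples and noise variance are invented for illustration.

```python
# BPSK over AWGN: bit c in {0, 1} is sent as s = 1 - 2c
# (so 0 -> +1, 1 -> -1), and the demodulator sees r = s + noise.

def hard_decision(r):
    # Nearest signal point: decide 0 if r >= 0, else 1.
    return 0 if r >= 0 else 1

def soft_llr(r, sigma2):
    # Log-likelihood ratio log(P(c=0|r)/P(c=1|r)) = 2r/sigma^2:
    # the sign gives the hard decision, the magnitude its reliability.
    return 2.0 * r / sigma2

# Illustrative received samples for the codeword c = 0 1 1
# (sent as +1 -1 -1), with noise already added:
received = [0.9, -1.3, 0.2]
print([hard_decision(r) for r in received])            # [0, 1, 0]
print([round(soft_llr(r, 0.5), 1) for r in received])  # [3.6, -5.2, 0.8]
```

Note that the third sample produces a hard-decision error, but its small LLR magnitude tells a soft decision decoder that this symbol is unreliable — precisely the information a hard decision decoder throws away.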
In this chapter we discuss some basic properties of combinatorial designs and their relationship to codes. In Section 6.5, we showed how duadic codes can lead to projective planes. Projective planes are a special case of t-designs, also called block designs, which are the main focus of this chapter. As with duadic codes and projective planes, most designs we study arise as the supports of codewords of a given weight in a code.
t-designs
A t-(v, k, λ) design, or briefly a t-design, is a pair (P, B) where P is a set of v elements, called points, and B is a collection of distinct subsets of P of size k, called blocks, such that every subset of points of size t is contained in precisely λ blocks. (Sometimes one considers t-designs in which the collection of blocks is a multiset, that is, blocks may be repeated. In such a case, a t-design without repeated blocks is called simple. We will generally only consider simple t-designs and hence, unless otherwise stated, the expression “t-design” will mean “simple t-design.”) The number of blocks in B is denoted by b, and, as we will see shortly, is determined by the parameters t, v, k, and λ.
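The definition can be checked on a classical example: the Fano plane, a 2-(7, 3, 1) design. The specific blocks below are a standard labeling of the Fano plane, chosen here for illustration rather than taken from the text.

```python
from itertools import combinations

# The Fano plane: 7 points, 7 blocks of size 3, every pair of
# points contained in exactly one block.
points = set(range(1, 8))
blocks = [
    {1, 2, 3}, {1, 4, 5}, {1, 6, 7},
    {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6},
]

def is_t_design(points, blocks, t, k, lam):
    """Check the t-(v, k, lam) design property directly."""
    if any(len(b) != k for b in blocks):
        return False
    return all(
        sum(1 for b in blocks if set(subset) <= b) == lam
        for subset in combinations(sorted(points), t)
    )

print(is_t_design(points, blocks, t=2, k=3, lam=1))  # True

# b is determined by the parameters: counting pairs (T, B) with
# T a t-subset of block B gives b * C(k, t) = lam * C(v, t),
# so here b = 1 * C(7, 2) / C(3, 2) = 21 / 3 = 7.
assert len(blocks) == 7
```

The double-counting identity in the final comment is the determination of b from t, v, k, and λ mentioned above.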
The [n, k] codes that we have studied to this point are called block codes because we encode a message of k information symbols into a block of length n. On the other hand, convolutional codes use an encoding scheme that depends not only upon the current message being transmitted but upon a certain number of preceding messages. Thus "memory" is an important feature of an encoder of a convolutional code. For example, if x(1), x(2), … is a sequence of messages, each of k information symbols, to be transmitted at times 1, 2, …, then an (n, k) convolutional code with memory M will transmit codewords c(1), c(2), … where c(i) depends upon x(i), x(i − 1), …, x(i − M). In our study of linear block codes we have discovered that it is not unusual to consider codes of fairly high lengths n and dimensions k. In contrast, the study and application of convolutional codes has dealt primarily with (n, k) codes with n and k very small and a variety of values of M.
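The role of memory can be sketched with a binary (2, 1) convolutional encoder of memory M = 2. The generator taps (1,1,1) and (1,0,1) below are a common textbook choice, used here for illustration rather than taken from this chapter.

```python
# A binary (2, 1) convolutional encoder with memory M = 2:
# each input bit produces two output bits, each a mod-2 sum of
# the current bit and the two previous bits, selected by taps.

def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]  # the two most recent input bits
    out = []
    for b in bits:
        window = [b] + state
        out.append(sum(x * g for x, g in zip(window, g1)) % 2)
        out.append(sum(x * g for x, g in zip(window, g2)) % 2)
        state = [b, state[0]]
    return out

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 1, 0, 0, 0, 0, 1]
```

Because the output at each time depends on the state left behind by earlier inputs, the same input bit can produce different output pairs at different times — exactly the "memory" that distinguishes convolutional codes from block codes.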
Convolutional codes were developed by Elias in 1955. In this chapter we will only introduce the subject and restrict ourselves to binary codes. While there are a number of decoding algorithms for convolutional codes, the main one is due to Viterbi; we will examine his algorithm in Section 14.2.
In 1948 Claude Shannon published a landmark paper “A mathematical theory of communication” that signified the beginning of both information theory and coding theory. Given a communication channel which may corrupt information sent over it, Shannon identified a number called the capacity of the channel and proved that arbitrarily reliable communication is possible at any rate below the channel capacity. For example, when transmitting images of planets from deep space, it is impractical to retransmit the images. Hence if portions of the data giving the images are altered, due to noise arising in the transmission, the data may prove useless. Shannon's results guarantee that the data can be encoded before transmission so that the altered data can be decoded to the specified degree of accuracy. Examples of other communication channels include magnetic storage devices, compact discs, and any kind of electronic communication device such as cellular telephones.
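The channel capacity mentioned above can be computed explicitly for simple channels. A standard example, not detailed in this passage, is the binary symmetric channel with crossover probability p, whose capacity is C = 1 − H2(p) with H2 the binary entropy function.

```python
import math

# Capacity of the binary symmetric channel (BSC): each bit is
# flipped independently with probability p, and C = 1 - H2(p).
# Shannon's theorem guarantees arbitrarily reliable
# communication at any rate below C.

def binary_entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    return 1.0 - binary_entropy(p)

print(f"{bsc_capacity(0.11):.3f}")  # -> 0.500
```

A noiseless channel (p = 0) has capacity 1 bit per use, while a channel that flips bits half the time (p = 0.5) has capacity 0 — no code can extract information from it.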
The common feature of communication channels is that information is emanating from a source and is sent over the channel to a receiver at the other end. For instance in deep space communication, the message source is the satellite, the channel is outer space together with the hardware that sends and receives the data, and the receiver is the ground station on Earth. (Of course, messages travel from Earth to the satellite as well.) For the compact disc, the message is the voice, music, or data to be placed on the disc, the channel is the disc itself, and the receiver is the listener.