• User selection: which users to transmit in the uplink or receive in the downlink.
• Resource allocation: what time-frequency bandwidth to allocate to the selected users and what transmit power to use.
Good scheduling strives to achieve two goals: quality of service (QoS) at the user level, measured by data rate, delay, loss, and fairness among users; and efficiency at the system level, measured by the total amount of traffic supported by the system.
A key feature in an OFDMA mobile broadband cellular system is scheduling, by which a base station dynamically selects users and allocates time-frequency-power resources to them. In contrast, in a circuit-switched voice system, a user admitted to the system is statically assigned a piece of bandwidth resource (time, frequency, or code) over which the voice traffic is transported without explicit dynamic scheduling. In a CDMA voice system, for example, the only dynamic resource allocation job is power allocation or power control [168]. The situation is quite different in mobile broadband, because data traffic is bursty and has diverse QoS requirements. Static resource allocation cannot simultaneously meet the QoS requirements and achieve high system efficiency. Scheduling becomes a necessity to dynamically match user selection and resource allocation with traffic needs and wireless channel conditions. Scheduling has, moreover, been well studied in wireline broadband networks (see [13, 137]), and many of those design and analysis ideas are applicable to the wireless counterpart.
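To make the trade-off between user-level fairness and system-level efficiency concrete, the following sketch implements one widely used policy, the proportional-fair scheduler, which in each slot serves the user with the largest ratio of instantaneous rate to smoothed average throughput. The fading model, averaging time constant, and all names here are hypothetical illustrations, not the specific scheduler of any system described in this chapter.

```python
import random

def pf_select(rates, avg_thpt):
    """Proportional-fair rule: pick the user maximizing
    instantaneous rate / long-term average throughput."""
    return max(range(len(rates)), key=lambda u: rates[u] / avg_thpt[u])

def run(num_users=4, slots=1000, tc=100.0, seed=0):
    """Simulate slotted downlink scheduling with a toy fading model."""
    rng = random.Random(seed)
    avg = [1e-6] * num_users          # small init avoids divide-by-zero
    served = [0.0] * num_users
    for _ in range(slots):
        # Hypothetical per-slot achievable rates; user u's mean rate
        # scales with (u + 1) to mimic unequal channel conditions.
        rates = [rng.uniform(0.1, 1.0) * (u + 1) for u in range(num_users)]
        sel = pf_select(rates, avg)
        for u in range(num_users):
            r = rates[u] if u == sel else 0.0
            avg[u] += (r - avg[u]) / tc   # exponential moving average
            served[u] += r
    return served
```

Even though the users have very different average channel qualities, the moving-average denominator boosts the priority of users who have been served little, so every user receives a share of the resource.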
Kung Yao, University of California, Los Angeles; Flavio Lorenzelli, The Aerospace Corporation, Los Angeles; Chiao-En Chen, National Chung-Cheng University, Taiwan
In Chapter 4, we considered the detection of known binary deterministic signals in Gaussian noises. In this chapter, we consider the detection and classification of M-ary deterministic signals. In Section 5.1, we introduce the problem of detecting M given signal waveforms in AWGN. Section 5.2 introduces the Gram–Schmidt orthonormalization method to obtain a set of N orthonormal signal vectors or waveforms from a set of N linearly independent signal vectors or waveforms. These orthonormal vectors or signal waveforms are used as a basis for representing M-ary signal vectors or waveforms in their detection. Section 5.3 treats the detection of M-ary given signals in AWGN. Optimum decisions under the Bayes criterion, the minimum probability of error criterion, the maximum a posteriori criterion, and the minimum distance decision rule are considered. Simple minimum distance signal vector geometry concepts are used to evaluate symbol error probabilities of various commonly encountered M-ary modulations, including binary frequency-shift keying (BFSK), binary phase-shift keying (BPSK), quadrature phase-shift keying (QPSK), and quadrature amplitude modulation (QAM) communication systems. Section 5.4 considers optimum signal design for M-ary systems. Section 5.5 introduces linearly and non-linearly separable and support vector machine (SVM) concepts used in classification of M deterministic pattern vectors. A brief conclusion is given in Section 5.6. Some general comments are given in Section 5.7. References and homework problems are given at the end of this chapter.
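The minimum distance decision rule mentioned above can be sketched in a few lines: for equally likely symbols in AWGN, the optimum receiver declares the constellation point closest in Euclidean distance to the received vector. The unit-energy QPSK constellation and Gray bit labels below are one common convention, used here only for illustration.

```python
import math

SQ2 = 1 / math.sqrt(2)

# QPSK constellation with Gray-labeled bit pairs (illustrative convention)
QPSK = {
    (0, 0): ( SQ2,  SQ2),
    (0, 1): (-SQ2,  SQ2),
    (1, 1): (-SQ2, -SQ2),
    (1, 0): ( SQ2, -SQ2),
}

def min_distance_detect(r, constellation):
    """Return the symbol label whose signal vector is closest to the
    received vector r. For equally likely symbols in AWGN this coincides
    with the minimum-probability-of-error (MAP) decision."""
    return min(constellation,
               key=lambda s: (r[0] - constellation[s][0]) ** 2
                           + (r[1] - constellation[s][1]) ** 2)
```

Because the noise is white and Gaussian, maximizing the likelihood is equivalent to minimizing this squared distance, which is why the simple geometric rule is optimum.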
So far we have studied the system design principles of OFDMA-based mobile broadband under a conventional cellular network framework. The basic premises of the framework are:
• The base stations use high transmit power, and are placed at carefully chosen locations, ideally at the vertices of regular hexagons.
• A user is connected to the “best” base station, usually the closest one, namely the one whose downlink signal is received at the user with the greatest strength.
• A base station is open to all the users within a cell by providing “unrestricted” access service.
• Both the downlink and uplink communications are one-hop between the base station and the users. The users do not communicate directly even if they are near each other.
• The time-frequency resource is reused spatially. Among cells reusing the same resource, a signal transmitted in one cell is treated as interference/noise in another cell.
• The spectrum to be used in a cell is fixed and known to both the base station and the users.
In this chapter, we explore several ideas that go beyond the conventional cellular framework in pursuit of the next performance leap.
The first such idea is heterogeneous network topology. In wireless, moving the transmitter and receiver close to each other increases signal strength, reduces required transmit power and thus interference to other transmissions, and allows dense spectrum reuse.
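The benefit of shrinking the link distance can be quantified with the free-space Friis equation. The sketch below (parameter names are illustrative) computes received power in dBm; halving the distance raises received power by about 6 dB, which is why dense low-power nodes can serve users with far less transmit power.

```python
import math

C = 3e8  # speed of light, m/s

def friis_rx_power_dbm(pt_dbm, freq_hz, dist_m, gt_db=0.0, gr_db=0.0):
    """Received power under free-space propagation (Friis equation):
    Pr = Pt + Gt + Gr - 20*log10(4*pi*d/lambda)."""
    lam = C / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * dist_m / lam)
    return pt_dbm + gt_db + gr_db - fspl_db
```

Real deployments see steeper-than-free-space decay (path-loss exponents of 3 to 4), which makes the gain from moving transmitter and receiver closer together even larger than this free-space sketch suggests.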
Communications systems are designed to send information from one point to another in the face of corrupting noise, signal loss, and other degrading effects. Because these effects are statistical in nature, the field of signal detection and estimation was created to provide an analytical means of quantifying link performance, establishing the quality of the information transfer. Although communications moved from point-to-point data links to networking among many users in the last decade, link analysis remains important in setting link performance, even though networking can help overcome link performance limitations.
The most commonly used parameter for link analysis is the signal-to-noise ratio (SNR), which is normally defined as the ratio of the average signal power to the average noise power at some point in the receiver chain. Although to first order, optical and radio frequency (RF) communications systems operate essentially in the same way, their detection processes are different [1–3].
In an RF receiver, the first detector senses the signal and noise field strengths. The noise input to this detector is usually caused by thermal noise from the antenna and the associated field preamplifier.
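The thermal noise floor referred to above is the standard kTB term, and the SNR is the signal power referred to that floor. The following sketch (function names and the noise-figure parameter are illustrative) computes both in dB units; at the reference temperature of 290 K the noise floor in a 1 Hz bandwidth is the familiar -174 dBm.

```python
import math

K_BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_dbm(bandwidth_hz, temp_k=290.0, noise_figure_db=0.0):
    """Thermal noise power kTB in dBm, optionally degraded by the
    receiver's noise figure."""
    n_watts = K_BOLTZMANN * temp_k * bandwidth_hz
    return 10 * math.log10(n_watts * 1e3) + noise_figure_db

def snr_db(signal_dbm, bandwidth_hz, temp_k=290.0, noise_figure_db=0.0):
    """SNR = average signal power over average noise power, in dB."""
    return signal_dbm - thermal_noise_dbm(bandwidth_hz, temp_k,
                                          noise_figure_db)
```

For example, a -100 dBm signal in a 1 MHz bandwidth sits about 14 dB above the 290 K thermal floor before any receiver noise figure is charged against it.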
In Chapter 10, we discussed the effects of atmospheric turbulence on high-data-rate FSOC systems, which are dominated by scintillation / channel fading and beam wander, how they are modeled, and how they are mitigated to yield high communications link / network performance. From some of the figures there, one can see channel dynamics on a very fast scale. When clouds insert themselves into the link, we suggested that RF communications (a hybrid system) could provide a means of keeping the link connected at a reasonably high, but much lower, data rate. This works well in the atmosphere in those climates where clouds are infrequent or sparse. The strategy does not work in the optical scatter channel, where particulate absorption and scattering degrade the incoming signal to the point that the original diffraction-limited beam becomes lost in the system noise floor. Because the atmospheric and maritime optical scatter channels degrade the original signal so severely, alternate strategies must be employed to facilitate high communications link / network availability. The signal structures that result from each channel are quite different from each other, as well as significantly different from those of the turbulence channel. Kennedy was one of the first to recognize that these differences in structure could not be easily mitigated; he suggested that optical system designers exploit the new structures in order to close the communications link, as typical mitigation techniques are useless in the diffusive scattering regime that defines high link availability [1]. Chapters 5 and 9 showed that the scattered field could be used for target imaging. This chapter will discuss how this approach can be used for communications in the optical scattering channel, highlighting the models and techniques used by today’s researchers and engineers.
Optical scatter channel models
Chapter 5 introduced the optical scatter channel by describing Mie scattering and its effect on optical signals. This introduced the inherent properties of the optical channel. In this section, we will expand this discussion, focusing on the detailed effects on laser communications created by the atmospheric and maritime scatter channels, and their modeling. Each has its own unique characteristics.
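As a starting point for such channel models, the unscattered (diffraction-limited) component of the beam decays exponentially with range according to the Beer–Lambert law, with a total attenuation coefficient that combines absorption and scattering. The sketch below is a minimal illustration of that relationship; the coefficient values used in the usage note are hypothetical, not measured channel data.

```python
import math

def attenuation_coeff(absorption_a, scattering_b):
    """Total beam attenuation coefficient c = a + b (per metre),
    combining absorption (a) and scattering (b)."""
    return absorption_a + scattering_b

def unscattered_power(p0_watts, c_per_m, range_m):
    """Beer-Lambert law: power remaining in the unscattered beam
    after propagating range_m through a medium with attenuation c."""
    return p0_watts * math.exp(-c_per_m * range_m)
```

For instance, with a (hypothetical) c of 0.05 per metre, only e^-1 (about 37%) of the launched power remains in the direct beam after 20 m, which is why the scattered field, rather than the residual direct beam, must carry the link at longer ranges.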
When we decided to write this book about the design of electro-optic systems, we agreed to make it as fundamental as possible. To do this in detail would most probably make the book unwieldy. Rather, we will try to motivate all aspects of the design from fundamental principles, stating the important results and leaving the derivations to references. We will take as our starting point the first two Laws of Thermodynamics [1]. The Three Laws of Thermodynamics are the basic foundation of our understanding of how the Universe works. Everything, no matter how large or small, is subject to the Three Laws. The Laws of Thermodynamics dictate the specifics for the movement of heat and work, both natural and man-made. The First Law of Thermodynamics is a statement of the conservation of energy, the Second Law is a statement about the nature of that conservation, and the Third Law is a statement about reaching Absolute Zero (0 K). These laws and Maxwell’s equations were developed in the nineteenth century, and are the foundation upon which twentieth-century physics was built.
The authors have been active participants in the area of electro-optic systems for over four decades, covering the introduction of laser systems and low-loss optical fibers and the institutionalizing of photonic systems into everyday life. Yet for all the literature that exists, and all the work that has been accomplished, we felt that no single book existed that integrated the entire field of electro-optics, reaching back to all the fundamental building blocks and providing enough examples to be useful to practicing engineers. After much discussion and a slow start, we decided first to reference as much material as possible, bringing forth only the highlights necessary to guide researchers in the field. Then we decided to minimize mathematical developments by relegating them, as much as possible, to explanatory examples. What has evolved in our development is a clear statement of the duality of time and space in electro-optic systems. This had been touched upon in our earlier work, but is brought forth clearly in this book in the duality of modulation index in time and contrast in space. In doing so, and in other areas, we feel that this book contains much new material on the processing of spatial images that have propagated through deleterious channels, and more broadly on communications and imaging through such channels.
In Chapter 1, we reach back to the true foundations of modern physics, the establishment of the first two laws of thermodynamics. While taken for granted, it is the first law that explains why we can see stars at the edge of the universe, and it governs the radiant properties of propagating systems. The second law and the insight of Claude Shannon have created the modern field of Information Theory. Using his fundamental definitions of channel capacity, we are able to establish the duality of time and space in electro-optics. This requires one basic mathematical development that is included in Appendix A and is developed in Chapters 3 and 4.
This chapter discusses some of the key aspects of the signal modulation and coding schemes used in FOC and FSOC systems today. Most notably, we will review the use of return-to-zero (RZ) and non-return-to-zero (NRZ) in coding the information streams and see their effect on systems performance, as well as receiver sensitivity.
Modern signal modulation schemes
Let us begin with some definitions.
Return-to-zero (RZ)
RZ describes a signal modulation technique where the signal drops (returns) to zero between each incoming pulse. The signal is said to be “self-clocking”: a separate clock signal does not need to be sent alongside the information signal to synchronize the data stream. The penalty is that the system uses twice the bandwidth to achieve the same data rate as the non-return-to-zero format (see next definition).
Although any RZ scheme contains a provision for synchronization, it still has a DC component, resulting in “baseline wander” during long strings of “0” or “1” bits, just like the non-return-to-zero line code. This wander is also known as “DC droop”, resulting from the AC coupling of such signals.
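The bandwidth penalty of RZ relative to NRZ can be made concrete with a short encoding sketch: representing each bit at the same data rate, unipolar RZ needs two signal intervals per bit (the pulse, then the return to zero), while NRZ needs one. The function names and the unipolar convention below are illustrative.

```python
def nrz_encode(bits):
    """Unipolar NRZ: one level held for the full bit period
    (one sample per bit here)."""
    return [1 if b else 0 for b in bits]

def rz_encode(bits):
    """Unipolar RZ: the level returns to zero in the second half of
    every bit period, so two samples per bit -- twice the bandwidth
    of NRZ at the same data rate."""
    out = []
    for b in bits:
        out += [1 if b else 0, 0]  # pulse (or space), then return to zero
    return out
```

Note that the guaranteed transition in every “1” bit is what makes RZ self-clocking, while a long run of “1”s in NRZ produces a constant level with no transitions for the receiver to lock onto.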