The discussion in Chapter 1 indicated that dividing the planned coverage area into a number of radio cells results in a more spectrally efficient solution with smaller and lighter end-user devices. In such a network, as the individual moves further away from the cell to which he or she is currently connected, the signal strength at the mobile eventually falls to a level where correct operation cannot be guaranteed and the call may ‘drop’. However, because the cellular system is designed to ensure good coverage over the planned region, there will be one or more other cells at this location that can be received at adequate signal strength, provided some mechanism is found to ‘hand over’ the call to one of these cells. Most of the complexity in practical cellular systems arises from the need to achieve this handover in a way that is as imperceptible to the user as possible.
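As a concrete illustration, a handover decision is often based on comparing serving and neighbour signal strengths against a hysteresis margin. The following is a minimal sketch of such a power-budget criterion; the threshold and margin values are hypothetical, and a real network combines many more inputs (signal quality, timing advance, cell load, neighbour lists).

```python
# Illustrative power-budget handover decision with hysteresis.
# Values are hypothetical, not drawn from any particular standard.

HYSTERESIS_DB = 4.0    # neighbour must exceed serving cell by this margin
MIN_RXLEV_DBM = -102   # serving level below which handover becomes urgent

def should_hand_over(serving_dbm: float, neighbour_dbm: float) -> bool:
    """Hand over when the neighbour exceeds the serving cell by the
    hysteresis margin, or the serving signal has become unusable."""
    if serving_dbm < MIN_RXLEV_DBM and neighbour_dbm > serving_dbm:
        return True
    return neighbour_dbm > serving_dbm + HYSTERESIS_DB

# Example: serving cell fading as the user moves away from it.
print(should_hand_over(-95.0, -93.0))   # False: inside hysteresis margin
print(should_hand_over(-95.0, -90.0))   # True: neighbour 5 dB stronger
```

The hysteresis margin prevents ‘ping-pong’ handovers between two cells of near-equal strength, which is one reason the process can be made largely imperceptible to the user.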
This chapter aims to establish a common understanding of the way most cellular networks operate, using the ubiquitous GSM system as a baseline, and highlight the key differences that can be expected in networks providing fixed or ‘nomadic’ wireless access. It will also explore the factors that significantly contribute to cellular network operating expense and thus determine activities that impact the operators' profit and loss account. Finally, the profit and loss account will be used as an agenda to identify wireless network technologies that are likely to change in the future.
At the time of writing, and to an extent never seen before, there is an expectation that almost any information or service that is available through communication systems in the office or home will be available wherever the user happens to be. This is placing incredible demands on wireless communications and has been the driver for the genesis and deployment of three generations of cellular systems in the space of 20 years. In parallel with this revolution in access technology has come the recognition that any information, whether for communication, entertainment or, indeed, for other purposes as yet unenvisaged, can be stored and transported in a universal digital format. The former technology-driven distinctions of analogue storage and transport for high-bandwidth signals, such as video, and digital storage for other content are no more. These changes, together with an increasing international consensus on a ‘light-touch’ regime for regulation to stimulate competition, have enabled the first generation of quad-play multi-national companies to become established. Such companies seek to spread a strong base of content and services across what would formerly have been known as broadcast (cable, satellite, terrestrial), fixed telephony, mobile and broadband access channels. However, the ability of such companies to deliver applications and services that operate reliably and consistently, regardless of user location, is ultimately predicated on their ability to design solutions that deliver an appropriate and guaranteed quality of service (QoS) over what will certainly be a finite, and potentially narrow, access data pipe.
The human species is unique amongst all life forms in developing a sophisticated and rich means of communication: speech. While communication may have had its origins in the need for individuals to work co-operatively to survive, it is now deeply embedded in the human psyche and is motivated as much by social as by business needs. Historically, this need was met simply, as individuals with similar interests and values chose to form small settlements or villages and all communication was face to face. It was not until the introduction of the telephone in the late nineteenth century that social and business networks could be sustained even when the individuals concerned did not live in the same vicinity. Although the coverage and level of automation of the fixed telephony network improved dramatically over the next 100 years, the next major step, communication on the move, was only possible with the introduction of wireless networks.
The term ‘wireless network’ is very broad and, at various points in history, could have included everything from Marconi's first transatlantic communication in 1901, through the first truly mobile (tactical) networks, in the form of the Motorola walkie-talkie used during the Second World War, to the wide-area private mobile networks used by the emergency services and large companies since the late 1940s. However, ‘wireless networks’ did not really enter the public consciousness until the commercial deployment of cellular mobile radio in the 1980s.
In Chapters 1 and 2, the drivers for the development of the cellular system architecture were discussed and an overview of the key network elements and principles of operation for a GSM cellular solution was provided. The remainder of the book will address the activities necessary to design and deploy profitable wireless networks. Figure 3.1 summarises where these key processes are to be found by chapter.
In this chapter, the principles and processes that are used to plan wireless access networks will be developed. The major focus will be on cellular networks, as these usually represent the most complex planning cases, but an overview of the corresponding processes for 802.11 is also provided. Circuit voice networks will be examined initially; the treatment will then be extended to understand the additional considerations that come into play as first circuit-data and subsequently packet-data-based applications are introduced.
With the planning sequence understood, the way in which information from such processes can be used to explore the potential profitability of networks well in advance of deployment will be addressed. Choices regarding which applications are to be supported in the network and the quality of service offered will be shown to have a major impact on the profitability of network projects.
Circuit voice networks
In most forms of retailing, the introduction of new products follows the ‘S curve’ sequence first recognised by Rogers [1].
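The S curve is commonly modelled with a logistic function; one representative form (the symbols here are generic, not drawn from Rogers' text) is

$$ N(t) = \frac{N_{\max}}{1 + e^{-k(t - t_0)}}, $$

where $N(t)$ is the cumulative number of adopters at time $t$, $N_{\max}$ is the saturation level of the market, $k$ sets the adoption rate, and $t_0$ marks the inflection point at which half the market has adopted. Adoption is slow at first, accelerates through the inflection point, and flattens as saturation is approached.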
In Chapter 3, the generic principles, processes and deployment configurations applicable to cellular network planning were developed. In this chapter and the following four, a detailed design process will be developed to address, step by step, the practical deployment process for four different wireless networks.
This chapter describes the steps in the planning process that are essentially common, regardless of the specific air interface under consideration. The high-level planning of Chapter 3 will have estimated the total number of cell sites and the maximum cell size, and made decisions on the applications to be deployed. The detailed plan will define the actual locations of cell sites, antenna types, mast heights, etc., using topographical data for the specific regions. The plan will also guarantee levels of coverage, capacity and availability for the applications to be supported.
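As a simple illustration of how the high-level estimate feeds the detailed plan, a first-cut site count can be derived from the coverage area and the maximum cell radius. The figures below are arbitrary, and the hexagonal-cell geometry is a common idealisation rather than a statement about any particular deployment.

```python
import math

# Hypothetical inputs for a first-cut site-count estimate.
coverage_area_km2 = 500.0   # planned coverage region
cell_radius_km = 2.0        # maximum cell radius from the link budget

# Area of a regular hexagonal cell of circumradius R: (3*sqrt(3)/2) * R^2
cell_area_km2 = (3 * math.sqrt(3) / 2) * cell_radius_km ** 2

sites = math.ceil(coverage_area_km2 / cell_area_km2)
print(f"Approximately {sites} cell sites")  # ~49 sites for these inputs
```

Detailed planning then adjusts this count upwards or downwards as real terrain, capacity demand and candidate site availability are taken into account.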
Changes in the relative implementation cost and battery-life impact of particular technologies over the last 25 years have given rise to three distinct RAN standards:
TDMA (as employed in GSM, GPRS, EDGE),
CDMA (as employed in UMTS releases 99, 4, 5, 6, 7 and CDMA 2000),
OFDMA (as employed in 802.11, 802.16e (WiMAX) and as planned for 3G LTE).
These technologies are expected to dominate wireless deployments over the next 20 years and it is a comprehensive understanding of major factors, such as coverage, capacity and latency, that will enable system designers to exploit their potential fully.
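To make the last of these concrete, the sketch below illustrates the core of an OFDM transmitter and receiver, the mechanism underlying OFDMA: data symbols are mapped onto orthogonal subcarriers with an inverse FFT and protected by a cyclic prefix. The subcarrier count and prefix length are illustrative and not taken from any of the standards listed above.

```python
import numpy as np

# Toy OFDM transmitter/receiver core. Subcarrier count and cyclic-prefix
# length are illustrative, not drawn from 802.11, 802.16e or LTE.
rng = np.random.default_rng(0)
n_subcarriers = 64
cp_len = 16

# Random QPSK symbols, one per subcarrier.
bits = rng.integers(0, 2, size=(n_subcarriers, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# The IFFT turns the frequency-domain symbols into one time-domain
# OFDM symbol; the cyclic prefix guards against multipath delay spread.
time_symbol = np.fft.ifft(qpsk)
tx = np.concatenate([time_symbol[-cp_len:], time_symbol])

# Receiver (ideal channel): strip the prefix, FFT back, recover symbols.
rx = np.fft.fft(tx[cp_len:])
print(np.allclose(rx, qpsk))  # True: the subcarriers are orthogonal
```

In OFDMA proper, different users are assigned disjoint subsets of these subcarriers (and time slots), which is what distinguishes it as a multiple-access scheme rather than just a modulation method.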
The effects of roundoff noise in control and signal processing systems, and in numerical computation, have been described in detail in the previous chapters.
However, another kind of quantization takes place in these systems that has not yet been discussed: coefficient quantization.
The coefficients of an equation being implemented by computer must be represented according to a given numerical scale. The representation is, of course, done with a finite number of bits. The same is true for the coefficients of a digital filter or for the gains of a control system.
If a coefficient can be perfectly represented by the allowed number of bits, there is no error in the system implementation. If the coefficient requires more bits than the allowed word length provides, it must be rounded to the nearest number on the allowed number scale. Rounding the coefficient changes the implementation and causes an error in the computed result. This error is distinct from, and independent of, the quantization noise introduced by roundoff in computation. Its effect is bias-like, rather than having the PQN nature of the roundoff studied previously.
If a numerical equation is being implemented or simulated by computer, quantization of the coefficients causes the implementation of a slightly different equation.
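A short numerical sketch makes the point: rounding the coefficients of a second-order filter section to a fixed number of fractional bits yields a slightly different filter, with poles in slightly different places. The coefficient values and word length below are purely illustrative.

```python
import numpy as np

def quantize(coeffs, frac_bits):
    """Round coefficients to the nearest multiple of 2**-frac_bits,
    mimicking a fixed-point representation of the ideal values."""
    q = 2.0 ** -frac_bits
    return np.round(np.asarray(coeffs) / q) * q

# Ideal coefficients of a hypothetical second-order IIR section.
a = [1.0, -1.847759, 0.858306]        # denominator (pole polynomial)
b = [0.002653, 0.005306, 0.002653]    # numerator

a_q = quantize(a, 8)   # keep 8 fractional bits
print(a_q)             # a slightly different denominator polynomial

# The quantized denominator places the poles slightly differently;
# this is a fixed, bias-like perturbation, not a random noise.
print(np.abs(np.roots(a)))    # ideal pole radii
print(np.abs(np.roots(a_q)))  # perturbed pole radii
```

The perturbation is deterministic: every run of the implemented filter uses the same wrong coefficients, which is why the effect is bias-like rather than noise-like.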
The purpose of this chapter is to provide an introduction to the basics of statistical analysis, to discuss the ideas of probability density function (PDF), characteristic function (CF), and moments. Our goal is to show how the characteristic function can be used to obtain the PDF and moments of functions of statistically related variables. This subject is useful for the study of quantization noise.
PROBABILITY DENSITY FUNCTION
Figure 3.1(a) shows an ensemble of random time functions, sampled at time instant t = t1 as indicated by the vertical dashed line. Each of the samples is quantized in amplitude. A “histogram” is shown in Fig. 3.1(b). This is a “bar graph” indicating the relative frequency of the samples falling within each quantum box. Each bar can be constructed to have an area equal to the probability of the signal falling within the corresponding quantum box at time t = t1; the areas must then sum to 1. If the ensemble has an arbitrarily large number of member functions, this probability equals the number of “hits” in the given quantum box divided by the total number of samples. As the quantum box size is made smaller and smaller, in the limit the histogram becomes f_x(x), the probability density function (PDF) of x, sketched in Fig. 3.1(c).
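This construction is easy to reproduce numerically. The sketch below builds an area-normalized histogram from a large ensemble of samples; the Gaussian ensemble and the quantum box size are arbitrary choices for illustration.

```python
import numpy as np

# Ensemble of samples of a random variable x at a fixed time instant;
# a Gaussian ensemble is used here purely as an illustrative example.
rng = np.random.default_rng(1)
samples = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Histogram normalized so total bar area is 1: each bar's area then
# approximates the probability of x falling in that quantum box.
box = 0.25                              # quantum box size
edges = np.arange(-5, 5 + box, box)
density, _ = np.histogram(samples, bins=edges, density=True)
print(np.sum(density * box))            # 1.0: the areas sum to unity

# Shrinking the box size makes the bar graph approach the PDF f_x(x).
```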
The extremely fast rolloff of the characteristic function of Gaussian variables provides nearly perfect fulfillment of the quantization theorems under most circumstances, and allows easy approximation of the errors in Sheppard's corrections by the first terms of their series expression. However, for most other distributions, this is not the case.
As an example, let us study the behavior of the residual error of Sheppard's first correction in the case of a sinusoidal quantizer input of amplitude A.
Plots of the error are shown in Fig. G.1.
It can be observed that neither of the functions is smooth; that is, a high-order Fourier series is necessary to represent the residual error of Sheppard's first correction, R1(A, μ), with sufficient accuracy. The maxima and minima of R1(A, μ), obtained for each value of A by varying μ, exhibit oscillatory behavior. For some values of A, for example A ≈ 1.43q or A ≈ 1.93q (marked by vertical dotted lines in Fig. G.1(b)), the residual error of Sheppard's correction remains quite small for any value of the mean, but the limits of the error widen rapidly for values of A even slightly away from these. A conservative upper bound of the error is therefore as high as the peaks in Fig. G.1(b). The envelope of the error function could be used for this purpose.
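The residual error described here can also be estimated numerically, by quantizing a sinusoid of amplitude A and mean μ and comparing the mean of the quantizer output with μ. The sketch below does this for a few sample amplitudes, including those marked in Fig. G.1(b); the quantum step and the grids over μ and θ are arbitrary choices, and the sketch is not intended to reproduce Fig. G.1 exactly.

```python
import numpy as np

# Estimate the residual error of Sheppard's first correction,
# R1(A, mu) = E[Q(x)] - mu, for a sinusoidal input x = mu + A*sin(theta)
# with theta uniformly distributed and a uniform mid-tread quantizer.
q = 1.0
theta = np.linspace(0.0, 2 * np.pi, 200_000, endpoint=False)

def residual(A, mu):
    x = mu + A * np.sin(theta)
    xq = q * np.round(x / q)    # uniform quantizer, step size q
    return np.mean(xq) - mu     # first-moment residual error

# Sweep the mean over one quantum interval for a few amplitudes.
for A in (1.43 * q, 1.60 * q, 1.93 * q):
    r = [residual(A, mu) for mu in np.linspace(0.0, q, 41)]
    print(f"A = {A / q:.2f}q: max |R1| over mu ~ {np.max(np.abs(r)):.4f} q")
```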