This chapter studies MQAM for downlink multicode CDMA systems with interference cancellation to support high data rate services. In the current 3G WCDMA systems, in addition to multicode transmission, MQAM is employed for HSDPA because of its high spectral efficiency. In frequency selective fading channels, multipath interference seriously degrades system performance. Theoretical analysis is presented to show that, with the help of the interference cancellation technique, MQAM may be employed in high-SNR conditions to increase system throughput. Moreover, it is found that when the interference cancellation technique is used, extra pilot power should be invested to obtain more accurate channel estimation, and consequently better BER performance can be achieved.
Introduction
MQAM-modulated multicode CDMA is proposed for HSDPA in the 3G standards, allowing throughput to be increased without any additional bandwidth. As mentioned in Chapter 1, the introduction of multicode transmission causes multipath interference in frequency selective fading channels because of multipath propagation delays. In this chapter, a coherent Rake receiver with interference cancellation is studied.
Moreover, in WCDMA systems, a common pilot channel is used for channel estimation at the receiver. However, channel estimation errors occur because the received pilot channel signal suffers from multipath interference and AWGN; these errors affect the coherent data decisions and the regeneration of the multipath interference, and thus degrade system performance. The effects of imperfect channel estimation and additive multipath interference on system performance are investigated.
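As a rough illustration of why MQAM is attractive mainly at high SNR, the sketch below evaluates the standard textbook approximation for the bit error rate of Gray-coded square M-QAM in an AWGN channel. It is not the analysis of this chapter: it ignores multipath interference, the Rake receiver and channel estimation error, and the Eb/N0 values are arbitrary.

```python
# Approximate BER of Gray-coded square M-QAM in AWGN (textbook formula).
# Illustrative only: ignores multipath interference and channel-estimation error.
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mqam_ber(M, ebno_db):
    """Approximate bit error rate of square M-QAM at a given Eb/N0 (dB)."""
    k = math.log2(M)                      # bits per symbol
    ebno = 10.0 ** (ebno_db / 10.0)
    arg = math.sqrt(3.0 * k * ebno / (M - 1.0))
    return (4.0 / k) * (1.0 - 1.0 / math.sqrt(M)) * q_func(arg)

for ebno_db in (6, 10, 14, 18):
    print(f"Eb/N0 = {ebno_db:2d} dB:",
          "  ".join(f"{M}-QAM {mqam_ber(M, ebno_db):.1e}" for M in (4, 16, 64)))
```

At a fixed target BER the required Eb/N0 rises by several dB at each step from 4-QAM to 16-QAM to 64-QAM, which is the sense in which MQAM trades SNR for throughput.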
To conclude this book, we summarize our main results and conclusions, before briefly speculating on the most promising areas for future research.
Putting it all together
Stochastic resonance
Chapter 2 presents a historical review and elucidation of the major epochs in the history of stochastic resonance (SR) research, and a discussion of the evolution of the term ‘stochastic resonance’. A list of the main controversies and debates associated with the field is given.
Chapter 2 also demonstrates qualitatively that SR can actually occur in a single threshold device where the threshold is set to the signal mean. Although SR cannot occur in terms of the conventional signal-to-noise ratio (SNR) measure in this situation, if ensemble averaging is allowed then an optimal nonzero noise level can decrease distortion.
Furthermore, Chapter 2 contains a discussion and critique of the use of SNR measures to quantify SR, the debate about SNR gains due to SR, and the relationship between SNRs and information theory.
Suprathreshold stochastic resonance
Chapter 4 provides an up-to-date literature review of previous work on suprathreshold stochastic resonance (SSR). It also gives numerical results, showing SSR occurring for a number of matched and mixed signal and noise distributions not previously considered. A generic change of variable in the equations used to determine the mutual information through the SSR model is introduced. This change of variable results in a probability density function (PDF) that describes the average transfer function of the SSR model.
As described and illustrated in Chapters 4–7, a form of stochastic resonance called suprathreshold stochastic resonance can occur in a model system in which a number of identical threshold devices all receive the same signal but are subject to independent additive noise. In this chapter, we relax the constraint in this model that each threshold must have the same value, and aim to find the set of threshold values that either maximizes the mutual information, or minimizes the mean square error distortion, for a range of noise intensities. Such a task is a stochastic optimal quantization problem. For sufficiently large noise, we find that the optimal quantization is achieved when all thresholds have the same value. In other words, the suprathreshold stochastic resonance model provides an optimal quantization for small input signal-to-noise ratios.
Introduction
The previous four chapters consider a form of stochastic resonance, known as suprathreshold stochastic resonance (SSR), which occurs in an array of identical noisy threshold devices. The noise at the input to each threshold device is independent and additive, and this causes a randomization of effective threshold values, so that all thresholds have unique, but random, effective values. Chapter 4 discusses and extends Stocks' result (Stocks 2000c) that the mutual information between the SSR model's input and output signals is maximized for some nonzero value of noise intensity. Chapter 6 considers how to reconstruct an approximation of the input signal by decoding the SSR model's output signal.
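For readers who want to see the effect before the detailed treatment, the following Monte Carlo sketch estimates the mutual information of the SSR model directly from its definition, with Gaussian signal and noise and all thresholds set to the signal mean. The number of devices, sample size and noise ratios are arbitrary illustrative choices, not values taken from the book.

```python
# Monte Carlo sketch of suprathreshold stochastic resonance (SSR):
# N identical threshold devices, thresholds at the signal mean (0), each driven
# by the same Gaussian signal plus independent Gaussian noise.  The output Y is
# the number of devices whose input exceeds the threshold; I(X;Y) peaks at a
# nonzero noise level.  Parameter values below are illustrative assumptions.
import numpy as np
from scipy.stats import binom, norm
from scipy.special import xlogy

rng = np.random.default_rng(0)
N = 15                                   # number of threshold devices
x = rng.standard_normal(20_000)          # unit-variance Gaussian signal samples

def mutual_information_bits(sigma):
    """Estimate I(X;Y) in bits for a given noise-to-signal std ratio sigma."""
    p = norm.cdf(x / sigma)                              # P(device fires | X = x)
    ys = np.arange(N + 1)
    pmf = binom.pmf(ys[None, :], N, p[:, None])          # P(Y = y | X = x_k)
    p_y = pmf.mean(axis=0)                               # P(Y = y), averaged over X
    h_y = -xlogy(p_y, p_y).sum() / np.log(2)             # output entropy
    h_y_given_x = -xlogy(pmf, pmf).sum(axis=1).mean() / np.log(2)
    return h_y - h_y_given_x

for sigma in (0.05, 0.2, 0.5, 1.0, 2.0):
    print(f"sigma = {sigma:4.2f}   I(X;Y) ~ {mutual_information_bits(sigma):.3f} bits")
```

Running the sketch shows the characteristic SSR behaviour: roughly one bit of information survives when the noise is very small (all devices switch together), more information is transmitted at an intermediate noise level, and the information falls away again as the noise grows large.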
Stochastic resonance (SR), being an interdisciplinary and evolving subject, has seen many debates. Indeed, the term SR itself has been difficult to comprehensively define to everyone's satisfaction. In this chapter we look at the problem of defining stochastic resonance, as well as exploring its history. Given that the bulk of this book is focused on suprathreshold stochastic resonance (SSR), we give particular emphasis to forms of stochastic resonance where thresholding of random signals occurs. An important example where thresholding occurs is in the generation of action potentials by spiking neurons. In addition, we outline and comment on some of the confusions and controversies surrounding stochastic resonance and what can be achieved by exploiting the effect. This chapter is intentionally qualitative. Illustrative examples of stochastic resonance in threshold systems are given, but fuller mathematical and numerical details are left for subsequent chapters.
Introducing stochastic resonance
Stochastic resonance, although a term originally used in a very specific context, is now broadly applied to describe any phenomenon where the presence of internal noise or external input noise in a nonlinear system provides a better system response to a certain input signal than in the absence of noise. The key term here is nonlinear. Stochastic resonance cannot occur in a linear system – linear in this sense means that the output of the system is a linear transformation of the input of the system. A wide variety of performance measures have been used – we shall discuss some of these later.
By definition, signal or data quantization schemes are noisy in that some information about a measurement or variable is lost in the process of quantization. Other systems are subject to stochastic forms of noise that interfere with the accurate recovery of a signal, or cause inaccuracies in measurements. However, stochastic noise and quantization can both be incredibly useful in natural processes or engineered systems. As we saw in Chapter 2, one way in which noisy behaviour can be useful is through a phenomenon known as stochastic resonance (SR). In order to relate SR and signal quantization, this chapter provides a brief history of standard quantization theory. Such results and research have come mainly from the electronic engineering community, where quantization needs to be understood for the very important process of analogue-to-digital conversion – a fundamental requirement for the plethora of digital systems in the modern world.
Information and quantization theory
Analogue-to-digital conversion (ADC) is a fundamental stage in the electronic storage and transmission of information. This process involves obtaining samples of a signal, and their quantization to one of a finite number of levels.
According to the Australian Macquarie Dictionary, the definition of the word ‘quantize’ is
1. Physics: a. to restrict (a variable) to a discrete value rather than a set of continuous values. b. to assign (a discrete value), as a quantum, to the energy content or level of a system. 2. Electronics: to convert a continuous signal waveform into a waveform which can have only a finite number (usually two) of values.
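To make ‘a finite number of levels’ concrete, here is a minimal sketch of a uniform mid-rise quantizer; it is a generic textbook construction rather than anything specific to this chapter, and the bit depth and input range are arbitrary choices.

```python
# Minimal uniform (mid-rise) quantizer: map a continuous sample to one of
# 2**bits equally spaced levels spanning [-full_scale, +full_scale].
# Generic textbook construction; bit depth and range are arbitrary choices.
import numpy as np

def uniform_quantize(x, bits=3, full_scale=1.0):
    """Return the quantized value and the integer code for each sample."""
    levels = 2 ** bits
    step = 2.0 * full_scale / levels                      # quantization step size
    code = np.clip(np.floor((x + full_scale) / step), 0, levels - 1).astype(int)
    xq = -full_scale + (code + 0.5) * step                # mid-rise reconstruction
    return xq, code

x = np.linspace(-1.0, 1.0, 9)
xq, code = uniform_quantize(x)
for xi, xqi, ci in zip(x, xq, code):
    print(f"x = {xi:+.3f} -> code {ci} -> xq = {xqi:+.3f}")
```

The quantization error for in-range samples is bounded by half a step, which is precisely the sense in which information about the original value is lost.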
This chapter discusses the behaviour of the mutual information and channel capacity in the suprathreshold stochastic resonance model as the number of threshold elements becomes large or approaches infinity. The results in Chapter 4 indicate that, for large N, the mutual information and channel capacity might converge to simple expressions in N. The current chapter finds that accurate approximations do indeed exist in the large-N limit. Using a relationship between mutual information and Fisher information, it is shown that capacity is achieved either (i) when the signal distribution is the Jeffreys prior, a distribution which depends entirely on the noise distribution, or (ii) when the noise distribution depends on the signal distribution via a cosine relationship. These results provide theoretical verification and justification for previous work in both computational neuroscience and electronics.
Introduction
Section 4.4 of Chapter 4 presents results for the mutual information and channel capacity through the suprathreshold stochastic resonance (SSR) model shown in Fig. 4.1. Recall that σ is the ratio of the noise standard deviation to the signal standard deviation. For the case of matched signal and noise distributions and a large number of threshold devices, N, the optimal value of σ – that is, the value of σ that maximizes the mutual information and achieves channel capacity – appears to asymptotically approach a constant value with increasing N. This indicates that analytical expressions might exist in the case of large N for the optimal noise intensity and channel capacity.
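To give a flavour of how Fisher information enters the argument, the following is a hedged sketch of the standard large-N relationship and of the condition under which it is maximised. The notation is assumed here (P_x denotes the signal PDF and J(x) the Fisher information of the output about the input x); the expressions are indicative of the general result rather than the chapter's exact derivation.

```latex
\[
I(X;Y) \;\approx\; H(X) \;-\; \frac{1}{2}\int P_x(x)\,
      \log_2\!\frac{2\pi e}{J(x)}\,\mathrm{d}x ,
\qquad
P_x^{\mathrm{opt}}(x) \;=\; \frac{\sqrt{J(x)}}{\int \sqrt{J(u)}\,\mathrm{d}u}
\;\;\text{(the Jeffreys prior).}
\]
```

Maximising the right-hand side over P_x, subject to normalisation, gives a signal distribution proportional to the square root of the Fisher information, which is why the capacity-achieving input depends only on the noise distribution.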
In the early 1990s, the need for a ‘third-generation’ cellular standard was recognised by many agencies worldwide. The European Union had funded a series of research programmes since the late 1980s, such as RACE [1], aimed at putting in place the enabling technology for 3G, and similar work was underway in Japan, the USA and other countries. The runaway success of GSM, however, had fortuitously put ETSI in the driving seat, as most countries wanted a GSM-compatible evolution for 3G. Recognising this need, and to reduce the risk of the fragmented cellular standards that characterised the world before GSM, ETSI proposed that a partnership programme should be established between the leading national bodies to develop the new standard. The result was the formation of 3GPP (the Third Generation Partnership Project) between national standards bodies representing China, Europe, Japan, Korea and the USA. It was agreed that ETSI would continue to provide the infrastructure and technical support for 3GPP, ensuring that the detailed technical knowledge of GSM, resident in the staff of the ETSI secretariat, was not lost.
Although not explicitly written down in the form of detailed 3G requirements at the time, the consensus amongst the operator, vendor and administration delegates that drove standards evolution might best be summarised as:
Provide better support for the expected demand for rich multimedia services,
Provide lower-cost voice services (through higher voice capacity),
Reuse GSM infrastructure wherever possible to facilitate smooth evolution.
This was clearly a very sensible set of ambitions, but limited experience of multimedia in the fixed network, and of the specific needs of packet-based services in particular, meant that some aspects of the resulting UMTS standard were not ideal.
As discussed earlier, one method for increasing capacity in cellular networks is to make the cell sizes smaller, allowing more subscribers to use the available radio spectrum without interfering with each other. A similar approach is used in 802.11 (Wi-Fi) – either to provide public ‘hot-spot’ broadband connections or to link to broadband connections in the home via a Wi-Fi router. The range of such systems is limited to a few tens of metres by restricting the power output of the Wi-Fi transmitter. The goal of a wireless mesh is to extend the 802.11 coverage outdoors over a wider area (typically tens of square kilometres) not simply by increasing the power, but by creating contiguous coverage with dozens of access points (APs) or nodes, separated by distances of 100–150 metres. For such a solution to be economically viable, the access points themselves need to be relatively cheap to manufacture and install, and the back-haul costs must be tightly managed. To address this latter requirement, only a small percentage of APs (typically 10–20%) have dedicated back-haul to the Internet; the other APs pass their traffic through neighbouring APs until an AP with back-haul is reached. At the time of writing, the IEEE 802.11s standard for mesh networking is still being drafted, and various proprietary flavours of mesh networks exist. The following discussion outlines the principal characteristics and properties of most commercially available mesh networks, which will be embodied in 802.11s when it is finalised.
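To make the forwarding idea concrete, the sketch below builds a toy grid of access points, marks a small fraction of them as back-haul gateways, and uses a breadth-first search to count how many wireless hops each remaining AP needs to reach a gateway. The grid topology, 15% gateway fraction and unit-hop metric are illustrative assumptions, not part of 802.11s or of any particular product.

```python
# Toy illustration of mesh back-haul: most APs forward traffic hop-by-hop to
# the nearest AP that has a wired back-haul connection ("gateway").
# The grid layout, 15% gateway fraction and unit-hop metric are assumptions
# made for illustration only.
from collections import deque
import random

random.seed(1)
COLS, ROWS = 8, 6                                   # 48 APs on a grid
aps = [(c, r) for c in range(COLS) for r in range(ROWS)]
gateways = set(random.sample(aps, k=max(1, int(0.15 * len(aps)))))

def neighbours(ap):
    """APs within radio range: here, the four adjacent grid positions."""
    c, r = ap
    cand = [(c + 1, r), (c - 1, r), (c, r + 1), (c, r - 1)]
    return [n for n in cand if 0 <= n[0] < COLS and 0 <= n[1] < ROWS]

# Multi-source BFS from all gateways gives each AP its hop count to back-haul.
hops = {g: 0 for g in gateways}
queue = deque(gateways)
while queue:
    ap = queue.popleft()
    for n in neighbours(ap):
        if n not in hops:
            hops[n] = hops[ap] + 1
            queue.append(n)

worst = max(hops.values())
print(f"{len(gateways)} gateway APs, worst case {worst} hops to back-haul")
```

The worst-case hop count matters economically as well as technically: each extra hop consumes air time at intermediate APs, so the gateway fraction is a trade-off between back-haul cost and mesh capacity.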
The life-cycle of any wireless telecommunications network broadly follows the process illustrated in Figure 10.1. As discussed in Chapter 3, the initial planning makes technology choices to meet the overall business goals. This leads to the design phase, where detailed studies are made of system capacity and coverage to ensure that the network performance criteria are likely to be met. Once the design is complete, the network infrastructure is ordered, installed and commissioned – depending on the scale of the network, this phase can take many months or even years. Once the network has been built and commissioned, subscribers are given access to the network, which is then said to be operational. The performance of the network is subsequently routinely monitored to ensure that any equipment failures, software problems or other issues are quickly identified. Any problems that do occur are fixed by operational support engineers or automatically dealt with by equipment redundancy and fault correction procedures. Finally, the performance of the stable network is examined, and its configuration is fine-tuned to maximise capacity or optimise the quality of service delivered to the subscribers. If necessary, the cycle is repeated, as new network expansion is planned and the network grows to cope with growth in the subscriber base.
In practice, the phases of the network life-cycle are rarely as distinct as this, and there is much overlap and iteration around the cycle.
At the beginning of 2003, results from the first commercial deployments of UMTS were coming in, and the inefficiency of Release 99, both spectrally and in terms of its long ‘call’ set-up times, was becoming apparent. Coincident with this, Wi-Fi networks were becoming omnipresent in businesses and cities, and broadband was achieving significant penetration in homes through much of the developed world. User expectation was shifting from ‘dial-up’ latencies of tens of seconds to delays of less than 200 ms! These events together made it clear that there was a need for change if operators using 3GPP-based networks were to remain competitive. The requirements for a ‘long-term evolution’ of UMTS can thus be traced to this time, when a study commenced that eventually led to the publication of the document Evolution of 3GPP System [1]. The results from this study made it clear that, in future systems, support of all services should be via a single ‘all IP network’ (AIPN), a fact already recognised in the 3GPP study ‘IP Multimedia Services’ with an initial functionality release in Release 5 (June 2002). What was different, however, was that the study also identified that both the core network and access systems needed to be updated or replaced in order to provide a better user experience. Even at this initial stage, control and user plane latencies were itemised as major issues, with latencies of <100 ms targeted alongside peak user rates of 100 Mbits/s.
The development of a framework to assess system capacity for any air interface typically falls into two parts: firstly estimation of the S/N or C/I necessary to deliver the required bit error rate (BER) for the system in a point-to-point link and secondly understanding the limiting system conditions under which the most challenging S/N or C/I will be encountered. Once these two conditions are understood, calculations of maximum ranges and capacities for the system can be made.
C/I assessment for GSM
The base modulation scheme employed by GSM is Gaussian minimum shift keying (GMSK). This is a form of binary frequency shift keying in which the input bit stream driving the frequency modulator is first filtered by a network with a Gaussian impulse response. This filter removes the high frequencies, which would otherwise be present because of the ‘fast’ edges in the modulating bit stream. If BT, the product of the filter's −3 dB bandwidth and the modulating bit period, is chosen to be around 0.3, most of the energy of the 270 kbits/s GSM bit stream can be accommodated in a 200 kHz channel with low interference from the adjacent channels and negligible interference from those beyond.
Figure 5.1 illustrates the effect of the Gaussian filtering on the original rectangular pulse train, and goes on to show how inter-symbol interference arises when multipath components arrive with longer delays than the directly propagated ‘ray’.
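As a rough numerical companion to that figure (not taken from the book), the sketch below constructs the Gaussian filter's impulse response for BT = 0.3 using the standard GMSK formulation, convolves it with a single rectangular bit of the 270.833 kbits/s stream, and reports how much of the shaped pulse stays within its own bit period; the sampling rate and filter span are arbitrary choices.

```python
# Gaussian pulse shaping for GMSK, using the standard textbook impulse
# response  h(t) = B*sqrt(2*pi/ln2) * exp(-2*pi^2*B^2*t^2 / ln2),  where
# B is the -3 dB bandwidth and BT = B*T = 0.3 as in GSM.
# Illustrative sketch only; sampling rate and span are arbitrary choices.
import numpy as np

BIT_RATE = 270.833e3          # GSM bit rate (bits/s)
T = 1.0 / BIT_RATE            # bit period
BT = 0.3
B = BT / T                    # filter -3 dB bandwidth
SPS = 64                      # samples per bit period

t = np.arange(-3 * SPS, 3 * SPS + 1) / SPS * T        # span of +/- 3 bit periods
h = B * np.sqrt(2 * np.pi / np.log(2)) * np.exp(-2 * (np.pi * B * t) ** 2 / np.log(2))
h /= h.sum()                                           # unit DC gain

rect = np.ones(SPS)                                    # one rectangular bit
pulse = np.convolve(rect, h)                           # filtered (smoothed) bit

# Fraction of the shaped pulse's area that stays inside its own bit period.
centre = len(pulse) // 2
inside = pulse[centre - SPS // 2: centre + SPS // 2].sum() / pulse.sum()
print(f"BT = {BT}: {inside:.1%} of the shaped pulse lies in its own bit period")
```

The remainder of the pulse spills into the neighbouring bit periods, which is the controlled inter-symbol interference that the BT = 0.3 compromise accepts in exchange for the narrow transmitted spectrum.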
Telecommunications networks have always fascinated me. My interest was sparked when, as an engineering student in the late sixties, I was told that the telephone network was the biggest machine on earth yet it was constructed from a few basic building blocks replicated many, many times over. It seemed to me that a machine with those characteristics was both already a remarkable engineering feat and a perfect platform for the rapid development of more sophisticated services. So I decided upon a career in telecommunications engineering.
I soon discovered that there was nothing basic about either the building blocks or the architecture of those networks. They already showed great engineering sophistication at every layer and in every enabling technology. That sophistication was to lead to the continuing development of telecoms networks at a far greater pace than any of us working in the field three or four decades ago could possibly have imagined.
From voice to data; analogue to digital; terrestrial to satellite; tethered to untethered, the progress has been remarkable. Yet undoubtedly the most remarkable development of all has been in wireless networks. Nearly half of the world's population take it for granted that the purpose of telecoms networks is to connect people, not places. An increasing proportion of them use those connections for exchanging text and images as readily as voice. The transformational effect on national economies, education, health and many other factors that bear upon the quality of life is apparent.
This chapter aims to provide an overview of the role of the core network and transmission in wireless solutions. Insight is given into the factors that have influenced network evolution from early cellular architectures, such as GSM Release 98, through to systems currently being standardised for the future, exemplified by Release 8. The chapter will conclude with a worked example illustrating the dimensioning of IP multimedia subsystem (IMS) transmission for a system supporting multiple applications.
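As a foretaste of that worked example, the following is a deliberately simplified dimensioning sketch in which every parameter value (subscriber count, busy-hour traffic, codec, protocol overheads, headroom) is a hypothetical assumption chosen only to show the style of calculation, not a figure from the chapter.

```python
# Simplified IMS/VoIP transmission dimensioning sketch.  All parameter values
# below (subscriber count, busy-hour traffic, codec, overheads, headroom) are
# hypothetical assumptions used only to illustrate the style of calculation.
SUBSCRIBERS = 100_000
BUSY_HOUR_ERLANG_PER_SUB = 0.03        # average simultaneous calls per subscriber
CODEC_RATE_BPS = 12_200                # AMR 12.2 kbit/s speech payload
FRAME_INTERVAL_S = 0.02                # one speech frame every 20 ms
HEADER_BYTES = 40 + 18                 # RTP/UDP/IPv4 (40 B) + Ethernet (18 B)
HEADROOM = 1.3                         # 30% allowance for peaks and signalling

payload_bytes = CODEC_RATE_BPS * FRAME_INTERVAL_S / 8           # bytes per packet
packet_bits = (payload_bytes + HEADER_BYTES) * 8
rate_per_call = packet_bits / FRAME_INTERVAL_S                  # bits/s per call leg

simultaneous_calls = SUBSCRIBERS * BUSY_HOUR_ERLANG_PER_SUB
link_rate = simultaneous_calls * rate_per_call * HEADROOM

print(f"Per-call rate on the wire : {rate_per_call / 1e3:.1f} kbit/s")
print(f"Busy-hour calls           : {simultaneous_calls:.0f}")
print(f"Dimensioned link capacity : {link_rate / 1e6:.1f} Mbit/s")
```

Even in this toy form, the calculation highlights a recurring theme of transmission dimensioning: for short voice packets, the protocol overheads roughly double the rate that the codec alone would suggest.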
It is useful to establish a common terminology before discussing networks in more detail. In the early 1990s, ETSI proposed the convention shown in Figure 9.1 [1] to distinguish between two distinct types of circuit service that a network might provide, namely bearer services and end-to-end applications, which it called teleservices. In the case of bearer services, a wireless network is ‘providing the capability to transmit signals between two access points’. Support of teleservices, however, requires the provision of ‘the complete capability, including terminal equipment functions, for communication between users according to protocols established by agreement between network operators’. Defining teleservices in this way meant that the details of the complete set of services, applications and supplementary services they provide were standardised. As a consequence, substantial effort is often required to introduce new services or simply to modify existing ones (customisation). This makes it more difficult for operators to differentiate their services.