With the exception of a scheduling request, all uplink control consists of feedback information to support downlink transmissions. The channel quality feedback is provided to support downlink channel-sensitive scheduling and link adaptation. The rank and precoding matrix indications are used for selecting a downlink MIMO transmission format. The ACK/NACK signaling provides feedback on downlink hybrid ARQ transmissions. In contrast, the only feedback information on the downlink consists of ACK/NACK signaling to support uplink hybrid ARQ operation and transmission power control (TPC) commands to support uplink power control. The reason for this asymmetry is simply that both the uplink and the downlink schedulers reside in the eNB. Therefore, the bulk of downlink signaling consists of uplink and downlink scheduling grants that convey information on the transmission format and resource allocation for both uplink and downlink transmissions. In order to support uplink channel-sensitive scheduling, the uplink channel quality is estimated from the uplink sounding reference signal (SRS).
The three downlink control channels transmitted in every subframe are the physical control format indicator channel (PCFICH), the physical downlink control channel (PDCCH) and the physical hybrid ARQ indicator channel (PHICH). The PCFICH carries information on the number of OFDM symbols used for the PDCCH. The PDCCH is used to inform the UEs about the resource allocation as well as modulation, coding and hybrid ARQ control information. Since multiple UEs can be scheduled simultaneously within a subframe in a frequency- or space-division multiplexed fashion, multiple PDCCHs, each carrying information for a single UE, are transmitted.
The goal of power control is to transmit at the right amount of power needed to support a certain data rate. Too much power generates unnecessary interference, while too little power results in an increased error rate requiring retransmissions and hence resulting in larger transmission delays and lower throughputs. In a WCDMA system, power control is important particularly in the uplink to avoid the near–far problem. This is because the uplink transmissions are nonorthogonal and very high signal levels from cell-center UEs can overwhelm the weak signals received from cell-edge UEs. Therefore, a very elaborate power control mechanism based on the fast closed-loop principle is used in the WCDMA system. Similarly, power control is used for the downlink of WCDMA systems to support the fixed rate delay-sensitive voice service. However, for high-speed data transmission in WCDMA/HSPA systems, transmissions are generally performed at full power and link adaptation is preferably used to match the data rate to the channel conditions.
The LTE uplink uses orthogonal SC-FDMA access and hence the near–far problem of WCDMA does not exist. However, high levels of interference from neighboring cells can still limit the uplink coverage if UEs in the neighboring cells are not power controlled. Cellular systems are generally coverage-limited in the uplink due to the limited UE transmit power. Increased interference from neighboring cells raises the interference over thermal (IoT), limiting coverage in the desired cell. Therefore, uplink power control is beneficial in an orthogonal uplink access as well.
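The IoT metric mentioned above is simply the rise of the total received interference plus noise over the thermal noise floor. The following is a minimal sketch of that calculation; the bandwidth, receiver noise figure and interference level are illustrative assumptions, not LTE-specified values.

import math

def iot_db(interference_w, noise_w):
    # Interference over thermal: rise of interference-plus-noise over the
    # thermal noise floor, in dB.
    return 10 * math.log10((interference_w + noise_w) / noise_w)

# Assumed values: thermal noise over 10 MHz with a 5 dB receiver noise
# figure, and -95 dBm of received other-cell interference.
noise_dbm = -174 + 10 * math.log10(10e6) + 5          # about -99 dBm
noise_w = 10 ** ((noise_dbm - 30) / 10)
interference_w = 10 ** ((-95 - 30) / 10)
print(round(iot_db(interference_w, noise_w), 1), "dB IoT")   # about 5.5 dB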
The LTE system supports fast dynamic scheduling on a per-subframe basis to exploit gains from channel-sensitive scheduling. Moreover, advanced techniques such as link adaptation, hybrid ARQ and MIMO are employed to meet the performance goals. A set of physical control channels is defined in both the uplink and the downlink to enable the operation of these techniques. In order to support channel-sensitive scheduling and link adaptation in the downlink, the UEs measure and report their channel quality information back to the eNB. Similarly, for downlink hybrid ARQ operation, the hybrid ARQ ACK/NACK feedback from the UE is provided in the uplink.
Two types of feedback information are required for MIMO operation: the first is MIMO rank information and the second is preferred precoding information. It is well known that even when a system supports N × N MIMO, transmission with rank N, that is with N MIMO layers, is not always beneficial. The MIMO channel experienced by a UE generally limits the maximum rank that can be used for transmission. In general, for weak users in the system, a lower-rank transmission is preferred over a higher-rank transmission. This is because at low SINR the capacity is power limited rather than degree-of-freedom limited, and therefore transmitting multiple layers is not helpful. Moreover, when the antennas are correlated, the channel matrix is rank deficient, leading to a single-layer or rank-1 transmission. Therefore, the system should support transmission with a variable number of MIMO layers to maximize the gains from MIMO.
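The power-limited versus degree-of-freedom-limited argument can be made concrete with a small numerical sketch. The code below compares the capacity of a 2 × 2 channel when the total transmit power is concentrated on the strongest eigenmode (rank 1) versus split over two eigenmodes (rank 2); the i.i.d. Rayleigh channel and the equal power split are illustrative assumptions.

import numpy as np

def capacity_bps_hz(H, snr_linear, rank):
    # Shannon capacity with the total transmit power split equally across
    # the `rank` strongest eigenmodes of the channel.
    eigmodes = np.linalg.svd(H, compute_uv=False) ** 2
    return float(np.sum(np.log2(1 + (snr_linear / rank) * eigmodes[:rank])))

rng = np.random.default_rng(0)
H = (rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))) / np.sqrt(2)
for snr_db in (-5, 20):
    snr = 10 ** (snr_db / 10)
    print(f"{snr_db:>3} dB SNR: rank 1 -> {capacity_bps_hz(H, snr, 1):.2f} bps/Hz, "
          f"rank 2 -> {capacity_bps_hz(H, snr, 2):.2f} bps/Hz")

At low SNR the single strongest eigenmode typically carries as much as, or more than, two layers, whereas at high SNR the second layer adds a substantial degree-of-freedom gain.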
An important requirement for the LTE system is improved cell-edge performance and throughput. This is to provide some level of service consistency in terms of geographical coverage as well as in terms of available data throughput within the coverage area. In a cellular system, however, the SINR disparity between cell-center and cell-edge users can be of the order of 20 dB. The disparity can be even higher in a coverage-limited cellular system. This leads to vastly lower data throughputs for the cell-edge users relative to cell-center users creating a large QoS discrepancy.
The cell-edge performance may be either noise-limited or interference-limited. In a noise-limited situation, which typically occurs in large cells in rural areas, the performance can generally be improved by providing a power gain. The power gain can be achieved by using high-gain directional transmit antennas, increased transmit power, transmit beam-forming, receive beam-forming or receive diversity, etc. The total transmit power, however, is generally capped by regulatory requirements, which limits the coverage gains achievable through increased transmit power.
The situation is different in small-cell, interference-limited cases, where, in addition to noise, inter-cell interference also contributes to the degraded cell-edge SINR. In this case, providing a transmit power gain may not help because as the signal power goes up, the interference power also increases. This assumes that, with a transmit power gain, all cells in the system will operate at a higher transmit power.
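A simple two-cell sketch illustrates this saturation: once the interference term dominates the noise, scaling every cell's transmit power leaves the cell-edge SINR essentially at the ratio of the serving and interfering path gains. The path gains and noise level below are made-up illustrative values.

import math

def sinr_db(p_tx_w, g_serving, g_interf, noise_w):
    # Cell-edge SINR when the serving and the interfering cell transmit at
    # the same power (illustrative two-cell model).
    return 10 * math.log10(p_tx_w * g_serving / (p_tx_w * g_interf + noise_w))

g_serving, g_interf = 1e-13, 3e-14            # assumed cell-edge path gains
noise_w = 10 ** ((-99 - 30) / 10)             # thermal noise over ~10 MHz
for p_tx_w in (10, 20, 40, 80):
    print(p_tx_w, "W ->", round(sinr_db(p_tx_w, g_serving, g_interf, noise_w), 2), "dB")

Doubling the transmit power three times over moves the SINR only from about 3.7 dB towards its ceiling of 10·log10(g_serving/g_interf), roughly 5.2 dB in this example.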
Specification of a propagation channel model is of foremost importance in the design of a wireless communication system. A propagation model is used to predict how the channel affects the transmitted signal so that transmitters and receivers that best compensate for the channel's corrupting behavior can be developed. A propagation model is also used as a basis for performance evaluation and comparison of competing wireless technologies. An example of such propagation models is the set of ITU-R channel models that were developed for IMT-2000 system evaluation. A wireless propagation channel model needs to be refined as new system parameters (e.g. larger bandwidths and new frequency bands) or radio technologies exploiting new characteristics of the channel, such as multi-antenna schemes, are introduced. A well-defined channel model allows the system performance under new parameters, as well as the gains due to the introduction of new radio technologies, to be assessed. The performance of multi-antenna technologies, for example, depends upon the spatial correlations between antennas. As the ITU-R channel models do not characterize these spatial correlations, using them may lead to overestimating the gains of multi-antenna techniques. In order to provide a reasonable propagation platform for evaluating multi-antenna techniques, the spatial channel model (SCM) was developed. The SCM defines a ray-based model derived from stochastic modeling of scatterers and therefore makes it possible to model the spatial correlations required for the evaluation of multi-antenna techniques.
The current 3G systems use a wideband code division multiple access (WCDMA) scheme within a 5 MHz bandwidth in both the downlink and the uplink. In WCDMA, multiple users, potentially using different orthogonal Walsh codes, are multiplexed onto the same carrier. In a WCDMA downlink (Node-B to UE link), the transmissions on different Walsh codes are orthogonal when they are received at the UE. This is due to the fact that the signal is transmitted from a fixed location (base station) on the downlink and all the Walsh codes are received synchronized. Therefore, in the absence of multi-paths, transmissions on different codes do not interfere with each other. However, in the presence of multi-path propagation, which is typical in cellular environments, the Walsh codes are no longer orthogonal and interfere with each other, resulting in inter-user and/or inter-symbol interference (ISI). The multi-path interference can possibly be eliminated by using an advanced receiver such as a linear minimum mean square error (LMMSE) receiver. However, this comes at the expense of a significant increase in receiver complexity.
The multi-path interference problem of WCDMA escalates for the larger bandwidths, such as 10 and 20 MHz, required by LTE for the support of higher data rates. This is because the chip rate increases for larger bandwidths and hence more multi-paths can be resolved due to the shorter chip times. Note that LMMSE receiver complexity increases further for larger bandwidths due to the larger number of resolvable multi-paths. Another possibility is to employ multiple 5 MHz WCDMA carriers to support 10 and 20 MHz bandwidths.
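A rough calculation shows how quickly the number of resolvable multi-paths grows with the chip rate: paths separated by more than one chip time become resolvable. The 3.84 Mcps figure is the WCDMA chip rate for a 5 MHz carrier; the scaled rates for 10 and 20 MHz and the delay spread are illustrative assumptions for a hypothetical direct-spread extension, not standardized values.

chip_rate_5mhz = 3.84e6            # WCDMA chip rate for a 5 MHz carrier
delay_spread_s = 2e-6              # assumed urban multi-path delay spread

for scale, bw in ((1, "5 MHz"), (2, "10 MHz"), (4, "20 MHz")):
    chip_time_s = 1.0 / (scale * chip_rate_5mhz)
    resolvable = int(delay_spread_s / chip_time_s) + 1
    print(bw, "->", round(chip_time_s * 1e9), "ns chip time,",
          resolvable, "resolvable paths")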
A cell search procedure is used by the UEs to acquire time and frequency synchronization within a cell and to detect the cell identity. In the LTE system, cell search supports a scalable transmission bandwidth from 1.08 to 19.8 MHz. The cell search is based on two signals transmitted in the downlink: the synchronization signals and the broadcast channel (BCH).
The primary purpose of the synchronization signals is to enable the acquisition of the received symbol timing and frequency of the downlink signal. The cell identity information is also carried on the synchronization signals. The UE can obtain the remaining cell/system-specific information from the BCH. The primary purpose of the BCH is to broadcast a certain set of cell and/or system-specific information. After receiving synchronization signals and BCH, the UE generally acquires information that includes the overall transmission bandwidth of the cell, cell ID, number of transmit antenna ports and cyclic prefix length, etc.
The synchronization signals and BCH are transmitted using the same minimum bandwidth of 1.08 MHz in the central part of the overall transmission band of the cell. This is because, regardless of the total transmission bandwidth capability of an eNB, a UE should be able to determine the cell ID using only the central portion of the bandwidth in order to achieve a fast cell search.
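The 1.08 MHz figure follows directly from the LTE numerology: the synchronization signals and BCH occupy the central six resource blocks of 12 subcarriers each, at 15 kHz spacing. A one-line check:

# Central portion carrying the synchronization signals and BCH:
subcarrier_spacing_hz = 15e3
print(6 * 12 * subcarrier_spacing_hz / 1e6, "MHz")     # 1.08 MHz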
The reference signals are used for channel quality measurements for scheduling, link adaptation and handoff, etc. as well as for data demodulation.
The LTE system design is optimized for low mobile speeds, from stationary users up to 15 km/h. At these low speeds, the eNode-B can exploit multi-user diversity gains by employing channel-sensitive scheduling. For downlink transmissions, UEs feed downlink channel quality information back to the eNode-B. Using a channel-quality-sensitive scheduler such as the proportional fair scheduler, sketched below, the eNode-B can serve a UE on time-frequency resources where it is experiencing the best conditions. It is well known that when multi-user diversity can be exploited, the use of other forms of diversity such as transmit diversity degrades performance. This is because multi-user diversity relies on large variations in channel conditions, while transmit diversity tries to average out the channel variations.
The LTE system is also required to support speeds ranging from 15 to 120 km/h with high performance; in fact, the system requirements state mobility support up to 350 km/h, or even up to 500 km/h. At high UE speeds, the channel quality feedback becomes unreliable due to feedback delays. When reliable channel quality estimates are not available at the eNode-B, channel-sensitive scheduling becomes infeasible. Under these conditions, it is desirable to average out the channel variations by all possible means. Moreover, a channel-sensitive scheduler has to wait for the right (good) channel conditions in which a UE can be scheduled, which introduces delays in packet transmissions. For delay-sensitive traffic such as VoIP, channel-sensitive scheduling therefore cannot be used under most conditions.
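The following is a minimal sketch of the proportional fair metric referred to above: in each subframe the scheduler serves the user whose instantaneous rate is largest relative to its own average throughput, then updates the averages with an exponential filter. The number of users, the Rayleigh-distributed toy rates and the filter time constant are illustrative assumptions.

import numpy as np

def pf_schedule(inst_rate, avg_thr, t_c=100.0):
    # Proportional fair: pick the user maximizing inst_rate / avg_thr, then
    # update every user's average throughput with forgetting factor 1/t_c.
    user = int(np.argmax(inst_rate / avg_thr))
    served = np.zeros_like(inst_rate)
    served[user] = inst_rate[user]
    return user, (1 - 1 / t_c) * avg_thr + (1 / t_c) * served

rng = np.random.default_rng(1)
avg_thr = np.full(4, 1e-3)                    # small initial averages
for _ in range(5):
    inst_rate = rng.rayleigh(size=4)          # toy per-subframe rates
    user, avg_thr = pf_schedule(inst_rate, avg_thr)
    print("subframe scheduled to user", user)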
Random access is generally performed when the UE turns on from sleep mode, performs a handoff from one cell to another, or when it loses uplink timing synchronization. At the time of random access, it is assumed that the UE is time-synchronized with the eNB on the downlink. Therefore, when a UE turns on from sleep mode, it first acquires downlink timing synchronization. The downlink timing synchronization is achieved by receiving the primary and secondary synchronization sequences and the broadcast channel, as discussed in Chapter 9. After acquiring downlink timing synchronization and receiving system information, including the parameters specific to random access, the UE can perform the random access preamble transmission. Random access allows the eNB to estimate and, if needed, adjust the UE uplink transmission timing to within a fraction of the cyclic prefix. When an eNB successfully receives a random access preamble, it sends a random access response indicating the successfully received preamble(s) along with the timing advance (TA) and uplink resource allocation information to the UE. The UE can then determine whether its random access attempt has been successful by matching the preamble number it used for random access against the preamble number information received from the eNB. If the preamble number matches, the UE assumes that its preamble transmission attempt has been successful and uses the TA information to adjust its uplink timing. After the UE has acquired uplink timing synchronization, it can send an uplink scheduling or resource request using the resources indicated in the random access response message, as depicted in Figure 10.1.
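The need for the timing advance follows from simple propagation arithmetic: the uplink signal of a distant UE arrives later by the round-trip delay, which quickly exceeds the cyclic prefix and therefore cannot be absorbed by it alone. The distances and the approximate normal cyclic prefix length used below are illustrative.

# Round-trip propagation delay that the timing advance must compensate for.
c_m_per_s = 3e8
normal_cp_us = 4.7                         # approximate normal CP length
for d_km in (1, 5, 15):
    round_trip_us = 2 * d_km * 1e3 / c_m_per_s * 1e6
    note = " (exceeds the normal CP)" if round_trip_us > normal_cp_us else ""
    print(f"{d_km} km -> {round_trip_us:.2f} us round trip{note}")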
A cellular radio system consists of a collection of fixed eNBs that define the radio coverage areas or cells. Typically, a non-line-of-sight (NLOS) radio propagation path exists between an eNB and a UE due to natural and man-made objects that are situated between the eNB and the UE. As a consequence, the radio waves propagate via reflections, diffractions and scattering. The arriving waves at the UE in the downlink direction (at the eNB in the uplink direction) experience constructive and destructive additions because of different phases of the individual waves. This is due to the fact that, at the high carrier frequencies typically used in the cellular wireless communication, small changes in the differential propagation delays introduce large changes in the phases of the individual waves. If the UE is moving or there are changes in the scattering environment, then the spatial variations in the amplitude and phase of the composite received signal will manifest themselves as the time variations known as Rayleigh fading or fast fading. Traditionally, the time-varying nature of the wireless channel was considered undesirable because it required very high signal-to-noise ratio (SNR) margins for providing the desired bit error or packet error reliability. Therefore, system design efforts focused on averaging out the signal variations due to fast fading by using various forms of diversity schemes such as space, angle, polarization, field, frequency, time or multi-path diversity.
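A minimal sketch of the mechanism described above: many scattered waves with independent random phases sum to an approximately complex Gaussian signal, whose envelope follows the Rayleigh distribution. The number of paths and samples are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(2)
n_paths, n_samples = 64, 20_000
# Each sample is the sum of n_paths unit-amplitude waves with random phases,
# normalized so that the composite signal has unit average power.
phases = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, n_paths))
composite = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_paths)
envelope = np.abs(composite)
print("mean envelope:", round(float(envelope.mean()), 3))   # ~ sqrt(pi)/2 = 0.886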
The cellular wireless communications industry witnessed tremendous growth in the past decade, with over four billion wireless subscribers worldwide. The first generation (1G) analog cellular systems supported voice communication with limited roaming. The second generation (2G) digital systems promised higher capacity and better voice quality than did their analog counterparts. Moreover, roaming became more prevalent thanks to fewer standards and common spectrum allocations across countries, particularly in Europe. The two widely deployed second-generation (2G) cellular systems are GSM (global system for mobile communications) and CDMA (code division multiple access). As with the 1G analog systems, 2G systems were primarily designed to support voice communication. In later releases of these standards, capabilities were introduced to support data transmission. However, the data rates were generally lower than those supported by dial-up connections. The ITU-R initiative on IMT-2000 (international mobile telecommunications 2000) paved the way for evolution to 3G. A set of requirements, such as a peak data rate of 2 Mb/s and support for vehicular mobility, was published under the IMT-2000 initiative. Both the GSM and CDMA camps formed their own separate 3G partnership projects (3GPP and 3GPP2, respectively) to develop IMT-2000-compliant standards based on CDMA technology. The 3G standard in 3GPP is referred to as wideband CDMA (WCDMA) because it uses a larger 5 MHz bandwidth relative to the 1.25 MHz bandwidth used in 3GPP2's cdma2000 system. The 3GPP2 also developed a 5 MHz version supporting three 1.25 MHz subcarriers, referred to as cdma2000-3x.
Voice communication and download data services such as web browsing are based on point-to-point (PTP) communication. On the other hand, multicast and broadcast services are based on point-to-multipoint (PTM) communication, where data packets are simultaneously transmitted from a single source to multiple destinations. Examples of broadcast services are radio and television services that are broadcast over the air or over cable networks and the content is available to all the users. Multicast refers to services that are delivered to users who have joined a particular multicast group. The service delivery using point-to-multipoint (PTM) communication is generally more efficient when a large number of users is interested in receiving the same content such as a mobile TV channel. This results in efficient transmission not only over the wireless link but also in the core and access networks. This is because a single multicast broadcast packet travels in the core and access networks and is copied and forwarded to multiple Node-Bs in the multicast broadcast area.
The broadcast services can be delivered to mobile devices either via an independent broadcast network such as DVB-H (digital video broadcast-handheld), DMB (digital multimedia broadcast), MediaFLO or over a service provider's cellular network. The DMB is a South Korean standard derived from the digital audio broadcast (DAB) standard. In the case of an independent broadcast network, dual mode UEs capable of receiving service from both the broadcast network and the cellular network are required.
The LTE network architecture is designed with the goal of supporting packet-switched traffic with seamless mobility, quality of service (QoS) and minimal latency. A packet-switched approach allows all services, including voice, to be supported through packet connections. The result is a highly simplified, flatter architecture with only two types of nodes, namely the evolved Node-B (eNB) and the mobility management entity/gateway (MME/GW). This is in contrast to the many more network nodes in the current hierarchical network architecture of the 3G system. One major change is that the radio network controller (RNC) is eliminated from the data path and its functions are now incorporated in the eNB. Some of the benefits of a single node in the access network are reduced latency and the distribution of the RNC processing load into multiple eNBs. The elimination of the RNC in the access network was possible partly because the LTE system does not support macro-diversity or soft-handoff.
In this chapter, we discuss network architecture designs for both unicast and broadcast traffic, QoS architecture and mobility management in the access network. We also briefly discuss layer 2 structure and different logical, transport and physical channels along with their mapping.
Network architecture
All the network interfaces are based on IP protocols. The eNBs are interconnected by means of an X2 interface and to the MME/GW entity by means of an S1 interface as shown in Figure 2.1. The S1 interface supports a many-to-many relationship between MME/GW and eNBs.
A major design goal for the LTE system is flexible bandwidth support for deployments in diverse spectrum arrangements. With this objective in mind, the physical layer of LTE is designed to support bandwidths in increments of 180 kHz starting from a minimum bandwidth of 1.08 MHz. In order to support channel-sensitive scheduling and to achieve low packet transmission latency, the scheduling and transmission interval is defined as a 1 ms subframe. Two cyclic prefix lengths, namely the normal cyclic prefix and the extended cyclic prefix, are defined to support small- and large-cell deployments, respectively. A subcarrier spacing of 15 kHz is chosen to strike a balance between cyclic prefix overhead and robustness to Doppler spread. An additional smaller 7.5 kHz subcarrier spacing is defined for MBSFN to support large delay spreads with reasonable cyclic prefix overhead. The uplink supports localized transmissions with contiguous resource block allocation due to single-carrier FDMA. In order to achieve frequency diversity, inter-subframe and intra-subframe hopping is supported. In the downlink, a distributed transmission allocation structure, in addition to the localized transmission allocation, is defined to achieve frequency diversity with small signaling overhead.
Channel bandwidths
The LTE system supports a set of six channel bandwidths as given in Table 8.1. We note that the transmission bandwidth configuration BWconfig is 90% of the channel bandwidth BWchannel for 3–20 MHz. For the 1.4 MHz channel bandwidth, the transmission bandwidth is only 77% of the channel bandwidth. Therefore, LTE deployment in the small 1.4 MHz channel is less spectrally efficient than in the 3 MHz and larger channel bandwidths.
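These percentages follow from the standardized resource-block allocations (6, 15, 25, 50, 75 and 100 resource blocks of 180 kHz for the six channel bandwidths). The short computation below reproduces the 77% and 90% utilization figures quoted above.

# Transmission bandwidth configuration vs. channel bandwidth.
rb_per_channel_mhz = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}
for bw_mhz, n_rb in rb_per_channel_mhz.items():
    bw_config_mhz = n_rb * 0.180
    print(f"{bw_mhz:>4} MHz channel: {bw_config_mhz:5.2f} MHz occupied "
          f"({100 * bw_config_mhz / bw_mhz:.0f}%)")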
Like other 3G systems, the current HSPA system uses turbo coding as the channel-coding scheme. The LTE system supports peak data rates that are an order of magnitude higher than those of the current 3G systems. It is therefore fair to ask whether the turbo coding scheme can scale to the data rates in excess of 100 Mb/s supported by LTE, while maintaining reasonable decoding complexity. This question is particularly important as other coding schemes that offer inherent parallelism, and therefore provide very high decoding speeds, such as low-density parity check (LDPC) codes, have recently become available. A major argument against turbo coding schemes is that they are not amenable to parallel implementations, thus limiting the achievable decoding speeds. The problem, in fact, lies in the turbo code internal interleaver used in the current HSPA system, which creates memory contention among processors in a parallel implementation. Therefore, if the turbo code internal interleaver can somehow be made contention-free, it becomes possible for the turbo code to benefit from parallel processing and hence achieve high decoding speeds.
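The interleaver ultimately adopted for LTE turbo coding is a quadratic permutation polynomial (QPP) of the form pi(i) = (f1·i + f2·i²) mod K, which can be evaluated algebraically for any index and therefore lets several decoder processors work on disjoint sub-blocks without memory contention. The sketch below checks that such a polynomial is a valid permutation; the (K, f1, f2) values are illustrative, not a quote of the standardized parameter table.

def qpp_interleave(f1, f2, K):
    # Quadratic permutation polynomial interleaver: pi(i) = (f1*i + f2*i^2) mod K.
    return [(f1 * i + f2 * i * i) % K for i in range(K)]

K, f1, f2 = 40, 3, 10                  # illustrative parameters
pi = qpp_interleave(f1, f2, K)
assert sorted(pi) == list(range(K))    # a valid QPP is a bijection on 0..K-1
print(pi[:8])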
LDPC codes
Similar to turbo codes, LDPC codes are near-Shannon-limit error-correcting codes. More recently, LDPC codes have been adopted in standards including IEEE 802.16e wireless MAN, IEEE 802.11n wireless LAN and the digital video broadcast standard DVB-S2. LDPC codes allow an extremely flexible code design that can be tailored to achieve efficient encoding and decoding. The interest in LDPC codes comes from their potential to achieve very high throughput (due to the inherent parallelism of the decoding algorithm) while maintaining good error-correcting performance and low decoding complexity.
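The parallelism comes from the structure of the code itself: a codeword is valid when it satisfies every row of a sparse parity-check matrix, and each check involves only a handful of bits and can be evaluated, and during iterative decoding updated, independently. A toy, non-standardized example:

import numpy as np

# Toy sparse parity-check matrix; each row is one parity check.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)
codeword = np.array([1, 0, 1, 1, 1, 0])
syndrome = H @ codeword % 2            # all checks can run in parallel
print("valid codeword" if not syndrome.any() else f"syndrome {syndrome}")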
In cellular systems, the wireless communication service in a given geographical area is provided by multiple Node-Bs or base stations. The downlink transmissions in cellular systems are one-to-many, while the uplink transmissions are many-to-one. A one-to-many service means that a Node-B transmits simultaneous signals to multiple UEs in its coverage area. This requires that the Node-B has very high transmission power capability because the transmission power is shared for transmissions to multiple UEs. In contrast, in the uplink a single UE has all its transmission power available for its uplink transmissions to the Node-B. Typically, the maximum allowed downlink transmission power in cellular systems is 43 dBm, while the uplink transmission power is limited to around 24 dBm. This means that the total transmit power available in the downlink is approximately 100 times more than the transmission power from a single UE in the uplink. In order for the total uplink power to be the same as the downlink, approximately 100 UEs should be simultaneously transmitting on the uplink.
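Converting the typical power levels quoted above from dBm to watts makes the asymmetry concrete: 43 dBm is about 20 W while 24 dBm is about 0.25 W, a 19 dB difference, i.e. a factor of roughly 80, on the order of 100 as stated above.

def dbm_to_watts(p_dbm):
    # dBm is decibels referenced to 1 mW, hence the -30 offset to watts.
    return 10 ** ((p_dbm - 30) / 10)

enb_w = dbm_to_watts(43)               # ~20 W downlink
ue_w = dbm_to_watts(24)                # ~0.25 W uplink
print(round(enb_w, 1), "W vs", round(ue_w, 2), "W, ratio ~", round(enb_w / ue_w))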
Most modern cellular systems also support power control, which allows, for example, allocating more power to the cell-edge users than the cell-center users. This way, the cell range in the downlink can be extended because the Node-B can always allocate more power to the coverage-limited UE. However, in the uplink, the maximum transmission power is constrained by the maximum UE transmission power.
The global system for mobile communications (GSM) is the dominant wireless cellular standard, with over 3.5 billion subscribers worldwide covering more than 85% of the global mobile market. Furthermore, the number of worldwide subscribers using high-speed packet access (HSPA) networks topped 70 million in 2008. HSPA is a 3G evolution of GSM supporting high-speed data transmission using WCDMA technology. Global uptake of HSPA technology among consumers and businesses is accelerating, indicating continued traffic growth for high-speed mobile networks worldwide. In order to meet the continued traffic growth demands, an extensive effort has been underway in the 3G Partnership Project (3GPP) to develop a new standard for the evolution of GSM/HSPA technology towards a packet-optimized system referred to as Long-Term Evolution (LTE).
The goal of the LTE standard is to create specifications for a new radio-access technology geared towards higher data rates, low latency and greater spectral efficiency. The spectral efficiency target for the LTE system is three to four times that of the current HSPA system. These aggressive spectral efficiency targets require pushing the technology envelope by employing advanced air-interface techniques such as low-PAPR orthogonal uplink multiple access based on SC-FDMA (single-carrier frequency division multiple access), multiple-input multiple-output (MIMO) multi-antenna technologies, inter-cell interference mitigation techniques, a low-latency channel structure and single-frequency network (SFN) broadcast. The researchers and engineers working on the standard come up with new innovative technology proposals and ideas for system performance improvement.