Driven by a new generation of wireless user equipment and the proliferation of bandwidth-intensive applications, user data traffic and the corresponding network load are increasing exponentially. Most of this new data traffic is generated indoors, where a satisfactory user experience requires an increased link budget and extended coverage. As a result, current cellular networks are reaching their breaking point, and conventional cellular architectures, devised to cater to large coverage areas and optimized for homogeneous traffic, face unprecedented challenges in meeting these user demands.
In this context, there has been increasing interest in deploying small cellular access points in residential homes, subways, and offices. These network architectures, which may be operator deployed, consumer installed, or both, and which comprise a mix of low-power cells underlying the macrocell network, are commonly referred to as small cell networks. By deploying additional network nodes within the local area range and bringing the network closer to end users, spatial reuse and coverage can be significantly improved, allowing future cellular systems to achieve higher data rates while retaining the seamless connectivity and mobility of cellular networks.
Inspired by these attractive features and potential advantages, the development and deployment of small cell networks have gained momentum in the wireless industry and research communities over the last few years. They have also attracted the attention of standardization bodies such as the Third Generation Partnership Project (3GPP) Long Term Evolution (LTE)-Advanced (see Chapter 14) and the IEEE 802.16 Wireless Metropolitan Area Networks.
By Holger Claussen, Lester Ho, Sam Samuel, and Florian Pivit, Bell Labs Ireland, Alcatel-Lucent
The idea of having a cellular system deployed without planning is quite a challenging one. To have the resulting system work without any human involvement (except for the physical placement of cells) is even more challenging. The motivations for technology to head in this direction are numerous, among them the ever-present need to reduce the cost of operating a cellular network and the need for the network to keep pace with the increasing data demands placed upon it. The concept of a cellular network that deploys itself is not new and has been examined philosophically in [1]. The idea of femtocells, or more generally small cells, as a means to fill coverage gaps and increase capacity is even older; its roots can be traced back to [2]. There are, however, very good reasons why such deployments have not been successful in the past. The first is that as cell size is reduced to increase spatial frequency reuse, deployment costs rise, owing to the sheer number of additional cells. The second is that, once the cells are deployed, planning such a network manually can be complex and burdensome: if not carefully thought through, inserting a new cell into the topology can have detrimental rather than beneficial effects. These simple facts may often ruin the business case for deploying additional cells.
Multiple-input multiple-output (MIMO) communication has been established, both theoretically and practically, as a means to increase data rates and improve reliability in wireless networks. While single-input single-output (SISO) wireless communication techniques rely on time-domain or frequency-domain processing to precode and decode the transmitted and received data signals, multiple-antenna communication provides an extra spatial dimension to improve the wireless link performance in terms of error rate, coverage, and/or spectral efficiency.
As interest in MIMO communication has grown, upcoming cellular standards have embraced using multiple antennas at the base stations (BSs) and the mobile user terminals to increase the data rates and improve the performance of the radio link [1]. Multiple antennas are also being considered in small cell networks (SCNs) and femtocell networks as a means to improve coverage and manage interference [2, 3]. The development of MIMO techniques for two-tier networks needs to take into account the specific topology of the network, characterized by irregularity in terms of deployment, operation mode (closed access vs. open access), channel state information (CSI) availability, and backhaul connectivity. In this chapter, we provide an overview of MIMO communication techniques in two-tier networks. We present the state of the art in terms of MIMO precoding and coordination techniques to manage interference in heterogeneous networks. We illustrate the various gains and the associated challenges from using linear precoding with perfect and imperfect channel state information at the transmitter (CSIT) in femtocell networks and evaluate the potential role that multi-antenna communication is bound to play in two-tier networks.
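To make the notion of linear precoding concrete, here is a minimal Python sketch of one common scheme, zero-forcing precoding, under the assumption of perfect CSIT. The 4-antenna, 4-user toy channel and the power normalisation are invented for illustration and are not drawn from any particular standard.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy downlink: a 4-antenna base station serving 4 single-antenna users.
num_tx, num_users = 4, 4

# Rayleigh-fading channel matrix H (users x antennas), assumed to be
# perfectly known at the transmitter (perfect CSIT).
H = (rng.standard_normal((num_users, num_tx))
     + 1j * rng.standard_normal((num_users, num_tx))) / np.sqrt(2)

# Zero-forcing precoder: the right pseudo-inverse of H, so that H @ W
# is a scaled identity and each user sees no intra-cell interference.
W = np.linalg.pinv(H)
W /= np.linalg.norm(W)        # normalise total transmit power to 1

# The effective channel H @ W is diagonal under perfect CSIT; with
# imperfect CSIT the off-diagonal (interference) terms reappear.
print(np.round(np.abs(H @ W), 3))
```

With imperfect CSIT the pseudo-inverse is computed from an estimate of H rather than H itself, and the residual off-diagonal terms are precisely the interference that the coordination techniques discussed in this chapter aim to manage.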
One unique trait of femtocells is that they are paid for, installed, and managed by the end users. Compared with macrocells, which are installed by the network operators and can thus be accessed by any user, femtocells can restrict the set of users allowed to access them. In the simplest scenario, the femtocell can be configured for either: (i) closed access, where only registered home users can use the femtocell; or (ii) open access, where any nearby user is allowed to use the femtocell. The choice of femtocell access control involves many important issues in two-tier femtocell networks [1-6].
The first important issue is cross-tier interference. Unlike wireless fidelity (WiFi) access points, femtocells serve users in licensed spectrum, both to guarantee quality of service (QoS) and because the devices they communicate with are designed for those frequencies. Compared with allocating separate channels inside the licensed spectrum exclusively to femtocells, sharing spectrum is preferred from an operator perspective [1, 5, 7]. However, co-channel spectrum sharing between femtocells and macrocells potentially gives rise to serious cross-tier interference in closed access. As shown in Figures 2.1 and 2.2, in closed access a cellular user, even when geographically close to a femtocell, is forced to communicate with the distant macro base station (BS). This cellular user therefore suffers strong downlink interference from the nearby femtocell (see Figure 2.1), and likewise causes strong uplink interference to that femtocell (see Figure 2.2) [8-10].
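The scale of this effect is easy to see with a back-of-the-envelope calculation. The following Python sketch uses a simple log-distance path-loss model with invented but plausible numbers (a 43 dBm macro BS 400 m away, a 13 dBm femtocell 10 m away) to compare the downlink SINR a user would see under closed and open access; the model, distances, and power levels are illustrative, not drawn from any standard.

```python
import math

def rx_power_dbm(tx_power_dbm, distance_m, path_loss_exp):
    """Received power under a simple log-distance path-loss model
    (reference distance 1 m; all numbers here are illustrative)."""
    return tx_power_dbm - 10 * path_loss_exp * math.log10(max(distance_m, 1.0))

def to_mw(dbm):
    return 10 ** (dbm / 10)

NOISE_DBM = -104                      # approx. thermal noise, 10 MHz channel
macro = rx_power_dbm(43, 400, 3.5)    # 43 dBm macro BS, user 400 m away
femto = rx_power_dbm(13, 10, 3.0)     # 13 dBm femtocell, same user 10 m away

# Closed access: the user must stay on the macrocell, so the nearby
# femtocell is pure interference.
sinr_closed = to_mw(macro) / (to_mw(femto) + to_mw(NOISE_DBM))

# Open access: the user hands over to the femtocell; the distant
# macrocell becomes the interferer instead.
sinr_open = to_mw(femto) / (to_mw(macro) + to_mw(NOISE_DBM))

print(f"closed access SINR: {10 * math.log10(sinr_closed):6.1f} dB")
print(f"open   access SINR: {10 * math.log10(sinr_open):6.1f} dB")
```

With these illustrative numbers the nearby femtocell swings the user's SINR by tens of dB between the two access modes, which is why access control is so consequential in co-channel deployments.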
The possibility of increasing the coverage and capacity of cellular radio systems by deploying femtocell base stations depends strongly on the ability to avoid interference with the macrocell network (macrocell users and macrocell base stations). A centralized radio resource allocation procedure controlling the spectrum allocation of the multi-tier network would ensure coexistence between the macrocell and femtocell networks, but such a centralized system is highly complex. Given the distributed nature of the femtocell network, with femtocell base stations placed without planning, a distributed resource allocation system is desirable. In the broader field of cognitive radio networks, frequency band sharing can be based on different approaches. For less dynamic systems, access to the spectrum can be based on information provided by a central database storing geographical information on spectrum usage: the cognitive device, aware of its geographical position, can decide which frequency bands may be used to establish a communication link. For femtocells, since the macrocell system's spectrum usage is quite dynamic and changes frequently, a spectrum allocation relying on database information is not feasible. As in distributed secondary networks, cognitive radio resource management (CRRM) is a solution allowing a femtocell user (FU) to detect and access idle spectrum [1-4]. One way to implement CRRM is to activate a dedicated signaling channel between the macrocell and femtocell networks. The busy channel assessment can be based on the detection of a preamble shared between the macrocell and femtocell networks.
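As a rough illustration of such a preamble-based busy-channel assessment, the following Python sketch correlates received samples against a shared preamble and compares the result to a threshold. The preamble sequence, noise level, and threshold are all invented for the example; a real detector would average over time and set the threshold from a target false-alarm probability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Known preamble shared between the macrocell and femtocell networks
# (illustrative: a random QPSK-like sequence of 64 unit-energy symbols).
preamble = (rng.choice([-1, 1], 64) + 1j * rng.choice([-1, 1], 64)) / np.sqrt(2)

def channel_busy(samples, threshold=0.5):
    """Declare the channel busy if the normalised correlation of the
    received samples with the shared preamble exceeds a threshold."""
    window = samples[:len(preamble)]
    corr = np.abs(np.vdot(preamble, window))
    corr /= np.linalg.norm(preamble) * np.linalg.norm(window)
    return corr > threshold

# Idle channel: noise only.
noise = (rng.standard_normal(64) + 1j * rng.standard_normal(64)) * 0.1
print(channel_busy(noise))             # -> False (with high probability)

# Busy channel: preamble plus noise.
print(channel_busy(preamble + noise))  # -> True
```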
Wireless cellular networks are designed to provide network coverage over large areas and support many users. Most recently, studies in 3GPP LTE-Advanced have looked at the deployment of heterogeneous wireless networks to improve system performance as well as to enhance network coverage, especially in-building coverage [1–6]. Heterogeneous wireless networks use a mix of higher-tier macrocells to extend network reach and lower-tier small cells to enhance performance within the same frequency band [1–6]. These smaller cells offload traffic from the macrocells and connect it to the cellular core network via broadband access networks. However, as user-installed small cells (femtocells) are often deployed in an ad hoc manner, interference arises between cells: for example, between a macrocell and a femtocell, or between two femtocells. New resource allocation techniques are required to ensure that users control their power to mitigate performance loss due to interference. To enable decentralized deployment, users also need to adapt their power with minimal signaling overhead [1, 2]. For example, femtocell users can use a digital subscriber line (DSL) or cable modem to exchange messages through the cellular core network and adjust their transmit powers to reduce the interference caused to macrocell users.
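One classic example of such low-overhead adaptation is distributed target-SINR power control in the style of Foschini and Miljanic, in which each user updates its own transmit power using only its locally measured SINR. The Python sketch below runs this iteration on an invented three-link gain matrix; the gains, noise level, and SINR target are illustrative assumptions.

```python
import numpy as np

# Link gains G[i][j]: gain from transmitter j to receiver i (illustrative).
G = np.array([[1.0, 0.1, 0.2],
              [0.2, 1.0, 0.1],
              [0.1, 0.2, 1.0]])
noise = 0.01
target_sinr = 2.0            # common SINR target (linear scale)
p = np.full(3, 0.1)          # initial transmit powers

# Each user scales its power by (target / measured SINR). Only the
# local SINR measurement is needed, so signalling overhead is minimal.
for _ in range(50):
    interference = G @ p - np.diag(G) * p + noise
    sinr = np.diag(G) * p / interference
    p = p * target_sinr / sinr          # each user updates independently

print("powers:", np.round(p, 4))
print("SINRs: ", np.round(sinr, 3))     # converges to the target if feasible
```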
There are several techniques for using radio signals to determine the position of an object:
Angle of arrival of a received signal indicates the direction to the transmitter. On its own this does not allow position to be determined, as one also needs to know the distance between transmitter and receiver (or a second bearing from another site).
Direct inference of range. Range can be inferred in a number of ways: using signal strength; using techniques for measuring the time-of-flight of the radio signal; or using the phase of the received signal as a measure of range (assuming techniques for synchronising transmit and receive clocks can be implemented). A short sketch after this list illustrates the first two.
Measuring the time of arrival of a radio signal. In the absence of a definitive time reference (usually the case) it is necessary to measure the arrival times of two or more signals and compare them to compute a position. Time Difference of Arrival, or Observed Time Difference of Arrival, is the most common way of resolving clock uncertainties.
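As promised above, here is a minimal Python sketch of the two simplest range-inference methods: inverting a log-distance path-loss model applied to a signal-strength reading, and converting a time-of-flight measurement into distance. The transmit power and path-loss exponent are illustrative assumptions, not calibrated values.

```python
import math

def rssi_to_range(rssi_dbm, tx_power_dbm=0.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model
        rssi = tx_power - 10 * n * log10(d / 1 m)
    to estimate range d in metres. Illustrative only: in practice the
    exponent n is environment dependent and fading makes a single RSSI
    sample a very noisy ranging signal."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def tof_to_range(time_of_flight_s):
    """Time-of-flight ranging: distance = speed of light * time."""
    return 299_792_458.0 * time_of_flight_s

print(rssi_to_range(-40))       # ~100 m in free space (n = 2)
print(tof_to_range(100e-9))     # ~30 m for a 100 ns flight time
```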
This chapter will look at each of the main techniques in turn, identifying their strengths and weaknesses, with an emphasis on their practical use in real-world positioning systems.
Angle of arrival
Measuring the angle of arrival (AOA) of a radio signal is one of the oldest methods of locating a source, and for many applications it is still the preferred method. Radar systems are based on the principle of angle or bearing to target; they use a highly directional antenna sweeping the area, looking for echoes from reflective objects in the scanned area. Homing or beacon-finding devices, often used for tracking wild animals or for stolen car recovery, may use angle of arrival to locate the source (a radio transmitter attached to the object being tracked) and then follow the direction of the radio signal until the quarry is found. Most optical systems for tracking a target, whether with moving or fixed cameras, are based on measuring the angle to the target.
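When bearings to the same source are available from two known sites, a position fix follows by intersecting the two bearing lines. The following Python sketch implements that intersection in 2-D; the station positions, the angle convention (anticlockwise from the x-axis), and the example bearings are invented for illustration, and real systems fuse many noisy bearings by least squares rather than intersecting exactly two lines.

```python
import math

def fix_from_bearings(p1, theta1, p2, theta2):
    """Intersect two bearing lines to estimate a transmitter position.

    p1, p2        -- (x, y) positions of the two receivers
    theta1/theta2 -- measured angles of arrival, in radians,
                     anticlockwise from the x-axis (sketch convention)

    Solves p1 + t1*u1 = p2 + t2*u2 for the crossing point.
    """
    u1 = (math.cos(theta1), math.sin(theta1))
    u2 = (math.cos(theta2), math.sin(theta2))
    det = u1[0] * (-u2[1]) - (-u2[0]) * u1[1]
    if abs(det) < 1e-12:
        raise ValueError("bearings are parallel: no unique fix")
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (dx * (-u2[1]) - (-u2[0]) * dy) / det   # Cramer's rule
    return (p1[0] + t1 * u1[0], p1[1] + t1 * u1[1])

# Two stations 100 m apart, seeing the target at 45 and 135 degrees:
print(fix_from_bearings((0, 0), math.radians(45),
                        (100, 0), math.radians(135)))   # -> (50.0, 50.0)
```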
Whilst it is impossible to predict the future, there are a number of trends at the cutting edge of location and positioning that tell us what today's topical issues are and hint at the direction things may go in the future.
Firstly, there is the explosion in new GNSSs coming online. Combining measurements from them will lead to improved performance as well as robustness and ubiquity. However, we have shown that GNSS is vulnerable to attack, and there is also some debate about whether so many systems will begin to interfere with one another. So whilst GNSS technology continues to advance, a significant amount of work is also going into relative positioning systems for special applications. These systems combine the best of radio signals and other sensor measurements in order to extract meaningful information about the relative position and location context of people and objects. In high-end general applications they will be combined with GNSS to provide seamless coverage indoors and out.
Of course, on the technology side we must not forget visual and imaging systems. These mimic the way we as humans perceive our environment better than any other technology, and with the ongoing rapid advancement of image processing they promise to be a key component of future positioning and localisation systems.
The two most common coordinate systems used for positioning and navigation are:
Latitude, longitude and height above mean sea level;
Cartesian systems in which (x,y,z) axes are arranged orthogonally to one another.
However, in our day-to-day lives, most position and location information is described contextually: I'm at work; I'm sitting at my desk; I'm in the car; I'm travelling south on the M1; I'm sitting next to Jim; and so on. This sort of contextual information is what most applications would ideally like to have available to them; but for the purposes of computing a position or location, most locating systems (including GPS) work in Cartesian space, although the resulting position may be presented as latitude, longitude and height.
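The link between the two representations is a standard coordinate transformation. The Python sketch below converts latitude, longitude and height on the WGS84 ellipsoid into Earth-centred, Earth-fixed (ECEF) Cartesian coordinates; the sample point near London is illustrative.

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0                 # semi-major axis, metres
E2 = 6.69437999014e-3         # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h_m):
    """Convert latitude/longitude/height to Earth-centred, Earth-fixed
    (ECEF) Cartesian coordinates using the standard WGS84 relation."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + h_m) * math.cos(lat) * math.cos(lon)
    y = (n + h_m) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h_m) * math.sin(lat)
    return x, y, z

print(geodetic_to_ecef(51.5, -0.1, 25.0))   # an illustrative point
```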
Latitude and longitude
A simple definition
Latitude and longitude are two angles used to describe a terrestrial location on the surface of the Earth. See Figure 2.1.
Latitude is measured north and south from the equator, an imaginary line running around the circumference of the Earth, so that 90 degrees north is the North Pole and 90 degrees south is the South Pole. Lines of latitude run east–west, linking points having the same angular measurement from the zero-degree parallel, the equator.
Location is becoming an important and integral part of our everyday lives, spurred on by the widespread, almost ubiquitous, availability of GPS (Global Positioning System) technology in everyday consumer devices such as car navigation systems, mobile phones and cameras, and by the increasing adoption of inertial sensors, particularly accelerometers and magnetometers, in everyday products. In order to set the context, this book first looks at coordinate systems, what is meant by position or location, and how to describe this information [Chapter 2].
Because of the importance of GPS, the next topic covered is global navigation satellite systems (GNSS) [Chapter 3]. The ability to determine the precise location of a device anywhere in the world is in turn leading to the emergence of many different Location-Based Services promoted by leading global companies such as Google with Google Maps and Latitude, Nokia, Microsoft, AOL and community initiatives such as OpenStreetMap.
However, despite the hype surrounding GPS, it is not the only positioning technology available, and indeed there are many applications for which it does not offer the required capability or performance. This book is intended to give a clear understanding of the different options available for locating and positioning systems, with an emphasis on their real-world capabilities and applications. The next chapter [Chapter 4] covers the most important methods for determining position using radio signals; Chapter 5 then covers inertial navigation techniques, and Chapter 6 looks at other methods of locating and positioning things. Chapter 7 deals with accuracy and performance and what they mean, as well as some of the fundamental techniques relating to location and position. Of course things never go entirely as planned, so Chapter 8 considers errors and failures and how to deal with them.
Determining the position of an object is a statistical task. It is based on measurements and observations of the surrounding environment and of signals from neighbouring devices, which are themselves affected by the environment. All of these measurements are subject to noise and uncertainty, so when they are combined to compute an estimate of the position, the result is statistical: any position we compute has a probability and an error margin associated with it.
It is common practice to describe a positioning system as ‘accurate to 2.5 metres’ (or some other number according to the system and technology used). So what does this actually mean?
What it does not mean is that every single location it ever generates will be within 2.5 metres of the ‘actual position’! So this raises several important questions (a short sketch after this list shows how such a figure might be derived from test data):
What proportion of measurements are within 2.5 metres?
How do we know when a measurement is worse than the specified value?
Under what conditions would we expect this performance to be achieved?
What is the ‘actual position’ with which the system is being compared?
Are the errors repeatable? In other words will the same error occur if the test position is revisited at a later time?
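An accuracy figure is only meaningful with a percentile attached, and the usual way to obtain one is empirically. The following Python sketch summarises a set of radial position errors; the errors here are simulated purely for illustration, whereas in a real trial each one would be the distance from a reported fix to a surveyed ground-truth point.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated 2-D position errors for 1000 fixes, in metres (illustrative:
# independent Gaussian noise on each axis, 1.5 m standard deviation).
errors = np.hypot(rng.normal(0, 1.5, 1000), rng.normal(0, 1.5, 1000))

for q in (50, 95):
    print(f"{q}th percentile error: {np.percentile(errors, q):.2f} m")
print(f"fraction within 2.5 m:  {np.mean(errors < 2.5):.1%}")
```

A claim such as ‘accurate to 2.5 metres’ is thus shorthand for something like ‘the 95th percentile error was 2.5 metres under the stated test conditions’, and the questions above ask precisely which percentile and which conditions are meant.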
At the time of writing there are two operating Global Navigation Satellite Systems (GNSSs). The best known of the satellite positioning systems is GPS (Global Positioning System), the satellite-based navigation system developed by the US Department of Defense under the NAVSTAR programme. The other is GLONASS, the Russian system; although it had fallen into disrepair, it has now been restored to full operational status.
There are additional planned systems, including Galileo, Compass and others. Galileo and Compass are both in their pre-operational phases and are expected to become operational in the time-frame 2014 to 2018.
GNSSs allow a receiver that can receive radio signals from (in general) four or more navigation satellites to compute its position within an Earth-fixed reference coordinate frame such as WGS84; a sketch of the underlying computation follows the list below. The key features are:
It is a one-way system: the satellites transmit navigation signals, and receivers (usually) use the received signals to compute their own positions.
The receiver must be able to receive signals from four or more satellites (for normal 3D positioning), and therefore it does not work in places where the signals cannot be received. The better the quality of the signals received, the better the positioning.
The quality of the position fix is affected by the environment in which the receiver is operating: multipath, diffraction and interference all affect performance.
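As referenced above, the position computation itself is a small nonlinear least-squares problem: each measured pseudorange mixes the geometric range to a satellite with the unknown receiver clock bias. The following Python sketch shows the textbook linearised solve; the satellite geometry and clock bias are invented for the example, and there are no atmospheric corrections, measurement weighting, or integrity checks.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def solve_position(sat_positions, pseudoranges, iterations=10):
    """Linearised least-squares solve for receiver position and clock
    bias from four or more pseudoranges (a textbook sketch only)."""
    x = np.zeros(4)                          # [x, y, z, c*dt] initial guess
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_positions - x[:3], axis=1)
        predicted = ranges + x[3]
        # Jacobian: negated unit line-of-sight vectors plus clock column.
        H = np.hstack([-(sat_positions - x[:3]) / ranges[:, None],
                       np.ones((len(pseudoranges), 1))])
        dx, *_ = np.linalg.lstsq(H, pseudoranges - predicted, rcond=None)
        x += dx
    return x[:3], x[3] / C

# Four illustrative satellites ~20,200 km away and a receiver at the
# origin with a 1 ms clock bias:
sats = np.array([[20.2e6, 0, 0], [0, 20.2e6, 0],
                 [0, 0, 20.2e6], [13e6, 13e6, 13e6]], dtype=float)
truth, clock_bias = np.zeros(3), 1e-3
rho = np.linalg.norm(sats - truth, axis=1) + C * clock_bias
pos, dt = solve_position(sats, rho)
print(np.round(pos, 3), dt)                  # recovers position and bias
```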
Essential principles underpinning services and applications
Navigation
There are relatively few applications for which location or position is the core purpose. Navigation is the notable example, in which the purpose is to find a place or navigate a route to a place. Before the advent of modern technology, ocean navigation was a huge problem facing mariners at sea. With the help of a sextant, measuring the inclination of the Sun above the horizon at midday allows one's latitude to be estimated; determining longitude, however, is far more difficult. Sobel [9] describes the impact of this problem and the efforts to solve it in the days of the early great sea navigators.
Given modern GPS systems, it is easy to determine one's position to good accuracy outdoors when there is an adequate view of the sky. However, the application challenge is to provide guidance to the user, leading them along a route from one point to another.
One hears many stories and jokes about GPS navigation systems leading users astray. Whilst this is occasionally for technical reasons (failure to obtain a fix, or positioning errors), more often than not the problem lies with the navigation aspects: for example, old or incorrect map data, or a lack of clarity in the guidance instructions.