The growing demand for wireless communication makes it important to determine the capacity limits of the underlying channels for these systems. These capacity limits dictate the maximum data rates that can be transmitted over wireless channels with asymptotically small error probability, assuming no constraints on delay or complexity of the encoder and decoder. The mathematical theory of communication underlying channel capacity was pioneered by Claude Shannon in the late 1940s. This theory is based on the notion of mutual information between the input and output of a channel. In particular, Shannon defined channel capacity as the channel's mutual information maximized over all possible input distributions. The significance of this mathematical construct was Shannon's coding theorem and its converse. The coding theorem proved that there exists a code that achieves a data rate close to capacity with negligible probability of error. The converse proved that any data rate higher than capacity could not be achieved without an error probability bounded away from zero. Shannon's ideas were quite revolutionary at the time, given the high data rates he predicted for telephone channels and his notion that coding could reduce error probability without reducing data rate or causing bandwidth expansion. In time, sophisticated modulation and coding technology validated Shannon's theory, so that on telephone lines today we achieve data rates very close to Shannon capacity with very low probability of error.
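In symbols (standard information-theoretic notation, supplied here as background rather than quoted from the text), Shannon's definition for a discrete memoryless channel with input X and output Y reads:

```latex
% Channel capacity: mutual information maximized over input distributions.
C \;=\; \max_{p(x)} I(X;Y)
  \;=\; \max_{p(x)} \sum_{x,y} p(x)\,p(y\mid x)\,
        \log_2 \frac{p(y\mid x)}{\sum_{x'} p(x')\,p(y\mid x')}
  \qquad \text{bits per channel use.}
```

The coding theorem and its converse then say that rates below C are achievable with vanishing error probability, while rates above C are not.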
The wireless radio channel poses a severe challenge as a medium for reliable high-speed communication. Not only is it susceptible to noise, interference, and other channel impediments, but these impediments change over time in unpredictable ways as a result of user movement and environment dynamics. In this chapter we characterize the variation in received signal power over distance due to path loss and shadowing. Path loss is caused by dissipation of the power radiated by the transmitter as well as by effects of the propagation channel. Path-loss models generally assume that path loss is the same at a given transmit–receive distance, provided the model does not include shadowing effects. Shadowing is caused by obstacles between the transmitter and receiver that attenuate signal power through absorption, reflection, scattering, and diffraction. When the attenuation is strong, the signal is blocked. Received power variation due to path loss occurs over long distances (100–1000 m), whereas variation due to shadowing occurs over distances that are proportional to the length of the obstructing object (10–100 m in outdoor environments and less in indoor environments). Since variations in received power due to path loss and shadowing occur over relatively large distances, these variations are sometimes referred to as large-scale propagation effects. Chapter 3 will deal with received power variations due to the constructive and destructive addition of multipath signal components.
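A common way to express both large-scale effects in one formula is the simplified path-loss model with log-normal shadowing; this is a standard textbook model written in generic notation, not a formula quoted from this chapter:

```latex
% Received power in dB: path loss plus log-normal shadowing.
% K: path gain at reference distance d0, gamma: path-loss exponent,
% psi: shadowing variation in dB, modeled as zero-mean Gaussian.
P_r\ \mathrm{[dBm]} \;=\; P_t\ \mathrm{[dBm]} \;+\; 10\log_{10} K
      \;-\; 10\,\gamma\,\log_{10}\!\frac{d}{d_0} \;-\; \psi,
\qquad \psi \sim \mathcal{N}(0,\sigma_\psi^2).
```

The deterministic distance-dependent terms capture path loss, while the random term ψ captures shadowing about that mean.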
The advances over the last several decades in hardware and digital signal processing have made digital transceivers much cheaper, faster, and more power efficient than analog transceivers. More importantly, digital modulation offers a number of other advantages over analog modulation, including higher spectral efficiency, powerful error correction techniques, resistance to channel impairments, more efficient multiple access strategies, and better security and privacy. Specifically, high-level digital modulation techniques such as MQAM allow much more efficient use of spectrum than is possible with analog modulation. Advances in coding and coded modulation applied to digital signaling make the signal much less susceptible to noise and fading, and equalization or multicarrier techniques can be used to mitigate intersymbol interference (ISI). Spread-spectrum techniques applied to digital modulation can simultaneously remove or combine multipath, resist interference, and detect multiple users. Finally, digital modulation is much easier to encrypt, resulting in a higher level of security and privacy for digital systems. For all these reasons, systems currently being built or proposed for wireless applications are all digital systems.
Digital modulation and detection consist of transferring information in the form of bits over a communication channel. The bits are binary digits taking on the values of either 1 or 0. These information bits are derived from the information source, which may be a digital source or an analog source that has been passed through an A/D converter.
The paradigm of coresets has recently emerged as a powerful tool for efficiently approximating various extent measures of a point set P. Using this paradigm, one quickly computes a small subset Q of P, called a coreset, that approximates the original set P, and then solves the problem on Q using a relatively inefficient algorithm. The solution for Q is then translated to an approximate solution to the original point set P. This paper describes the ways in which this paradigm has been successfully applied to various optimization and extent measure problems.
1. Introduction
One of the classical techniques in developing approximation algorithms is the extraction of a “small” amount of the “most relevant” information from the given data, and performing the computation on this extracted data. Examples of the use of this technique in a geometric context include random sampling [Chazelle 2000; Mulmuley 1993], convex approximation [Dudley 1974; Bronshteyn and Ivanov 1976], surface simplification [Heckbert and Garland 1997], and feature extraction and shape descriptors [Dryden and Mardia 1998; Costa and Cesar 2001]. For geometric problems where the input is a set of points, the question reduces to finding a small subset (a coreset) of the points, such that one can perform the desired computation on the coreset.
As a concrete example, consider the problem of computing the diameter of a point set. Here it is clear that, in the worst case, classical sampling techniques like ε-approximations and ε-nets would fail to compute a subset of points that contains a good approximation to the diameter [Vapnik and Chervonenkis 1971; Haussler and Welzl 1987].
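A minimal sketch of one way such a coreset can be built for the diameter (the construction and names below are illustrative, not taken from the paper): project the points onto a few evenly spaced directions and keep the extreme point in each direction; the diameter of this small subset is provably close to the diameter of P, and the guarantee tightens as the number of directions grows.

```python
import math
from itertools import combinations

def directional_coreset(points, k=16):
    """Extreme points of `points` along k evenly spaced directions in [0, pi).

    Keeping the minimum and maximum projection for each direction yields a
    subset Q of at most 2k points with diam(Q) >= cos(pi/(2k)) * diam(points),
    because the direction realizing the true diameter is within pi/(2k) of
    some sampled direction.
    """
    coreset = set()
    for i in range(k):
        theta = math.pi * i / k
        ux, uy = math.cos(theta), math.sin(theta)
        proj = [(ux * x + uy * y, (x, y)) for (x, y) in points]
        coreset.add(min(proj)[1])  # extreme point against the direction
        coreset.add(max(proj)[1])  # extreme point along the direction
    return list(coreset)

def approx_diameter(points, k=16):
    """Brute-force diameter, but only over the small coreset."""
    q = directional_coreset(points, k)
    return max(math.dist(p, r) for p, r in combinations(q, 2))
```

After one linear scan of P, the expensive quadratic step runs only on O(k) points, which is exactly the "solve on the coreset" step described above.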
This chapter summarizes the technical details associated with the two most prevalent wireless systems in operation today: cellular phones and wireless LANs. It also summarizes the specifications for three short range wireless network standards that have emerged to support a broad range of applications. More details on wireless standards can be found in.
Cellular Phone Standards
First-Generation Analog Systems
In this section we summarize cellular phone standards. We begin with the standards for first-generation (1G) analog cellular phones, whose main characteristics are summarized in Table D.1. Systems based on these standards were widely deployed in the 1980s. While many of these systems have been replaced by digital cellular systems, there are many places throughout the world where these analog systems are still in use. The best known standard is the Advanced Mobile Phone Service (AMPS), developed by Bell Labs in the 1970s and first used commercially in the United States in 1983. After its U.S. deployment, many other countries adopted AMPS as well. This system has a narrowband version, narrowband AMPS (N-AMPS), with voice channels that are one third the bandwidth of regular AMPS. Japan deployed the first commercial cellular phone system in 1979 with the NTT (MCS-L1) standard based on AMPS, but at a higher frequency and with voice channels of slightly lower bandwidth. Europe also developed a similar standard to AMPS called the Total Access Communication System (TACS), which operates at a higher frequency and with smaller bandwidth channels than AMPS.
In this chapter we examine fading models for the constructive and destructive addition of different multipath components introduced by the channel. Although these multipath effects are captured in the ray-tracing models from Chapter 2 for deterministic channels, in practice deterministic channel models are rarely available and so we must characterize multipath channels statistically. In this chapter we model the multipath channel by a random time-varying impulse response. We will develop a statistical characterization of this channel model and describe its important properties.
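A standard way to write such a random time-varying impulse response (generic notation; the symbols are not defined in this excerpt) is as a sum over resolvable multipath components:

```latex
% Time-varying multipath impulse response:
% alpha_n(t): amplitude, phi_n(t): phase, tau_n(t): delay of component n,
% with n = 0 typically denoting the LOS path.
c(\tau, t) \;=\; \sum_{n=0}^{N(t)} \alpha_n(t)\, e^{-j\phi_n(t)}\,
                 \delta\bigl(\tau - \tau_n(t)\bigr).
```

The statistical characterization developed in the chapter amounts to modeling the amplitudes, phases, and delays of these components as random processes.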
If a single pulse is transmitted over a multipath channel then the received signal will appear as a pulse train, with each pulse in the train corresponding to the line-of-sight component or a distinct multipath component associated with a distinct scatterer or cluster of scatterers. The time delay spread of a multipath channel can result in significant distortion of the received signal. This delay spread equals the time delay between the arrival of the first received signal component (LOS or multipath) and the last received signal component associated with a single transmitted pulse. If the delay spread is small compared to the inverse of the signal bandwidth, then there is little time spreading in the received signal. However, if the delay spread is relatively large then there is significant time spreading of the received signal, which can lead to substantial signal distortion.
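Stated symbolically (generic notation, summarizing the comparison just described), with delay spread T_m and transmitted signal bandwidth B:

```latex
T_m \ll \tfrac{1}{B} \;\Rightarrow\; \text{little time spreading of the received signal},
\qquad
T_m \gtrsim \tfrac{1}{B} \;\Rightarrow\; \text{significant time spreading and signal distortion}.
```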
We define quasiconvex programming, a form of generalized linear programming in which one seeks the point minimizing the pointwise maximum of a collection of quasiconvex functions. We survey algorithms for solving quasiconvex programs either numerically or via generalizations of the dual simplex method from linear programming, and describe varied applications of this geometric optimization technique in meshing, scientific computation, information visualization, automated algorithm analysis, and robust statistics.
1. Introduction
Quasiconvex programming is a form of geometric optimization, introduced in [Amenta et al. 1999] in the context of mesh improvement techniques and since applied to other problems in meshing, scientific computation, information visualization, automated algorithm analysis, and robust statistics [Bern and Eppstein 2001; 2003; Chan 2004; Eppstein 2004]. If a problem can be formulated as a quasiconvex program of bounded dimension, it can be solved algorithmically in a linear number of constant-complexity primitive operations by generalized linear programming techniques, or numerically by generalized gradient descent techniques. In this paper we survey quasiconvex programming algorithms and applications.
1.1. Quasiconvex functions. Let Y be a totally ordered set, for instance the real numbers ℝ or integers ℤ ordered numerically. For any function f : X ⟼ Y and any value λ ∈ Y, we define the lower level set f^{≤λ} = { x ∈ X : f(x) ≤ λ }. A function q : X ⟼ Y, where X is a convex subset of ℝ^d, is called quasiconvex [Dharmadhikari and Joag-Dev 1988] when its lower level sets are all convex.
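For intuition, here is a standard example (mine, not from the survey) of a function that is quasiconvex but not convex; its lower level sets are intervals, hence convex:

```latex
f(x) = \sqrt{|x|}\ \text{ on } X=\mathbb{R}, \qquad
f^{\le\lambda} = \{\, x \in \mathbb{R} : \sqrt{|x|} \le \lambda \,\}
              = [-\lambda^2,\ \lambda^2] \quad\text{for } \lambda \ge 0 .
```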
A carpenter's ruler is a ruler divided into pieces of different lengths which are hinged where the pieces meet, which makes it possible to fold the ruler. The carpenter's ruler folding problem, originally posed by Hopcroft, Joseph and Whitesides, is to determine the smallest case (or interval on the line) into which the ruler fits when folded. The problem is known to be NP-complete. The best previous approximation ratio achieved, dating from 1985, is 2. We improve this result and provide a fully polynomial-time approximation scheme for this problem. In contrast, in the plane, there exists a simple linear-time algorithm which computes an exact (optimal) folding of the ruler in some convex case of minimum diameter. This brings up the interesting problem of finding the minimum area of a convex universal case (of unit diameter) for all rulers whose maximum link length is one.
1. Introduction
The carpenter's ruler folding problem is: Given a sequence of rigid rods (links) of various integral lengths connected end-to-end by hinges, fold it so that its overall folded length is minimum. It was first posed in [Hopcroft et al. 1985], where the authors proved that the problem is NP-complete using a reduction from the NP-complete problem PARTITION (see [Garey and Johnson 1979; Cormen et al. 1990]). A simple linear-time factor-2 approximation algorithm, as well as a pseudo-polynomial O(L^2 n)-time dynamic programming algorithm, where L is the maximum link length, were presented in [Hopcroft et al. 1985] (see also [Kozen 1992]).
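A sketch of a pseudo-polynomial dynamic program in the spirit of the one cited above (my own reconstruction; the function name and details are illustrative, not from [Hopcroft et al. 1985]): for each candidate case length k between the longest link and the total length, track which integer positions in [0, k] the current hinge can occupy; the smallest feasible k is the answer.

```python
def min_folded_length(links):
    """Minimum integer case length into which a ruler with these links folds.

    links: positive integer link lengths, in order along the ruler.
    For each candidate case length k we do a reachability sweep over the set
    of integer positions in [0, k] that the current hinge can occupy, where
    each link moves the hinge left or right by its length.
    Pseudo-polynomial: each candidate k costs O(n * k) set updates.
    """
    lo, hi = max(links), sum(links)
    for k in range(lo, hi + 1):
        reachable = set(range(k + 1))        # free end may start anywhere
        for length in links:
            reachable = {p + d for p in reachable for d in (length, -length)
                         if 0 <= p + d <= k}
            if not reachable:                # this case length is infeasible
                break
        if reachable:                        # all links placed inside [0, k]
            return k
    return hi                                # the fully stretched ruler
```

Checking candidate case lengths in increasing order and stopping at the first feasible one gives the exact optimum for integral lengths, but at pseudo-polynomial cost; the fully polynomial-time approximation scheme described in this paper avoids the dependence on the magnitude of L at the cost of a (1 + ε) approximation factor.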
Ordinary polytopes were introduced by Bisztriczky as a (nonsimplicial) generalization of cyclic polytopes. We show that the colex order of facets of the ordinary polytope is a shelling order. This shelling shares many nice properties with the shellings of simplicial polytopes. We also give a shallow triangulation of the ordinary polytope, and show how the shelling and the triangulation are used to compute the toric h-vector of the ordinary polytope. As one consequence, we get that the contribution from each shelling component to the h-vector is nonnegative. Another consequence is a combinatorial proof that the entries of the h-vector of any ordinary polytope are simple sums of binomial coefficients.
1. Introduction
This paper has a couple of main motivations. The first comes from the study of toric h-vectors of convex polytopes. The h-vector played a crucial role in the characterization of face vectors of simplicial polytopes [Billera and Lee 1981; McMullen and Shephard 1971; Stanley 1980]. In the simplicial case, the h-vector is linearly equivalent to the face vector, and has a combinatorial interpretation in a shelling of the polytope. The h-vector of a simplicial polytope is also the sequence of Betti numbers of an associated toric variety. In this context it generalizes to nonsimplicial polytopes. However, for nonsimplicial polytopes, we do not have a good combinatorial understanding of the entries of the h-vector. (Chan [1991] gives a combinatorial interpretation for the h-vector of cubical polytopes.)
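For reference, the linear relation in the simplicial case (standard notation for a simplicial d-polytope with f_j faces of dimension j and f_{-1} = 1; background, not quoted from the paper):

```latex
h_i \;=\; \sum_{j=0}^{i} (-1)^{i-j}\,\binom{d-j}{\,i-j\,}\, f_{j-1},
\qquad 0 \le i \le d .
```

It is an analogous combinatorial handle on the toric h-vector, obtained via shellings, that the paper develops for the nonsimplicial ordinary polytopes.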
The definition of the (toric) h-vector for general polytopes (and even more generally, for Eulerian posets) first appeared in [Stanley 1987]. Already there Stanley raised the issue of computing the h-vector from a shelling of the polytope.
We study metric properties of convex bodies B and their polars B°, where B is the convex hull of an orbit under the action of a compact group G. Examples include the Traveling Salesman Polytope in polyhedral combinatorics (G = Sn, the symmetric group), the set of nonnegative polynomials in real algebraic geometry (G = SO(n), the special orthogonal group), and the convex hull of the Grassmannian and the unit comass ball in the theory of calibrated geometries (G = SO(n), but with a different action). We compute the radius of the largest ball contained in the symmetric Traveling Salesman Polytope, give a reasonably tight estimate for the radius of the Euclidean ball containing the unit comass ball and review (sometimes with simpler and unified proofs) recent results on the structure of the set of nonnegative polynomials (the radius of the inscribed ball, volume estimates, and relations to the sums of squares). Our main tool is a new simple description of the ellipsoid of the largest volume contained in B°.
1. Introduction and Examples
Let G be a compact group acting in a finite-dimensional real vector space V and let v ∈ V be a point. The main object of this paper is the convex hull
B = B(v) = conv(gv : g ∈ G)
Objects such as B and B° appear in many different contexts. We give three examples below.
Let L be a collection of n pairwise disjoint segments in general position in the plane. We show that one can find a subcollection of Ω(n^{1/3}) segments that can be completed to a noncrossing simple path by adding rectilinear edges between endpoints of pairs of segments. On the other hand, there is a set L of n segments for which no subset of size (2n)^{1/2} or more can be completed to such a path.
1. Introduction
Since the publication of the seminal paper of Erdős and Szekeres [1935], many similar results have been discovered, establishing the existence of various regular subconfigurations in large geometric arrangements. The classical tool for proving such theorems is Ramsey theory [Graham et al. 1990]. However, the size of the regular substructures guaranteed by Ramsey's theorem is usually very small (at most logarithmic) in terms of the size n of the underlying arrangement. In most cases, the results are far from optimal. One can obtain better bounds (n^ε for some ε > 0) by introducing some linear orders on the elements of the arrangement and applying some Dilworth-type theorems [1950] for partially ordered sets [Pach and Töröcsik 1994; Larman et al. 1994; Pach and Tardos 2000]. A simple one-dimensional prototype of such a statement is the Erdős–Szekeres lemma: any sequence of n real numbers has a monotone increasing or monotone decreasing subsequence of length at least ⌈√n⌉. In this note, we give a new application of this idea.
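The classical argument behind that lemma, spelled out for completeness (background, not part of this paper): label each element a_i with the pair (inc_i, dec_i), the lengths of the longest monotone increasing and longest monotone decreasing subsequences ending at a_i. Distinct elements receive distinct pairs, so the pairs fill a grid of size at most (max inc) × (max dec), giving

```latex
n \;\le\; \Bigl(\max_i \mathrm{inc}_i\Bigr)\cdot\Bigl(\max_i \mathrm{dec}_i\Bigr),
```

and hence one of the two maxima is at least ⌈√n⌉.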
I show that there are sets of n points in three dimensions, in general position, such that any triangulation of these points has only O(n^{5/3}) simplices. This is the first nontrivial upper bound on the MinMax triangulation problem posed by Edelsbrunner, Preparata and West in 1990: What is the minimum over all general-position point sets of the maximum size of any triangulation of that set? Similar bounds in higher dimensions are also given.
1. Introduction
In the plane, all triangulations of a set of points use the same number of triangles. This is a simple consequence of each triangle having an interior angle sum of π, and each interior point of the convex hull contributing an angle sum of 2π, which must be used up by the triangles.
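Spelled out (a standard counting argument, not quoted from the paper): if the set has n points with h of them on the convex hull, each interior point contributes an angle of 2π, the hull vertices' interior angles sum to (h − 2)π, and each triangle uses exactly π, so every triangulation has the same number of triangles t:

```latex
t\,\pi \;=\; (n-h)\,2\pi \;+\; (h-2)\,\pi
\quad\Longrightarrow\quad
t \;=\; 2n - h - 2 .
```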
Neither the constant size of triangulations nor the constant angle sum of simplices holds in higher dimensions. A classic example is the cube, which can be decomposed in two ways: into five simplices (cutting off alternate vertices) or into six simplices (which are even congruent; it is a well-known simple geometric puzzle to assemble six congruent simplices, copies of conv((0,0,0), (1,0,0), (0,1,0), (0,1,1)), into a cube).
For higher-dimensional cubes, the same problem was studied in a number of papers [Böhm 1989; Broadie and Cottle 1984; Haiman 1991; Hughes 1993; Hughes 1994; Lee 1985; Marshall 1998; Orden and Santos 2003; Sallee 1984; Smith 2000]. This suggests that one should be interested in the possible values of the numbers of simplices for arbitrary point sets.
Topological complexity of semialgebraic sets in ℝ^k has been studied by many researchers over the past fifty years. An important measure of the topological complexity is given by the Betti numbers. Quantitative bounds on the Betti numbers of a semialgebraic set in terms of various parameters (such as the number and the degrees of the polynomials defining it, the dimension of the set, etc.) have proved useful in several applications in theoretical computer science and discrete geometry. The main goal of this survey paper is to provide an up-to-date account of the known bounds on the Betti numbers of semialgebraic sets in terms of various parameters, sketch briefly some of the applications, and also survey what is known about the complexity of algorithms for computing them.
1. Introduction
Let R be a real closed field and S a semialgebraic subset of R^k, defined by a Boolean formula whose atoms are of the form P = 0, P > 0, P < 0, where P ∈ 𝒫 for some finite family of polynomials 𝒫 ⊂ R[X_1, …, X_k]. It is well known [Bochnak et al. 1987] that such sets are finitely triangulable. Moreover, if the cardinality of 𝒫 and the degrees of the polynomials in 𝒫 are bounded, then the number of topological types possible for S is finite [Bochnak et al. 1987]. (Here, two sets have the same topological type if they are semialgebraically homeomorphic.) A natural problem then is to bound the topological complexity of S in terms of the various parameters of the formula defining S.
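The prototypical bound of this kind is the classical Oleĭnik–Petrovskiĭ–Thom–Milnor bound (stated here from general background, not quoted from the survey): for a real algebraic set V ⊂ R^k cut out by polynomials of degree at most d,

```latex
\sum_{i} b_i(V) \;\le\; d\,(2d-1)^{k-1},
```

and for general semialgebraic sets defined by s polynomials of degree at most d one similarly obtains single-exponential bounds of the form (sd)^{O(k)}; the survey traces how such bounds refine as more parameters of the defining formula are taken into account.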
The aim of this survey is to collect and explain some geometric results whose proof uses graph or hypergraph theory. No attempt has been made to give a complete list of such results. We rather focus on typical and recent examples showing the power and limitations of the method. The topics covered include forbidden configurations, geometric constructions, saturated hypergraphs in geometry, independent sets in graphs, the regularity lemma, and VC-dimension.
1. Introduction
Among n distinct points in the plane the unit distance occurs at most O(n^{3/2}) times. The proof of this fact uses two things. The first is a theorem from graph theory saying that a graph on n vertices containing no K_{2,3} can have at most O(n^{3/2}) edges. The second is a simple fact from plane geometry: the unit distance graph contains no K_{2,3}, since two distinct unit circles intersect in at most two points.
This is the first application of graph theory in geometry, and is contained in a short and extremely influential paper of Paul Erdős [1946]. The first application of hypergraph theory in geometry is even earlier: it is the use of Ramsey's theorem in the famous Erdős and Szekeres result from 1935 (see below in the next section). Actually, Erdős and Szekeres proved Ramsey's theorem (without knowing it had been proved earlier) since they needed it for the geometric result.
The aim of this survey is to collect and explain some geometric results whose proof uses graph or hypergraph theory. Such applications vary in depth and difficulty. Often a very simple geometric statement adds an extra condition to the combinatorial structure at hand, which helps in the proof. At other times, the geometry is not so simple but is dictated by the combinatorics of the objects in question.
The famous Sylvester's problem is: Given finitely many noncollinear points in the plane, do they always span a line that contains precisely two of the points? The answer is yes, as was first shown by Gallai in 1944. Since then, many other proofs and generalizations of the problem appeared. We present two new proofs of Gallai's result, using the powerful method of allowable sequences.
1. Introduction
Sylvester [1893] raised the following problem: Given finitely many noncollinear points in the plane, do they always span a simple line (that is, a line that contains precisely two of the points)? The answer is yes, as was first shown by Gallai [1944].
By duality, the former question is equivalent to the question: Given finitely many straight lines in the plane, not all passing through the same point, do they always determine a simple intersection point (a point that lies on precisely two of the lines)?
A natural generalization is to find a lower bound on the number of simple lines (or simple points, in the dual version). The dual version of this question can be generalized to pseudolines. The best lower bound [Csima and Sawyer 1993] states that an arrangement of n pseudolines in the plane determines at least 6n/13 simple points. The conjecture [Borwein and Moser 1990] is that there are at least n/2 simple points for n ≠ 7,13. For the history of Sylvester's problem, with its many proofs and generalizations, see [Borwein and Moser 1990; Nilakantan 2005].
In a three-dimensional arrangement of 25 congruent nonoverlapping infinite circular cylinders there are always two that do not touch each other.
1. Introduction
The following problem was posed by Littlewood [1968]: What is the maximum number of congruent infinite circular cylinders that can be arranged in ℝ³ so that any two of them are touching? Is it 7?
This problem is still open. The analogous problem concerning circular cylinders of finite length became known as a mathematical puzzle due to the popular book [Gardner 1959]: Find an arrangement of 7 cigarettes so that any two touch each other. The question whether 7 is the largest such number is open. For constructions and for a more detailed account of both of these problems see the research problem collection [Moser and Pach ≥ 2005].
A very large bound for the maximal number of cylinders in Littlewood's original problem was found by the author in 1981 (an outline proof was presented at the Discrete Geometry meeting in Oberwolfach in that year). The bound was expressed in terms of various Ramsey constants, and so large that it merely showed the existence of a finite bound. In this paper we use a different approach to show that at most 24 cylinders can be arranged so that any two of them are touching:
THEOREM 1. In an arrangement of 25 congruent nonoverlapping infinite circular cylinders there are always two that do not touch each other.
We have seen in Chapter 6 that delay spread causes intersymbol interference (ISI), which can cause an irreducible error floor when the modulation symbol time is on the same order as the channel delay spread. Signal processing provides a powerful mechanism to counteract ISI. In a broad sense, equalization defines any signal processing technique used at the receiver to alleviate the ISI problem caused by delay spread. Signal processing can also be used at the transmitter to make the signal less susceptible to delay spread: spread-spectrum and multicarrier modulation fall in this category of transmitter signal processing techniques. In this chapter we focus on equalization; multicarrier modulation and spread spectrum are the topics of Chapters 12 and 13, respectively.
Mitigation of ISI is required when the modulation symbol time Ts is on the order of the channel's rms delay spread σTm. For example, cordless phones typically operate indoors, where the delay spread is small. Since voice is also a relatively low–data-rate application, equalization is generally not needed in cordless phones. However, the IS-136 digital cellular standard is designed for outdoor use, where σTm ≈ Ts, so equalization is part of this standard. Higher–data-rate applications are more sensitive to delay spread and generally require high-performance equalizers or other ISI mitigation techniques. In fact, mitigating the impact of delay spread is one of the most challenging hurdles for high-speed wireless data systems.
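As a rough worked example (illustrative numbers of my own, not from the chapter), using the common rule of thumb that ISI can be neglected when the symbol time exceeds the rms delay spread by roughly an order of magnitude:

```latex
\sigma_{T_m} \approx 3\ \mu\text{s (outdoor)}:\quad
R_s = \tfrac{1}{T_s} \gtrsim \tfrac{1}{10\,\sigma_{T_m}} \approx 33\ \text{ksymbols/s}
\ \Rightarrow\ \text{ISI mitigation needed};
\qquad
\sigma_{T_m} \approx 50\ \text{ns (indoor)}:\quad
R_s \gtrsim 2\ \text{Msymbols/s}.
```

This is why low-rate indoor systems such as cordless phones can omit equalizers while outdoor digital cellular standards cannot.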
Infrastructure-based wireless networks have base stations, also called access points, deployed throughout a given area. These base stations provide access for mobile terminals to a backbone wired network. Network control functions are performed by the base stations, and often the base stations are connected together to facilitate coordinated control. This infrastructure is in contrast to ad hoc wireless networks, described in Chapter 16, which have no backbone infrastructure. Examples of infrastructure-based wireless networks include cellular phone systems, wireless LANs, and paging systems. Base station coordination in infrastructure-based networks provides a centralized control mechanism for transmission scheduling, dynamic resource allocation, power control, and handoff. As such, it can more efficiently utilize network resources to meet the performance requirements of individual users. Moreover, most networks with infrastructure are designed so that mobile terminals transmit directly to a base station, with no multihop routing through intermediate wireless nodes. In general these single-hop routes have lower delay and loss, higher data rates, and more flexibility than multihop routes. For these reasons, the performance of infrastructure-based wireless networks tends to be much better than in networks without infrastructure. However, it is sometimes more expensive or simply not feasible or practical to deploy infrastructure, in which case ad hoc wireless networks are the best option despite their typically inferior performance.
Cellular systems are a type of infrastructure-based network that makes efficient use of spectrum by reusing it at spatially separated locations.