Josep Díaz, Universitat Politècnica de Catalunya, Barcelona; Maria Serna, Universitat Politècnica de Catalunya, Barcelona; Paul Spirakis, University of Patras, Greece; Jacobo Torán, Universität Ulm, Germany
In this chapter we provide an intuitive introduction to the topic of approximability and parallel computation. Approximation is one of the well-established ways of coping with computationally hard optimization problems. Many important problems are known to be NP-hard; therefore, under the plausible hypothesis that P ≠ NP, it is impossible to obtain polynomial-time algorithms that solve these problems exactly.
In Chapter 2, we will give a formal definition of an optimization problem and a formal introduction to the topics of PRAM computation and approximability. For the purpose of this chapter, in an optimization problem the goal is to find a solution that maximizes or minimizes an objective function subject to some constraints. Recall that, in general, to study the NP-completeness of an optimization problem we consider its decision version. The decision version of many optimization problems is NP-complete, while the optimization version is NP-hard (see for example the book by Garey and Johnson [GJ79]). To refresh these concepts, let us consider the Maximum Cut problem (MAXCUT).
Given a graph G with a set V of n vertices and a set E of edges, the MAXCUT problem asks for a partition of V into two disjoint sets V1 and V2 that maximizes the number of edges crossing between V1 and V2. From now on, throughout the manuscript, all graphs have a finite number n of vertices. The foregoing statement of the MAXCUT problem is the optimization version, and it is known to be NP-hard [GJ79].
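Since no polynomial-time exact algorithm is expected for MAXCUT, even a naive exhaustive search is instructive on small instances. The following sketch is our own illustration (the function name and the edge-list representation are assumptions, not from the text): it enumerates all 2-partitions of the vertex set and reports the maximum number of crossing edges.

```python
from itertools import combinations

def max_cut(n, edges):
    """Exhaustive MAXCUT: try every partition of {0,...,n-1} into (S, V\S)
    and return the maximum number of edges with one endpoint in each side.
    Exponential in n, as expected for an NP-hard problem."""
    best = 0
    for size in range(n // 2 + 1):          # one side suffices, by symmetry
        for side in combinations(range(n), size):
            s = set(side)
            crossing = sum(1 for u, v in edges if (u in s) != (v in s))
            best = max(best, crossing)
    return best

# A 4-cycle: putting opposite vertices together cuts all 4 edges.
print(max_cut(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # → 4
```

For the 4-cycle, the optimal partition {0, 2} versus {1, 3} makes every edge cross, so the optimum is 4.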
In Chapter 2, we presented several complexity classes based on the approximability degree of the problems they contain: APX, PTAS, FPTAS and their parallel counterparts NCX, NCAS, FNCAS. These classes, defined in terms of degree of approximability, are known as the “computationally defined” approximation classes. We have seen that, to show that a problem belongs to one of these classes, one can present an approximation algorithm or a reduction to a problem with known approximability properties. In this chapter we show that in some cases the approximation properties of a problem can be obtained directly from its syntactic definition. These results are based on Fagin's characterization of the class NP in terms of existential second-order logic [Fag75], which constitutes one of the most interesting connections between logic and complexity. Papadimitriou and Yannakakis discovered that Fagin's characterization can be adapted to deal with optimization problems, and they defined classes of approximation problems according to their syntactic characterization [PY91]. The importance of this approach comes from the fact that approximation properties of optimization problems can be derived from their characterization in terms of logical quantifiers. Papadimitriou and Yannakakis defined the complexity classes MaxSNP and MaxNP, which contain optimization versions of many important NP problems, and showed that many MaxSNP problems are in fact complete for the class. The paradigmatic MaxSNP-complete problem is Maximum 3SAT: given a Boolean formula F in conjunctive normal form with three literals in each clause, find a truth assignment that satisfies the maximum number of clauses.
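To make the Maximum 3SAT objective concrete, here is a small illustrative sketch of our own (the function names and the signed-integer clause encoding, where −v denotes the negation of variable v, are assumptions, not from the text). It counts the clauses satisfied by an assignment and finds the optimum by brute force over all assignments:

```python
from itertools import product

def satisfied(clauses, assignment):
    """Count clauses satisfied by `assignment` (a dict: variable -> bool).
    Each clause is a tuple of signed integers; literal l is true when
    assignment[abs(l)] agrees with the sign of l."""
    return sum(
        any(assignment[abs(l)] == (l > 0) for l in clause)
        for clause in clauses
    )

def max_3sat(clauses):
    """Exhaustive Maximum 3SAT: try all 2^n truth assignments."""
    variables = sorted({abs(l) for c in clauses for l in c})
    return max(
        satisfied(clauses, dict(zip(variables, bits)))
        for bits in product([False, True], repeat=len(variables))
    )

clauses = [(1, 2, 3), (-1, -2, -3), (1, -2, 3), (-1, 2, -3)]
print(max_3sat(clauses))  # → 4 (here all four clauses happen to be satisfiable)
```

With x1 = x2 = True and x3 = False, all four clauses above are satisfied simultaneously, so the optimum is 4; in general the optimum may be strictly less than the number of clauses.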
In the previous chapter, we gave a brief introduction to the topic of parallel approximability, keeping the discussion at an intuitive level and trying to convey the main ideas behind it. In this chapter, we review in a more formal setting the basic definitions about PRAM computations and approximation that we shall use throughout the text, and we introduce the tools and notation needed. In any case, this chapter is not a deep study of these topics. There exists a large body of literature for the reader who wishes to go further into the theory of PRAM computation, among others the books by Akl [Akl89], Gibbons and Rytter [GR88], Reif [Rei93] and JáJá [JaJ92]. There are also excellent short surveys on this topic; we mention only the one by Karp and Ramachandran [KR90] and the collection of surveys from the ALCOM school in Warwick [GS93]. Similarly, many survey papers and lecture notes have been written on the topic of approximability, among others the doctoral dissertation of V. Kann [Kan92] with a recent update of its appendix of problems [CK95], the lecture notes of R. Motwani [Mot92], the survey by Ausiello et al. [ACP96], which includes a survey of non-approximability methods, the recent book edited by Hochbaum [Hoc96] and a forthcoming book by Ausiello et al. [ACG+96].
The PRAM Model of Computation
We begin this section by giving a formal introduction to our basic model of computation, the Parallel Random Access Machine. A PRAM consists of a number of sequential processors, each with its own memory, working synchronously and communicating between themselves through a common shared memory.
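As an illustrative sketch of our own (not part of the text), the lock-step behaviour of a PRAM can be mimicked sequentially: in each round every simulated processor reads from a snapshot of the shared memory, and all writes take effect at once, so reads and writes of the same round never interfere. The example computes all prefix sums in O(log n) synchronous rounds:

```python
def pram_prefix_sums(x):
    """Simulate an EREW-style PRAM computing all prefix sums.
    Round t: processor i (for i >= 2^t) adds cell i - 2^t to its own cell.
    Reading from a snapshot models the synchronous read phase; the in-place
    writes then model the simultaneous write phase."""
    n = len(x)
    shared = list(x)                       # shared memory, one cell per processor
    step = 1
    while step < n:
        snapshot = list(shared)            # all processors read ...
        for i in range(step, n):           # ... then all write in lock-step
            shared[i] = snapshot[i] + snapshot[i - step]
        step *= 2
    return shared

print(pram_prefix_sums([3, 1, 4, 1, 5]))  # → [3, 4, 8, 9, 14]
```

The doubling of `step` is what gives the logarithmic number of rounds: after round t, cell i holds the sum of the last min(i+1, 2^(t+1)) inputs ending at position i.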
The lifetime of a player is defined as the time at which he receives his b-th hit, where each hit occurs with probability p. We consider the maximum statistics of N independent players. For b ≠ 1 this is significantly more difficult than the known instance b = 1. The expected value of the maximum lifetime of N players is given by log_Q N + (b−1) log_Q log_Q N + smaller-order terms, where Q = 1/(1−p).
It is shown that if n > n₀(d) then any d-regular graph G = (V, E) on n vertices contains a set of u = ⌊n/2⌋ vertices which is joined by at most (d/2 − c√d)u edges to the rest of the graph, where c > 0 is some absolute constant. This is tight, up to the value of c.
We prove a generalization of a theorem of Ganter concerning the embedding of partial Steiner systems into Steiner systems. As an application we discuss a further version of the problem of Rosenfeld on embedding graphs into strongly regular graphs.
for transition probabilities λ_{n,ℓ} = q^ℓ and λ_{n,ℓ} = q^{n−1}. We give closed forms for the distributions and the moments of the underlying random variables. Thereby we observe that the distributions can be easily described in terms of q-Stirling numbers of the second kind. Their occurrence in a purely time dependent Markov process allows a natural approximation for these numbers through the normal distribution. We also show that these Markov processes describe some parameters related to the study of random graphs as well as to the analysis of algorithms.
In generalisation of the beta law obtained under the GEM/Poisson–Dirichlet distribution in Hirth [12] we undertake here an analogous construction which results in the Dirichlet law. Our proof makes use of Hoppe's Pólya-like urn model in population genetics.
It is proved that the smallest cardinality among the maximal irredundant sets in an n-vertex graph with maximum degree Δ (≥ 2) is at least 2n/(3Δ). This substantially improves a bound by Bollobás and Cockayne [1]. The class of graphs which attain this bound is characterised.
An intersecting system of type (∃, ∀, k, n) is a collection F = {F_1, …, F_m} of pairwise disjoint families of k-subsets of an n-element set satisfying the following condition: for every ordered pair F_i and F_j of distinct members of F there exists an A ∈ F_i that intersects every B ∈ F_j. Let I_n(∃, ∀, k) denote the maximum possible cardinality of an intersecting system of type (∃, ∀, k, n). Ahlswede, Cai and Zhang conjectured that for every k ≥ 1, there exists an n₀(k) so that I_n(∃, ∀, k) = (n−1 choose k−1) for all n > n₀(k). Here we show that this is true for k ≤ 3, but false for all k ≥ 8. We also prove some related results.
A sharper form of the Szarek–Talagrand ‘isomorphic’ version of the Sauer–Shelah lemma is proved. Also we prove an analogous ‘isomorphic’ version of the Karpovsky–Milman lemma, which is a generalization of that due to Sauer and Shelah.
It has been known for several years that the lattice of subspaces of a finite vector space has a decomposition into symmetric chains, i.e. a decomposition into disjoint chains that are symmetric with respect to the rank function of the lattice. This paper gives a positive answer to the long-standing open problem of providing an explicit construction of such a symmetric chain decomposition for a given lattice of subspaces of a finite (dimensional) vector space. The construction is done inductively using Schubert normal forms and results in a bracketing algorithm similar to the well-known algorithm for Boolean lattices.
This paper is concerned with the analysis of locally time-synchronized slot systems for broadcast in packet radio networks. Local synchronization has been proposed in practice as less expensive than global synchronization over very wide areas, or over mobile networks. In the case of two locally coordinated groups of stations, under the assumption that the phase shift on the clocks between the two groups is random, it is shown that the probability of no collision is maximized when occupied slots within each group are chosen consecutively, regardless of the number of total slots, or the number of occupied slots in either group.
We explore the ‘Hausdorff dimension at infinity’ for self-affine carpets defined on the square lattice. This notion of dimension (due to Barlow and Taylor), which is the correct notion from a probabilistic perspective, differs for these sets from more ‘naive’ indices of fractal dimension.
Certain convergent search algorithms can be turned into chaotic dynamic systems by renormalisation back to a standard region at each iteration. This allows the machinery of ergodic theory to be used for a new probabilistic analysis of their behaviour. Rates of convergence can be redefined in terms of various entropies and ergodic characteristics (Kolmogorov and Rényi entropies and Lyapunov exponent). A special class of line-search algorithms, which contains the Golden-Section algorithm, is studied in detail. Their associated dynamic systems exhibit a Markov partition property, from which invariant measures and ergodic characteristics can be computed. A case is made that the Rényi entropy is the most appropriate convergence criterion in this environment.
Given a string P called the pattern and a longer string T called the text, the exact matching problem is to find all occurrences, if any, of pattern P in text T.
For example, if P = aba and T = bbabaxababay then P occurs in T starting at locations 3, 7, and 9. Note that two occurrences of P may overlap, as illustrated by the occurrences of P at locations 7 and 9.
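The definition suggests an obvious naive algorithm: check every possible alignment of P against T. A minimal sketch of our own (the function name is an assumption; faster methods are the subject of the chapters that follow) reproduces the example above:

```python
def naive_match(P, T):
    """Return all 1-based start positions where pattern P occurs in text T,
    by testing every alignment. Worst-case O(|P| * |T|) character
    comparisons; overlapping occurrences are found naturally."""
    n, m = len(P), len(T)
    return [i + 1 for i in range(m - n + 1) if T[i:i + n] == P]

print(naive_match("aba", "bbabaxababay"))  # → [3, 7, 9]
```

Note that the occurrences at positions 7 and 9 overlap, exactly as in the example; the naive scan reports both because each alignment is tested independently.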
Importance of the exact matching problem
The practical importance of the exact matching problem should be obvious to anyone who uses a computer. The problem arises in widely varying applications, too numerous to even list completely. Some of the more common applications are in word processors; in utilities such as grep on Unix; in textual information retrieval programs such as Medline, Lexis, or Nexis; in library catalog searching programs that have replaced physical card catalogs in most large libraries; in internet browsers and crawlers, which sift through massive amounts of text available on the internet for material containing specific keywords; in internet news readers that can search the articles for topics of interest; in the giant digital libraries that are being planned for the near future; in electronic journals that are already being “published” on-line; in telephone directory assistance; in on-line encyclopedias and other educational CD-ROM applications; in on-line dictionaries and thesauri, especially those with cross-referencing features (the Oxford English Dictionary project has created an electronic on-line version of the OED containing 50 million words); and in numerous specialized databases.