Over forty years of intense theoretical research have failed to produce an adequate general, computational theory of bargaining and negotiation.
(Gresik and Satterthwaite, 1989)
Negotiation is a crucial part of commercial activities in physical as well as in electronic markets (see section 1.3). Current human-based negotiation is relatively slow, does not always uncover the best solution, and is, furthermore, constrained by issues of culture, ego, and pride (Beam and Segev, 1997). Experiments and field studies demonstrate that even in simple negotiations people often reach suboptimal agreements, thereby “leaving money on the table” (Camerer, 1990). The end result is that the negotiators are often unable to reach agreements that would make each party better off.
The fact that negotiators fail to find better agreements highlights that negotiation is a search process. What makes negotiation different from an ordinary optimization search is that each side holds private information: neither party typically knows the other's utility function. Furthermore, both sides often have an incentive to misrepresent their preferences (Oliver, 1997). Finding an optimal agreement in this environment is extremely challenging: both sides are in competition, yet they must jointly search for possible agreements. Although researchers in economics, game theory and the behavioral sciences have investigated negotiation processes for a long time, a solid and comprehensive theoretical framework is still lacking. A basic principle of microeconomics and negotiation science is that there is no single “best” negotiation protocol for all possible negotiation situations.
The retrieval of relevant cases plays a crucial role in case-based reasoning. There are three major methods for retrieving relevant cases: computational approaches (based upon measures of similarity), representational approaches (based upon indexing structures) and hybrid approaches. This paper examines recent successful implementations of case retrieval with regard to this classification framework. In particular, it emphasises computational and representational models applied to feature-vector case representations.
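As a concrete illustration of the computational approach, similarity-based retrieval over feature-vector cases amounts to nearest-neighbour search under a weighted distance. The sketch below is illustrative only; the case base, weights, and similarity measure are assumptions, not taken from the paper:

    from math import sqrt

    def weighted_similarity(query, case, weights):
        """Inverse-distance similarity between two numeric feature vectors."""
        dist = sqrt(sum(w * (q - c) ** 2 for q, c, w in zip(query, case, weights)))
        return 1.0 / (1.0 + dist)

    def retrieve(case_base, query, weights, k=3):
        """Return the k stored cases most similar to the query."""
        scored = [(weighted_similarity(query, features, weights), solution)
                  for features, solution in case_base]
        return sorted(scored, reverse=True)[:k]

    # Hypothetical case base: (feature vector, stored solution).
    case_base = [((1.0, 0.2), "solution-A"), ((0.9, 0.8), "solution-B"),
                 ((0.1, 0.4), "solution-C")]
    print(retrieve(case_base, query=(1.0, 0.3), weights=(1.0, 0.5), k=2))

Representational approaches avoid this linear scan by walking an index structure instead; hybrid approaches combine the two.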
To an ever-growing degree, information technology relies on interactive visual media, and visualisation techniques are emerging as the primary support for decision-making tasks. In this paper I briefly survey the most recent results in the field of advanced visual interfaces, focusing on visualisation techniques along with their design and applications.
Reasoning with inconsistency involves some compromise on classical logic. There is a range of proposals for logics (called paraconsistent logics) for reasoning with inconsistency, each with pros and cons, and selecting an appropriate paraconsistent logic for an application depends upon the requirements of that application. Here we review paraconsistent logics for the potentially significant application area of technology for structured text. Structured text is a general concept that is implicit in a variety of approaches to handling information. Syntactically, an item of structured text is a number of grammatically simple phrases together with a semantic label for each phrase. Items of structured text may be nested within larger items of structured text. The semantic labels in a structured text are meant to parameterize a stereotypical situation, so a particular item of structured text is an instance of that stereotypical situation. Much information is potentially available as structured text, including tagged text in XML, text in relational and object-oriented databases, and the output from information extraction systems in the form of instantiated templates. In this review paper, we formalize the concept of structured text, focus on how inconsistency in items of structured text can be identified and reasoned with, then review key approaches to paraconsistent reasoning and discuss their application to reasoning with inconsistency in structured text.
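To make the formalized concept tangible, here is a minimal sketch of an item of structured text as semantic labels mapped to phrases, with a naive check for labels on which two items directly disagree. The labels, values, and notion of conflict are illustrative assumptions, not the paper's formalization:

    # Two items of structured text instantiating the same stereotypical
    # situation; nested items would simply be dictionaries within dictionaries.
    report_a = {"patient": "J. Smith", "diagnosis": "influenza"}
    report_b = {"patient": "J. Smith", "diagnosis": "pneumonia"}

    def conflicts(item1, item2):
        """Semantic labels on which two items of structured text disagree."""
        return {label for label in item1.keys() & item2.keys()
                if item1[label] != item2[label]}

    print(conflicts(report_a, report_b))  # -> {'diagnosis'}

A paraconsistent logic would localize the contradiction to the 'diagnosis' label while still licensing inferences from the agreed 'patient' field, rather than trivializing in the way classical logic does.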
In this study we present a review of the emerging field of meta-knowledge components as practised over the past decade by a variety of communities. We use the umbrella term “meta-knowledge” to encompass the different but overlapping notions used by the artificial intelligence and software engineering communities to represent reusable modelling frameworks: ontologies, problem-solving methods, patterns, and experience factories and bases, to name but a few. We then elaborate on how meta-knowledge is deployed in the context of systems design to improve reliability through consistency checking, to enhance reuse potential and to manage knowledge sharing. We speculate on its usefulness and explore technologies for supporting the deployment of meta-knowledge. We argue that, despite the different approaches followed in systems design by divergent communities, meta-knowledge is present in all cases, in tacit or explicit form, and that its utilisation depends on pragmatic aspects, which we try to identify and critically review against criteria of effectiveness.
This paper surveys the most representative approaches to knowledge-base revision. After describing the characterization of revision given by the AGM paradigm, the paper reviews different revision methods. In each case the same reference example is used to illustrate the different approaches. Closely connected with revision, some other non-monotonic approaches, such as update, are briefly presented.
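For orientation, the AGM characterization referred to above consists of rationality postulates. The six basic postulates for revising a belief set K by a sentence φ (with K + φ denoting expansion) are standardly stated as:

(K*1) Closure: K * φ is a logically closed belief set.
(K*2) Success: φ ∈ K * φ.
(K*3) Inclusion: K * φ ⊆ K + φ.
(K*4) Vacuity: if ¬φ ∉ K, then K + φ ⊆ K * φ.
(K*5) Consistency: K * φ is consistent whenever φ is consistent.
(K*6) Extensionality: if φ and ψ are logically equivalent, then K * φ = K * ψ.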
Destructive array update optimization is critical for writing scientific codes in functional languages. We present set constraints for an interprocedural update optimization that runs in polynomial time. This is a multi-pass optimization, involving interprocedural flow analyses for aliasing and liveness. We characterize and prove the soundness of these analyses using small-step operational semantics. We also prove that any sound liveness analysis induces a correct program transformation.
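The intent of the optimization can be conveyed with a toy sketch (hypothetical function names; the paper's actual machinery is a set-constraint analysis, not a runtime check). A functional array update must preserve its argument, so a naive implementation copies; when the interprocedural aliasing and liveness analyses prove the old array dead after the call, the compiler may emit an in-place version instead:

    def update_pure(arr, i, v):
        """Semantics of a functional array update: the old array stays intact."""
        new = list(arr)      # O(n) copy on every update
        new[i] = v
        return new

    def update_in_place(arr, i, v):
        """What the optimizer may emit when the analysis shows `arr` is dead
        (and unaliased) after this call: the copy is elided."""
        arr[i] = v           # O(1); observationally equivalent if arr is dead
        return arr

    a = [0, 0, 0]
    b = update_pure(a, 1, 7)      # a remains usable afterwards
    c = update_in_place(b, 2, 9)  # safe only because b is never used again
    print(a, c)                   # -> [0, 0, 0] [0, 7, 9]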
A fair amount has been written on the subject of reasoning about pointer algorithms. There was a peak around 1980 when everyone seemed to be tackling the formal verification of the Schorr–Waite marking algorithm, including Gries (1979), Morris (1982) and Topor (1979). Bornat (2000) writes: “The Schorr–Waite algorithm is the first mountain that any formalism for pointer aliasing should climb”. Then it went more or less quiet for a while, but in the last few years there has been a resurgence of interest, driven by new ideas in relational algebras (Möller, 1993), in data refinement (Butler, 1999), in type theory (Hofmann, 2000; Walker and Morrisett, 2000), in novel kinds of assertion (Reynolds, 2000), and by the demands of mechanised reasoning (Bornat, 2000). Most approaches end up being based in the Floyd–Dijkstra–Hoare tradition with loops and invariant assertions. To be sure, when dealing with any recursively defined linked structure some declarative notation has to be brought in to specify the problem, but no one to my knowledge has advocated a purely functional approach throughout. Mason (1988) comes close, but his Lisp expressions can be very impure. Möller (1999) also exploits an algebraic approach, and the structure of his paper has much in common with what follows.
This pearl explores the possibility of a simple functional approach to pointer manipulation algorithms.
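As a taste of what a purely functional treatment can look like (a guess at the flavour, not this pearl's actual development), the specification that marking algorithms such as Schorr–Waite implement is just pure reachability over an immutable heap:

    def mark(graph, root):
        """Purely functional reachability: thread the set of visited nodes
        through the traversal instead of mutating mark bits in the heap."""
        def visit(seen, node):
            if node is None or node in seen:
                return seen
            seen = seen | {node}                # a fresh set, no mutation
            for succ in graph.get(node, ()):
                seen = visit(seen, succ)
            return seen
        return visit(frozenset(), root)

    # Hypothetical heap: each node maps to its (left, right) successors.
    heap = {"a": ("b", "c"), "b": ("c", None), "c": (None, None),
            "d": ("a", None)}
    print(sorted(mark(heap, "a")))  # -> ['a', 'b', 'c']  ('d' is unreachable)

The interest of a functional approach is then to reason equationally from such a specification toward the pointer-level, constant-space imperative algorithm.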
Many polyvariant program analyses were studied in the 1990s, including k-CFA, polymorphic splitting, and the cartesian product algorithm. The idea of polyvariance is to analyze functions more than once and thereby obtain better precision for each call site. In this paper we present an equivalence theorem which relates a co-inductively defined family of polyvariant flow analyses and a standard type system. The proof embodies a way of understanding polyvariant flow information in terms of union and intersection types, and, conversely, a way of understanding union and intersection types in terms of polyvariant flow information. We use the theorem as the basis for a new flow-type system in the spirit of the λCIL-calculus of Wells, Dimock, Muller and Turbak, in which types are annotated with flow information. A flow-type system is useful as an interface between a flow-analysis algorithm and a program optimizer. Derived systematically via our equivalence theorem, our flow-type system should be a good interface to the family of polyvariant analyses that we study.
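The precision at stake can be seen on the classic identity-function example. The sketch below is purely illustrative, a cache keyed by call site in the style of 1-CFA; the names and representation are assumptions, not the paper's formal system:

    # Abstract values are sets of type tags. A monovariant analysis keeps one
    # entry per function parameter; a polyvariant one keys entries by call site.
    mono_cache = {}   # param -> tags, merged over every call
    poly_cache = {}   # (param, call_site) -> tags

    def analyze_call(param, call_site, arg_tag):
        mono_cache.setdefault(param, set()).add(arg_tag)
        poly_cache.setdefault((param, call_site), set()).add(arg_tag)

    # The identity function is applied to an int at site 1, a bool at site 2.
    analyze_call("id.x", 1, "int")
    analyze_call("id.x", 2, "bool")

    print(mono_cache["id.x"])        # {'int', 'bool'}: the sites pollute each other
    print(poly_cache[("id.x", 1)])   # {'int'}: site 1 keeps exact information
    print(poly_cache[("id.x", 2)])   # {'bool'}

In type terms, the polyvariant table corresponds roughly to giving id the intersection type (int → int) ∧ (bool → bool), while the merged monovariant entry corresponds to a single arrow over unions; this is the kind of correspondence the equivalence theorem makes precise.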
This chapter collects open problems that in one way or the other relate to the material discussed in this book. They represent the complement of the material, in the sense that they attempt to describe what we do not know. We should keep in mind that it is most likely the case that only a tiny fraction of the knowable is known. Hence, there is a vast variety of questions that can be asked but not yet answered. The author of this book exercised subjective taste and judgment to collect a small subset of such questions, in the hope that they can give a glimpse of what is conceivable. Most of the problems are elementary in nature and have been stated elsewhere in the literature.
Two of the twenty-three problems have been solved since this book first appeared in 2001: P.8 Union of disks and P.9 Intersection of disks, both solved in [1]. Since the approaches described here differ from the eventual solutions and are perhaps useful for understanding generalized versions of the problems, we decided to leave Sections P.8 and P.9 unchanged.
Empty convex hexagons
Let S be a set of n points in ℝ2 and assume no three points are collinear. A convex k-gon is a subset of k points in convex position.
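The definition is easy to make operational. The brute-force sketch below is illustrative only: it tests convex position by checking that no chosen point lies inside a triangle formed by the others, which under the no-three-collinear assumption is equivalent to all k points being vertices of their convex hull:

    from itertools import combinations

    def cross(o, a, b):
        """Twice the signed area of the triangle oab."""
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    def in_convex_position(pts):
        """True iff pts spans a convex k-gon (no three points collinear)."""
        for p in pts:
            rest = [q for q in pts if q != p]
            for t in combinations(rest, 3):
                signs = [cross(t[i], t[(i+1) % 3], p) for i in range(3)]
                if all(s > 0 for s in signs) or all(s < 0 for s in signs):
                    return False   # p lies inside a triangle of the others
        return True

    print(in_convex_position([(0,0), (2,0), (2,2), (0,2)]))    # True
    print(in_convex_position([(0,0), (2,0), (1,0.5), (0,2)]))  # False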
This chapter describes an algorithm for simplifying a given triangulated surface. We assume this surface represents a shape in three-dimensional space, and the goal is to represent approximately the same shape with fewer triangles. The particular algorithm combines topological and numerical computations and provides an opportunity to discuss combinatorial topology concepts in an applied situation. Section 4.1 describes the algorithm, which greedily contracts edges until the number of triangles that remain is as small as desired. Section 4.2 studies topological implications and characterizes edge contractions that preserve the topological type of the surface. Section 4.3 interprets the algorithm as constructing a simplicial map and establishes connections between the original and the simplified surfaces. Section 4.4 explains the numerical component of the algorithm used to prioritize edges for contraction.
Edge contraction algorithm
A triangulated surface is simplified by reducing the number of vertices. This section presents an algorithm that simplifies by repeated edge contraction. We discuss the operation, describe the algorithm, and introduce the error measure that controls which edges are contracted and in what sequence.
Edge contraction
Let K be a 2-complex, and assume for the moment that |K| is a 2-manifold. The contraction of an edge ab ∈ K removes ab together with the two triangles abx, aby, and it mends the hole by gluing xa to xb and ya to yb, as shown in Figure 4.1. Vertices a and b are glued to form a new vertex c.
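With K stored as a set of triangles, the combinatorial part of the operation is a short function. This is a sketch under the stated 2-manifold assumption; the vertex names and the sample complex are illustrative:

    def contract(triangles, a, b, c):
        """Contract edge ab: remove the two triangles sharing ab and rename
        a and b to the new vertex c, which glues xa to xb and ya to yb."""
        result = set()
        for tri in triangles:
            if a in tri and b in tri:
                continue                      # abx and aby disappear
            result.add(frozenset(c if v in (a, b) else v for v in tri))
        return result

    # A hypothetical patch of a triangulated surface around the edge ab.
    K = {frozenset(t) for t in [("a","b","x"), ("a","b","y"),
                                ("a","x","u"), ("b","x","v"), ("a","y","w")]}
    print(sorted(map(sorted, contract(K, "a", "b", "c"))))
    # -> [['c', 'u', 'x'], ['c', 'v', 'x'], ['c', 'w', 'y']]

Using sets makes degenerate cases visible: if a renamed triangle coincides with an existing one, the contraction has changed the complex in a way that the characterization of Section 4.2 is designed to rule out.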
The three sections in this chapter apply what we learned in Chapter 1 to the construction of triangle meshes in the plane. In mesh generation, the vertices are no longer part of the input but have to be placed by the algorithm itself. A typical instance of the meshing problem is given as a region, and the algorithm is expected to decompose that region into cells or elements. This chapter focuses on constructing meshes with triangle elements, and it pays attention to quality criteria, such as angle size and length variation. Section 2.1 shows how Delaunay triangulations can be adapted to constraints given as line segments that are required to be part of the mesh. Sections 2.2 and 2.3 describe and analyze the Delaunay refinement method, which adds new vertices at circumcenters of already existing Delaunay triangles.
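As a small concrete ingredient of that method, the circumcenter of a triangle, where Delaunay refinement places its new vertices, has a standard closed form. A sketch, not code from the book:

    def circumcenter(a, b, c):
        """Center of the circle through a, b, c: the point equidistant
        from all three vertices of the triangle."""
        ax, ay = a; bx, by = b; cx, cy = c
        d = 2.0 * (ax*(by-cy) + bx*(cy-ay) + cx*(ay-by))   # 4 * signed area
        ux = ((ax*ax+ay*ay)*(by-cy) + (bx*bx+by*by)*(cy-ay)
              + (cx*cx+cy*cy)*(ay-by)) / d
        uy = ((ax*ax+ay*ay)*(cx-bx) + (bx*bx+by*by)*(ax-cx)
              + (cx*cx+cy*cy)*(bx-ax)) / d
        return (ux, uy)

    print(circumcenter((0,0), (2,0), (0,2)))  # -> (1.0, 1.0)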
Constrained triangulations
This section studies triangulations in the plane constrained by edges specified as part of the input. We show that there is a unique constrained triangulation that is closest, in some sense, to the (unconstrained) Delaunay triangulation.
Constraining line segments
The preceding sections constructed triangulations for a given set of points. The input now consists of a finite set of points, S ⊆ ℝ2, together with a finite set of line segments, L, each connecting two points in S. We require that any two line segments are either disjoint or meet at most in a common endpoint.
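This requirement on L can be verified directly. The quadratic sketch below is illustrative only and assumes general position in the sense that overlapping collinear segments do not occur:

    def orient(p, q, r):
        """Sign of twice the signed area of the triangle pqr."""
        v = (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
        return (v > 0) - (v < 0)

    def forbidden_meeting(s, t):
        """True if segments s and t meet anywhere other than a shared
        endpoint, which the input specification disallows."""
        (p1, p2), (p3, p4) = s, t
        if {p1, p2} & {p3, p4}:
            return False   # meeting in a common endpoint is allowed
        d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
        d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
        return d1 != d2 and d3 != d4

    def valid_input(segments):
        """Check every pair of constraint segments, as the text requires."""
        segs = list(segments)
        return all(not forbidden_meeting(segs[i], segs[j])
                   for i in range(len(segs)) for j in range(i + 1, len(segs)))

    L = [((0,0), (2,2)), ((0,2), (2,0))]   # these cross at (1,1)
    print(valid_input(L))                  # -> False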
The primary purpose of this chapter is the introduction of standard topological language to streamline our discussions of triangulations and meshes. We will spend most of our effort developing a better understanding of space: how it is connected and how we can decompose it. The secondary purpose is the construction of a bridge between continuous and discrete concepts in geometry. The idea of a continuous, possibly even differentiable world is close to our intuitive understanding of physical phenomena, while the discrete setting is natural for computation. Section 3.1 introduces simplicial complexes as a fundamental discrete representation of continuous space. Section 3.2 talks about refining complexes by decomposing simplices into smaller pieces. Section 3.3 describes the topological notion of space and the important special case of manifolds. Section 3.4 discusses the Euler characteristic of a triangulated space.
Simplicial complexes
We use simplicial complexes as the fundamental tool to model geometric shapes and spaces. They generalize and formalize the somewhat loose geometric notion of a triangulation. Because of their combinatorial nature, simplicial complexes are ideal data structures for geometric modeling algorithms.
Simplices
A finite collection of points is affinely independent if, for every i, no affine space of dimension i contains more than i + 1 of the points. A k-simplex is the convex hull of a collection S of k + 1 affinely independent points, σ = conv S.
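Affine independence can be tested numerically: the k + 1 points are affinely independent exactly when the k difference vectors p1 - p0, ..., pk - p0 are linearly independent. A pure-Python rank computation, as an illustrative sketch:

    def affinely_independent(points):
        """True iff the difference vectors to the first point have full rank,
        so that conv(points) is a k-simplex for k = len(points) - 1."""
        p0, rest = points[0], points[1:]
        rows = [[x - y for x, y in zip(p, p0)] for p in rest]
        rank = 0
        for col in range(len(points[0])):      # Gaussian elimination
            pivot = next((r for r in range(rank, len(rows))
                          if abs(rows[r][col]) > 1e-12), None)
            if pivot is None:
                continue
            rows[rank], rows[pivot] = rows[pivot], rows[rank]
            for r in range(rank + 1, len(rows)):
                f = rows[r][col] / rows[rank][col]
                rows[r] = [u - f * v for u, v in zip(rows[r], rows[rank])]
            rank += 1
        return rank == len(rows)

    print(affinely_independent([(0,0,0), (1,0,0), (0,1,0)]))  # True: a 2-simplex
    print(affinely_independent([(0,0,0), (1,0,0), (2,0,0)]))  # False: collinear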