The papers in this special issue represent an early view of different aspects of a new research field in computing science. They were presented as best papers in a series of European workshops called ‘Self-adaptive and autonomic systems’ (SAACS), which is co-located with the DEXA conference every year.
Self-managing or autonomic computing is a rapidly emerging field, championed by industry and academia alike. Essentially, the management of the system is handed to the system itself; the issue of trust in the decisions the system makes about itself is therefore of paramount interest. This paper considers four aspects of trust in self-adaptive computing systems, with the aim of identifying trends and pinpointing areas that require more investigation.
This paper presents an empirical investigation of policy-based self-management techniques for parallel applications executing in loosely-coupled environments. The dynamic and heterogeneous nature of these environments is discussed and the special considerations for parallel applications are identified. An adaptive strategy for the run-time deployment of tasks of parallel applications is presented. The strategy is based on embedding numerous policies which are informed by contextual and environmental inputs. The policies govern various aspects of behaviour, enhancing flexibility so that the goals of efficiency and performance are achieved despite high levels of environmental variability. A prototype self-managing parallel application is used as a vehicle to explore the feasibility and benefits of the strategy. In particular, several aspects of stability are investigated. The implementation and behaviour of three policies are discussed and sample results examined.
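To make the policy idea concrete, below is a minimal Python sketch of a task-deployment decision governed by an embedded policy over contextual and environmental inputs. The input fields, weights, and host names are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a policy-governed task deployment decision.
# Field names, weights, and hosts are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HostContext:
    cpu_load: float        # current utilisation, 0.0-1.0
    link_latency_ms: float
    reliability: float     # observed fraction of tasks completed

def deployment_score(ctx: HostContext) -> float:
    """Combine environmental inputs into a single deployment score.
    Higher is better; the weights are hypothetical tuning parameters."""
    return (1.0 - ctx.cpu_load) * 0.5 + ctx.reliability * 0.3 \
        + (1.0 / (1.0 + ctx.link_latency_ms / 10.0)) * 0.2

def select_host(hosts: dict[str, HostContext]) -> str:
    # Policy: deploy the next task to the highest-scoring host.
    return max(hosts, key=lambda h: deployment_score(hosts[h]))

hosts = {
    "node-a": HostContext(cpu_load=0.2, link_latency_ms=5.0, reliability=0.99),
    "node-b": HostContext(cpu_load=0.8, link_latency_ms=2.0, reliability=0.95),
}
print(select_host(hosts))  # -> "node-a" under these inputs
```

Because the score recombines fresh contextual inputs on every decision, such a policy adapts its deployments as the environment varies, which is the flexibility the paper's strategy aims for.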
Today, management and tuning questions are approached using if… then… rules of thumb. This reactive approach requires expertise regarding system behavior, making it difficult to deal with unforeseen uses of a system’s resources and leading to system unpredictability and large system management overheads. We propose a What…if… approach that allows interactive exploration of the effects of system changes, thus converting complex tuning problems into simpler search problems. Through two concrete management problems, automating system upgrades and deciding on service migrations, we identify system design changes that enable a system to answer What…if… questions about itself.
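As a hedged sketch of the idea, the Python snippet below answers a "what if we add disks?" question with a hypothetical linear bottleneck model rather than live trial-and-error; the model, parameters, and numbers are illustrative assumptions, not the paper's.

```python
# Sketch of a "What...if..." query: predict throughput after an upgrade,
# using a hypothetical linear bottleneck model (illustrative assumption).

def predicted_throughput(n_disks: int, per_disk_mbps: float,
                         network_cap_mbps: float) -> float:
    """Throughput is capped by whichever resource saturates first."""
    return min(n_disks * per_disk_mbps, network_cap_mbps)

def what_if_add_disks(current_disks: int, added: int) -> float:
    # What if we upgrade the system with `added` more disks?
    before = predicted_throughput(current_disks, 60.0, 1000.0)
    after = predicted_throughput(current_disks + added, 60.0, 1000.0)
    return after - before

print(what_if_add_disks(8, 4))   # +240.0 Mb/s: disks were the bottleneck
print(what_if_add_disks(16, 4))  # +40.0 Mb/s: the network cap starts to dominate
```

The tuning question ("which upgrade helps?") then reduces to searching over candidate changes and comparing the predicted answers.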
This paper introduces Rudder, a peer-to-peer agent framework for supporting autonomic applications in decentralized distributed environments. The framework provides agents to discover, select, and compose elements, and defines agent interaction and negotiation protocols to enable appropriate application behaviors to be negotiated and enacted dynamically. The implementations of these protocols as well as agent coordination and negotiation activities are supported by Comet, a scalable decentralized coordination substrate. The operation and experimental evaluation of Rudder is presented.
This paper presents Collaborative Reinforcement Learning (CRL), a coordination model for online system optimization in decentralized multi-agent systems. In CRL, system optimization problems are represented as a set of discrete optimization problems, and the solution cost of each is minimized by model-based reinforcement learning agents collaborating on its solution. CRL systems can be built to provide autonomic behaviours such as optimizing system performance in an unpredictable environment and adapting to partial failures. We evaluate CRL using an ad hoc routing protocol that optimizes system routing performance in an unpredictable network environment.
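The sketch below conveys the flavour of such a collaborative cost update applied to routing: each node blends its observed link cost with the neighbour's advertised estimate of the remaining cost. The state representation and learning rate are illustrative assumptions, not the CRL algorithm as published.

```python
# Sketch of a collaborative, model-based cost update for ad hoc routing.
# Each node keeps an estimated delivery cost via each neighbour and blends
# its own observation with the neighbour's advertised cost-to-go.
ALPHA = 0.3  # learning rate (hypothetical)

class RoutingAgent:
    def __init__(self):
        self.cost = {}  # neighbour id -> estimated cost to destination

    def update(self, neighbour: str, link_cost: float, advertised: float):
        """Observed link cost plus the neighbour's current estimate of
        the remaining cost (its advertised value), folded in gradually."""
        sample = link_cost + advertised
        old = self.cost.get(neighbour, sample)
        self.cost[neighbour] = (1 - ALPHA) * old + ALPHA * sample

    def best_next_hop(self) -> str:
        return min(self.cost, key=self.cost.get)

agent = RoutingAgent()
agent.update("n1", link_cost=1.0, advertised=4.0)
agent.update("n2", link_cost=2.0, advertised=1.5)
print(agent.best_next_hop())  # -> "n2" (estimated cost 3.5 < 5.0)
```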
Simulation has long played a part in testing new configurations and new functionality in a diverse range of software. Through such simulations, the boundaries of the system state are explored and the relationship of that state to other applications tested — sometimes to destruction. A critical differentiator between a simulation and a live, deployed application is that simulations are allowed to fail. As truly autonomous applications evolve, this capacity for simulation must be built in from the ground up or the benefits of experience — including the ability to tolerate failure — will be lost. This must be achieved without undermining the global correctness of visible application behaviour. We suggest an engineering approach to enable the introduction of such simulation with minimal or no recoding and we propose a composition architecture to allow for safe dynamic deployment in substantial autonomic systems. We have identified our approach as application Dreaming.
Light and Elementary Linear Logic, which form key components of the interface between logic and implicit computational complexity, were originally introduced by Girard as ‘stand-alone’ logical systems with a (somewhat awkward) sequent calculus of their own. The latter was later reformulated by Danos and Joinet as a proper subsystem of linear logic, whose proofs satisfy a certain structural condition. We extend this approach to polytime computation, finding two solutions: the first is obtained by a simple extension of Danos and Joinet's condition, closely resembles Asperti's Light Affine Logic and enjoys polystep strong normalisation (the polynomial bound does not depend on the reduction strategy); the second, which needs more complex conditions, exactly corresponds to Girard's Light Linear Logic.
Like the autonomic responses in the human body, autonomic computing systems recognize their own health problems and, where possible, respond to correct them. Failing that, external help is required. The purpose of this paper is to consider how autonomic systems might be structured to facilitate health monitoring. The approach uses a ‘pulse’ monitor for each autonomic element, which provides a reflex reaction facility and basic information on the current state (health) of that element. The pulse mechanism extends the NASA beacon monitor concept. The different ways that pulse information might be communicated and used are examined. The discussion is illustrated with a personal computing example.
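As a hedged sketch of the pulse idea (the message fields, threshold, and reflex rule are assumptions for illustration, not the paper's design): each element periodically emits a compact health summary, and a silent or degraded pulse triggers a reflex reaction or escalation.

```python
# Sketch of a per-element 'pulse': a compact periodic health summary.
# Field names, threshold, and the reflex rule are illustrative assumptions.
import time

class PulseMonitor:
    def __init__(self, element_id: str, max_silence_s: float = 5.0):
        self.element_id = element_id
        self.max_silence_s = max_silence_s
        self.last_beat = time.monotonic()
        self.health = "ok"

    def beat(self, health: str = "ok"):
        """Called by the autonomic element itself on each cycle."""
        self.last_beat = time.monotonic()
        self.health = health

    def check(self) -> str:
        """Reflex reaction: a silent or degraded pulse escalates."""
        if time.monotonic() - self.last_beat > self.max_silence_s:
            return f"{self.element_id}: no pulse, escalate to external help"
        if self.health != "ok":
            return f"{self.element_id}: degraded ({self.health}), self-correct"
        return f"{self.element_id}: healthy"

m = PulseMonitor("storage-agent")
m.beat("high-latency")
print(m.check())  # -> degraded pulse triggers a local correction
```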
In the previous chapter we learned that the restricted Delaunay triangulation is a good approximation of a densely sampled surface Σ from both the topological and the geometric viewpoints. Unfortunately, we cannot compute this triangulation because the restricted Voronoi diagram Vor P|Σ cannot be computed without knowing Σ. As a remedy we approximate the restricted Voronoi diagram and compute a set of triangles that is a superset of all restricted Delaunay triangles. This set is pruned to extract a manifold surface which is output as an approximation to the sampled surface Σ.
Algorithm
First, we observe that each restricted Voronoi cell Vp|Σ = Vp ∩ Σ is almost flat if the sample is sufficiently dense. This follows from the Normal Variation Lemma 3.3 as the points in Vp|Σ cannot be far apart if ε is small. In particular, Vp|Σ lies within a thin neighborhood of the tangent plane τp at p. So, we need two approximations: (i) an approximation to τp or equivalently to np and (ii) an approximation to Vp|Σ based on the approximation to np. The following definitions of poles and cocones are used for these two approximations.
Poles and Cocones
Definition 4.1 (Poles). The Voronoi vertex in Vp farthest from p, denoted p+, is called the positive pole of p.
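A minimal Python sketch of Definition 4.1 using scipy is given below. Unbounded Voronoi cells are skipped here; the text handles that case separately (the pole direction is then taken from the unbounded edge), and the negative pole and the cocone construction are omitted.

```python
# Sketch: compute the positive pole p+ of each sample point, i.e. the
# Voronoi vertex of its cell farthest from the point (Definition 4.1).
import numpy as np
from scipy.spatial import Voronoi

def positive_poles(points: np.ndarray) -> dict[int, np.ndarray]:
    vor = Voronoi(points)
    poles = {}
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if not region or -1 in region:
            continue  # unbounded cell: needs the book's special handling
        verts = vor.vertices[region]
        dists = np.linalg.norm(verts - points[i], axis=1)
        poles[i] = verts[np.argmax(dists)]
    return poles

pts = np.random.default_rng(0).random((50, 3))
print(len(positive_poles(pts)), "bounded cells with poles")
```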
The algorithms for surface reconstruction in previous chapters assume that the input is noise-free. Although in practice all of them can handle some amount of displacements of the points away from the surface, they are not designed in principle to handle such data sets. As a result when the points are scattered around the sampled surface, these algorithms are likely to fail. In this chapter we describe an algorithm that is designed to tolerate noise in data.
The algorithm works with the Delaunay/Voronoi diagrams of the input points and draws upon some of the principles of the power crust algorithm. The power crust algorithm exploits the fact that the union of the polar balls approximates the solid bounded by the sampled surface. Obviously, this property does not hold in the presence of noise. Nevertheless, we have observed in Chapter 7 that, under some reasonable noise model, some of the Delaunay balls remain relatively big and can play the role of the polar balls. These balls are identified and partitioned into inner and outer balls. We show that the boundary of the union of the outer (or inner) big Delaunay balls is homeomorphic to the sampled surface. This immediately gives a homeomorphic surface reconstruction though the reconstructed surface may not interpolate the sample points.
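A hedged sketch of the first step, flagging the "big" Delaunay balls by circumradius, is shown below; the radius threshold is an illustrative assumption (the book derives it from the noise model), and the subsequent inner/outer classification is not reproduced.

```python
# Sketch: flag 'big' Delaunay balls, the circumscribing balls of Delaunay
# tetrahedra whose radius exceeds a threshold (threshold is an assumption).
import numpy as np
from scipy.spatial import Delaunay

def circumsphere(tet: np.ndarray):
    """Circumcenter and radius of a tetrahedron (4x3 array of vertices),
    from |c - a|^2 = |c - v|^2, i.e. 2c·(v - a) = |v|^2 - |a|^2."""
    a = tet[0]
    A = 2.0 * (tet[1:] - a)                       # 3x3 linear system
    b = np.sum(tet[1:] ** 2 - a ** 2, axis=1)
    center = np.linalg.solve(A, b)                # degenerate tets would fail
    return center, np.linalg.norm(center - a)

def big_delaunay_balls(points: np.ndarray, radius_threshold: float):
    tri = Delaunay(points)
    balls = []
    for simplex in tri.simplices:                 # each row: 4 vertex indices
        center, r = circumsphere(points[simplex])
        if r > radius_threshold:
            balls.append((center, r))
    return balls

pts = np.random.default_rng(1).random((200, 3))
print(len(big_delaunay_balls(pts, 0.15)), "big Delaunay balls found")
```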
Simply stated, the problem we study in this book is: how to approximate a shape from the coordinates of a given set of points from the shape. The set of points is called a point sample, or simply a sample of the shape. The specific shapes that we will deal with are curves in two dimensions and surfaces in three dimensions. The problem is motivated by the availability of modern scanning devices that can generate a point sample from the surface of a geometric object. For example, a range scanner can provide the depth values of the sampled points on a surface from which the three-dimensional coordinates can be extracted. Advanced handheld laser scanners can scan a machine or a body part to provide a dense sample of the surfaces. A number of applications in computer-aided design, medical imaging, geographic data processing, and drug design, to name a few, can take advantage of the scanning technology to produce samples and then compute a digital model of a geometric shape with reconstruction algorithms. Figure 1.1 shows such an example for a sample on a surface which is approximated by a triangulated surface interpolating the input points.
The reconstruction algorithms described in this book produce a piecewise linear approximation of the sampled curves and surfaces.
In the previous chapters we have assumed that the input points lie exactly on the sampled surface. Unfortunately, in practice, the input sample often does not satisfy this constraint. Noise introduced by measurement errors scatters the sample points away from the surface. Consequently, all analysis as presented in the previous chapters becomes invalid for such input points. In this chapter we develop a noise model that accounts for the scatter of the inputs and then analyze noisy samples based on this model. We will see that, as in the noise-free case, some key properties of the sampled surface can be computed from the Delaunay triangulation of a noisy sample. Specifically, we show that normals of the sampled surface can still be estimated from the Delaunay/Voronoi diagrams. Furthermore, the medial axis and hence the local feature sizes of the sampled surface can also be estimated from these diagrams. These results will be used in Chapters 8 and 9 where we present algorithms to reconstruct surfaces from noisy samples.
Noise Model
In the noise-free case, ε-sampling requires that each point on the surface have a sample point within a distance of ε times the local feature size. When noise is allowed, the sample points need not lie exactly on the surface and may scatter around it.
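To make the contrast concrete, a hedged paraphrase of such a noise model is sketched below in LaTeX; the exact conditions, exponents, and constants of the book's model may differ, so treat this as an illustrative assumption rather than the definition used in the text.

```latex
% Hedged paraphrase of a noisy sampling condition (illustrative; the
% book's exact model may use different exponents and constants).
% Here $f$ is the local feature size and $\tilde{p}$ the point of
% $\Sigma$ nearest to a sample $p$.
A sample $P$ is a noisy $\varepsilon$-sample of $\Sigma$ if
\begin{enumerate}
  \item every $x \in \Sigma$ has a sample $p \in P$ with
        $\|x - \tilde{p}\| \le \varepsilon\, f(x)$, and        % density
  \item every $p \in P$ satisfies
        $\|p - \tilde{p}\| \le \varepsilon\, f(\tilde{p})$.    % bounded scatter
\end{enumerate}
```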
Most of the surface reconstruction algorithms face a difficulty when dealing with undersampled surfaces and noise. While the algorithm described in Chapter 5 can detect undersampling, it leaves holes in the surface near the undersampled regions. Although this may be desirable for reconstructing surfaces with boundaries, many applications such as CAD designs require that the output surface be watertight, that is, a surface that bounds a solid. Ideally, this means that the watertight surface should be a compact 2-manifold without any boundary. The two algorithms that are described in this chapter produce these types of surfaces when the input sample is sufficiently dense. However, the algorithms are designed keeping in mind that the sample may not be sufficiently dense everywhere. So, in practice, the algorithms may not produce a perfect manifold surface but their output is watertight in the following sense:
Watertight surface: A 2-complex embedded in ℝ³ whose underlying space is the boundary of the closure of a 3-manifold in ℝ³.
Notice that the above definition allows the watertight surface to be nonmanifold. The closure of a 3-manifold can indeed introduce nonmanifold features; for example, a surface pinched at a point can arise as the boundary of the closure of a 3-manifold.
Power Crust
In Chapter 4, we have seen that the poles for a dense point sample lie quite far away from all samples (proof of the Pole Lemma 4.1) and hence from the surface. Indeed, they lie close to the medial axis.