The purpose of this monograph has been twofold. First, to provide a systematic account of some of the basic models and techniques developed by the modern theory of complex networks. Second, to illustrate the broad range of socioeconomic phenomena to which this theory can be naturally applied. In the latter pursuit, our approach is polar to that commonly espoused by classical game theory and economic analysis: it stresses the implications of complexity for the interaction structure, while downplaying the role of incentives in shaping agents' behavior. But, as repeatedly argued, neither of these one-sided perspectives can be judged satisfactory. In general, both complexity and incentive considerations should jointly play a key role in any proper understanding (and thus modeling) of most social phenomena.
To further elaborate on this point, it is useful to recapitulate some of the main benefits to be expected from explicitly accounting for complexity in the analysis of socioeconomic environments.
First, of course, the theoretical approach undoubtedly becomes more realistic, since many interesting social problems in the real world are embedded in complex and ever-changing social networks (cf. Chapter 1). It is fitting, therefore, that those problems should also be modeled in ways that respect such underlying complexity.
The aim of this book is to provide a systematic account of a recent body of theoretical research that lies at the intersection of two fertile strands of literature. One of these strands, the study of complex networks, is a new field that has been developing at a fast pace during the last decade. The other one, social network analysis, has been an active area of research in sociology and economics for quite some time now – only lately, however, has it started to be seriously concerned with the implications of complexity. There is, I believe, much potential in bringing these two approaches together to shed light on network-based phenomena in complex social environments. This monograph is written with the intention of helping both the social scientist and other network researchers in this fascinating endeavor.
For the social scientist, the monograph may be used, inter alia, as a self-contained introduction to some of the main issues and techniques that mark the modern literature on complex networks. Since this literature has largely developed as an outgrowth of statistical physics, some of the powerful methodology being used is often alien to researchers from other disciplines. On the other hand, for the network theorist who lacks an economic background, the present monograph can fulfill a reciprocal role. Specifically, it may serve as an illustration of the questions and concerns that inform the economists' approach to the study of socioeconomic networks.
In this final chapter, we study the interplay of search, diffusion, and play in the formation and ongoing evolution of complex social networks. First, we illustrate how the issue of network formation is addressed by that part of the received economic literature that highlights its strategic dimension. As will be explained in Section 6.1, even though game-theoretic models are concerned with an undoubtedly important aspect of the phenomenon, they also display a significant drawback. Their standard methodology (i.e. the full-rationality paradigm) as well as many of their implicit assumptions (e.g. a largely stable environment) abstract from the inherent complexity that pervades the real world.
In contrast, the approach pursued in the bulk of this chapter adopts a polar standpoint. It stresses the complexity of social networks but, as a contrasting limitation, it largely eschews the strategic concerns that also play an important role in many cases. The analysis ranges from models where local search is the main driving force in network formation (Section 6.2) to those where the focus is on the interaction between network evolution and agents' embedded behavior (Section 6.3). Finally, a brief reconsideration of matters is conducted in Section 6.4, where it is also suggested that a genuine integration of strategic and complexity considerations should be one of the primary objectives of future network analysis.
GAME-THEORETIC MODELS OF NETWORK FORMATION
Here we shall not attempt to provide an extensive description of the large body of literature that studies network formation from a game-theoretic viewpoint.
In the previous chapter we learned that the restricted Delaunay triangulation is a good approximation of a densely sampled surface Σ from both topological and geometric viewpoints. Unfortunately, we cannot compute this triangulation because the restricted Voronoi diagram Vor P|Σ cannot be computed without knowing Σ. As a remedy we approximate the restricted Voronoi diagram and compute a set of triangles that is a superset of all restricted Delaunay triangles. This set is pruned to extract a manifold surface which is output as an approximation to the sampled surface Σ.
Algorithm
First, we observe that each restricted Voronoi cell Vp|Σ = Vp ∩ Σ is almost flat if the sample is sufficiently dense. This follows from the Normal Variation Lemma 3.3 as the points in Vp|Σ cannot be far apart if ε is small. In particular, Vp|Σ lies within a thin neighborhood of the tangent plane τp at p. So, we need two approximations: (i) an approximation to τp or equivalently to np and (ii) an approximation to Vp|Σ based on the approximation to np. The following definitions of poles and cocones are used for these two approximations.
Poles and Cocones
Definition 4.1 (Poles). The farthest Voronoi vertex, denoted p+, in Vp is called the positive pole of p.
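As a concrete illustration of this definition, the positive pole can be read off directly from a computed Voronoi diagram. The following is a minimal sketch, assuming SciPy is available; the function and variable names are ours, not from the text. For each sample point with a bounded Voronoi cell it returns the farthest vertex of that cell:

```python
import numpy as np
from scipy.spatial import Voronoi

def positive_poles(points):
    """For each sample p, return the farthest vertex of its Voronoi
    cell Vp (the positive pole p+).  Unbounded cells, marked by the
    index -1 in SciPy, are skipped: None is returned for those points."""
    vor = Voronoi(points)
    poles = []
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if not region or -1 in region:      # unbounded Voronoi cell
            poles.append(None)
            continue
        verts = vor.vertices[region]
        dists = np.linalg.norm(verts - points[i], axis=1)
        poles.append(verts[np.argmax(dists)])
    return poles

# Toy example: only the interior points of a grid have bounded cells.
pts = np.array([[x, y] for x in range(4) for y in range(4)], dtype=float)
poles = positive_poles(pts)
```

The same code runs unchanged in three dimensions, which is the setting of the definition; there, as the text explains, the pole is used to approximate the normal direction np at p.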
The algorithms for surface reconstruction in previous chapters assume that the input is noise-free. Although in practice all of them can handle some amount of displacement of the points away from the surface, they are not designed in principle to handle such data sets. As a result, when the points are scattered around the sampled surface, these algorithms are likely to fail. In this chapter we describe an algorithm that is designed to tolerate noise in data.
The algorithm works with the Delaunay/Voronoi diagrams of the input points and draws upon some of the principles of the power crust algorithm. The power crust algorithm exploits the fact that the union of the polar balls approximates the solid bounded by the sampled surface. Obviously, this property does not hold in the presence of noise. Nevertheless, we have observed in Chapter 7 that, under some reasonable noise model, some of the Delaunay balls remain relatively big and can play the role of the polar balls. These balls are identified and partitioned into inner and outer balls. We show that the boundary of the union of the outer (or inner) big Delaunay balls is homeomorphic to the sampled surface. This immediately gives a homeomorphic surface reconstruction though the reconstructed surface may not interpolate the sample points.
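To make the ball-selection step concrete, here is a small sketch, assuming SciPy; the fixed radius threshold is a deliberately simplified stand-in for the selection criterion developed in the chapter, and all names are illustrative. It computes the circumscribing balls of the Delaunay simplices and keeps the big ones:

```python
import numpy as np
from scipy.spatial import Delaunay

def circumsphere(simplex_pts):
    """Circumcenter c and circumradius r of a d-simplex in R^d,
    found by solving the linear system 2 c . (p_i - p_0) = |p_i|^2 - |p_0|^2."""
    p0, rest = simplex_pts[0], simplex_pts[1:]
    A = 2.0 * (rest - p0)
    b = np.sum(rest ** 2 - p0 ** 2, axis=1)
    c = np.linalg.solve(A, b)
    return c, np.linalg.norm(c - p0)

def big_delaunay_balls(points, radius_threshold):
    """Return the (center, radius) pairs of Delaunay balls whose radius
    exceeds the threshold -- the candidates that, for noisy data, play
    the role of the polar balls."""
    tri = Delaunay(points)
    balls = []
    for s in tri.simplices:
        c, r = circumsphere(points[s])
        if r > radius_threshold:
            balls.append((c, r))
    return balls

# Example: a single right triangle has one Delaunay ball, of radius sqrt(2)/2.
tri_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
balls = big_delaunay_balls(tri_pts, 0.5)
```

The further partition of the big balls into inner and outer ones, and the proof that the boundary of their union is homeomorphic to the sampled surface, are the subject of the chapter itself.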
Simply stated, the problem we study in this book is: how to approximate a shape from the coordinates of a given set of points from the shape. The set of points is called a point sample, or simply a sample of the shape. The specific shapes that we will deal with are curves in two dimensions and surfaces in three dimensions. The problem is motivated by the availability of modern scanning devices that can generate a point sample from the surface of a geometric object. For example, a range scanner can provide the depth values of the sampled points on a surface from which the three-dimensional coordinates can be extracted. Advanced handheld laser scanners can scan a machine or a body part to provide a dense sample of the surfaces. A number of applications in computer-aided design, medical imaging, geographic data processing, and drug design, to name a few, can take advantage of the scanning technology to produce samples and then compute a digital model of a geometric shape with reconstruction algorithms. Figure 1.1 shows such an example for a sample on a surface which is approximated by a triangulated surface interpolating the input points.
The reconstruction algorithms described in this book produce a piecewise linear approximation of the sampled curves and surfaces.
In the previous chapters we have assumed that the input points lie exactly on the sampled surface. Unfortunately, in practice, the input sample often does not satisfy this constraint. Noise introduced by measurement errors scatters the sample points away from the surface. Consequently, all analysis as presented in the previous chapters becomes invalid for such input points. In this chapter we develop a noise model that accounts for the scatter of the inputs and then analyze noisy samples based on this model. We will see that, as in the noise-free case, some key properties of the sampled surface can be computed from the Delaunay triangulation of a noisy sample. Specifically, we show that normals of the sampled surface can still be estimated from the Delaunay/Voronoi diagrams. Furthermore, the medial axis and hence the local feature sizes of the sampled surface can also be estimated from these diagrams. These results will be used in Chapters 8 and 9 where we present algorithms to reconstruct surfaces from noisy samples.
Noise Model
In the noise-free case, ε-sampling requires that each point on the surface have a sample point within a distance of ε times the local feature size. When noise is allowed, the sample points need not lie exactly on the surface and may scatter around it.
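Before the condition is relaxed, the noise-free requirement is easy to make concrete on the simplest example. For the unit circle the medial axis is its center, so the local feature size is 1 everywhere; n equally spaced samples then form an ε-sample exactly when ε is at least the distance from the worst-placed circle point to its nearest sample. A small sketch (the function name is ours, for illustration only):

```python
import math

def min_eps_uniform_circle(n):
    """Smallest eps for which n equally spaced points on the unit circle
    form an eps-sample.  Since lfs = 1 everywhere, the binding point is
    the arc midpoint between adjacent samples, at angle pi/n from each,
    hence at chord distance 2*sin(pi/(2n))."""
    return 2.0 * math.sin(math.pi / (2 * n))

# The required density improves roughly linearly with n:
for n in (10, 100, 1000):
    print(n, min_eps_uniform_circle(n))
```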
Most of the surface reconstruction algorithms face a difficulty when dealing with undersampled surfaces and noise. While the algorithm described in Chapter 5 can detect undersampling, it leaves holes in the surface near the undersampled regions. Although this may be desirable for reconstructing surfaces with boundaries, many applications such as CAD designs require that the output surface be watertight, that is, a surface that bounds a solid. Ideally, this means that the watertight surface should be a compact 2-manifold without any boundary. The two algorithms that are described in this chapter produce these types of surfaces when the input sample is sufficiently dense. However, the algorithms are designed keeping in mind that the sample may not be sufficiently dense everywhere. So, in practice, the algorithms may not produce a perfect manifold surface but their output is watertight in the following sense:
Watertight surface: A 2-complex embedded in ℝ3 whose underlying space is a boundary of the closure of a 3-manifold in ℝ3.
Notice that the above definition allows the watertight surface to be nonmanifold. The closure of a 3-manifold can indeed introduce nonmanifold features; for example, a surface pinched at a point can be in the closure of a 3-manifold.
Power Crust
In Chapter 4, we saw that the poles for a dense point sample lie quite far away from all sample points (see the proof of the Pole Lemma 4.1) and hence from the surface. Indeed, they lie close to the medial axis.
The subject of this book is the approximation of curves in two dimensions and surfaces in three dimensions from a set of sample points. This problem, called reconstruction, appears in various engineering applications and scientific studies. What is special about the problem is that it offers an application where mathematical disciplines such as differential geometry and topology interact with computational disciplines such as discrete and computational geometry. One of my goals in writing this book has been to collect and disseminate the results obtained by this confluence. The research on geometry and topology of shapes in the discrete setting has gained a momentum through the study of the reconstruction problem. This book, I hope, will serve as a prelude to this exciting new line of research.
To maintain focus and brevity, I chose a few algorithms that have provable guarantees. It so happens, though quite naturally, that they all use the well-known data structures of the Voronoi diagram and the Delaunay triangulation. Indeed, these discrete geometric data structures offer discrete counterparts to many of the geometric and topological properties of shapes. Naturally, the Voronoi and Delaunay diagrams have been a common thread for the material in the book.
This book originated from the class notes of a seminar course “Sample-Based Geometric Modeling” that I taught for four years at the graduate level in the computer science department of The Ohio State University.
The surface reconstruction algorithm in the previous chapter assumes that the sample is sufficiently dense, that is, ε is sufficiently small. However, the cases of undersampling where this density condition is not met are prevalent in practice. The input data may be dense only in parts of the sampled surface. Regions with small features such as high curvatures are often not well sampled. When sampled with scanners, occluded regions are not sampled at all. Nonsmooth surfaces such as the ones considered in CAD are bound to have undersampling since no finite point set can sample nonsmooth regions to satisfy the ε-sampling condition for a strictly positive ε. Even some surfaces with boundaries can be viewed as a case of undersampling. If Σ is a surface without boundary and Σ′ ⊂ Σ is a surface with boundary, a sample of Σ′ is also a sample of Σ. This sample may be dense for Σ′ and not for Σ.
In this chapter we describe an algorithm that detects the regions of undersampling. This detection helps in reconstructing surfaces with boundaries. Later, we will see that this detection also helps in repairing the unwanted holes created in the reconstructed surface due to undersampling.
Samples and Boundaries
Let P be an input point set that samples a surface Σ where Σ does not have any boundary.
The simplest class of manifolds that pose nontrivial reconstruction problems are curves in the plane. We will describe two algorithms for curve reconstruction, Crust and NN-Crust, in this chapter. First, we will develop some general results that will be applied to prove the correctness of both algorithms.
A single curve in the plane is defined by a map ξ: [0, 1] → ℝ2 where [0, 1] is the closed interval between 0 and 1 on the real line. The function ξ is one-to-one everywhere except at the endpoints, where ξ(0) = ξ(1). The curve is C1-smooth if ξ has a continuous nonzero first derivative in the interior of [0, 1] and the right derivative at 0 is the same as the left derivative at 1, both being nonzero. If ξ has continuous ith derivatives, i ≥ 1, at each point as well, the curve is called Ci-smooth. When we refer to a curve Σ in the plane, we actually mean the image of one or more such maps. By definition Σ does not self-intersect, though it can have multiple components, each of which is a closed curve, that is, without any endpoint.
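The conditions in this definition can be checked numerically for a concrete map. The sketch below (illustrative names, not from the text) parameterizes the unit circle over [0, 1] and verifies the endpoint conditions for C1-smoothness, up to floating-point tolerance:

```python
import math

def xi(t):
    """A closed C1 (indeed C-infinity) curve: the unit circle,
    parameterized over [0, 1] with xi(0) = xi(1)."""
    a = 2 * math.pi * t
    return (math.cos(a), math.sin(a))

def xi_prime(t):
    """First derivative of xi; never zero on [0, 1]."""
    a = 2 * math.pi * t
    return (-2 * math.pi * math.sin(a), 2 * math.pi * math.cos(a))

def close(p, q, tol=1e-9):
    return all(abs(a - b) <= tol for a, b in zip(p, q))

# Endpoint conditions from the definition: xi(0) = xi(1), and the
# right derivative at 0 equals the left derivative at 1 (both nonzero).
assert close(xi(0), xi(1))
assert close(xi_prime(0), xi_prime(1))
```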
For a finite sample to be an ε-sample for some ε > 0, it is essential that the local feature size f is strictly positive everywhere.