In this paper, we study birth/immigration-death processes under mild (binomial) catastrophes. We obtain explicit expressions for both the time-dependent (transient) and the limiting (equilibrium) factorial moments, which are then used to construct the transient and equilibrium distribution of the population size. We demonstrate that our approach is also applicable to multidimensional systems such as stochastic processes operating under a random environment and other variations of the model at hand. We also obtain various stochastic order results for the number of individuals with respect to the system parameters, as well as the relaxation time.
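The abstract fixes no notation, but the model can be illustrated with a minimal Gillespie-style Monte Carlo sketch. All parameter names here are hypothetical: an immigration rate `lam`, a per-individual death rate `mu`, a catastrophe rate `gamma`, and a binomial survival probability `p` (at a catastrophe each individual survives independently with probability `p`):

```python
import random

def simulate(lam, mu, gamma, p, t_end, seed=0):
    """Gillespie simulation of an immigration-death process with
    binomial catastrophes; returns the post-jump population sizes."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    samples = []
    while t < t_end:
        rate = lam + n * mu + gamma        # total event rate
        t += rng.expovariate(rate)
        u = rng.random() * rate
        if u < lam:                        # immigration of one individual
            n += 1
        elif u < lam + n * mu:             # death of one individual
            n -= 1
        else:                              # binomial catastrophe:
            n = sum(1 for _ in range(n) if rng.random() < p)  # each survives w.p. p
        samples.append(n)
    return samples
```

Time-averaging such sample paths gives empirical counterparts against which the explicit transient and equilibrium moments can be checked.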
We consider a monotone binary system with ternary components. “Ternary” means that each component can be in one of three states: up, middle (mid), and down. Handling such systems is a hard task, even if some of the components have no mid state. Nevertheless, the permutation Monte Carlo methods that proved very useful for dealing with binary components can also be used efficiently for ternary monotone systems. It turns out that for a ternary system there also exists a combinatorial invariant by means of which it becomes possible to count the number C(r; x) of system failure sets having exactly r components in the up state and x components in the down state. This invariant is called the ternary D-spectrum, and it is an analogue of the D-spectrum (or signature) of a system with binary components. Its value lies in capturing system failure or path set properties that do not depend on the stochastic mechanism governing component failures. In the case of independent and identically distributed components, knowing the D-spectrum makes it easy to calculate system UP or DOWN probabilities for a variety of UP/DOWN definitions suitable for systems of many types, such as communication networks, flow and supply networks, etc.
The ridgelet and curvelet transforms generalize the wavelet transform. First, they incorporate angular alignment information; in addition, the length of the alignment is taken into account. As with all of these transforms, multiple scales are supported. The motivation for these transforms is to build up an image from edge-related building blocks. Furthermore, as in previous chapters, the efficiency of computing these transforms is an important practical aspect.
In this chapter we consider the ridgelet transform and a number of algorithms for its implementation. Then we proceed to the curvelet transform and algorithms for it.
BACKGROUND AND EXAMPLE
Wavelets rely on a dictionary of roughly isotropic elements occurring at all scales and locations. They do not describe highly anisotropic elements well, and contain only a fixed number of directional elements, independent of scale. Despite their wide impact in image processing, they fail to efficiently represent objects with highly anisotropic elements such as lines or curvilinear structures (e.g., edges). The reason is that wavelets are nongeometrical and do not exploit the regularity of the edge curve. Following this reasoning, new constructions have been proposed, such as ridgelets (Candès and Donoho 1999) and curvelets (Candès and Donoho 2001, 2002; Starck et al. 2002).
Ridgelets and curvelets are special members of the family of multiscale orientation-selective transforms, which have recently led to a flurry of research activity in the field of computational and applied harmonic analysis. Many other constructions belonging to this family have been investigated, and go by the name of contourlets (Do and Vetterli 2003b), directionlets (Velisavljevic et al. 2006), platelets (Willett and Nowak 2003), bandlets (Le Pennec and Mallat 2005; Peyre and Mallat 2007), grouplets (Mallat 2009), shearlets (Labate et al. 2005), dual-tree complex wavelet transform (Kingsbury 1998; Selesnick et al. 2005) (see Section 3.4), and other complex directional wavelet transforms (Fernandes et al. 2003; van Spaendonck et al. 2003; Fernandes et al. 2004).
The ridgelet and the curvelet (Candès and Donoho 1999, 2002) transforms were developed as an answer to the weakness of the separable wavelet transform in sparsely representing what appear to be simple building-block atoms in an image, that is, lines, curves, and edges. Curvelets and ridgelets take the form of basis elements which exhibit high directional sensitivity and are highly anisotropic (Donoho and Duncan 2000; Candès and Donoho 2002; Starck et al. 2002).
A data set can be decomposed in many dictionaries, and we argue in this book that the “best” dictionary is the one providing the sparsest (most economical) representation. In practice, it is convenient to use dictionaries with a fast implicit transform (such as those described in detail in the previous chapters), which allow us to directly obtain the coefficients and to reconstruct the signal from them using fast algorithms running in linear or almost linear time (unlike matrix-vector multiplications). We have also seen in Chapter 8 that fixed dictionaries can be gathered together to build a larger dictionary that can describe the data in a more versatile way. All these dictionaries are designed to handle specific content and are restricted to signals and images of a certain type. For instance, the Fourier dictionary represents stationary and periodic signals well, wavelets are good for analyzing isotropic objects of different scales, and curvelets are designed for anisotropic and curvilinear features. Hence, the representation space that we use in our analysis can be seen as a prior we have on our data. Fixed dictionaries, though they have very fast implicit analysis and synthesis operators, which makes them attractive from a practical point of view, cannot guarantee sparse representations of new classes of signals of interest that present more complex patterns and features. What can one do if the data cannot be represented sufficiently sparsely by any of these fixed (or combined) existing dictionaries, or if the morphology of the features contained in the data is not known? Is there a way to make our data analysis more adaptive by optimizing for a dedicated dictionary? To answer these questions, a new field has emerged called dictionary learning (DL). Dictionary learning offers the possibility of learning an adaptive dictionary Φ directly from the data (or from a set of exemplars that we believe represent the data well).
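The alternating structure common to many DL algorithms can be sketched as follows. This is not the chapter's specific algorithm but a generic scheme (function name hypothetical): a k-sparse thresholded coding step alternated with a MOD-style least-squares dictionary update:

```python
import numpy as np

def learn_dictionary(X, n_atoms, k, n_iter=20, seed=0):
    """Generic alternating dictionary learning sketch.
    X has shape (dim, n_samples); returns (dictionary D, sparse codes C)."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    for _ in range(n_iter):
        # sparse coding: keep the k largest-magnitude correlations per sample
        C = D.T @ X
        drop = np.argsort(np.abs(C), axis=0)[:-k, :]
        np.put_along_axis(C, drop, 0.0, axis=0)
        # dictionary update (MOD): least-squares fit of D to X given C
        D = X @ np.linalg.pinv(C)
        D /= np.linalg.norm(D, axis=0) + 1e-12     # renormalize atoms
    return D, C
```

The thresholded coding step is a crude stand-in for a proper sparse coder (e.g., orthogonal matching pursuit); the point is only to show the alternation between coding and dictionary update.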
DL is at the interface of machine learning and signal processing.
The problem of dictionary learning in its overdetermined form (that is, when the number of atoms in the dictionary is smaller than or equal to the ambient dimension of the signal) has been studied in depth and can be approached using many viable techniques such as principal component analysis (PCA) and its variants.
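In this overdetermined regime, PCA itself already yields a dictionary: the leading left singular vectors of the centered data. A minimal NumPy sketch (function name hypothetical):

```python
import numpy as np

def pca_dictionary(X, n_atoms):
    """Return the first n_atoms principal directions of X as dictionary atoms.
    X has shape (dim, n_samples); requires n_atoms <= dim."""
    Xc = X - X.mean(axis=1, keepdims=True)         # center each dimension
    U, _, _ = np.linalg.svd(Xc, full_matrices=False)
    return U[:, :n_atoms]                          # orthonormal atom matrix
```

The returned atoms are orthonormal, which is precisely what distinguishes this regime from the overcomplete one, where orthogonality must be given up.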
Many wavelet transforms on the sphere have been proposed in past years. Using the lifting scheme, Schröder and Sweldens (1995) developed an orthogonal Haar wavelet transform on any surface, which can be directly applied on the sphere. Its interest is, however, relatively limited because of the poor properties of the Haar function and the problems inherent to orthogonal transforms.
More interestingly, many papers have presented new continuous wavelet transforms (Antoine 1999; Tenorio et al. 1999; Cayón et al. 2001; Holschneider 1996). These works have been extended to directional wavelet transforms (Antoine et al. 2002; McEwen et al. 2007). All these continuous wavelet decompositions are useful for data analysis, but cannot be used for restoration purposes because of the lack of an inverse transform. Freeden and Windheuser (1997) and Freeden and Schneider (1998) proposed the first redundant wavelet transform, based on the spherical harmonics transform, which presents an inverse transform. Starck et al. (2006) proposed an invertible isotropic undecimated wavelet transform (IUWT) on the sphere, also based on spherical harmonics, which has the same property as the starlet transform, that is, the sum of the wavelet scales reproduces the original image. A similar wavelet construction (Marinucci et al. 2008; Faÿ and Guilloux 2011; Fay et al. 2008) used the so-called needlet filters. Wiaux et al. (2008) also proposed an algorithm which permits the reconstruction of an image from its steerable wavelet transform. Since reconstruction algorithms are available, these new tools can be used for many applications such as denoising, deconvolution, component separation (Moudden et al. 2005; Bobin et al. 2008; Delabrouille et al. 2009), and inpainting (Abrial et al. 2007; Abrial et al. 2008).
Extensions to the sphere of 2-D geometric multiscale decompositions, such as the ridgelet transform and the curvelet transform, were presented in Starck et al. (2006).
The goal of this chapter is to give an overview of these multiscale transforms on the sphere. Section 12.2 overviews the hierarchical equal area isolatitude pixelization (HEALPix) scheme for the sphere and the spherical harmonics transform. Section 12.3 shows how a fast orthogonal Haar wavelet transform on the sphere can be built using HEALPix. In Section 12.5, we present an isotropic wavelet transform on the sphere which has properties similar to those of the starlet transform and should therefore be very useful for data denoising and deconvolution.
With the increasing computing power and memory storage capabilities of computers, it has become feasible to analyze 3-D data as a volume. Among the simplest transforms extended to 3-D are the separable wavelet transform (decimated, undecimated, or any other kind) and the discrete cosine transform (DCT), because these are separable transforms and thus the extension is straightforward. The DCT is mainly used in video compression, but has also been used in denoising (Rusanovskyy and Egiazarian 2005). As for the 3-D wavelets, they have already been used in denoising applications in many domains (Selesnick and Li 2003; Dima et al. 1999; Chen and Ning 2004).
However, these separable transforms lack the directional nature that has facilitated the success of 2-D transforms such as curvelets. Consequently, a lot of effort has been made in recent years to build sparse 3-D data representations that better represent the geometrical features contained in the data. The 3-D beamlet transform (Donoho and Levi 2002) and the 3-D ridgelet transform (Starck et al. 2005a) were designed for 1-D and 2-D feature detection, respectively. Video denoising using the ridgelet transform was proposed in Carre et al. (2003). These transforms were combined with 3-D wavelets to build BeamCurvelets and RidCurvelets (Woiselle et al. 2010), which are extensions of the first-generation curvelets (Starck et al. 2002). Whereas most 3-D transforms are adapted to plate-like features, the BeamCurvelet transform is adapted to filaments of different scales and different orientations. Another extension of the curvelets to 3-D is the 3-D fast curvelet transform (Ying et al. 2005), which consists of paving the Fourier domain with angular wedges in dyadic concentric squares, using the parabolic scaling law to fix the number of angles depending on the scale; its atoms are designed for representing surfaces in 3-D. The Surflet transform (Chandrasekaran et al. 2004) – a d-dimensional extension of the 2-D wedgelets (Donoho 1999; Romberg et al. 2002) – has been studied for compression purposes (Chandrasekaran et al. 2009). Surflets are an adaptive transform estimating each cube of a quad-tree decomposition of the data by two regions of constant value separated by a polynomial surface. Another possible representation uses the Surfacelets developed by Lu and Do (2005). It relies on the combination of a Laplacian pyramid and a d-dimensional directional filter bank.
Graphs are a powerful representation formalism that can be applied to a variety of aspects related to language processing. We provide an overview of how Natural Language Processing problems have been projected into the graph framework, focusing in particular on graph construction – a crucial step in modeling the data to emphasize the phenomena targeted.
Resources and their use and consumption form a central part of our life. Many branches of science and engineering are concerned with the question of which given resource objects can be converted into which target resource objects. For example, information theory studies the conversion of a noisy communication channel instance into an exchange of information. Inspired by work in quantum information theory, we develop a general mathematical toolbox for this type of question. The convertibility of resources into other ones and the possibility of combining resources is accurately captured by the mathematics of ordered commutative monoids. As an intuitive example, we consider chemistry, where chemical reaction equations such as
\mathrm{2H_2 + O_2} \longrightarrow \mathrm{2H_2O},
are concerned both with a convertibility relation ‘→’ and a combination operation ‘+.’ We study ordered commutative monoids from an algebraic and functional-analytic perspective and derive a wealth of results which should have applications to concrete resource theories, such as a formula for rates of conversion. As a running example showing that ordered commutative monoids are also of purely mathematical interest without the resource-theoretic interpretation, we exemplify our results with the ordered commutative monoid of graphs.
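As a toy illustration (not taken from the paper), the combination operation ‘+’ can be modeled by multiset addition and the convertibility relation ‘→’ by reachability under a set of reaction rules; all names below are hypothetical:

```python
from collections import Counter, deque

def convertible(start, target, reactions, max_steps=10):
    """Breadth-first search: can `start` be converted into (at least) `target`
    using the given reaction rules?  States are multisets of resource names."""
    start, target = Counter(start), Counter(target)
    queue = deque([(start, 0)])
    seen = {frozenset(start.items())}
    while queue:
        state, depth = queue.popleft()
        # target reached if the state dominates it (leftover resources allowed)
        if all(state[k] >= v for k, v in target.items()):
            return True
        if depth == max_steps:
            continue
        for lhs, rhs in reactions:
            if all(state[k] >= v for k, v in lhs.items()):  # rule applicable
                nxt = state.copy()
                nxt.subtract(lhs)
                nxt.update(rhs)
                nxt = +nxt                                   # drop zero counts
                key = frozenset(nxt.items())
                if key not in seen:
                    seen.add(key)
                    queue.append((nxt, depth + 1))
    return False

# the water reaction from the text: 2 H2 + O2 -> 2 H2O
water = [(Counter({'H2': 2, 'O2': 1}), Counter({'H2O': 2}))]
```

The multiset union plays the role of the monoid operation and reachability gives the (pre)order, making the ordered-commutative-monoid structure concrete in a bounded, finite setting.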
While closely related both to Girard's linear logic and to Deutsch's constructor theory, our framework also produces results very reminiscent of the utility theorem of von Neumann and Morgenstern in decision theory and of a theorem of Lieb and Yngvason on the foundations of thermodynamics.
Concerning pure algebra, our observation is that some pieces of algebra can be developed in a context in which equality is not necessarily symmetric, i.e. in which the equality relation is replaced by an ordering relation. For example, notions like cancellativity or torsion-freeness are still sensible and very natural concepts in our ordered setting.
This paper studies optimal switching on and off of the entire service capacity of an M/M/∞ queue with holding, running and switching costs. The running costs depend only on whether the system is on or off, and the holding costs are linear. The goal is to minimize average costs per unit time. The main result is that an average-cost optimal policy either always runs the system or is an (M, N)-policy defined by two thresholds M and N, such that the system is switched on upon an arrival epoch when the system size accumulates to N and is switched off upon a departure epoch when the system size decreases to M. It is shown that this optimization problem can be reduced to a problem with a finite number of states and actions, and an average-cost optimal policy can be computed via linear programming. An example, in which the optimal (M, N)-policy outperforms the best (0, N)-policy, is provided. Thus, unlike the case of single-server queues studied in the literature, (0, N)-policies may not be average-cost optimal.
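The paper computes optimal policies via linear programming; as a simpler complement, the average cost of any given (M, N)-policy can be estimated by Monte Carlo simulation. The sketch below uses hypothetical parameter names (`h` for the linear holding cost rate, `c_on` for the running cost while on, `c_switch` per switch):

```python
import random

def avg_cost_MN(lam, mu, M, N, h, c_on, c_switch, t_end=1000.0, seed=1):
    """Monte Carlo estimate of the long-run average cost per unit time of an
    (M, N)-policy for an M/M/inf queue whose entire capacity is switched on/off."""
    rng = random.Random(seed)
    t, n, on = 0.0, 0, False
    cost = 0.0
    while t < t_end:
        rate = lam + (n * mu if on else 0.0)            # departures only while on
        dt = rng.expovariate(rate)
        cost += (h * n + (c_on if on else 0.0)) * dt    # holding + running cost
        t += dt
        if rng.random() * rate < lam:                    # arrival
            n += 1
            if not on and n >= N:                        # switch on at threshold N
                on = True
                cost += c_switch
        else:                                            # departure
            n -= 1
            if on and n <= M:                            # switch off at threshold M
                on = False
                cost += c_switch
    return cost / t
```

Scanning such estimates over (M, N) pairs gives a quick empirical check of the paper's claim that the best (M, N)-policy can strictly beat the best (0, N)-policy.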
We analyze a continuous review inventory model with the marginal carrying cost of a unit of inventory given by an increasing function of its shelf age and the marginal delay cost of a backlogged demand unit by an increasing function of its delay duration. We show that, under a minor restriction, an (r, q)-policy is optimal when the demand process is a renewal process, and a state dependent (r, q)-policy is optimal when the demand is a Markov-modulated renewal process. We also derive various monotonicity properties for the optimal policy parameters r* and r* + q*.
Configuration and parameterization of optimization frameworks for the computational support of design exploration can become a prohibitive barrier to the adoption of such systems by engineers. This work addresses the problem of defining the elements that constitute a multiple-objective design optimization problem, that is, design variables, constants, objective functions, and constraint functions. In light of this, contributions from the field of evolutionary design optimization are reviewed with respect to their concrete implementation for design exploration. Machine learning and natural language processing are expected to enable feasible approaches to supporting configuration and parameterization. Hence, the authors further review promising machine learning and natural language processing methods for automatic knowledge elicitation and formalization with respect to their implementation for evolutionary design optimization. These methods come from the fields of product attribute extraction, clustering of design solutions, relationship discovery, computation of objective functions, metamodeling, and design pattern extraction.
This paper presents research in the development of heuristic evolutionary algorithms (EAs) for generating and exploring differentiated force-based structures. The algorithm is weighted toward design exploration of topological differentiation while including specific structural and material constraints. An embryological EA model is employed to “grow” networks of mass-spring elements achieving desired mesh densities that resolve themselves in tensile force (form-active) equilibrium. The primal quadrilateral quadrisection method serves as the foundation for a range of extensible subdivision methods. Unique to this research, the quad is addressed as a “cell” rather than a topological or geometric construct, allowing for the contents of the cell to vary in number of mass-spring elements and orientation. In this research, this approach has been termed the quadrilateral quadrisection with n variable topological transformation method. This research culminates with the introduction of a method for grafting meshes where emergent features from the evolved meshes can be transposed and replicated in an explicit yet informed manner. The EA and grafting methods function within a Java-based software called springFORM, developed in previous research, which utilizes a mass-spring based library for solving force equilibrium and allows for both active (manual) and algorithmic topology manipulation. In application to a specific complex tensile mesh, the design framework, which combines the generative EA and mesh grafting method, is shown to produce emergent and highly differentiated topological arrangements that negotiate the specific relationships among a desired maximal mesh density, geometric patterning, and equalized force distribution.
Nowadays, on the basis of the significant work carried out, adaptive architectural structures are considered intelligent entities, able to react to various internal or external influences. Their adaptive behavior can be examined in a digital or physical environment, generating a variety of alternative solutions or structural transformations. These are controlled through different computational approaches, ranging from interactive exploration approaches, which produce alternative emergent results, to automated optimization approaches, which yield acceptably fitting solutions. This paper examines the adaptive behavior of a kinetic structure, aiming to explore suitable solutions resulting in appropriate final shapes during the transformation process. A machine learning methodology that implements an artificial neural network algorithm is integrated into the suggested structure. The latter is formed by units articulated together in a sequential composition consisting of primary soft mechanisms and secondary rigid components that are responsible for its reconfiguration and stiffness. A number of case studies that respond to unstructured environments are set as examples to test the effectiveness of the proposed methodology in handling a large number of input data and in optimizing the complex and nonlinear transformation behavior of the kinetic system at the global level, as a result of the units’ local activation that influences nearby units in a chaotic and unpredictable manner.
A platform for experimenting with population-based design exploration algorithms is presented, called Dexen. The platform has been developed in order to address the needs of two distinct groups of users loosely labeled as researchers and designers. Whereas the researchers group focuses on creating and testing customized toolkits, the designers group focuses on applying these toolkits in the design process. A platform is required that is scalable and extensible: scalable to allow computationally demanding population-based exploration algorithms to be executed on distributed hardware within reasonable time frames, and extensible to allow researchers to easily implement their own customized toolkits consisting of specialized algorithms and user interfaces. In order to address these requirements, a three-tier client–server system architecture has been used that separates data storage, domain logic, and presentation. This separation allows customized toolkits to be created for Dexen without requiring any changes to the data or logic tiers. In the logic tier, Dexen uses a programming model in which tasks only communicate through data objects stored in a key-value database. The paper ends with a case study experiment that uses a multicriteria evolutionary algorithm toolkit to explore alternative configurations for the massing and façade design of a large residential development. The parametric models for developing and evaluating design variants are described in detail. A population of design variants is evolved, a number of which are selected for further analysis. The case study demonstrates how evolutionary exploration methods can be applied to a complex design scenario without requiring any scripting.
Building performance simulation and genetic algorithms are powerful techniques for helping designers make better design decisions in architectural design optimization. However, they are very time consuming and require a significant amount of computing power, and even more time is needed when the two techniques work together. This has become the primary impediment to applying design optimization in real-world projects. This study focuses on reducing the computing time of genetic algorithms when building simulation techniques are involved. We combine two techniques (offline simulation and divide and conquer) to effectively improve the run time of these architectural design optimization problems, utilizing architecture-specific domain knowledge. The improved methods are evaluated with a case study of a nursing unit design to minimize the nurses’ travel distance and maximize daylighting performance in patient rooms. Results show that the computing time can be reduced significantly during the simulation and optimization process.
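As a schematic illustration of the offline-simulation idea (not the study's actual implementation): the expensive simulation is run once over a discretized design space, so that later optimization-time evaluations reduce to table lookups. The stand-in objective and all names below are hypothetical:

```python
def expensive_sim(x):
    # stand-in for a costly building-performance simulation run
    return (x - 3) ** 2

design_grid = range(0, 11)                            # discretized design space
table = {x: expensive_sim(x) for x in design_grid}    # offline phase: simulate once

def fitness(x):
    return table[x]                                   # online phase: lookup, no simulation

best = min(design_grid, key=fitness)                  # cheap search over cached results
```

In a genetic algorithm, every fitness call during evolution would hit the precomputed table instead of re-running the simulator, which is where the claimed time savings come from.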