Performance evaluation of a parallel robot is a multicriteria problem. Taking the Delta robot as the object of study, this paper presents the kinematic performance evaluation of a three-translational-degrees-of-freedom parallel robot from the viewpoints of singularity, isotropy, and velocity transmission. It is shown that the determinant of a single Jacobian matrix cannot measure the distance from a singular configuration because of the inverse kinematic singularities of the Delta robot. The determinants of the inverse and direct kinematic Jacobian matrices are therefore adopted to measure the distance from a singular configuration, based on the theory of numerical linear dependence. When the end-effector lies on the centerline of the workspace, the denominator of the Jacobian matrix cancels in the computation of the condition number, so the Delta robot may still be close to a singular configuration even when the condition number of the Jacobian matrix equals 1. A velocity transmission index is presented whose physical meaning is the maximum input angular velocity when the end-effector translates at unit velocity. The singularity, isotropy, and velocity transmission of the Delta robot are evaluated by simulation. By the principle of similarity, the velocity transmission index can also be used to evaluate the velocity transmission of a parallel robot with purely rotational degrees of freedom; its physical meaning then becomes the maximum input velocity when the end-effector rotates at unit angular velocity.
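As a minimal numerical sketch of these measures (the Jacobians below are illustrative placeholders, not the pose-dependent Delta-robot matrices derived in the paper), the determinants, condition number, and a velocity-transmission-style index can be computed with standard linear algebra routines:

```python
import numpy as np

# Illustrative 3x3 inverse and direct kinematic Jacobians J_q and J_x (assumed values),
# with the assumed velocity mapping J_q * q_dot = J_x * p_dot, i.e. q_dot = J_q^{-1} J_x p_dot.
J_q = np.diag([0.9, 1.1, 1.0])
J_x = np.array([[0.8, 0.1, 0.0],
                [0.1, 0.7, 0.2],
                [0.0, 0.2, 0.9]])

# Distance-from-singularity indicators: each determinant vanishes at an
# inverse or direct kinematic singularity, respectively.
print(np.linalg.det(J_q), np.linalg.det(J_x))

# Isotropy indicator: condition number of the overall Jacobian J = J_q^{-1} J_x
# (equal to 1 only at isotropic configurations).
J = np.linalg.solve(J_q, J_x)
print(np.linalg.cond(J))

# Velocity-transmission-style index: the largest joint rate produced by a
# unit-speed end-effector translation, i.e. the largest singular value of J.
print(np.linalg.svd(J, compute_uv=False)[0])
```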
Compressed sensing (CS) is an exciting, rapidly growing field that has attracted considerable attention in electrical engineering, applied mathematics, statistics, and computer science. Since its initial introduction several years ago, an avalanche of results, both theoretical and practical, has been obtained, and various conferences, workshops, and special sessions have been dedicated to this growing research field. This book provides the first comprehensive introduction to the subject, highlighting recent theoretical advances and a range of applications, as well as outlining numerous remaining research challenges.
CS offers a framework for simultaneous sensing and compression of finite-dimensional vectors that relies on linear dimensionality reduction. Quite surprisingly, it predicts that sparse high-dimensional signals can be recovered from highly incomplete measurements by efficient algorithms. To be more specific, let x be a length-n vector. In CS we do not measure x directly, but rather acquire m < n linear measurements of the form y = Ax using an m × n CS matrix A. Ideally, the matrix is designed to reduce the number of measurements as much as possible while allowing for recovery of a wide class of signals from their measurement vectors y. Thus, we would like to choose m ≪ n. However, this renders the matrix A rank-deficient, meaning that it has a nontrivial nullspace. This implies that for any particular signal x0, an infinite number of signals x will yield the same measurements y = Ax = Ax0 for the chosen CS matrix.
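A small numpy sketch (using an arbitrary Gaussian matrix purely for illustration) makes this non-uniqueness concrete: adding any nullspace vector of A to x0 leaves the measurements unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 100                       # m << n: far fewer measurements than unknowns
A = rng.standard_normal((m, n))      # illustrative CS matrix

x0 = np.zeros(n)
x0[[3, 40, 77]] = [1.0, -2.0, 0.5]   # a sparse signal
y = A @ x0                           # the m linear measurements

# The nullspace of A is (n - m)-dimensional; take any vector z in it.
_, _, Vt = np.linalg.svd(A)
z = Vt[m:].T @ rng.standard_normal(n - m)

print(np.allclose(A @ (x0 + z), y))  # True: x0 and x0 + z give identical measurements
```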
Modern functional logic programming (FLP) languages use non-terminating and non-confluent constructor systems (CSs) as programs in order to define non-strict and non-deterministic functions. Two semantic alternatives have usually been considered for parameter passing with this kind of function: call-time choice and run-time choice. While the former is the standard choice of modern FLP languages, the latter lacks some basic properties – mainly compositionality – that have prevented its use in practical FLP systems. Traditionally it has been considered that call-time choice induces a singular denotational semantics, while run-time choice induces a plural semantics. We have discovered that the latter identification is wrong when pattern matching is involved, and thus in this paper we propose two novel compositional plural semantics for CSs that are different from run-time choice.
We investigate the basic properties of our plural semantics – compositionality, polarity, and monotonicity for substitutions, and a restricted form of the bubbling property for CSs – as well as their relation to each other and to previous proposals, concluding that these semantics form a hierarchy in the sense of set inclusion of the sets of values they compute. In addition, we identify a class of programs, characterized by a simple syntactic criterion, for which the proposed plural semantics coincide, and a program transformation that can be used to simulate one of the proposed plural semantics by term rewriting. At the practical level, we study how to use the new expressive capabilities of these semantics to improve the declarative flavor of programs. As call-time choice is the standard semantics for FLP, it remains the best option for many common programming patterns. We therefore propose a language that combines call-time choice and our plural semantics, which we have implemented in the Maude system. The resulting interpreter is then employed to develop and test several significant examples showing the capabilities of the combined semantics.
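As an illustrative aside (written in Python rather than a functional logic language), the classic coin/double example shows how the two parameter-passing regimes differ: under call-time choice both occurrences of the argument share one non-deterministic choice, whereas under run-time choice each occurrence may choose independently.

```python
import itertools

def coin():
    # Non-deterministic value: yields every possible result.
    yield 0
    yield 1

# double(x) = x + x

def double_call_time():
    # Call-time choice: the argument is fixed before the call,
    # so both occurrences of x share the same value.
    for x in coin():
        yield x + x

def double_run_time():
    # Run-time choice: each occurrence of the argument chooses independently.
    for a, b in itertools.product(coin(), coin()):
        yield a + b

print(sorted(set(double_call_time())))  # [0, 2]
print(sorted(set(double_run_time())))   # [0, 1, 2]
```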
This chapter surveys recent work in applying ideas from graphical models and message passing algorithms to solve large-scale regularized regression problems. In particular, the focus is on compressed sensing reconstruction via ℓ1-penalized least-squares (known as LASSO or BPDN). We discuss how to derive fast approximate message passing algorithms to solve this problem. Surprisingly, the analysis of such algorithms allows one to prove exact high-dimensional limit results for the LASSO risk.
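For reference, the LASSO objective itself can be minimized with plain iterative soft thresholding (ISTA); this is a generic baseline sketch, not the approximate message passing algorithms derived in the chapter.

```python
import numpy as np

def soft_threshold(v, t):
    # Componentwise soft thresholding: the proximal map of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    # Minimize 0.5 * ||y - A x||_2^2 + lam * ||x||_1 by proximal gradient descent.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x
```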
Introduction
The problem of reconstructing a high-dimensional vector x ∈ ℝn from a collection of observations y ∈ ℝm arises in a number of contexts, ranging from statistical learning to signal processing. It is often assumed that the measurement process is approximately linear, i.e. that

y = Ax + w,

where A ∈ ℝm×n is a known measurement matrix, and w is a noise vector.
The graphical models approach to such a reconstruction problem postulates a joint probability distribution on (x, y) which takes, without loss of generality, the form

p(dx, dy) = p(dy|x) p(dx).

The conditional distribution p(dy|x) models the noise process, while the prior p(dx) encodes information on the vector x. In particular, within compressed sensing, it can describe its sparsity properties. Within a graphical models approach, either of these distributions (or both) factorizes according to a specific graph structure. The resulting posterior distribution p(dx|y) is used for inferring x given y.
There are many reasons to be skeptical about the idea that the joint probability distribution p(dx, dy) can be determined, and used for reconstructing x.
By Alexey Castrodad, Ignacio Ramirez, Guillermo Sapiro, Pablo Sprechmann, and Guoshen Yu, University of Minnesota, Minneapolis, USA
In this chapter the authors go beyond traditional sparse modeling and address collaborative structured sparsity to add stability and prior information to the representation. In structured sparse modeling, instead of considering the dictionary atoms as singletons, the atoms are partitioned into groups, and a few groups are selected at a time for the signal encoding. A complementary way of adding structure, stability, and prior information to a model is via collaboration. Here, multiple signals, which are known to follow the same model, are allowed to collaborate in the coding. The first studied framework connects sparse modeling with Gaussian Mixture Models and leads to state-of-the-art image restoration. The second framework derives a hierarchical structure on top of the collaboration and is well suited for source separation. Both models also enjoy important theoretical virtues.
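As a small illustration of the group idea (a generic block soft-thresholding step, not the specific frameworks developed in the chapter), selecting a few groups at a time can be expressed through the proximal map of a group-sparsity penalty:

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    # Block soft thresholding: shrink each group of coefficients by its
    # Euclidean norm; weak groups are zeroed out entirely.
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > t:
            out[g] = (1 - t / norm) * x[g]
    return out

# Example: 6 coefficients partitioned into 3 groups of 2.
x = np.array([0.1, -0.2, 3.0, 1.0, 0.05, 0.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]
print(group_soft_threshold(x, groups, t=0.5))  # only the strong middle group survives
```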
Introduction
In traditional sparse modeling, it is assumed that a signal can be accurately represented by a sparse linear combination of atoms from a (learned) dictionary. A large class of signals, including most natural images and sounds, is well described by this model, as demonstrated by numerous state-of-the-art results in various signal processing applications.
From a data modeling point of view, sparsity can be seen as a form of regularization, that is, as a device to restrict or control the set of coefficient values which are allowed in the model to produce an estimate of the data.
Modern data are often composed of two or more morphologically distinct constituents, and one typical goal is the extraction of those components. Recently, sparsity methodologies have been successfully utilized to solve this problem, both theoretically and empirically. The key idea is to choose a deliberately overcomplete representation made of several frames, each providing a sparse expansion of one of the components to be extracted. The morphological difference between the components is then encoded as incoherence conditions on those frames. The decomposition principle is to minimize the ℓ1 norm of the frame coefficients. This chapter serves as an introduction to and a survey of this exciting area of research, as well as a reference for its current state of the art.
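A toy numpy sketch of this decomposition principle (with an assumed spike-plus-cosine signal and plain iterative soft thresholding, not the frames or algorithms discussed in the chapter): the signal is expanded in a concatenation of two frames, the ℓ1 norm of the coefficients is approximately minimized, and each frame's contribution is then read off as one component.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, y, lam, n_iter=2000):
    # Approximately minimize 0.5*||y - D c||^2 + lam*||c||_1 over the coefficients c.
    L = np.linalg.norm(D, 2) ** 2
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        c = soft(c + D.T @ (y - D @ c) / L, lam / L)
    return c

n = 128
# Frame 1: standard basis (sparse for spiky content); Frame 2: orthonormal DCT-II
# basis (sparse for smooth oscillatory content).
I = np.eye(n)
k = np.arange(n)
DCT = np.sqrt(2.0 / n) * np.cos(np.pi * (k[:, None] + 0.5) * k[None, :] / n)
DCT[:, 0] /= np.sqrt(2.0)
D = np.hstack([I, DCT])                      # deliberately overcomplete representation

spikes = np.zeros(n); spikes[[20, 75]] = [2.0, -1.5]
smooth = 1.0 * DCT[:, 3] + 0.8 * DCT[:, 7]
y = spikes + smooth

c = ista(D, y, lam=0.05)
spike_part, smooth_part = c[:n], DCT @ c[n:]
# Both separation errors should be small (coefficients are slightly shrunk by the penalty).
print(np.linalg.norm(spike_part - spikes), np.linalg.norm(smooth_part - smooth))
```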
Introduction
Over the last few years, scientists have faced an ever-growing deluge of data, which needs to be transmitted, analyzed, and stored. A close analysis reveals that most of these data can be classified as multimodal, i.e., composed of distinct subcomponents. Prominent examples are audio data, which may consist of a superposition of the sounds of different instruments, or imaging data from neurobiology, which is typically a composition of the soma of a neuron, its dendrites, and its spines. In both of these situations, the data has to be separated into appropriate single components for further analysis. In the first case, separating the audio signal into the signals of the different instruments is a first step toward enabling an audio technician to obtain a musical score from a recording.
A novel 3-leg 5-DoF parallel manipulator (PM) with a UPU-type composite active constrained leg is proposed, and its kinematics and statics are analyzed systematically. First, formulae are derived for solving the inverse/forward displacements, inverse/forward velocities, and active/constraint forces. Second, formulae are derived for solving the inverse/forward accelerations. Third, a simulation mechanism of this PM is created, and its workspace is constructed and analyzed. Finally, the analytic results are verified using the simulation mechanism.
Parametric signals, such as streams of short pulses, appear in many applications including bio-imaging, radar, and spread-spectrum communication. The recently developed finite rate of innovation (FRI) framework has paved the way to low-rate sampling of such signals, by exploiting the fact that only a small number of parameters per unit of time are needed to fully describe them. For example, a stream of pulses is uniquely defined by the time delays of the pulses and their amplitudes, which amounts to far fewer degrees of freedom than the number of the signal's Nyquist-rate samples. This chapter provides an overview of FRI theory, algorithms, and applications. We begin by discussing theoretical results and practical algorithms that allow perfect reconstruction of FRI signals from a minimal number of samples. We then treat recovery from noisy measurements. Finally, we overview a diverse set of applications of FRI theory, in areas such as super-resolution, radar, and ultrasound.
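As a concrete illustration of this parametric view (a standard annihilating-filter computation on synthetic, noiseless data; the delays and amplitudes below are arbitrary), the delays of a stream of K Diracs can be recovered from 2K+1 of its Fourier-series coefficients:

```python
import numpy as np

tau, K = 1.0, 2                       # period and number of Diracs (assumed)
t = np.array([0.23, 0.71])            # true delays
a = np.array([1.0, -0.6])             # true amplitudes

# 2K+1 consecutive Fourier-series coefficients of the Dirac stream:
# X[m] = sum_k a_k * exp(-2j*pi*m*t_k/tau), m = 0, ..., 2K.
m = np.arange(2 * K + 1)
X = (a[None, :] * np.exp(-2j * np.pi * np.outer(m, t) / tau)).sum(axis=1)

# Annihilating filter h = [1, h_1, ..., h_K] with sum_l h_l X[m - l] = 0:
# set up the K x K Toeplitz system for h_1, ..., h_K and solve it.
A = np.array([[X[K + i - j - 1] for j in range(K)] for i in range(K)])
b = -X[K:2 * K]
h = np.concatenate([[1.0], np.linalg.solve(A, b)])

# The roots of the filter polynomial are u_k = exp(-2j*pi*t_k/tau),
# from which the delays follow; amplitudes can then be found by least squares.
u = np.roots(h)
t_hat = np.sort((-np.angle(u) * tau / (2 * np.pi)) % tau)
print(t_hat)                          # close to sorted(t) = [0.23, 0.71]
```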
Introduction
We live in an analog world, but we would like our digital computers to interact with it. For example, sound is a continuous-time phenomenon, which can be characterized by the variations in air pressure as a function of time. For digital processing of such real-world signals to be possible, we require a sampling mechanism which converts continuous signals to discrete sequences of numbers, while preserving the information present in those signals.
In classical sampling theory, which dates back to the beginning of the twentieth century [1–3], a bandlimited signal whose maximum frequency is fmax is sampled at or above the Nyquist rate 2fmax.
Compressed sensing (CS) is an exciting, rapidly growing field that has attracted considerable attention in signal processing, statistics, and computer science, as well as the broader scientific community. Since its initial development only a few years ago, thousands of papers have appeared in this area, and hundreds of conferences, workshops, and special sessions have been dedicated to this growing research field. In this chapter, we provide an up-to-date review of the basics of the theory underlying CS. This chapter should serve as a review for practitioners wanting to join this emerging field and as a reference for researchers. We focus primarily on the theory and algorithms for sparse recovery in finite dimensions. In subsequent chapters of the book, we will see how the fundamentals presented in this chapter are expanded and extended in many exciting directions, including new models for describing structure in both analog and discrete-time signals, new sensing design techniques, more advanced recovery results and powerful new recovery algorithms, and emerging applications of the basic theory and its extensions.
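For a flavor of the sparse recovery algorithms reviewed here, the following is a minimal orthogonal matching pursuit sketch (one of the standard greedy methods; the function name and interface are ours, not from the chapter).

```python
import numpy as np

def omp(A, y, k):
    # Orthogonal Matching Pursuit: greedily select k columns of A to explain y.
    residual, support = y.copy(), []
    coef = np.array([])
    for _ in range(k):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```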
Introduction
We are in the midst of a digital revolution that is driving the development and deployment of new kinds of sensing systems with ever-increasing fidelity and resolution. The theoretical foundation of this revolution is the pioneering work of Kotelnikov, Nyquist, Shannon, and Whittaker on sampling continuous-time bandlimited signals [162, 195, 209, 247]. Their results demonstrate that signals, images, videos, and other data can be exactly recovered from a set of uniformly spaced samples taken at the so-called Nyquist rate of twice the highest frequency present in the signal of interest.
We represent the well-known surprise exam paradox in constructive and computable mathematics and offer solutions. One solution is based on Brouwer’s continuity principle in constructive mathematics, and the other involves type 2 Turing computability in classical mathematics. We also discuss the backward induction paradox for extensive form games in constructive logic.
For c ∈ (0,1) let P_n(c) denote the set of n-vertex perfect graphs with density c and let C_n(c) denote the set of n-vertex graphs without induced C5 and with density c.
We show thatwith otherwise, where H is the binary entropy function.
Further, we use this result to deduce that almost all graphs in P_n(c) have homogeneous sets of linear size. This answers a question raised by Loebl and co-workers.
In source reordering the order of the source words is permuted to minimize word order differences with the target sentence before being fed to a translation model. Earlier work highlights the benefits of resolving long-distance reorderings as a pre-processing step to standard phrase-based models. However, the potential performance improvement of source reordering and its impact on the components of the subsequent translation model remain unexplored. In this paper we study both aspects of source reordering. We set up idealized source reordering (oracle) models with and without syntax, and present our own syntax-driven model of source reordering. The latter is a statistical model of inversion transduction grammar (ITG)-like tree transductions that manipulates a syntactic parse and works with novel conditional reordering parameters. Having set up the models, we report translation experiments showing significant improvement on three language pairs, and contribute an extensive analysis of the impact of source reordering (both oracle and model) on the translation model regarding the quality of its input, phrase table, and output. Our experiments show that oracle source reordering has untapped potential for improving translation system output. Besides solving difficult reorderings, we find that source reordering creates more monotone parallel training data at the back-end, leading to significantly larger phrase tables with higher coverage of phrase types in unseen data. Unfortunately, this nice property does not carry over to tree-constrained source reordering. Our analysis shows that, from the string-level perspective, tree-constrained reordering might selectively permute word order, leading to larger phrase tables but without an increase in phrase coverage in unseen data.