Data trusts have been proposed as a mechanism through which data can be more readily exploited for a variety of aims, including economic development and social-benefit goals such as medical research or policy-making. Data trusts, and similar data governance mechanisms such as data co-ops, aim to facilitate the use and re-use of datasets across organizational boundaries and, in the process, to protect the interests of stakeholders such as data subjects. However, the current discourse on data trusts does not acknowledge another common stakeholder in the data value chain—the crowd workers who are employed to collect, validate, curate, and transform data. In this paper, we report on a preliminary qualitative investigation into how crowd data workers themselves feel datasets should be used and governed. We find that while overall remuneration is important to those workers, they also value public-benefit data use but have reservations about delayed remuneration and the trustworthiness of both administrative processes and the crowd itself. We discuss the implications of our findings for how data trusts could be designed, and how data trusts could be used to give crowd workers a more enduring stake in the product of their work.
We explore the problems that confront any attempt to explain or explicate exactly what a primitive logical rule of inference is, or consists in. We arrive at a proposed solution that places a surprisingly heavy load on the prospect of being able to understand and deal with specifications of rules that are essentially self-referring. That is, any rule $\rho $ is to be understood via a specification that involves, embedded within it, reference to rule $\rho $ itself. Just how we arrive at this position is explained by reference to familiar rules as well as less familiar ones with unusual features. An inquiry of this kind is surprisingly absent from the foundations of inferentialism—the view that meanings of expressions (especially logical ones) are to be characterized by the rules of inference that govern them.
I show that the logic $\textsf {TJK}^{d+}$, one of the strongest logics currently known to support the naive theory of truth, is obtained from the Kripke semantics for constant domain intuitionistic logic by (i) dropping the requirement that the accessibility relation is reflexive and (ii) only allowing reflexive worlds to serve as counterexamples to logical consequence. In addition, I provide a simplified natural deduction system for $\textsf {TJK}^{d+}$, in which a restricted form of conditional proof is used to establish conditionals.
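As a rough illustration of modifications (i) and (ii), and not the paper's own formulation, such a semantics can be sketched as follows, keeping the usual intuitionistic clause for the conditional while restricting counterexamples to reflexive worlds:

$$w \Vdash A \to B \iff \text{for every } v \text{ with } wRv,\ v \Vdash A \text{ implies } v \Vdash B,$$
$$\Gamma \vDash A \iff \text{in every model, every world } w \text{ with } wRw \text{ that forces all of } \Gamma \text{ also forces } A.$$

Here $R$ need not be reflexive, so a conditional can hold at a world that does not access itself, and domains are held constant across worlds as in constant domain intuitionistic logic.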
Cloud storage faces many problems in the storage process that degrade system efficiency. One of the most significant is insufficient buffer space in cloud storage: packets of data must wait for storage service, which can weaken the performance of the system. The storage process can be treated as a stochastic process, so the probability distributions of the buffer occupancy and the buffer content can be determined and the performance behavior of the system predicted at any time. This paper models a cloud storage facility as a fluid queue with infinite buffer capacity, controlled by a Markovian M/M/1/N queue with constant arrival and service rates. We obtain an analytical solution for the distribution of the buffer occupancy. Moreover, several performance measures and numerical results are given that illustrate the effectiveness of the proposed model.
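As a rough sketch of the standard machinery behind such models (notation assumed here, not taken from the paper): if the modulating M/M/1/N queue has generator $Q$ and the net fluid input rate is $r_i$ while that queue is in state $i$, then the stationary joint distribution $F_i(x) = \Pr(\text{buffer content} \le x,\ \text{state} = i)$ of a fluid queue with an infinite buffer satisfies

$$\frac{d}{dx} F(x)\, R = F(x)\, Q, \qquad R = \operatorname{diag}(r_0, r_1, \ldots, r_N),$$

whose solution is a linear combination of terms $e^{\zeta x}\phi$ with $\zeta\, \phi R = \phi Q$, fixed by boundary conditions at $x = 0$ (assuming no $r_i$ vanishes).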
Symbolic dynamics is a mature yet rapidly developing area of dynamical systems. It has established strong connections with many areas, including linear algebra, graph theory, probability, group theory, and the theory of computation, as well as data storage, statistical mechanics, and $C^*$-algebras. This Second Edition maintains the introductory character of the original 1995 edition as a general textbook on symbolic dynamics and its applications to coding. It is written at an elementary level and aimed at students, well-established researchers, and experts in mathematics, electrical engineering, and computer science. Topics are carefully developed and motivated with many illustrative examples. There are more than 500 exercises to test the reader's understanding. In addition to a chapter in the First Edition on advanced topics and a comprehensive bibliography, the Second Edition includes a detailed Addendum, with companion bibliography, describing major developments and new research directions since publication of the First Edition.
Dynamic movement primitives (DMP) are motion building blocks suitable for real-world tasks. We suggest a methodology for learning the manifold of task and DMP parameters, which facilitates runtime adaptation to changes in task requirements while ensuring predictable and robust performance. For efficient learning, the parameter space is analyzed using principal component analysis and locally linear embedding. Two manifold learning methods, kernel estimation and deep neural networks, are investigated for a ball-throwing task in simulation and in a physical environment. Low runtime estimation errors are obtained for both learning methods, with an advantage to kernel estimation when data sets are small.
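A minimal sketch of the kind of pipeline described, with hypothetical data, dimensions, and hyperparameters (the paper's actual features, DMP parameterization, and kernel choice may differ): PCA and locally linear embedding analyze the joint task/DMP parameter space, and a kernel estimator maps task parameters to DMP parameters at runtime.

```python
# Hypothetical sketch: learn a mapping from task parameters (e.g., target
# distance of a ball throw) to DMP parameters (e.g., forcing-term weights).
# Not the paper's implementation; data, dimensions, and kernels are assumed.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n_samples, n_task, n_dmp = 200, 2, 20
task = rng.uniform(-1.0, 1.0, size=(n_samples, n_task))    # task parameters
dmp = np.tanh(task @ rng.normal(size=(n_task, n_dmp)))      # stand-in DMP parameters

# Analyze the joint parameter space (task + DMP) with PCA and LLE,
# e.g., to gauge its intrinsic dimensionality.
joint = np.hstack([task, dmp])
pca = PCA(n_components=0.95).fit(joint)
lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10).fit(joint)
print("PCA components for 95% variance:", pca.n_components_)
print("LLE embedding shape:", lle.embedding_.shape)

# Kernel estimator from task parameters to DMP parameters (one option among
# many; the paper compares kernel estimation with a deep neural network).
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(task, dmp)
new_task = np.array([[0.3, -0.5]])                          # a new task instance
dmp_estimate = model.predict(new_task)                      # runtime adaptation
print("Estimated DMP parameters shape:", dmp_estimate.shape)
```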
Magnant and Martin conjectured that the vertex set of any d-regular graph G on n vertices can be partitioned into $n / (d+1)$ paths (there exists a simple construction showing that this bound would be best possible). We prove this conjecture when $d = \Omega(n)$, improving a result of Han, who showed that in this range almost all vertices of G can be covered by $n / (d+1) + 1$ vertex-disjoint paths. In fact our proof gives a partition of V(G) into cycles. We also show that, if $d = \Omega(n)$ and G is bipartite, then V(G) can be partitioned into n/(2d) paths (this bound is tight for bipartite graphs).
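For intuition, the standard extremal example (presumably the construction alluded to above) is a disjoint union of cliques,

$$G = \tfrac{n}{d+1} \cdot K_{d+1},$$

which is d-regular; every path lies inside a single copy of $K_{d+1}$ and therefore covers at most $d+1$ vertices, so at least $n/(d+1)$ paths are needed.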
Modules are a prerequisite for the realization of modular reconfigurable manipulators. The design of modules in the literature mainly revolves around geometric aspects and features such as lengths, connectivity, and adaptivity. Optimizing and designing modules on the basis of dynamic performance is the challenge considered here. The present paper introduces an Architecture-Prominent-Sectioning (APS) strategy for planning the architecture of modules such that a reconfigurable manipulator requires minimal joint torques during its operations. The proposal is to transfer the complete structure into an equivalent system, perform optimization on it, and map the resulting arrangement back into possible architectures. The strategy has been applied to a set of modular configurations considering three primitive paths. The possibility of obtaining advanced/complex shapes is also discussed, to incorporate the idea of a modular library.
The present work achieves a mathematical, in particular syntax-independent, formulation of dynamics and intensionality of computation in terms of games and strategies. Specifically, we give game semantics of a higher-order programming language that distinguishes programmes with the same value yet different algorithms (or intensionality) and the hiding operation on strategies that precisely corresponds to the (small-step) operational semantics (or dynamics) of the language. Categorically, our games and strategies give rise to a cartesian closed bicategory, and our game semantics forms an instance of a bicategorical generalisation of the standard interpretation of functional programming languages in cartesian closed categories. This work is intended to be a step towards a mathematical foundation of intensional and dynamic aspects of logic and computation; it should be applicable to a wide range of logics and computations.
There are no silver bullets in algorithm design, and no single algorithmic idea is powerful and flexible enough to solve every computational problem. Nor are there silver bullets in algorithm analysis, as the most enlightening method for analyzing an algorithm often depends on the problem and the application. However, typical algorithms courses rely almost entirely on a single analysis framework, that of worst-case analysis, wherein an algorithm is assessed by its worst performance on any input of a given size. The purpose of this book is to popularize several alternatives to worst-case analysis and their most notable algorithmic applications, from clustering to linear programming to neural network training. Forty leading researchers have contributed introductions to different facets of this field, emphasizing the most important models and results, many of which can be taught in lectures to beginning graduate students in theoretical computer science and machine learning.
This chapter investigates deductive practices in what is arguably their main current instantiation, namely practices of mathematical proofs. The dialogical hypothesis delivers a compelling account of a number of features of these practices; indeed, the fictive characters Prover and Skeptic can be viewed as embodied by real-life mathematicians. The chapter includes a discussion of the ontological status of proofs, the functions of proofs, practices of mathematicians such as peer review and collaboration, and a brief discussion of probabilistic and computational proofs. It also discusses three case studies: the reception of Gödel’s incompleteness results, a failed proof of the inconsistency of Peano Arithmetic, and a purported proof of the ABC conjecture.
Throughout this book, deduction has been examined and discussed from many angles and perspectives. However, one question has remained conspicuously unaddressed until now: Is deduction a correct, reliable method for reasoning? In other words, is deduction justified (Dummett, 1978)?
This investigation has focused extensively on the social conditions and factors influencing the emergence of deduction, both historically and ontogenetically. It is thus reasonable to ask whether it offers a social constructivist account of deduction, which in turn has implications for the justification problem. Indeed, on at least some versions of social constructivism, the very question of the correctness of deductive reasoning as a scientific method, understood in absolute terms, is seen as misguided.
Blood-side resistance to oxygen transport in extracorporeal membrane blood oxygenators (MBO) depends on the fluid mechanics governing laminar flow in very narrow channels, particularly the hemodynamics controlling the cell-free layer (CFL) built up at solid/blood interfaces. The CFL thickness constitutes a barrier to oxygen transport from the membrane towards the erythrocytes. Interposing hemicylindrical CFL disruptors in animal blood flowing inside rectangular microchannels (surrogate systems that mimic MBO hemodynamics) proved effective in reducing that thickness by ca. 20%, which is desirable for MBO because it increases oxygen transport rates to the erythrocytes. Increasing the blockage ratio (a non-dimensional measure of the disruptor's penetration into the flow) is also effective in reducing CFL thickness (ca. 10–20%), but at the cost of risking clot formation (undesirable for MBO) for disruptors with penetration lengths larger than their radius, owing to the long residence times of erythrocytes in a low-velocity CFL formed at the disruptor/wall edge.