For the 3SPS+1PS parallel hip joint simulator, the maximum stress in the branched chains under the suggested trajectory is obtained by elastodynamic analysis. Based on the Corten-Dolan fatigue damage theory and the rain-flow counting method, the dynamic stress of each branched chain is statistically analyzed. Fatigue life prediction shows that branched chain A2P2C2 is the weakest component of the simulator. Finally, the fatigue reliability is analyzed, and the fatigue life and reliability under different structural parameters are discussed. The study shows that the fatigue life of each branched chain can be increased or balanced by increasing the structural parameters or exchanging the initial motion parameters.
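Neither the stress histories nor the material constants appear in the abstract; as a hedged illustration of the pipeline it describes, the Python sketch below runs a simplified three-point rainflow count over a made-up stress history and feeds the resulting ranges into a Corten-Dolan damage sum (the exponent d and the life N1 at the peak range are placeholders, not the paper's data).

```python
def turning_points(series):
    """Keep only the local maxima/minima of the stress history."""
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # still rising/falling: extend the excursion
        elif x != tp[-1]:
            tp.append(x)
    return tp

def rainflow(series):
    """Simplified three-point rainflow count -> list of (range, count)."""
    cycles, stack = [], []
    for point in turning_points(series):
        stack.append(point)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])
            y = abs(stack[-2] - stack[-3])
            if x < y:
                break
            if len(stack) == 3:
                cycles.append((y, 0.5))   # range touches the start: half cycle
                stack.pop(0)
            else:
                cycles.append((y, 1.0))   # closed full cycle
                del stack[-3:-1]
    cycles += [(abs(b - a), 0.5) for a, b in zip(stack, stack[1:])]
    return cycles

def corten_dolan_life(cycles, n1, d=6.0):
    """Cycles to failure: N = N1 / sum(alpha_i * (s_i / s_1)**d)."""
    s1 = max(rng for rng, _ in cycles)
    total = sum(n for _, n in cycles)
    return n1 / sum((n / total) * (rng / s1) ** d for rng, n in cycles)

history = [0, 90, -50, 60, -80, 70, -20, 40, 0]   # invented stress history
print(corten_dolan_life(rainflow(history), n1=2.0e5))
```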
Frege’s definition of the real numbers, as envisaged in the second volume of Grundgesetze der Arithmetik, is fatally flawed by the inconsistency of Frege’s ill-fated Basic Law V. We restate Frege’s definition in a consistent logical framework and investigate whether it can provide a logical foundation of real analysis. We conclude that it is doubtful whether such a foundation along the lines of Frege’s own indications is possible at all.
We study the logic of so-called lexicographic or priority merge for multi-agent plausibility models. We start with a systematic comparison between the logical behavior of priority merge and the more standard notion of pooling through intersection, used to define, for instance, distributed knowledge. We then provide a sound and complete axiomatization of the logic of priority merge, as well as a proof theory in labeled sequents that admits cut. We finally study Moorean phenomena and define a dynamic resolution operator for priority merge for which we also provide a complete set of reduction axioms.
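The axiomatization itself is beyond an abstract; to fix intuitions, the sketch below implements the usual semantic definitions of lexicographic (priority) merge and pooling by intersection on finite plausibility orders, and shows a case where the two differ. The encoding of orders as sets of pairs is an assumption of the sketch, not the paper's notation.

```python
from itertools import product

def priority_merge(r1, r2, worlds):
    """Lexicographic merge: r1 decides; r2 only breaks r1's ties.
    A pair (x, y) reads 'x is at least as plausible as y'."""
    merged = set()
    for x, y in product(worlds, repeat=2):
        strict_1 = (x, y) in r1 and (y, x) not in r1
        tie_1 = (x, y) in r1 and (y, x) in r1
        if strict_1 or (tie_1 and (x, y) in r2):
            merged.add((x, y))
    return merged

def intersection_pool(r1, r2):
    """Pooling through intersection, as used for distributed knowledge."""
    return r1 & r2

worlds = {"u", "v"}
r1 = {("u", "u"), ("v", "v"), ("u", "v")}   # source 1: u strictly more plausible
r2 = {("u", "u"), ("v", "v"), ("v", "u")}   # source 2: the opposite
print(priority_merge(r1, r2, worlds))  # {(u,u),(v,v),(u,v)}: r1 prevails
print(intersection_pool(r1, r2))       # {(u,u),(v,v)}: u, v left incomparable
```

The example illustrates one behavioral difference the paper's comparison turns on: priority merge keeps the merged order total, while intersection can lose comparability.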
Wenmackers and Romeijn [38] formalize ideas going back to Shimony [33] and Putnam [28] into an open-minded Bayesian inductive logic that can dynamically incorporate statistical hypotheses proposed in the course of the learning process. In this paper, we show that Wenmackers and Romeijn’s proposal does not preserve the classical Bayesian consistency guarantee of merger with the true hypothesis. We diagnose the problem and offer a forward-looking open-minded Bayesian that does preserve a version of this guarantee.
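As a rough illustration of what "open-minded" updating means operationally (a simplification, not Wenmackers and Romeijn's actual rule, nor the authors' repaired one), the sketch below updates a posterior over Bernoulli-bias hypotheses and introduces the true hypothesis only mid-stream, reassigning it a fixed share of the current mass.

```python
import random

def likelihood(theta, outcome):
    """Bernoulli likelihood of a single 0/1 outcome under bias theta."""
    return theta if outcome == 1 else 1 - theta

def update(prior, outcome):
    """One step of Bayesian conditioning over a finite hypothesis set."""
    post = {t: p * likelihood(t, outcome) for t, p in prior.items()}
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

random.seed(0)
true_theta = 0.7
prior = {0.2: 0.5, 0.5: 0.5}          # the true value is not yet entertained
for n in range(200):
    if n == 50:
        # open-minded step: admit a newly proposed hypothesis, shaving
        # a fixed fraction of mass off the current posterior (one simple
        # reassignment rule among many)
        prior = {t: 0.9 * p for t, p in prior.items()}
        prior[0.7] = 0.1
    prior = update(prior, 1 if random.random() < true_theta else 0)
print(prior)  # mass concentrates on 0.7 once it has been introduced
```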
We present a region-based memory management scheme with support for generational garbage collection. The scheme features a compile-time region inference algorithm, which associates values with logical regions, and builds on a region type system that deploys region types at runtime to avoid the overhead of write barriers and to support partly tag-free garbage collection. The scheme is implemented in the MLKit Standard ML compiler, which generates native x64 machine code. Besides demonstrating a number of important formal properties of the scheme, we measure the scheme’s characteristics for a number of benchmarks and compare the performance of the generated executables with that of executables generated with the state-of-the-art MLton Standard ML compiler and with configurations of the MLKit with and without region inference and generational garbage collection enabled. Although region inference often serves the purpose of generations, combining region inference with generational garbage collection often proves superior to combining it with non-generational collection, despite the overhead of increased memory waste due to region fragmentation.
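Region inference in the MLKit is a compile-time analysis; purely to convey the allocate-into-region, bulk-free lifetime discipline it infers, here is a small runtime sketch in Python using a hypothetical Region class (this is not the MLKit's representation).

```python
from contextlib import contextmanager

class Region:
    """A logical region: values allocated into it die together when
    the region is deallocated."""
    def __init__(self):
        self.values = []

    def alloc(self, v):
        self.values.append(v)
        return v

@contextmanager
def letregion():
    """Scoped region, in the spirit of letregion in region calculi."""
    r = Region()
    try:
        yield r
    finally:
        r.values.clear()   # bulk deallocation: no per-value tracing

with letregion() as r:
    xs = r.alloc([1, 2, 3])
    total = sum(xs)
# the region and everything in it are gone here; 'total' escaped by value
print(total)
```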
The engineering design process can produce stress that endures even after the process is complete. This may be particularly true for students who engage with the process as novices. However, it is not known how individual components of the design process induce stress in designers. This study explored the cognitive experience of introductory engineering design students during concept generation, concept selection, and physical modelling to identify stress signatures for these three design activities. Data were collected for the design activities using pre- and post-task surveys. Each design activity produced distinct markers of cognitive experience and a unique stress signature that was stable across design activity themes. Rankings of perceived sources of stress also differed for each design activity. Students, however, did not perceive any physiological changes due to the stress of design for any of the design activities. Findings indicate that physical modelling was the most stressful for students, followed by concept generation and then concept selection. Additionally, recommendations are provided to help instructors of introductory engineering design courses apply the results of this study. A better understanding of the cognitive experience of students during design can support instructors as they learn to better teach design.
This paper collects and presents unpublished notes of Kurt Gödel concerning the field of many-valued logic. In order to get a picture as complete as possible, both formal and philosophical notes, transcribed from the Gabelsberger shorthand system, are included.
Localization based on visual natural landmarks is one of the state-of-the-art localization methods for automated vehicles that is, however, limited in fast motion and low-texture environments, which can lead to failure. This paper proposes an approach to solve these limitations with an extended Kalman filter (EKF) based on a state estimation algorithm that fuses information from a low-cost MEMS Inertial Measurement Unit and a Time-of-Flight camera. We demonstrate our results in an indoor environment. We show that the proposed approach does not require any global reflective landmark for localization and is fast, accurate, and easy to use with mobile robots.
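The abstract does not detail the filter; as a minimal stand-in, the sketch below runs a 1-D Kalman predict/correct loop in which IMU acceleration drives the prediction and ToF range measurements drive the correction. The state dimension and all noise parameters are invented; the paper's filter estimates full vehicle pose.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
B = np.array([[0.5 * dt**2], [dt]])     # control input: IMU acceleration
H = np.array([[1.0, 0.0]])              # ToF measures position only
Q = 1e-3 * np.eye(2)                    # process noise (IMU drift, etc.)
R = np.array([[4e-2]])                  # ToF measurement noise

def predict(x, P, a_imu):
    """Propagate state and covariance using the IMU acceleration."""
    x = F @ x + B * a_imu
    P = F @ P @ F.T + Q
    return x, P

def correct(x, P, z_tof):
    """Fuse one ToF range measurement."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (np.array([[z_tof]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros((2, 1)), np.eye(2)
x, P = predict(x, P, a_imu=0.3)
x, P = correct(x, P, z_tof=0.02)
print(x.ravel())
```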
Hackathons are short-term events at which participants work in small groups to ideate, develop and present a solution to a problem. Despite their popularity, and significant relevance to design research, they have only recently come into research focus. This study presents a review of the existing literature on the characteristics of designing at hackathons. Hackathon participants are found to follow typical divergence–convergence patterns in their design process throughout the hackathon. Unique features include the initial effort to form teams and the significant emphasis on preparing and delivering a solution demo at the final pitch. Therefore, hackathons present themselves as a unique setting in which design is conducted and learned, and by extension, can be studied. Overall, the review provides a foundation to inform future research on design at hackathons. Methodological limitations of current studies on hackathons are discussed and the feasibility of more systematic studies of design in these types of settings is assessed. Further, we explore how the unique nature of the hackathon format and the diverse profiles of hackathon participants with regard to subject matter knowledge, design expertise and prior hackathon experience may affect design cognition and behaviour at each stage of the design process in distinctive ways.
I provide an analysis of sentences of the form ‘To be F is to be G’ in terms of exact truth-maker semantics—an approach that identifies the meanings of sentences with the states of the world directly responsible for their truth-values. Roughly, I argue that these sentences hold just in case that which makes something F also makes it G. This approach is hyperintensional and possesses desirable logical and modal features. In particular, these sentences are reflexive, transitive, and symmetric, and if they are true, then they are necessarily true, and it is necessary that all and only Fs are Gs. I motivate my account over Correia and Skiles’ [11] prominent alternative and close by defining an irreflexive and asymmetric notion of analysis in terms of the symmetric and reflexive notion.
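As a toy illustration of the semantic clause only (exact truth-maker semantics properly works with a state space closed under fusion), one can model each predicate's meaning as the set of exact states that make something fall under it; identity of those sets then immediately yields the reflexivity, symmetry, and transitivity the abstract claims. The predicates and states below are hypothetical.

```python
# Toy exact-verifier model: a predicate's meaning is the set of exact
# states in virtue of which something satisfies it. (Invented data; a
# faithful model would close the state space under fusion.)
verifiers = {
    "water":    frozenset({"being-H2O"}),
    "H2O":      frozenset({"being-H2O"}),
    "bachelor": frozenset({"being-a-man", "being-unmarried"}),
}

def to_be_is_to_be(f, g):
    """'To be F is to be G' holds iff F and G have the same makers."""
    return verifiers[f] == verifiers[g]

assert to_be_is_to_be("water", "H2O")            # a true identification
assert not to_be_is_to_be("water", "bachelor")   # different makers
# Set identity is reflexive, symmetric, and transitive, matching the
# logical features the analysis is claimed to deliver.
```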
This fully revised and updated edition of the bestselling Chief Data Officer's Playbook offers new insights into the role of the CDO and the data environment. Written by two of the world's leading experts in data-driven transformation, it addresses the changes that have taken place in 'data', in the role of the 'CDO', and in the expectations and ambitions of organisations. Most importantly, it places the role of the CDO into the context of a c-suite player for organisations that wish to recover quickly, and with long-term stability, from the current global economic downturn.

New coverage includes:
- the evolution of the CDO role, what those changes mean for organisations and individuals, and what the future might hold
- a focus on ethics, the data revolution and all the areas that help readers take their first steps on the data journey
- new conversations and experiences from alumni of data leaders compiled over the past three years
- new chapters and reflections on being a third-generation CDO and on working across a broad spectrum of organisations that are all on different parts of their data journey.

Written in a highly accessible and practical manner, The Chief Data Officer's Playbook, Second Edition brings the most up-to-date guidance to CDOs who wish to understand their position better, to those aspiring to become CDOs, to those who might be recruiting a CDO, and to recruiters seeking to understand an organisation looking for a CDO and the CDO landscape.
In this article, we propose a nonlinear proportional-derivative (PD) tracking controller with adaptive Fourier series compensation. The proposed controller uses a regressor-free adaptive scheme that relies on a trigonometric polynomial with varying coefficients to solve the control problem. Asymptotic convergence of the position and velocity errors is proven via a formal stability analysis based on Lyapunov and LaSalle theory for discontinuous systems. The proposed controller is validated on a 2-degree-of-freedom robot manipulator. The experimental results confirm the theoretical results and reflect the effect of certain parameters on the transient behavior of the error dynamics. Certain robustness properties are also observed.
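The control law and adaptation rule are not given in the abstract; the following scalar-joint sketch shows one common form of PD action plus a truncated Fourier series whose coefficients adapt online. The gains, base frequency omega0, and gradient-style adaptation rule are placeholders, not the paper's proven design.

```python
import numpy as np

def make_controller(kp, kd, n_harmonics, omega0, gamma, dt):
    """PD + adaptive truncated-Fourier-series compensation (sketch)."""
    w = np.zeros(2 * n_harmonics + 1)       # adaptive Fourier coefficients

    def regressor(t):
        """[1, cos(w0 t), sin(w0 t), cos(2 w0 t), ...] basis at time t."""
        phi = [1.0]
        for k in range(1, n_harmonics + 1):
            phi += [np.cos(k * omega0 * t), np.sin(k * omega0 * t)]
        return np.array(phi)

    def control(t, q, qd, q_ref, qd_ref):
        nonlocal w
        e, ed = q_ref - q, qd_ref - qd
        phi = regressor(t)
        w = w + gamma * ed * phi * dt       # gradient-style adaptation
        return kp * e + kd * ed + w @ phi   # PD action + compensation
    return control

# Hypothetical usage for one joint, with invented gains:
ctrl = make_controller(kp=20.0, kd=5.0, n_harmonics=3,
                       omega0=2 * np.pi, gamma=0.5, dt=1e-3)
print(ctrl(0.0, q=0.0, qd=0.0, q_ref=0.1, qd_ref=0.0))
```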
Soft actuators are made of superelastic and embedded flexible materials. In this paper, a soft tube was designed and used to assemble two kinds of pneumatic soft actuators. Experiments and finite element analysis are used to comprehensively analyze and describe the bending, elongation, and torsion deformation of the soft actuators. The results show that the two soft actuators have the best actuation performance when the inner diameter of the soft tube is 4 mm. In addition, when the twisting pitch of the torsional actuator is 24 mm, its torsional performance is optimal. Finally, a device that can be used in a production line was assembled from these soft actuators, and several operation tasks were completed. This work provides some insights for the development of soft actuators with more complex motions in the future.
This paper addresses the motion planning and control problem of a system of 1-trailer robots navigating a dynamic environment cluttered with obstacles, including a swarm of boids. A set of nonlinear continuous control laws is proposed via the Lyapunov-based Control Scheme for collision, obstacle, and swarm avoidance. Additionally, a leader–follower strategy is utilized to allow the flock to split and rejoin when approaching obstacles. The effectiveness of the control laws is demonstrated through numerical simulations, which show the split and rejoin maneuvers by the flock when avoiding obstacles while the swarm exhibits emergent behaviors.
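The abstract states the control laws only at the level of the Lyapunov-based Control Scheme; as a generic stand-in for the avoidance terms, the sketch below combines an attractive goal term with repulsive obstacle terms of the kind such schemes typically use. All gains and the potential shape are assumptions of the sketch.

```python
import numpy as np

def velocity_command(p, target, obstacles, k_att=1.0, k_rep=0.5, d0=2.0):
    """Attractive/repulsive velocity command (generic stand-in for the
    paper's avoidance terms). obstacles: list of (center, radius)."""
    v = k_att * (target - p)                  # attraction toward the goal
    for c, r in obstacles:
        away = p - c
        dist = np.linalg.norm(away)
        d = dist - r                          # clearance to the surface
        if 0 < d < d0:                        # repel only when close
            v += k_rep * (1 / d - 1 / d0) / d**2 * away / dist
    return v

p = np.array([0.0, 0.0])
print(velocity_command(p, target=np.array([5.0, 0.0]),
                       obstacles=[(np.array([2.0, 0.5]), 0.5)]))
```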
With the increasing availability of behavioral data from diverse digital sources, such as social media sites and cell phones, it is now possible to obtain detailed information about the structure, strength, and directionality of social interactions in varied settings. While most metrics of network structure have traditionally been defined for unweighted and undirected networks only, the richness of current network data calls for extending these metrics to weighted and directed networks. One fundamental metric in social networks is edge overlap, the proportion of friends shared by two connected individuals. Here, we extend definitions of edge overlap to weighted and directed networks and present closed-form expressions for the mean and variance of each version for the Erdős–Rényi random graph and its weighted and directed counterparts. We apply these results to social network data collected in rural villages in southern Karnataka, India. We use our analytical results to quantify the extent to which the average overlap of the empirical social network deviates from that of corresponding random graphs and compare the values of overlap across networks. Our novel definitions allow the calculation of edge overlap for more complex networks, and our derivations provide a statistically rigorous way for comparing edge overlap across networks.
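The weighted and directed generalizations are new to the paper and are not reproduced here; for orientation, the sketch below computes the standard unweighted, undirected edge overlap that those definitions extend.

```python
def edge_overlap(adj, i, j):
    """Overlap of edge (i, j): shared neighbours of i and j divided by
    all possible shared neighbours, i.e. n_ij / ((k_i-1)+(k_j-1)-n_ij)."""
    ni, nj = adj[i] - {j}, adj[j] - {i}
    common = len(ni & nj)
    denom = len(ni | nj)       # equals (k_i - 1) + (k_j - 1) - common
    return common / denom if denom else 0.0

# Toy undirected graph as adjacency sets (invented example):
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2, 4}, 4: {1, 3}}
print(edge_overlap(adj, 1, 3))   # 1 and 3 share neighbours 2 and 4 -> 1.0
```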
Distributed systems are hard to get right, model, test, debug, and teach. Their textbook definitions, typically given in a form of replicated state machines, are concise, yet prone to introducing programming errors if naïvely translated into runnable implementations.
In this work, we present Distributed Protocol Combinators (DPC), a declarative programming framework that aims to bridge the gap between specifications and runnable implementations of distributed systems, and to facilitate their modeling, testing, and execution. DPC builds on ideas from state-of-the-art logics for compositional systems verification. The contribution of DPC is a novel family of program-level primitives that facilitates the construction of larger distributed systems from smaller components, streamlines the usage of the most common asynchronous message-passing communication patterns, and provides machinery for testing and user-friendly dynamic verification of systems. This paper describes the main ideas behind the design of the framework and presents its implementation in Haskell. We introduce DPC through a series of characteristic examples and showcase it on a number of distributed protocols from the literature.
This paper extends our preceding conference publication (Andersen & Sergey, 2019a) with an exploration of randomized testing for protocols and their implementations, and an additional case study demonstrating bounded model checking of protocols.
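DPC itself is a Haskell framework, and its actual combinators are not shown in the abstract; purely to convey the flavor of assembling a protocol from small per-message fragments, here is a hypothetical Python toy (none of these names are DPC's API).

```python
def rpc(tag, handler):
    """One request/response protocol fragment, keyed by message tag."""
    return {tag: handler}

def combine(*fragments):
    """Compose fragments into a single node protocol."""
    proto = {}
    for f in fragments:
        proto.update(f)
    return proto

def incr(state, arg):
    state["count"] += arg
    return state["count"]

def read(state, _):
    return state["count"]

# A tiny counter service built from two fragments:
counter = combine(rpc("incr", incr), rpc("read", read))

def deliver(proto, state, tag, payload):
    """Dispatch one incoming message to its handler."""
    return proto[tag](state, payload)

st = {"count": 0}
print(deliver(counter, st, "incr", 2))    # -> 2
print(deliver(counter, st, "read", None)) # -> 2
```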
Deploying Internet of Things (IoT)-enabled virtual network function (VNF) chains to Cloud-Edge infrastructures requires determining a placement for each VNF that satisfies all set deployment requirements, as well as a software-defined routing of traffic flows between consecutive functions that meets all set communication requirements. In this article, we present a declarative solution, EdgeUsher, to the problem of how to best place VNF chains onto Cloud-Edge infrastructures. EdgeUsher can determine all eligible placements of a set of VNF chains onto a Cloud-Edge infrastructure so as to satisfy all of their hardware, IoT, security, bandwidth, and latency requirements. It exploits probability distributions to model dynamic variations in the available Cloud-Edge infrastructure and to assess the output eligible placements against those variations.
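EdgeUsher's declarative, probabilistic machinery is not reproduced here; as a minimal imperative stand-in, the brute-force search below enumerates placements of a toy VNF chain and filters them by hardware-capacity and per-hop latency constraints. The instance data and thresholds are invented.

```python
from itertools import product

nodes = {"edge1": 4, "edge2": 2, "cloud": 16}             # free HW units
latency = {("edge1", "edge2"): 5, ("edge1", "cloud"): 40,
           ("edge2", "cloud"): 45}                         # ms, symmetric

def lat(a, b):
    return 0 if a == b else latency.get((a, b), latency.get((b, a)))

chain = [("f1", 2), ("f2", 1), ("f3", 4)]                  # (VNF, HW demand)
max_hop_latency = 50

def eligible_placements():
    """Yield every assignment of VNFs to nodes meeting all constraints."""
    for placement in product(nodes, repeat=len(chain)):
        load = {}
        for (vnf, hw), node in zip(chain, placement):
            load[node] = load.get(node, 0) + hw
        if any(load[n] > nodes[n] for n in load):
            continue                                       # capacity violated
        if any(lat(a, b) > max_hop_latency
               for a, b in zip(placement, placement[1:])):
            continue                                       # latency violated
        yield placement

for p in eligible_placements():
    print(p)
```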
The Nakamoto double-spend strategy, described in the foundational Bitcoin article, leads to total ruin with positive probability. The simplest strategy that avoids this risk incorporates a stopping threshold when success is unlikely. We compute the exact profitability and the minimal double spend that is profitable for this strategy. For a given transaction amount, we determine the minimal number of confirmations to be requested by the recipient that makes the double-spend strategy non-profitable. This number of confirmations is only 1 or 2 for average transactions and for a small relative hashrate of the attacker. This is substantially lower than the original Nakamoto number, which is about six confirmations and is widely used. Nakamoto's analysis is based only on the success probability of the attack, rather than on the profitability analysis that we carry out.
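For reference, the classical success probability that the paper argues is insufficient on its own can be computed directly from the formula in Section 11 of the Bitcoin paper; the sketch below does so (q is the attacker's relative hashrate, z the number of confirmations).

```python
from math import exp, factorial

def attack_success(q, z):
    """Nakamoto's probability that an attacker with relative hashrate q
    eventually catches up after the recipient waits z confirmations."""
    p = 1.0 - q
    lam = z * q / p
    s = 1.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam**k / factorial(k)
        s -= poisson * (1.0 - (q / p) ** (z - k))
    return s

for z in (1, 2, 6):
    print(z, round(attack_success(0.1, z), 7))
# z = 6 at q = 0.1 gives about 0.0002428, matching Nakamoto's table;
# the paper's point is that profitability, not this probability alone,
# should set the confirmation count.
```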
In this paper, a new type of biped mobile robot is designed. Each leg of the robot is a 6-degree-of-freedom (DOF) parallel mechanism, and each leg has three relatively fixed landing points. The leg’s structure simultaneously gives the robot a large carrying capacity, strong environmental adaptability, and a fast moving speed. At the same time, it helps the robot move more steadily and change direction more simply. Based on the structural features of the leg, the inverse kinematics model of the biped robot is established and a unified formula is obtained. According to an analysis of the robot’s workspace, gait planning is completed and simulated. Finally, the special case in which the robot keeps its upper body horizontal while walking on a sloped surface is validated.
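The paper's unified inverse-kinematics formula is not reproduced in the abstract; as a generic illustration, the sketch below gives the textbook inverse kinematics of a 6-DOF parallel (Stewart-type) mechanism, where each limb length follows from the commanded platform pose. The attachment points would come from the robot's actual geometry.

```python
import numpy as np

def leg_lengths(p, rpy, base_pts, plat_pts):
    """Inverse kinematics of a 6-DOF parallel mechanism:
    l_i = || p + R(rpy) @ b_i - a_i ||  for each limb i,
    with a_i on the base and b_i on the moving platform."""
    r, pitch, y = rpy
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(pitch), 0, np.sin(pitch)],
                   [0, 1, 0],
                   [-np.sin(pitch), 0, np.cos(pitch)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    R = Rz @ Ry @ Rx                      # Z-Y-X Euler convention
    return [np.linalg.norm(p + R @ b - a)
            for a, b in zip(base_pts, plat_pts)]

# Hypothetical geometry: three base/platform attachment points.
base = [np.array(v) for v in ([0.2, 0, 0], [-0.1, 0.17, 0], [-0.1, -0.17, 0])]
plat = [np.array(v) for v in ([0.1, 0, 0], [-0.05, 0.09, 0], [-0.05, -0.09, 0])]
print(leg_lengths(np.array([0, 0, 0.3]), (0.0, 0.05, 0.0), base, plat))
```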