Performance indexes are a powerful tool for evaluating the behavior of industrial manipulators throughout their workspace and for improving their performance. In intrinsically redundant manipulators, the additional joint influences performance; hence, it is fundamental to account for the redundant joint when evaluating a performance index. This work extends the formulation of the kinematic directional index (KDI) to redundant manipulators. The KDI represents an improvement over traditional indexes, as it takes the direction of motion into account when evaluating a manipulator's performance; however, in its current formulation it is not suitable for redundant manipulators. We therefore extend the index by adopting a geometric approach that identifies the appropriate redundancy to maximize the velocity of a serial manipulator along the direction of motion. The approach is applied to a 4-degree-of-freedom (DOF) planar redundant manipulator and to a 7-DOF spatial articulated one. Experimental validation on the articulated robot demonstrates the effectiveness of the proposed method and its advantages.
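As a rough illustration of a direction-dependent velocity measure (not the KDI formulation from the paper itself), the sketch below computes, for a hypothetical 4-DOF planar redundant arm, the radius of the velocity manipulability ellipsoid along a unit task-space direction, i.e. the end-effector speed achievable along that direction under a unit bound on the joint-velocity norm. The link lengths and joint angles are illustrative assumptions.

import numpy as np

def planar_jacobian(q, link_lengths):
    """Position Jacobian (2 x n) of a planar serial arm with joint angles q."""
    q = np.asarray(q, dtype=float)
    L = np.asarray(link_lengths, dtype=float)
    n = len(q)
    J = np.zeros((2, n))
    cum = np.cumsum(q)          # absolute link angles
    for i in range(n):
        # Joint i moves every link from i onward.
        J[0, i] = -np.sum(L[i:] * np.sin(cum[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(cum[i:]))
    return J

def speed_along_direction(J, u):
    """Radius of the velocity ellipsoid {J qdot : ||qdot|| <= 1} along unit direction u."""
    u = np.asarray(u, dtype=float)
    u = u / np.linalg.norm(u)
    M = J @ J.T                  # 2 x 2 task-space mobility matrix
    return 1.0 / np.sqrt(u @ np.linalg.pinv(M) @ u)

# Hypothetical 4-DOF planar arm (illustrative lengths and configuration).
q = [0.3, -0.5, 0.8, 0.2]
L = [0.4, 0.35, 0.3, 0.2]
J = planar_jacobian(q, L)
print(speed_along_direction(J, [1.0, 0.0]))   # achievable speed along +x
print(speed_along_direction(J, [0.0, 1.0]))   # achievable speed along +y

For a redundant arm the Jacobian has more columns than rows, so different self-motions change this directional measure; sweeping the redundant joint and picking the configuration that maximizes it mirrors, in spirit, the kind of redundancy selection described above.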
It is widely thought that chance should be understood in reductionist terms: claims about chance should be understood as claims that certain patterns of events are instantiated. There are many possible reductionist theories of chance, differing as to which possible pattern of events they take to be chance-making. It is also widely taken to be a norm of rationality that credence should defer to chance: special cases aside, rationality requires that one’s credence function, when conditionalized on the chance-making facts, should coincide with the objective chance function. It is a shortcoming of a theory of chance if it implies that this norm of rationality is unsatisfiable. The primary goal of this paper is to show, on the basis of considerations concerning computability and inductive learning, that this shortcoming is more common than one would have hoped.
Oceans will play a crucial role in our efforts to combat the growing climate emergency. Researchers have proposed several strategies to harness greener energy through oceans and use oceans as carbon sinks. However, the risks these strategies might pose to the ocean and marine ecosystem are not well understood. It is imperative that we quickly develop a range of tools to monitor ocean processes and marine ecosystems alongside the technology to deploy these solutions on a large scale into the oceans. Large arrays of inexpensive cameras placed deep underwater coupled with machine learning pipelines to automatically detect, classify, count, and estimate fish populations have the potential to continuously monitor marine ecosystems and help study the impacts of these solutions on the ocean. In this paper, we successfully demonstrate the application of YOLOv4 and YOLOv7 deep learning models to classify and detect six species of fish in a dark, artificially lit underwater video dataset captured 500 m below the surface, with a mAP of 76.01% and 85.0%, respectively. We show that 2,000 images for each of the six fish species are sufficient to train a machine-learning species classification model for this low-light environment. This research is a first step toward systems to autonomously monitor fish deep underwater while causing as little disruption as possible. As such, we discuss the advances that will be needed to apply such systems on a large scale and propose several avenues of research toward this goal.
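For readers unfamiliar with the mAP figures quoted above, the sketch below shows one common way of computing average precision for a single class from detections that have already been matched to ground truth (all-point interpolation over the precision-recall curve); the arrays and numbers are illustrative placeholders, not the paper's evaluation code.

import numpy as np

def average_precision(scores, is_true_positive, num_ground_truth):
    """All-point-interpolated AP for one class.

    scores           : confidence of each detection
    is_true_positive : 1 if the detection matched a ground-truth box (e.g. IoU >= 0.5), else 0
    num_ground_truth : number of ground-truth boxes for this class
    """
    order = np.argsort(scores)[::-1]                     # rank detections by confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(num_ground_truth, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-12)
    # Make precision monotonically non-increasing, then integrate over recall.
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    recall = np.concatenate(([0.0], recall))
    precision = np.concatenate(([precision[0]], precision))
    return np.sum(np.diff(recall) * precision[1:])

# Toy example for a single fish species (illustrative numbers only).
ap = average_precision(scores=[0.9, 0.8, 0.7, 0.6],
                       is_true_positive=[1, 1, 0, 1],
                       num_ground_truth=5)
print(ap)   # mAP is the mean of such per-class AP values over the six species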
This paper presents a comprehensive study of the forward and inverse kinematics of a six-degrees-of-freedom (DoF) spatial manipulator with a novel architecture. Developed by Systemantics India Pvt. Ltd., Bangalore, and designated as the H6A (i.e., Hybrid 6-Axis), this manipulator consists of two arm-like branches, which are attached to a rigid waist at the proximal end and are coupled together via a wrist assembly at the other. The kinematics of the manipulator are challenging due to the presence of two multi-DoF passive joints: a spherical joint in the right arm and a universal joint in the left. The forward kinematic problem has eight solutions, which are derived analytically in closed form. The inverse kinematic problem leads to $160$ solutions and involves the derivation of a $40$-degree polynomial equation, whose coefficients are obtained as closed-form symbolic expressions of the pose parameters of the end-effector, thus ensuring the generality of the results over all possible inputs. Furthermore, the analyses performed lead naturally to the conditions for various singularities involved, including certain non-trivial architecture singularities. The results are illustrated via numerical examples which are validated extensively.
We study the problem of finding the root vertex in large growing networks. We prove that it is possible to construct confidence sets of size independent of the number of vertices in the network that contain the root vertex with high probability in various models of random networks. The models include uniform random recursive DAGs and uniform Cooper–Frieze random graphs.
Set-based concurrent engineering (SBCE), a process that develops sets of many design candidates for each subproblem throughout a design project, promises several benefits over point-based processes, in which only one design candidate for each subproblem is chosen for further development. These benefits include reduced rework, improved design quality, and retention of knowledge for use in future projects. Previous studies that introduced SBCE in practice achieved success and had very positive future outlooks, but SBCE encounters opposition because its core procedures appear wasteful: designers must divide their time among many designs throughout the process, most of which are ultimately not used. The impacts of these procedures can be explored in detail through open-source computational tools, but currently few exist to do this. This work introduces the Point/Set-Organized Research Teams (PSORT) modeling platform to simulate and analyze a set-based design process. The approach is used to verify statements made about SBCE and investigate its effects on project quality. Such an SBCE platform enables process exploration without needing to commit many projects and resources to any given design.
To harness the promises of digital transformation, different players take different paths. Departing from corporate-driven (e.g., the United States) and state-led (e.g., China) approaches, the European Union states in various documents its goal of establishing a citizen-centric data ecosystem. However, the extent to which the envisioned digital single market can enable the creation of public value and empower citizens remains contentious. As an alternative, in this article we argue in favor of a fair data ecosystem, defined as an approach capable of representing and balancing the data interests of all actors while maintaining a collective outlook. We build such an ecosystem around data commons—as a third path alongside market and state approaches to the management of resources—coupled with open data (OD) frameworks and spatial data infrastructures (SDIs). Indeed, based on the literature, we claim that these three regimes complement each other, with OD and SDIs supplying the infrastructure and institutionalization that compensate for data commons' limited replicability and scalability. This creates the preconditions for designing the main roles, rules, and mechanisms of a data republic, as a possible enactment of a fair data ecosystem. While outlining its main traits here, we leave the testing of the data republic model open for further research.
Data sharing is a prerequisite for developing data-driven innovation and collaboration at the local scale. This paper aims to identify key lessons and recommendations for building trustworthy data governance at the local scale, encompassing the public and private sectors. Our research is based on the experience gained in Rennes Metropole since 2010 and focuses on two thematic use cases: culture and energy. For each one, we analyze how the power relations between actors and the local public authority shape the modalities of data sharing and exploitation. The paper elaborates on challenges and opportunities at the local level, set in perspective against the national and European frameworks.
We show that for a fixed $q$, the number of $q$-ary $t$-error correcting codes of length $n$ is at most $2^{(1 + o(1)) H_q(n,t)}$ for all $t \leq (1 - q^{-1})n - 2\sqrt{n \log n}$, where $H_q(n, t) = q^n/ V_q(n,t)$ is the Hamming bound and $V_q(n,t)$ is the cardinality of the radius $t$ Hamming ball. This proves a conjecture of Balogh, Treglown, and Wagner, who showed the result for $t = o(n^{1/3} (\log n)^{-2/3})$.
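For reference, $V_q(n,t)$ is the volume of the Hamming ball of radius $t$ in $\{0,\dots,q-1\}^n$, so the two quantities in the bound above are
\[
V_q(n,t)=\sum_{i=0}^{t}\binom{n}{i}(q-1)^{i},
\qquad
H_q(n,t)=\frac{q^{n}}{V_q(n,t)},
\]
the latter being the sphere-packing (Hamming) upper bound on the number of codewords of a $q$-ary code of length $n$ that corrects $t$ errors.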
The recent progress of deep learning techniques has produced models capable of achieving high scores on traditional Natural Language Inference (NLI) datasets. To understand the generalization limits of these powerful models, an increasing number of adversarial evaluation schemes have appeared. These works use a similar evaluation method: they construct a new NLI test set based on sentences with known logic and semantic properties (the adversarial set), train a model on a benchmark NLI dataset, and evaluate it on the new set. Poor performance on the adversarial set is identified as a model limitation. The problem with this evaluation procedure is that it may only indicate a sampling problem. A machine learning model can perform poorly on a new test set because the text patterns presented in the adversarial set are not well represented in the training sample. To address this problem, we present a new evaluation method, the Invariance under Equivalence test (IE test). The IE test trains a model with sufficient adversarial examples and checks the model's performance on two equivalent datasets. As a case study, we apply the IE test to state-of-the-art NLI models using synonym substitution as the form of adversarial examples. The experiment shows that, despite their high predictive power, these models usually produce different inference outputs for equivalent inputs, and, more importantly, this deficiency cannot be solved by adding adversarial observations to the training data.
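A minimal sketch of the invariance check described above (the function names, model interface, and equivalence-preserving transformation here are placeholders, not the authors' implementation): evaluate a trained NLI model on an original test set and on an equivalent, synonym-substituted copy, and report how often the predicted labels agree.

from typing import Callable, List, Tuple

Pair = Tuple[str, str]  # (premise, hypothesis)

def invariance_rate(predict: Callable[[List[Pair]], List[str]],
                    original: List[Pair],
                    equivalent: List[Pair]) -> float:
    """Fraction of examples whose predicted label is unchanged on the equivalent set."""
    assert len(original) == len(equivalent)
    labels_a = predict(original)
    labels_b = predict(equivalent)
    agree = sum(a == b for a, b in zip(labels_a, labels_b))
    return agree / len(original)

# Hypothetical usage: `nli_model.predict` stands for any trained NLI classifier, and
# `substitute_synonyms` stands for the meaning-preserving transformation used to build
# the equivalent dataset.
# rate = invariance_rate(nli_model.predict, test_pairs,
#                        [(substitute_synonyms(p), substitute_synonyms(h)) for p, h in test_pairs])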
We extend a recent argument of Kahn, Narayanan and Park ((2021) Proceedings of the AMS 149 3201–3208) about the threshold for the appearance of the square of a Hamilton cycle to other spanning structures. In particular, for any spanning graph, we give a sufficient condition under which we may determine its threshold. As an application, we find the threshold for a set of cyclically ordered copies of $C_4$ that span the entire vertex set, so that any two consecutive copies overlap in exactly one edge and all overlapping edges are disjoint. This answers a question of Frieze. We also determine the threshold for edge-overlapping spanning $K_r$-cycles.
Maritime engineering relies on model forecasts for many different processes, including meteorological and oceanographic forcings, structural responses, and energy demands. Understanding the performance and evaluation of such forecasting models is crucial in instilling reliability in maritime operations. Evaluation metrics that assess the point accuracy of the forecast (such as root-mean-squared error) are commonplace, but with the increased uptake of probabilistic forecasting methods such evaluation metrics may not consider the full forecasting distribution. The statistical theory of proper scoring rules provides a framework in which to score and compare competing probabilistic forecasts, but it is seldom appealed to in applications. This translational paper presents the underlying theory and principles of proper scoring rules, develops a simple panel of rules that may be used to robustly evaluate the performance of competing probabilistic forecasts, and demonstrates this with an application to forecasting surface winds at an asset on Australia's North West Shelf. Where appropriate, we relate the statistical theory to common requirements of the maritime engineering industry. The case study is from a body of work that was undertaken to quantify the value resulting from an operational forecasting product and is a clear demonstration of the downstream impacts that statistical and data science methods can have in maritime engineering operations.
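As a concrete example of the kind of proper scoring rule discussed above, the sketch below evaluates a Gaussian probabilistic forecast with the logarithmic score and the continuous ranked probability score (CRPS), using the standard closed form of the CRPS for a normal predictive distribution; the forecast parameters and observation are illustrative values, not the case-study data.

import numpy as np
from scipy.stats import norm

def log_score(mu, sigma, y):
    """Negative log predictive density of observation y under N(mu, sigma^2); lower is better."""
    return -norm.logpdf(y, loc=mu, scale=sigma)

def crps_gaussian(mu, sigma, y):
    """CRPS of observation y under a N(mu, sigma^2) forecast (closed form); lower is better."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0) + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

# Two competing surface-wind forecasts for the same observation (illustrative values, m/s).
y_obs = 12.3
for name, mu, sigma in [("forecast A", 11.5, 1.0), ("forecast B", 12.0, 3.0)]:
    print(name,
          "log score:", round(log_score(mu, sigma, y_obs), 3),
          "CRPS:", round(crps_gaussian(mu, sigma, y_obs), 3))

Because both rules are proper, a forecaster minimizes their expected score only by reporting their true predictive distribution, which is what makes such a panel suitable for comparing competing probabilistic forecasts.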
We prove that if a unimodular random graph is almost surely planar and has finite expected degree, then it has a combinatorial embedding into the plane which is also unimodular. This implies the claim in the title immediately by a theorem of Angel, Hutchcroft, Nachmias and Ray [2]. Our unimodular embedding also implies that all the dichotomy results of [2] about unimodular maps extend in the one-ended case to unimodular random planar graphs.
Big data and algorithmic decision-making have been touted as game-changing developments in management research, but they have their limitations. Qualitative approaches should not be cast aside in the age of digitalisation, since they facilitate understanding of quantitative data and the questioning of assumptions and conclusions that may otherwise lead to faulty implications being drawn, and - crucially - inaccurate strategies, decisions and actions. This handbook comprises three parts: Part I highlights many of the issues associated with 'unthinking digitalisation', particularly concerning the overreliance on algorithmic decision-making and the consequent need for qualitative research. Part II provides examples of the various qualitative methods that can be usefully employed in researching various digital phenomena and issues. Part III introduces a range of emergent issues concerning practice, knowing, datafication, technology design and implementation, data reliance and algorithms, digitalisation.
Emphasizing the creative nature of mathematics, this conversational textbook guides students through the process of discovering a proof. The material revolves around possible strategies to approaching a problem without classifying 'types of proofs' or providing proof templates. Instead, it helps students develop the thinking skills needed to tackle mathematics when there is no clear algorithm or recipe to follow. Beginning by discussing familiar and fundamental topics from a more theoretical perspective, the book moves on to inequalities, induction, relations, cardinality, and elementary number theory. The final supplementary chapters allow students to apply these strategies to the topics they will learn in future courses. With its focus on 'doing mathematics' through 200 worked examples, over 370 problems, illustrations, discussions, and minimal prerequisites, this course will be indispensable to first- and second-year students in mathematics, statistics, and computer science. Instructor resources include solutions to select problems.
There are many textbooks on algorithms focusing on big-O notation and basic design principles. This book offers a unique approach to taking the design and analyses to the level of predictable practical efficiency, discussing core and classic algorithmic problems that arise in the development of big data applications, and presenting elegant solutions of increasing sophistication and efficiency. Solutions are analyzed within the classic RAM model, and the more practically significant external-memory model that allows one to perform I/O-complexity evaluations. Chapters cover various data types, including integers, strings, trees, and graphs, algorithmic tools such as sampling, sorting, data compression, and searching in dictionaries and texts, and lastly, recent developments regarding compressed data structures. Algorithmic solutions are accompanied by detailed pseudocode and many running examples, thus enriching the toolboxes of students, researchers, and professionals interested in effective and efficient processing of big data.