Data is one of the most valuable resources of the twenty-first century, and property rights are a tried-and-tested legal response to regulating valuable assets. For non-personal, machine-generated data in an EU context, mainstream IP options are not available, although certain types of machine-generated data may be protected as trade secrets or under sui generis database protection. However, a new IP right is not needed. The previously proposed EU data producer’s right is a cautionary tale for jurisdictions considering a similar model: a new property right would both strengthen the position of de facto data holders and drive up costs. There are, however, valuable lessons to be learned from constructed commons models.
Whether AI should be given legal personhood should not be framed in binary terms. Instead, the issue should be analysed along a sliding-scale spectrum. One axis is the quantity and quality of the bundle of rights and obligations that legal personhood entails. The other is the degree to which an entity exhibits the characteristics that courts may consider when conferring legal personhood.
The conferral of personhood is a choice made by legal systems, but the fact that it can be done does not mean that it should be. Analogies drawn between AI systems and corporations are superficial and flawed. For instance, the demand for asset partitioning does not apply to AI systems in the way it does to corporations and may lead to moral hazards. Conferring personhood on AI systems would also need to be accompanied by governance structures equivalent to those that accompany corporate legal personhood. Further, the metaphorical ghost of data as property needs to be exorcised.
The venous blood test is a prevalent auxiliary medical diagnostic method. Venous blood collection equipment can improve the success rate and stability of blood collection, reduce the workload of medical staff, and improve the efficiency of diagnosis and treatment. This study proposed a rigid-flexible composite puncture (RFCP) strategy, based on which a small 7-degree-of-freedom (DOF) auxiliary venipuncture blood collection (VPBC) robot using a trocar needle was designed. The robot consists of a position and orientation adjustment mechanism and an RFCP end-effector, which can perform RFCP to avoid piercing the blood vessel’s lower wall during puncture. The inverse kinematics of the robot were solved and validated based on the differential evolution algorithm, after which the quintic polynomial interpolation algorithm was applied to achieve the robot’s trajectory planning control. Finally, the VPBC robot prototype was developed for experiments. The trajectory planning experiment verified the correctness of the inverse kinematics solution and trajectory planning, and the composite puncture blood collection experiment verified the feasibility of the RFCP strategy.
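The trajectory-planning step mentioned above can be illustrated with a generic quintic-interpolation sketch. This is not the authors' implementation; the joint values, boundary conditions, and duration below are hypothetical. A quintic polynomial q(t) is fixed by six boundary constraints on position, velocity, and acceleration:

```python
import numpy as np

def quintic_coeffs(q0, qf, v0=0.0, vf=0.0, a0=0.0, af=0.0, T=1.0):
    """Solve for the six coefficients of q(t) = sum(c_i * t**i) from
    boundary position, velocity, and acceleration constraints."""
    A = np.array([
        [1, 0, 0,    0,      0,       0],        # q(0)  = q0
        [0, 1, 0,    0,      0,       0],        # q'(0) = v0
        [0, 0, 2,    0,      0,       0],        # q''(0) = a0
        [1, T, T**2, T**3,   T**4,    T**5],     # q(T)  = qf
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],   # q'(T) = vf
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],  # q''(T) = af
    ], dtype=float)
    b = np.array([q0, v0, a0, qf, vf, af], dtype=float)
    return np.linalg.solve(A, b)

# Hypothetical example: move one joint from 0 to 30 deg in 2 s,
# starting and ending at rest.
c = quintic_coeffs(q0=0.0, qf=30.0, T=2.0)
t = np.linspace(0.0, 2.0, 5)
q = np.polyval(c[::-1], t)   # np.polyval wants highest-degree first
```

Because velocity and acceleration are zero at both endpoints, the joint starts and stops smoothly, which is the usual motivation for quintic rather than cubic interpolation.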
We present a data-driven emulator, a stochastic weather generator (SWG), suitable for estimating probabilities of prolonged heat waves in France and Scandinavia. This emulator is based on the method of analogs of circulation, to which we add temperature and soil moisture as predictor fields. We train the emulator on an intermediate-complexity climate model run and show that it is capable of predicting conditional probabilities (forecasting) of heat waves out of sample. Special attention is paid to evaluating this prediction with a proper score appropriate for rare events. To accelerate the computation of analogs, dimensionality reduction techniques are applied and their performance is evaluated. The probabilistic prediction achieved with the SWG is compared with that achieved with a convolutional neural network (CNN). With hundreds of years of training data available, CNNs perform better at the task of probabilistic prediction. In addition, we show that the SWG emulator trained on 80 years of data can estimate extreme return times on the order of thousands of years for heat waves longer than several days more precisely than a fit based on the generalized extreme value distribution. Finally, the quality of the synthetic extreme teleconnection patterns obtained with the SWG is studied. We showcase two examples of such synthetic teleconnection patterns for heat waves in France and Scandinavia that compare favorably to the very long climate model control run.
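The analog method at the heart of such an emulator can be sketched generically as a nearest-neighbor conditional-probability estimate. This is a toy illustration on synthetic data, not the authors' emulator; the predictor fields and heat-wave labels below are fabricated stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "training climate": each row is a flattened predictor state
# (circulation + temperature + soil moisture); the label says whether
# a prolonged heat wave followed that state.
X_train = rng.normal(size=(500, 20))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 1.0).astype(float)

def analog_probability(x, X, y, k=10):
    """Estimate P(heat wave | state x) as the fraction of heat-wave
    outcomes among the k nearest analog states (Euclidean distance)."""
    d = np.linalg.norm(X - x, axis=1)
    nearest = np.argsort(d)[:k]
    return y[nearest].mean()

p = analog_probability(X_train[0], X_train, y_train, k=10)
```

In practice the distance would be computed in a reduced space (hence the dimensionality reduction mentioned above), since nearest-neighbor search over full circulation fields is expensive.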
To address the small good workspace, numerous singular configurations, and limited carrying capacity of non-redundant parallel mechanisms, a fully redundantly driven parallel mechanism is designed and developed, and its performance evaluation, good-workspace identification, and scale optimization design are studied. First, the kinematics analysis of the planar 6R parallel mechanism is completed. Then, the motion/force transmission performance evaluation index of the mechanism is established, and its singularity analysis is completed. On this basis, the fully redundant driving mode of the mechanism is determined, and the good transmission workspace of the mechanism in this mode is identified. Next, the mapping relationship between the performance and scale of the mechanism is established using the space model theory, and the scale optimization of the mechanism is completed. Finally, a robot prototype is built at the optimal scale, and performance verification is carried out based on research into dynamics and control strategy. The results show that the fully redundantly actuated parallel mechanism obtained by design optimization has high precision and a large bearing capacity. The position repeatability and position accuracy are 0.053 mm and 0.635 mm, respectively, and the load-to-weight ratio reaches 15.83%. These results complement and improve the performance evaluation and scale optimization system of redundantly actuated parallel mechanisms.
In order to clarify and visualize the real state of the structural performance of ships in operation and establish a more optimal, data-driven framework for ship design, construction, and operation, an industry-academia joint R&D project on the digital twin for ship structures (DTSS) was conducted in Japan. This paper presents the major achievements of the project. The DTSS aims to capture the stress responses over the whole ship structure in waves by data assimilation that merges hull monitoring with numerical simulation. Three data assimilation methods, namely the wave spectrum method, the Kalman filter method, and the inverse finite element method, were used, and their effectiveness was examined through model-scale and full-scale ship measurements. Methods for predicting short-term extreme responses and long-term cumulative fatigue damage were developed for navigation and maintenance support using statistical approaches. In comparison with conventional approaches, response predictions were significantly improved by DTSS through the use of real response data in encountered waves. Utilization scenarios for DTSS in the maritime industry were presented from the viewpoints of navigation support, maintenance support, rule improvement, and product value improvement, together with future research needs for implementation in the maritime industry.
Despite the growing availability of sensing and data in general, we remain unable to fully characterize many in-service engineering systems and structures from a purely data-driven approach. The vast data and resources available to capture human activity are unmatched in our engineered world, and, even in cases where data could be referred to as “big,” they will rarely hold information across operational windows or life spans. This paper pursues the combination of machine learning technology and physics-based reasoning to enhance our ability to make predictive models with limited data. By explicitly linking the physics-based view of stochastic processes with a data-based regression approach, a derivation path for a spectrum of possible Gaussian process models is introduced and used to highlight how and where different levels of expert knowledge of a system are likely best exploited. Each of the models highlighted in the spectrum has been explored in different ways across communities; novel examples in a structural assessment context here demonstrate how these approaches can significantly reduce reliance on expensive data collection. The increased interpretability of the models shown is another important consideration and benefit in this context.
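One simple point on the spectrum described above is Gaussian process regression on the residual about a physics-derived mean function. The sketch below is a minimal illustration of that idea, assuming a hypothetical linear "physics" model and synthetic observations; it is not the paper's formulation:

```python
import numpy as np

def rbf(X1, X2, ls=1.0, var=1.0):
    """Squared-exponential kernel between two 1-D input sets."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls**2)

def physics_mean(x):
    # Hypothetical physics-based prior: a simple linear stiffness model.
    return 2.0 * x

# Synthetic observations deviating mildly from the physics model.
rng = np.random.default_rng(1)
X = np.linspace(0.0, 5.0, 20)
y = 2.0 * X + 0.3 * np.sin(3.0 * X) + 0.05 * rng.normal(size=X.size)

# GP regression on the residual y - m(x); predictions add the mean back,
# so far from the data the model falls back on the physics.
K = rbf(X, X) + 1e-4 * np.eye(X.size)          # jitter for stability
alpha = np.linalg.solve(K, y - physics_mean(X))
Xs = np.linspace(0.0, 5.0, 50)
y_pred = physics_mean(Xs) + rbf(Xs, X) @ alpha
```

The design choice this illustrates is exactly the one the abstract highlights: expert knowledge enters through the mean (and could also enter through the kernel), while the data correct only what the physics misses.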
The authors’ previous research has demonstrated that parallel mechanisms (PMs) with hybrid branch chains (i.e., branch chains containing planar or spatial loops) can possess symbolic forward position (SFP) solutions and motion decoupling (MD). To further study the conditions under which a three-chain, six-degrees-of-freedom (6-DOF) PM has SFP and MD, this paper proposes one 6-DOF branch chain A and two 5-DOF branch chains B and C. Based on these, a class of four 6-DOF PMs with three branch chains is devised. The symbolic position analysis of three of the four PMs is then performed, featuring partial MD and SFPs, which reveals that if the position or orientation of a point on the moving platform can be determined by the position of the hybrid branch chain, the PM exhibits partial MD and SFP. Finally, the accuracy of the symbolic forward and inverse solution algorithms is verified through numerical examples. This research brings new insight into the design and position analysis of 6-DOF PMs, particularly those with SFP and partial MD.
Information literacy research is growing in importance, as evidenced by the steady increase in dissertations and research papers in this area. However, significant theoretical gaps remain.
Information Literacy Through Theory provides an approachable introduction to theory development and use within information literacy research. It creates a space for key theorists in the field to discuss, interrogate and reflect on the applicability of theory within information literacy research, as well as the implications of this work within a variety of contexts. Each chapter considers a particular theory as its focal point, from information literacy and the social to information literacy through an equity mindset, and unpacks what assumptions the theory makes about key concepts and the ways in which the theory enables or constrains our understanding of information literacy.
This book will provide a focal point for researchers, practitioners and students interested in the creation and advancement of conceptually rich information literacy research and practice.
Emerging reinforcement learning algorithms that utilize human traits as part of their conceptual architecture have been demonstrated to encourage cooperation in social dilemmas when compared to their unaltered origins. In particular, the addition of a mood mechanism facilitates more cooperative behaviour in multi-agent iterated prisoner’s dilemma (IPD) games, in both static and dynamic network contexts. Mood-altered agents also exhibit humanlike behavioural trends when environmental aspects of the dilemma are altered, such as the structure of the payoff matrix used. It is possible that other environmental effects from both human and agent-based research will interact with moody structures in previously unstudied ways. As the literature on these interactions is currently sparse, we seek to expand on previous research by introducing two more environmental dimensions: voluntary interaction in dynamic networks, and stability of interaction through varied network restructuring. Starting from an Erdős–Rényi random network, we manipulate the structure of a network IPD according to existing methodology in human-based research to investigate possible replication of their findings. We also facilitated strategic selection of opponents through the introduction of two partner evaluation mechanisms and tested two selection thresholds for each. We found that even minimally strategic play termination in dynamic networks is enough to enhance cooperation above the static level, though the thresholds for these strategic decisions are critical to desired outcomes. More forgiving thresholds lead to better maintenance of cooperation between kinder strategies than stricter ones, despite overall cooperation levels being relatively low.
Additionally, moody reinforcement learning combined with certain play termination decision strategies can mimic trends in human cooperation affected by structural changes to the IPD played on dynamic networks, as can kind and simplistic strategies such as Tit-For-Tat. Implications of these results in comparison with human data are discussed, and suggestions for diversifying further testing are made.
This paper presents an algorithm for solving the inverse dynamics of a parallel manipulator (PM) with offset universal joints (RR-joints) via the Newton–Euler method. RR-joints increase joint stiffness and enlarge the workspace of the PM but introduce additional joint parameters and constraint torques, rendering the dynamics more complex. Unlike existing studies on PMs with RR-joints, which emphasize kinematics and joint performance, this paper studies the dynamic model. First, an iterative algorithm is established through a rigid-body velocity transformation, which calculates the input parameters of link velocity and acceleration. A linear system of equations in matrix form is then established for the entire PM through the Newton–Euler method. By using the generalized minimal residual method (GMRES) to solve the equation system, all the forces and torques on the joints can be obtained, from which the required actuation force can be derived. The method is validated through numerical simulations using automatic dynamic analysis of multibody systems software. The proposed method is suitable for establishing the dynamic model of complex PMs with redundant or hybrid structures.
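As a generic illustration of the final solution step, an assembled linear system A·w = b (where w stacks the unknown joint forces/torques) can be handed to an off-the-shelf GMRES routine. The matrix below is a random, well-conditioned stand-in, not an actual Newton–Euler system, and the problem size is hypothetical:

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Toy stand-in for the assembled Newton-Euler system A w = b.
rng = np.random.default_rng(2)
n = 12                                        # hypothetical number of unknowns
A = rng.normal(size=(n, n)) + n * np.eye(n)   # diagonally shifted => well-conditioned
b = rng.normal(size=n)

# GMRES iteratively minimizes the residual over a Krylov subspace;
# info == 0 signals convergence to the requested tolerance.
w, info = gmres(A, b, atol=1e-10)
residual = np.linalg.norm(A @ w - b)
```

For a small dense system a direct solve would do equally well; GMRES becomes attractive when the assembled system is large, sparse, or only available through matrix-vector products.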
Coverage path planning (CPP) is a subfield of path planning in which the free areas of a given domain must be visited by a robot at least once while avoiding obstacles. In some situations, the path may be optimized for one or more criteria, such as total distance traveled, number of turns, and total area covered by the robot. Accordingly, the CPP problem has been formulated as a multi-objective optimization (MOO) problem, which turns out to be a challenging discrete optimization problem, so conventional MOO algorithms such as the Non-dominated Sorting Genetic Algorithm II (NSGA-II) cannot be applied as-is. This study implements a modified NSGA-II to solve the MOO formulation of CPP for a mobile robot. The proposed method adopts two objective functions: (1) the total distance traveled by the robot and (2) the number of turns taken by the robot. The two objective functions are used to calculate energy consumption. The proposed method is compared with the hybrid genetic algorithm (HGA) and the traditional genetic algorithm (TGA) in a rectilinear environment containing obstacles of various complex shapes. In addition, the results of the proposed algorithm are compared with those generated by HGA, TGA, oriented rectilinear decomposition, and spatial cell diffusion and family of spanning tree coverage in existing research papers. All comparisons indicate that the proposed algorithm outperforms the existing algorithms, reducing energy consumption by 5 to 60%. The proposed system also provides the facility to operate the robot in different modes.
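The two objectives named above can be sketched for a candidate polyline path over grid cells. The per-unit-distance and per-turn energy costs below are hypothetical placeholders, not the paper's calibration:

```python
import numpy as np

def path_objectives(waypoints):
    """Two CPP objectives for a polyline path:
    (1) total distance traveled, (2) number of turns (heading changes).
    Heading wrap-around is ignored for simplicity."""
    pts = np.asarray(waypoints, dtype=float)
    segs = np.diff(pts, axis=0)
    distance = float(np.linalg.norm(segs, axis=1).sum())
    headings = np.arctan2(segs[:, 1], segs[:, 0])
    turns = int(np.count_nonzero(~np.isclose(np.diff(headings), 0.0)))
    return distance, turns

def energy(distance, turns, e_move=1.0, e_turn=2.0):
    # Hypothetical cost model: straight travel plus a fixed cost per turn.
    return e_move * distance + e_turn * turns

# Example: an L-shaped sweep with two 90-degree turns.
d, t = path_objectives([(0, 0), (4, 0), (4, 3), (0, 3)])
# d = 4 + 3 + 4 = 11.0, t = 2
```

In a multi-objective setting the pair (distance, turns) would be kept separate for non-dominated sorting; collapsing them into a single energy value, as here, turns the problem back into a scalar optimization.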
This article provides a structured description of openly available news topics and forecasts for armed conflict at the national and grid cell level starting January 2010. The news topics, as well as the forecasts, are updated monthly at conflictforecast.org and provide coverage for more than 170 countries and about 65,000 grid cells of size 55 × 55 km worldwide. The forecasts rely on natural language processing (NLP) and machine learning techniques to leverage a large corpus of newspaper text for predicting sudden onsets of violence in peaceful countries. Our goals are a) to support conflict prevention efforts by making our risk forecasts available to practitioners and research teams worldwide, b) to facilitate additional research that can utilize risk forecasts for causal identification, and c) to provide an overview of the news landscape.
Gradual typing integrates static and dynamic typing by introducing a dynamic type and a consistency relation. A problem of gradual type systems is that dynamic types can easily hide erroneous data flows since consistency relations are not transitive. Therefore, a more rigorous static check is required to reveal these hidden data flows statically. However, in order to preserve the expressiveness of gradually typed languages, static checks for gradually typed languages cannot simply reject programs with potentially erroneous data flows. By contrast, a more reasonable request is to show how these data flows can affect the execution of the program. In this paper, we propose and formalize Static Blame, a framework that can reveal hidden data flows for gradually typed programs and establish the correspondence between static-time data flows and runtime behavior. With this correspondence, we build a classification of potential errors detected from hidden data flows and formally characterize the possible impact of potential errors in each category on program execution, without simply rejecting the whole program. We implemented Static Blame on Grift, an academic gradually typed language, and evaluated the effectiveness of Static Blame by mutation analysis to verify our theoretical results. Our findings revealed that Static Blame exhibits a notable level of precision and recall in detecting type-related bugs. Furthermore, we conducted a manual classification to elucidate the reasons behind instances of failure. We also evaluated the performance of Static Blame, showing a quadratic growth in run time as program size increases.
We investigate whether ordinary quantification over objects is an extensional phenomenon, or rather creates non-extensional contexts; each claim having been propounded by prominent philosophers. It turns out that the question only makes sense relative to a background theory of syntax and semantics (here called a grammar) that goes well beyond the inductive definition of formulas and the recursive definition of satisfaction. Two schemas for building quantificational grammars are developed, one that invariably constructs extensional grammars (in which quantification, in particular, thus behaves extensionally) and another that only generates non-extensional grammars (and in which quantification is responsible for the failure of extensionality). We then ask whether there are reasons to favor one of these grammar schemas over the other, and examine an argument according to which the proper formalization of deictic utterances requires adoption of non-extensional grammars.
This paper explores citizens’ stances toward the use of artificial intelligence (AI) in public services in Norway. Utilizing a social contract perspective, the study analyzes the government–citizen relationship at macro, meso, and micro levels. A prototype of an AI-enabled public welfare service was designed and presented to 20 participants, who were interviewed to investigate their stances on the described AI use. We found a generally positive attitude and identified three factors contributing to it: (a) the high level of trust in government (macro level); (b) the balanced value proposition between individual and collective needs (meso level); and (c) the reassurance provided by having humans in the loop and by transparency into processes, data, and the model’s logic (micro level). The findings provide valuable insights into citizens’ stances toward socially responsible AI in public services. These insights can inform policy and guide the design and implementation of AI systems in the public sector by foregrounding the government–citizen relationship.
A deployable manipulator has the characteristics of a small installation space and a large workspace, which gives it great application prospects for small unmanned platforms. Most existing deployable manipulators are designed based on rigid links, and their complexity and mass inevitably increase sharply as the numbers of rigid links and joints grow. Inspired by the remarkable properties of tape springs, this paper proposes novel deployable parallel tape-spring manipulators with low mass, simple mechanics, and a high deployed-to-folded ratio. First, a double C-shaped tape spring is presented to improve the stability of the structure. The combined fixed drive component (CFDC) and combined mobile drive component (CMDC) are designed. Then, novel two-degrees-of-freedom (2-DOF) and 3-DOF deployable translational parallel manipulators are proposed based on the CFDC and CMDC, and their DOFs, kinematics, and stability are analyzed. The coiled tape spring is modeled as an Archimedean spiral, which significantly improves the accuracy of the kinematic analysis. The correction coefficient of the Euler formula is obtained by comparison with simulation and experimental results. Furthermore, the stability spaces of the 2-DOF and 3-DOF deployable parallel manipulators are given. Finally, a prototype is fabricated, and experiments are conducted to validate the proposed design and analysis.
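The Archimedean-spiral model of a coiled tape admits a simple arc-length computation, relating the coiled angle to the deployed tape length. The coil radius and tape thickness below are hypothetical values chosen for illustration:

```python
import numpy as np

def spiral_length(a, b, theta_max, n=100_000):
    """Arc length of the Archimedean spiral r(theta) = a + b*theta on
    [0, theta_max]: L = integral of sqrt(r^2 + b^2) d(theta),
    evaluated with the trapezoid rule."""
    theta = np.linspace(0.0, theta_max, n)
    f = np.sqrt((a + b * theta) ** 2 + b**2)
    return float(np.sum((f[1:] + f[:-1]) * np.diff(theta)) / 2.0)

# Hypothetical coil: inner radius 10 mm, tape thickness 0.2 mm per wrap,
# so the radius grows by one thickness per full turn (b = thickness / 2*pi).
b = 0.2 / (2 * np.pi)
L = spiral_length(a=10.0, b=b, theta_max=20 * np.pi)  # 10 coiled turns, in mm
```

Treating the coil as a stack of circles (radius = a, constant) would underestimate the stored length; the spiral model accounts for the radius growth wrap by wrap, which is the accuracy gain the abstract refers to.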
A survey of Hong Kong residents finds that public support for government technology, as understood through the concept of smart cities, is associated with concept-awareness and official communications. The statistical analysis identifies moderating effects attributable to personal social media use and controls for personal ideological views about scope of government intervention and perceived political legitimacy of smart city policies. The study builds on a growing body of empirical scholarship about public support for government technology, while also addressing a practical trend in urban governance: the growing sophistication of technologies like artificial intelligence and their use in strengthening government capacities. The Hong Kong case exemplifies ambitious investments in technology by governments and, at the time of the survey, relatively high freedom of political expression. The study’s findings help refine theories about state-society relations in the rapidly evolving context of technology for public sector use.