The generic multiverse was introduced in [74] and [81] to explicate the portion of mathematics which is immune to our independence techniques. It consists, roughly speaking, of all universes of sets obtainable from a given universe by forcing extension. Usuba recently showed that the generic multiverse contains a unique definable universe, assuming strong large cardinal hypotheses. On the basis of this theorem, a non-pluralist about set theory could dismiss the generic multiverse as irrelevant to what set theory is really about, namely that unique definable universe. Whatever one’s attitude towards the generic multiverse, we argue that certain impure proofs ensure its ongoing relevance to the foundations of set theory. The proofs use forcing-fragile theories and absoluteness to prove ${\mathrm {ZFC}}$ theorems about simple “concrete” objects.
This paper investigates a closed-loop visual servo control scheme for controlling the position of a fully constrained cable-driven parallel robot (CDPR) designed for functional rehabilitation tasks. The control system incorporates real-time position correction using an Intel RealSense camera. Our CDPR features four cables exiting from pulleys, driven by AC servomotors, to move the moving platform (MP). The focus of this work is the development of a control scheme for a closed-loop visual servoing system utilizing depth/RGB images. The developed algorithm uses these data to determine the actual Cartesian position of the MP, which is then compared to the desired position to calculate the required Cartesian displacement. This displacement is fed into the inverse kinematic model to generate the servomotor commands. Three types of trajectories (circular, square, and triangular) are used to test the controller’s position-tracking performance. Compared to the open-loop control of the robot, the new control system increases positional accuracy and effectively handles cable behavior, various perturbations, and modeling errors. The results show significant improvements in control performance, notably reduced root mean square and maximal position errors.
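To make the correction loop concrete, the sketch below shows one iteration of such a visual-servo step for a simplified point-mass cable model. The pulley coordinates, gain, and function names are illustrative assumptions; the paper’s actual kinematic model, camera interface, and servomotor API are not reproduced here.

```python
import numpy as np

# Hypothetical pulley exit-point coordinates (m) for a 4-cable CDPR; the real
# robot's geometry, camera interface, and drive commands are not given in the
# abstract, so everything named here is an illustrative assumption.
PULLEYS = np.array([[0.0, 0.0, 2.0],
                    [2.0, 0.0, 2.0],
                    [2.0, 2.0, 2.0],
                    [0.0, 2.0, 2.0]])

def inverse_kinematics(p):
    """Point-mass model: each cable length is the distance pulley -> MP."""
    return np.linalg.norm(PULLEYS - p, axis=1)

def visual_servo_step(p_desired, p_measured, gain=0.5):
    """One closed-loop correction from the camera-measured MP position."""
    error = p_desired - p_measured            # Cartesian position error
    p_command = p_measured + gain * error     # proportional correction
    lengths = inverse_kinematics(p_command)   # cable set-points for the servomotors
    return error, lengths

# Example: MP measured 2 cm off the desired position along x.
err, cable_lengths = visual_servo_step(np.array([1.0, 1.0, 1.0]),
                                       np.array([0.98, 1.0, 1.0]))
print(err, cable_lengths)
```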
The estimation of workspace for parallel kinematic machines (PKMs) typically relies on geometric considerations, which is suitable for PKMs operating under light load conditions. However, when subjected to heavy load, PKMs may experience significant deformation in certain postures, potentially compromising their stiffness. Additionally, heavy load conditions can impact motor loading performance, leading to inadequate motor loading in specific postures. Consequently, in addition to geometric constraints, the workspace of PKMs under heavy load is also constrained by mechanism deformation and motor loading performance.
This paper aims to develop a new heavy-load 6-PSS PKM for multi-degree-of-freedom forming processes. It also proposes a new method for estimating the workspace that takes into account both mechanism deformation and motor loading performance. Initially, the geometric workspace of the machine is predicted based on its geometric configuration. Subsequently, the workspace is predicted while considering the effects of mechanism deformation and motor loading performance separately. Finally, the workspace is synthesized by simultaneously accounting for both mechanism deformation and motor loading performance, and a new index, the workspace utilization rate, is proposed. The results indicate that the synthesized workspace of the machine diminishes as the load magnitude and load arm increase. Specifically, under a heavy load magnitude of 6000 kN and a load arm of 200 mm, the synthesized workspace amounts to only 9.9% of the geometric workspace.
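Read this way, the utilization-rate index appears to be the ratio of the synthesized workspace to the geometric workspace; the exact definition used in the paper is not shown here, so the following is only one reading of the reported figure:
\[
\eta \;=\; \frac{V_{\mathrm{syn}}}{V_{\mathrm{geo}}},
\qquad \eta \approx 0.099 \ \text{at a load of } 6000~\mathrm{kN} \text{ and a load arm of } 200~\mathrm{mm}.
\]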
Data is the foundation of any scientific, industrial, or commercial process. Its journey flows from collection to transport, storage, and processing. While best practices and regulations guide its management and protection, recent events have underscored their vulnerabilities. Academic research and commercial data handling have been marred by scandals, revealing the brittleness of data management. Data is susceptible to undue disclosures, leaks, losses, manipulation, or fabrication. These incidents often occur without visibility or accountability, necessitating a systematic structure for safe, honest, and auditable data management. We introduce the concept of Honest Computing as the practice and approach that emphasizes transparency, integrity, and ethical behaviour within the realm of computing and technology. It ensures that computer systems and software operate honestly and reliably without hidden agendas, biases, or unethical practices. It enables privacy and confidentiality of data and code by design and default. We also introduce a reference framework to achieve demonstrable data lineage and provenance, contrasting it with Secure Computing, a related but differently orientated form of computing. At its core, Honest Computing leverages Trustless Computing, Confidential Computing, Distributed Computing, Cryptography, and AAA security concepts. Honest Computing opens new ways of creating technology-based processes and workflows which permit the migration of regulatory frameworks for data protection from principle-based approaches to rule-based ones. Addressing use cases in many fields, from AI model protection and ethical layering to digital currency formation for finance and banking, trading, and healthcare, this foundational layer approach can help define new standards for appropriate data custody and processing.
The condition assessment of underground infrastructure (UI) is critical for maintaining the safety, functionality, and longevity of subsurface assets like tunnels and pipelines. This article reviews various data acquisition techniques, comparing their strengths and limitations in UI condition assessment. In collecting structured data, traditional methods like strain gauges can only obtain relatively low volumes of data due to low sampling frequency and manual data collection and transmission, whereas more advanced and automatic methods like distributed fiber optic sensing can gather relatively larger volumes of data due to automatic data collection, continuous sampling, or comprehensive monitoring. By comparison, unstructured data acquisition methods can provide more detailed visual information that complements structured data. Methods like closed-circuit television and unmanned aerial vehicles produce large volumes of data due to their continuous video recording and high-resolution imaging, posing great challenges to data storage, transmission, and processing, while ground penetrating radar and infrared thermography produce smaller volumes of image data that are more manageable. The acquisition of large volumes of UI data is the first step in its condition assessment. To enable more efficient, accurate, and reliable assessment, it is recommended to (1) integrate data analytics like artificial intelligence to automate the analysis and interpretation of collected data, (2) develop robust big data management platforms capable of handling large-volume data storage, processing, and analysis, (3) couple different data acquisition technologies to leverage the strengths of each technique, and (4) continuously improve data acquisition methods to ensure efficient and reliable data acquisition.
Due to their significant role in creative design ideation, databases of causal ontology-based models of biological and technical systems have been developed. However, creating structured database entries through system models using a causal ontology requires the time and effort of experts. Researchers have worked toward developing methods that can automatically generate representations of systems from documents using causal ontologies by leveraging machine learning (ML) techniques. However, these methods use limited, hand-annotated data for building the ML models and have manual touchpoints that are not documented. While opportunities exist to improve the accuracy of these ML models, it is more important to understand the complete process of generating structured representations using a causal ontology. This research proposes a new method and a set of rules to extract information relevant to the constructs of the SAPPhIRE model of causality from natural language descriptions of technical systems, and reports the performance of this process. The process aims to understand the information in the context of the entire description. The method starts by identifying the system interactions involving material, energy, and information and then builds the causal description of each system interaction using the SAPPhIRE ontology. The method was developed iteratively, with improvements verified through user trials in every cycle. User trials of the new method and rules with specialists and novice users of SAPPhIRE modeling showed that the method helps in accurately and consistently extracting the information relevant to the constructs of the SAPPhIRE model from a given natural language description.
The distinction between proofs that only certify the truth of their conclusion and those that also display the reasons why their conclusion holds has a long philosophical history. In the contemporary literature, the grounding relation—an objective, explanatory relation tightly connected with the notion of reason—is receiving considerable attention in several fields of philosophy. While much work is being devoted to characterising logical grounding in terms of deduction rules, no in-depth study focusing on the difference between grounding rules and logical rules exists. In this work, we analyse the relation between logical grounding and classical logic by focusing on the technical and conceptual differences that distinguish grounding rules from logical rules. The calculus employed to conduct the analysis moreover provides strong confirmation that grounding derivations are logical derivations of a certain kind, without trivialising the distinction between grounding and logical rules, or between the explanatory and non-explanatory parts of a derivation. By a further formal analysis, we negatively answer the question of a possible correspondence between grounding rules and intuitionistic logical rules.
Structural convergence is a framework for the convergence of graphs by Nešetřil and Ossona de Mendez that unifies the dense (left) graph convergence and Benjamini-Schramm convergence. They posed a problem asking whether for a given sequence of graphs $(G_n)$ converging to a limit $L$ and a vertex $r$ of $L$, it is possible to find a sequence of vertices $(r_n)$ such that $L$ rooted at $r$ is the limit of the graphs $G_n$ rooted at $r_n$. A counterexample was found by Christofides and Král’, but they showed that the statement holds for almost all vertices $r$ of $L$. We offer another perspective on the original problem by considering the size of definable sets to which the root $r$ belongs. We prove that if $r$ is an algebraic vertex (i.e. belongs to a finite definable set), the sequence of roots $(r_n)$ always exists.
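For readers unfamiliar with the framework, structural (first-order) convergence is usually phrased via the Stone pairing; the display below states the standard definition rather than anything specific to this paper’s proof. For a first-order formula $\phi$ with $p$ free variables and a finite graph $G$,
\[
\langle \phi, G\rangle \;=\; \frac{\bigl|\{(v_1,\dots,v_p)\in V(G)^p : G\models \phi(v_1,\dots,v_p)\}\bigr|}{|V(G)|^{p}},
\]
and a sequence $(G_n)$ converges if $\langle \phi, G_n\rangle$ converges for every such $\phi$; a vertex $r$ of the limit is algebraic when it lies in a finite set definable by a formula with one free variable.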
Designers rely on many methods and strategies to create innovative designs. However, design research often overlooks the personality and attitudinal factors influencing method utility and effectiveness. This article defines and operationalizes the construct of design mindset and introduces the Design Mindset Inventory (D-Mindset0.1), allowing us to measure it and leverage statistical analyses to advance our understanding of its role in design. The inventory’s validity and reliability are evaluated by analyzing a large sample of engineering students (N = 473). Using factor analysis, we identified four underlying factors of D-Mindset0.1 related to the theoretical concepts of Conversation with the Situation, Iteration, Co-Evolution of Problem–Solution, and Imagination. The latter part of the article finds statistically and theoretically meaningful relationships between design mindset and the three design-related constructs of sensation-seeking, self-efficacy, and ambiguity tolerance. Ambiguity tolerance and self-efficacy emerge as positively correlated with design mindset. Sensation-seeking is significantly correlated only with subconstructs of D-Mindset0.1, with both negative and positive correlations. These relationships lend validity to D-Mindset0.1 and, by drawing on previously established relationships between the three personality traits and specific behaviors, facilitate further investigation of what its subconstructs capture.
We consider the task completion time of a repairable server system in which the server experiences randomly occurring service interruptions during which it works slowly. Every service-state change preempts the task being processed. The server may then resume the interrupted task, replace the task with a different one, or restart the same task from the beginning under the new service state. The total time the server takes to complete a task of random size, including interruptions, is called the completion time. We study the completion time of a task under the last two cases as a function of the task size distribution, the service interruption frequency/severity, and the repair frequency. We derive closed-form expressions for the completion time distribution in the Laplace domain under the replace and restart recovery disciplines and present their asymptotic behavior. In general, heavy-tailed behavior of completion times arises from heavy-tailedness of the task time. However, under the preempt-restart discipline, even when the server continues to serve during interruptions albeit at a slower rate, completion times may exhibit power-tail behavior for task time distributions with exponential tails. Furthermore, we present an $M/G/\infty$ queue with exponential service time and Markovian service interruptions. Our results reveal that the stationary first-order moments, that is, the expected system time and the expected number in the system, are insensitive to the way the service modulation affects the servers: system-wide modulation affecting every server simultaneously versus identical modulation affecting each server independently.
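As a point of reference for the insensitivity claim (and only for the uninterrupted baseline, not the modulated model studied here), recall that in a standard $M/G/\infty$ queue with arrival rate $\lambda$ and generic service time $S$ the stationary number in system is Poisson with mean
\[
\mathbb{E}[N] \;=\; \lambda\, \mathbb{E}[S],
\]
which depends on the service distribution only through its mean.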
In this work, we consider extensions of the dual risk model with proportional gains by introducing dependence structures among gain sizes and gain interarrival times. Among other variants, we consider the case where the proportionality parameter is randomly chosen, the case where it is a uniformly distributed random variable, and the case where both upward and downward jumps may occur. Moreover, we consider the case with a causal dependence structure, as well as the case where the dependence is based on the generalized Farlie–Gumbel–Morgenstern copula. The ruin probability and the distribution of the time to ruin are investigated.
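For reference, the Farlie–Gumbel–Morgenstern family in its basic bivariate form is
\[
C_{\theta}(u,v) \;=\; uv\bigl[1+\theta(1-u)(1-v)\bigr], \qquad \theta\in[-1,1],
\]
which introduces a mild, analytically tractable dependence between gain sizes and interarrival times; the exact generalized parameterization used in the paper may differ.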
Experiments in engineering are typically conducted in controlled environments where parameters can be set to any desired value. This implicitly assumes that the same holds in a real-world setting, which is often incorrect, as many experiments are influenced by uncontrollable environmental conditions such as temperature, humidity, and wind speed. When optimizing such experiments, the focus should be on finding optimal values conditional on these uncontrollable variables. This article extends Bayesian optimization to the optimization of systems in changing environments that include both controllable and uncontrollable parameters. The extension fits a global surrogate model over all controllable and environmental variables but optimizes only the controllable parameters conditional on measurements of the uncontrollable variables. The method is validated on two synthetic test functions, and the effects of the noise level, the number of environmental parameters, the parameter fluctuation, the variability of the uncontrollable parameters, and the effective domain size are investigated. ENVBO, the algorithm proposed from this investigation, is applied to a wind farm simulator with eight controllable parameters and one environmental parameter. ENVBO finds solutions over the entire domain of the environmental variable that outperform results from optimization algorithms focusing on a single fixed environmental value in all but one case, while using only a fraction of their evaluation budget. This makes the proposed approach very sample-efficient and cost-effective. An off-the-shelf open-source version of ENVBO is available via the NUBO Python package.
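A minimal sketch of the conditional-optimization idea follows: a Gaussian-process surrogate is fitted over both controllable and environmental inputs, and only the controllable variable is optimized at the measured environmental value. The toy objective, kernel, and grid search are placeholders; the actual ENVBO implementation in NUBO uses its own acquisition function and optimizer.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy objective with one controllable variable x and one uncontrollable
# environmental variable e (both purely illustrative).
def objective(x, e):
    return -(x - 0.3 * e) ** 2 + 0.1 * np.sin(5 * x)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(30, 2))          # columns: [x, e]; e is only measured
y = objective(X[:, 0], X[:, 1])

# Global surrogate over controllable *and* environmental inputs.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)

def optimize_conditional(e_measured, n_grid=200):
    """Optimize only x, with e fixed at its measured value (plain
    posterior-mean maximization instead of an acquisition function)."""
    xs = np.linspace(0, 1, n_grid)
    cand = np.column_stack([xs, np.full(n_grid, e_measured)])
    mu = gp.predict(cand)
    return xs[np.argmax(mu)], mu.max()

print(optimize_conditional(e_measured=0.7))
```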
In an era of globalized research endeavors, the interplay between government funding programs, funding decisions, and their influence on successful research collaborations and grant application success rates has emerged as a critical focus of inquiry. This study embarks on an in-depth analysis of cross-country funding dynamics over the past three decades, with a specific emphasis on support for academic-industry collaboration versus sole academic or industry funding. Drawing insights from comprehensive datasets and policy trends, our research illuminates the evolving landscape of research funding and collaboration policies. We examine funding by Innosuisse (Swiss Innovation Project Funding) and SBIR (US Small Business Innovation Research), exploring the rates of future grant success for both academic and industry partners. We find strong evidence of a rich-get-richer phenomenon in the Innosuisse program for both academic and industry partners in terms of winning future grants. For SBIR, we find weaker levels of continued funding to the same partners, with most attaining at most a few grants. With the increasing prevalence of academic-industry collaborations among both funders, it is worth considering additional efforts to ensure that novel ideas and new individuals and teams are supported.
This article interrogates three claims made about the use of data in relation to peace: that more data, faster data, and impartial data will lead to better policy and practice outcomes. Taken together, this data myth relies on a lack of curiosity about the provenance of data and the infrastructure that produces it and asserts its legitimacy. Our discussion is concerned with issues of power, inclusion, and exclusion, and particularly with how knowledge hierarchies attend to the collection and use of data in conflict-affected contexts. We therefore question the axiomatic nature of these data myth claims and argue that the structure and dynamics of peacebuilding actors perpetuate the myth. We advocate a fuller reflection on the data wave that has overtaken us and echo calls for an ethics of numbers. In other words, this article is concerned with the evidence base for evidence-based peacebuilding. Mindful of the policy implications of our concerns, the article puts forward five tenets of good practice for data and the peacebuilding sector. The concluding discussion further considers the policy implications of the data myth in relation to peace, and particularly the consequences of casting peace and conflict as technical issues that can be “solved” without recourse to human and political factors.
This analysis provides a critical account of AI governance in the modern “smart city” through a feminist lens. Evaluating the case of Sidewalk Labs’ Quayside project—a smart city development that was to be implemented in Toronto, Canada—it is argued that public–private partnerships can create harmful impacts when corporate actors seek to establish new “rules of the game” regarding data regulation. While the Quayside project was eventually abandoned in 2020, it offers key observations on the state of urban algorithmic governance both within Canada and internationally. The article articulates the need for a revitalised and participatory smart city governance programme that prioritizes meaningful engagement in the form of transparency and accountability measures. Taking a feminist lens, it argues for a two-pronged approach to governance: integrating collective engagement from the outset of the design process and ensuring civilian data protection through a robust yet localized rights-based privacy regulation strategy. Engaging with feminist theories of intersectionality in relation to technology and data collection, this framework articulates the need to understand broader histories of social marginalization when implementing governance strategies for artificial intelligence in cities.
Artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFISs) are machine learning techniques that enable modeling and prediction of various properties in the milling process of alloy 2017A, including quality, cost, and energy consumption (QCE). To utilize ANNs or ANFIS for QCE prediction, researchers must gather a dataset of input–output pairs that establish the relationship between QCE and various input variables such as machining parameters, tool properties, and material characteristics. This dataset can then be employed to train a machine learning model using techniques like backpropagation or gradient descent. Once the model has been trained, predictions can be made on new input data by providing the desired input variables, yielding predicted QCE values as output. This study comprehensively examines and identifies the scientific contributions of strategies, machining sequences, and cutting parameters to surface quality, machining cost, and energy consumption using artificial intelligence (ANN and ANFIS). The findings indicate that the optimal neural architecture for ANNs, utilizing the Bayesian regularization (BR) algorithm, is a {3-10-3} architecture with an overall mean square error (MSE) of 2.74 × 10⁻³. Similarly, for ANFIS, the optimal structure yielding better error and correlation for the three output variables (Etot, Ctot, and Ra) is a {2, 2, 2} structure. The results demonstrate that using the BR algorithm with a multi-criteria output response yields favorable outcomes compared to ANFIS.
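As an illustration of the reported {3-10-3} topology, the sketch below builds a 3-input, 10-hidden-neuron, 3-output network on synthetic data. scikit-learn provides no Bayesian-regularization trainer, so L-BFGS with an L2 penalty stands in for the BR algorithm, and none of the numbers correspond to the study’s dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic placeholder data: 3 machining inputs (e.g. speed, feed, depth of
# cut) and 3 outputs standing in for Etot, Ctot, and Ra.
rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))
Y = np.column_stack([X.sum(axis=1),            # fake Etot
                     2 * X[:, 0] + X[:, 1],    # fake Ctot
                     0.5 + 0.1 * X[:, 2]])     # fake Ra

# {3-10-3}: one hidden layer of 10 neurons; L2 penalty (alpha) as a stand-in
# for Bayesian regularization, which scikit-learn does not implement.
model = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                     solver="lbfgs", alpha=1e-3, max_iter=5000, random_state=0)
model.fit(X, Y)
mse = np.mean((model.predict(X) - Y) ** 2)
print(f"training MSE: {mse:.2e}")
```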
The international community, and the UN in particular, is in urgent need of wise policies and a regulatory institution to put data-based systems, notably AI, to positive use and to guard against their abuse. Digital transformation and “artificial intelligence (AI)”—which can more adequately be called “data-based systems (DS)”—present ethical opportunities and risks. Helping humans and the planet to flourish sustainably in peace, and guaranteeing globally that human dignity is respected not only offline but also online, in the digital sphere and the domain of DS, requires two policy measures: (1) human rights-based data-based systems (HRBDS) and (2) an International Data-Based Systems Agency (IDA). The IDA should be established at the UN as a platform for cooperation in the field of digital transformation and DS, fostering human rights, security, and peaceful uses of DS.
Anticipating future migration trends is instrumental to the development of effective policies to manage the challenges and opportunities that arise from population movements. However, anticipation is challenging. Migration is a complex system, with multifaceted drivers, such as demographic structure, economic disparities, political instability, and climate change. Measurements encompass inherent uncertainties, and the majority of migration theories are either under-specified or hardly actionable. Moreover, approaches for forecasting generally target specific migration flows, and this poses challenges for generalisation.
In this paper, we present the results of a case study to predict Irregular Border Crossings (IBCs) through the Central Mediterranean Route and Asylum requests in Italy. We applied a set of Machine Learning techniques in combination with a suite of traditional data to forecast migration flows. We then applied an ensemble modelling approach for aggregating the results of the different Machine Learning models to improve the modelling prediction capacity.
Our results show the potential of this modelling architecture in producing forecasts of IBCs and Asylum requests over 6 months. The explained variance of our models on a validation set is as high as 80%. This study offers a robust basis for the construction of timely forecasts. In the discussion, we comment on how this approach could benefit migration management in the European Union at various levels of policy making.
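The ensemble approach described above can be sketched as follows: several regressors are trained on the same lagged predictors and their forecasts are averaged. The features, models, and simple unweighted average are illustrative assumptions, not the study’s actual covariates or weighting scheme.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for 10 years of monthly predictors and a flow series.
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 6))
y = X @ rng.normal(size=6) + rng.normal(scale=0.3, size=120)

# Hold out the last 24 months as a validation set (no shuffling for time order).
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=24, shuffle=False)

models = [Ridge(alpha=1.0),
          RandomForestRegressor(n_estimators=200, random_state=0),
          GradientBoostingRegressor(random_state=0)]
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_va) for m in models])
ensemble = preds.mean(axis=1)                  # simple unweighted average
print("explained variance (R^2):", round(r2_score(y_va, ensemble), 2))
```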
Public procurement is a fundamental aspect of public administration. Its vast size makes its oversight and control very challenging, especially in countries where resources for these activities are limited. To support decisions and operations at public procurement oversight agencies, we developed and delivered VigIA, a data-based tool with two main components: (i) machine learning models to detect inefficiencies measured as cost overruns and delivery delays, and (ii) risk indices to detect irregularities in the procurement process. These two components cover complementary aspects of the procurement process, considering both active and passive waste, and help the oversight agencies to prioritize investigations and allocate resources. We show how the models developed shed light on specific features of the contracts to be considered and how their values signal red flags. We also highlight how these values change when the analysis focuses on specific contract types or on information available for early detection. Moreover, the models and indices developed only make use of open data and target variables generated by the procurement processes themselves, making them ideal to support continuous decisions at overseeing agencies.
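A minimal sketch of the two components is given below: a classifier that scores contracts by their probability of cost overrun, and a simple rule-based risk index built from red flags. The column names, flags, and synthetic data are hypothetical and are not VigIA’s actual features or thresholds.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic open-data-like contract fields (all columns are placeholders).
rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "contract_value": rng.lognormal(11, 1, n),
    "n_bidders": rng.integers(1, 8, n),
    "planned_days": rng.integers(30, 720, n),
    "direct_award": rng.integers(0, 2, n),
})
# Synthetic overrun labels, loosely tied to direct awards for illustration.
overrun = ((df["direct_award"] == 1) & (rng.uniform(size=n) < 0.6)) | (rng.uniform(size=n) < 0.15)

# (i) Machine learning model scoring the probability of a cost overrun.
clf = GradientBoostingClassifier(random_state=0).fit(df, overrun)
df["overrun_prob"] = clf.predict_proba(df)[:, 1]

# (ii) Rule-based risk index: share of red flags raised (single bidder, direct award).
df["risk_index"] = ((df["n_bidders"] == 1).astype(int) + df["direct_award"]) / 2
print(df.sort_values("overrun_prob", ascending=False).head())
```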
We propose a physics-constrained convolutional neural network (PC-CNN) to solve two types of inverse problems in partial differential equations (PDEs), which are nonlinear and vary both in space and time. In the first inverse problem, we are given data that is offset by spatially varying systematic error (i.e., the bias, also known as the epistemic uncertainty). The task is to uncover the true state, which is the solution of the PDE, from the biased data. In the second inverse problem, we are given sparse information on the solution of a PDE. The task is to reconstruct the solution in space with high resolution. First, we present the PC-CNN, which constrains the PDE with a time-windowing scheme to handle sequential data. Second, we analyze the performance of the PC-CNN to uncover solutions from biased data. We analyze both linear and nonlinear convection-diffusion equations, and the Navier–Stokes equations, which govern the spatiotemporally chaotic dynamics of turbulent flows. We find that the PC-CNN correctly recovers the true solution for a variety of biases, which are parameterized as non-convex functions. Third, we analyze the performance of the PC-CNN for reconstructing solutions from sparse information for the turbulent flow. We reconstruct the spatiotemporal chaotic solution on a high-resolution grid from only 1% of the information contained in it. For both tasks, we further analyze the Navier–Stokes solutions. We find that the inferred solutions have a physical spectral energy content, whereas traditional methods, such as interpolation, do not. This work opens opportunities for solving inverse problems with partial differential equations.
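To illustrate the physics-constrained loss idea (not the paper’s PC-CNN itself), the sketch below combines a data-misfit term with a finite-difference residual of a 1D convection-diffusion equation over a short time window; the grid, coefficients, and weighting are assumptions, and the paper couples such a residual with a convolutional network and the Navier–Stokes equations.

```python
import numpy as np

def pde_residual(u, dx, dt, c=1.0, nu=0.01):
    """Residual of u_t + c u_x = nu u_xx on a periodic grid.
    u has shape (n_t, n_x); forward difference in time, central in space."""
    u_t = (u[1:, :] - u[:-1, :]) / dt
    u_x = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1))[:-1, :] / (2 * dx)
    u_xx = (np.roll(u, -1, axis=1) - 2 * u + np.roll(u, 1, axis=1))[:-1, :] / dx**2
    return u_t + c * u_x - nu * u_xx

def physics_constrained_loss(u_pred, u_data, dx, dt, weight=1.0):
    data_loss = np.mean((u_pred - u_data) ** 2)              # misfit to (possibly biased) data
    phys_loss = np.mean(pde_residual(u_pred, dx, dt) ** 2)   # PDE violation on the time window
    return data_loss + weight * phys_loss

# Toy check: a travelling-wave field against a slightly offset copy of itself.
nx, nt = 64, 8
dx, dt = 2 * np.pi / nx, 0.01
x = np.arange(nx) * dx
u = np.array([np.sin(x - k * dt) for k in range(nt)])
print(physics_constrained_loss(u, u + 0.05, dx, dt))
```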