Datafication—the increase in data generation and advancements in data analysis—offers new possibilities for governing and tackling worldwide challenges such as climate change. However, employing data in policymaking carries various risks, such as exacerbating inequalities, introducing biases, and creating gaps in access. This paper articulates 10 core tensions related to climate data and its implications for climate data governance, ranging from the diversity of data sources and stakeholders to issues of quality, access, and the balancing act between local needs and global imperatives. Through examining these tensions, the article advocates for a paradigm shift towards multi-stakeholder governance, data stewardship, and equitable data practices to harness the potential of climate data for the public good. It underscores the critical role of data stewards in navigating these challenges, fostering a responsible data ecology, and ultimately contributing to a more sustainable and just approach to climate action and broader social issues.
As the field of migration studies evolves in the digital age, big data analytics emerge as a potential game-changer, promising unprecedented granularity, timeliness, and dynamism in understanding migration patterns. However, the epistemic value added by this data explosion remains an open question. This paper critically appraises the claim, investigating the extent to which big data augments, rather than merely replicates, traditional data insights in migration studies. Through a rigorous literature review of empirical research, complemented by a conceptual analysis, we aim to map out the methodological shifts and intellectual advancements brought forth by big data. The potential scientific impact of this study extends into the heart of the discipline, providing critical illumination on the actual knowledge contribution of big data to migration studies. This, in turn, delivers a clarified roadmap for navigating the intersections of data science, migration research, and policymaking.
Objective: The study aims to build a comprehensive network structure of psychopathology based on patient narratives by combining the merits of both qualitative and quantitative research methodologies. Research methods: The study web-scraped data from 10,933 people who disclosed a prior DSM- or ICD-11-diagnosed mental illness when discussing their lived experiences of mental ill health. The study then used Python 3 and its associated libraries to run network analyses and generate a network graph. Key findings: The results of the study revealed 672 unique experiences or symptoms that generated 30,023 links or connections. The study also identified that, of all 672 reported experiences/symptoms, five were deemed the most influential: “anxiety,” “fear,” “auditory hallucinations,” “sadness,” and “depressed mood and loss of interest.” Additionally, the study uncovered some unusual connections between the reported experiences/symptoms. Discussion and recommendations: The study demonstrates that applying a quantitative analytical framework to qualitative data at scale is a useful approach for understanding the nuances of psychopathological experiences that may be missed in studies relying solely on either a qualitative or a quantitative survey-based approach. The study discusses the clinical implications of its results and makes recommendations for potential future directions.
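The network construction the abstract describes can be sketched in a few lines of Python. The narratives and symptom labels below are invented toy data, not the study's dataset, and weighted degree stands in for the study's (unspecified) influence measure:

```python
from collections import Counter
from itertools import combinations

# Toy narratives: each entry lists the symptoms one person reported.
# These five records are invented illustrations, not the study's data.
narratives = [
    ["anxiety", "fear", "sadness"],
    ["anxiety", "auditory hallucinations"],
    ["sadness", "depressed mood", "anxiety"],
    ["fear", "auditory hallucinations", "anxiety"],
    ["depressed mood", "sadness"],
]

# Build a weighted co-occurrence network: an edge links two symptoms
# whenever they appear in the same narrative.
edges = Counter()
for symptoms in narratives:
    for a, b in combinations(sorted(set(symptoms)), 2):
        edges[(a, b)] += 1

# Weighted degree (strength) as a simple influence measure per symptom.
strength = Counter()
for (a, b), w in edges.items():
    strength[a] += w
    strength[b] += w

ranking = [s for s, _ in strength.most_common()]
print(ranking[0])  # the most connected symptom in the toy data
```

A full analysis would use a dedicated graph library and richer centrality measures; this sketch only illustrates how narrative co-occurrences become a weighted symptom network.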
In this paper, we introduce the concept of a multilayer network game in a cooperative setup. We consider the notion of simultaneous contribution of individual players or links to two different networks (say, X and Z). Our model nests both classical network games and bi-cooperative network games. The calculation of the utility of players within a specific network in the presence of an additional/alternative network provides a broader spectrum of real-world decision dynamics. The subsequent challenge involves achieving an optimal distribution of payoffs among the players forming the networks. A link-based rule best fits our model, as it captures the influence of the alternative links in the network. We design an extended Position value to address the complexities arising from scenarios where networks overlap. Further, it is shown that the Position value is uniquely characterized by the Efficiency and Balanced Link Contribution axioms.
This study aims to explore the dependencies in the cryptocurrency market using social network tools. We focus on the correlations observed in cryptocurrency returns. Based on a sample of cryptocurrencies listed between January 2015 and December 2022, we examine which cryptocurrencies are central to the overall market and how often the major players change. Static network analysis based on the whole sample shows that the network consists of several communities that are strongly connected and central, as well as a few that are disconnected and peripheral. Such a structure implies high systemic risk. Day-by-day snapshots show that the network evolves rapidly. We construct a ranking of major cryptocurrencies based on centrality measures, utilizing the TOPSIS method. We find that when single measures are considered, Bitcoin seems to have lost its first-mover advantage in late 2016. However, in the overall ranking, it still appears among the top positions. The collapse of any of the cryptocurrencies at the top of the rankings poses a serious threat to the entire market.
High utility itemset mining (HUIM) is an important sub-field of frequent itemset mining (FIM). Recently, HUIM has received much attention in the field of data mining. High utility itemsets (HUIs) have proven to be quite useful in marketing, retail marketing, cross-marketing, and e-commerce. Traditional HUIM approaches suffer from a drawback as they need a user-defined minimum utility ($ min\_util $) threshold. It is not easy for users to set an appropriate $ min\_util $ threshold to find actionable HUIs. To address this drawback, top-k HUIM has been introduced. Top-k HUIM is more suitable for supermarket managers and retailers preparing appropriate strategies to generate higher profit. In this paper, we provide an in-depth survey of the current status of top-k HUIM approaches. The paper presents the task of top-k HUIM and its relevant definitions. It reviews the top-k HUIM approaches and presents their advantages and disadvantages. The paper also discusses the important strategies of top-k HUIM, their variations, and research opportunities. The paper provides a detailed summary, analysis, and future directions of the top-k HUIM field.
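To make the task concrete, the following brute-force sketch computes itemset utilities over a toy transaction database and keeps the k highest; real top-k HUIM algorithms replace this exhaustive enumeration with the pruning strategies and rising thresholds that such a survey reviews. All items, quantities, and profits are invented:

```python
from itertools import combinations

# Toy transaction database: {item: purchase quantity}; unit profits below.
# All items and numbers are invented for illustration.
transactions = [
    {"a": 2, "b": 1},
    {"a": 1, "c": 3},
    {"a": 2, "b": 2, "c": 1},
]
profit = {"a": 5, "b": 3, "c": 1}

def top_k_huis(transactions, profit, k):
    """Exhaustively score every itemset; keep the k highest-utility ones.

    The utility of an itemset in a transaction is the sum of
    quantity * unit profit over its items; total utility sums over all
    transactions that contain the itemset.
    """
    utility = {}
    for t in transactions:
        items = sorted(t)
        for r in range(1, len(items) + 1):
            for combo in combinations(items, r):
                u = sum(t[i] * profit[i] for i in combo)
                utility[combo] = utility.get(combo, 0) + u
    return sorted(utility.items(), key=lambda kv: -kv[1])[:k]

result = top_k_huis(transactions, profit, 3)
print(result)
```

Note that the top-utility itemset here is {a, b}, not the single most profitable item, which is exactly why itemset-level mining is useful for cross-marketing.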
We study the problem of fitting a piecewise affine (PWA) function to input–output data. Our algorithm divides the input domain into finitely many regions whose shapes are specified by a user-provided template and such that the input–output data in each region are fit by an affine function within a user-provided error tolerance. We first prove that this problem is NP-hard. Then, we present a top-down algorithmic approach for solving the problem. The algorithm considers subsets of the data points in a systematic manner, trying to fit an affine function for each subset using linear regression. If regression fails on a subset, the algorithm extracts a minimal set of points from the subset (an unsatisfiable core) that is responsible for the failure. The identified core is then used to split the current subset into smaller ones. By combining this top-down scheme with a set-covering algorithm, we derive an overall approach that provides optimal PWA models for a given error tolerance, where optimality refers to minimizing the number of pieces of the PWA model. We demonstrate our approach on three numerical examples that include PWA approximations of a widely used nonlinear insulin–glucose regulation model and a double inverted pendulum with soft contacts.
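The core loop of such an approach, attempting an affine fit on a subset and splitting when the error tolerance is violated, can be sketched for one-dimensional data. The absolute-value data and tolerance below are illustrative; a real implementation would use template-shaped regions and unsatisfiable-core extraction rather than a fixed split:

```python
def affine_fit(points):
    """Least-squares fit y = a*x + b to 1-D input-output pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def fits_within(points, eps):
    """Can one affine piece explain all points to tolerance eps?"""
    a, b = affine_fit(points)
    return all(abs(a * x + b - y) <= eps for x, y in points)

# Toy data from y = |x|: one affine piece cannot fit it tightly,
# but each half (x <= 0 and x >= 0) can -- the idea behind splitting
# a subset when regression fails on it.
data = [(x, abs(x)) for x in range(-3, 4)]
print(fits_within(data, 0.1))                            # single piece fails
print(fits_within([p for p in data if p[0] <= 0], 0.1))  # left piece fits
```

The paper's algorithm decides *where* to split by extracting a minimal infeasible subset of points instead of guessing, and then covers the regions with as few pieces as possible.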
Continuum robot-based surgical systems are becoming an effective tool for minimally invasive surgery. A flexible, dexterous, and compact robot structure is suitable for carrying out complex surgical operations. In this paper, we propose performance metrics for dexterity based on data density. Data density at a point in the workspace is higher when more reachable points, each with a unique configuration, lie within a small square box around that point. The computation of these metrics is performed with forward kinematics using the Monte Carlo method and, hence, is computationally efficient. The data density at a particular point is a measure of dexterity at that point, whereas the dexterity distribution property index is a measure of how well dexterity is distributed across the workspace according to desired criteria. We compare the dexterity distribution property index across the workspace with the dexterity index based on the dexterous solid angle and a manipulability-based approach. A comparative study reveals that the proposed method is simple and straightforward because it uses only the position of the reachable point as the input parameter. The method can quantify and compare the performance of different geometric designs of hyper-redundant and multisegment continuum robots based on dexterity.
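The Monte Carlo flavor of the data-density computation can be sketched with a planar two-link arm standing in for a continuum robot segment chain; the link lengths, sample count, and box size are arbitrary illustrative choices:

```python
import math
import random

random.seed(0)

# Planar two-link arm as a stand-in for a continuum robot segment chain;
# link lengths and the counting-box size are illustrative choices.
L1, L2 = 1.0, 1.0
BOX = 0.25  # side length of the square counting box

counts = {}
for _ in range(20000):
    # Sample a random configuration.
    q1 = random.uniform(-math.pi, math.pi)
    q2 = random.uniform(-math.pi, math.pi)
    # Forward kinematics: position of the tip.
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    # Data density: count reachable samples per square box.
    cell = (math.floor(x / BOX), math.floor(y / BOX))
    counts[cell] = counts.get(cell, 0) + 1

# Compare density near the workspace center vs. beyond its boundary.
dense = counts.get((0, 0), 0)
sparse = counts.get((7, 7), 0)  # outside the reachable radius of 2.0
print(dense, sparse)
```

Because only tip positions are needed, the same counting scheme extends directly to hyper-redundant and multisegment kinematic models.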
In this paper, we introduce a new class of $T_0$ spaces called wb-sober spaces, which is strictly larger than the class of open well-filtered spaces. Unlike open well-filtered spaces, wb-sober spaces are defined more intuitively by requiring certain special subsets, termed wb-irreducible closed sets, to have singleton closures. We establish several key results about these spaces, including (1) every open well-filtered space is wb-sober, but not vice versa; (2) every strongly core-coherent wb-sober space is open well-filtered; (3) a space is core-compact iff its irreducible closed sets are wb-irreducible, providing a characterization of core-compactness; (4) every core-compact wb-sober space is sober, thereby generalizing the Jia-Jung problem. In addition, we investigate the core-coherence of the Xi-Zhao model. We prove that a $T_1$ space contains a finite number of isolated points iff its Xi-Zhao model is core-coherent iff its Xi-Zhao model is strongly core-coherent. Based on this result, we then propose a general approach to constructing a non-routine open well-filtered but not well-filtered dcpo.
This paper proposes a seat support mechanism to solve the problem of misalignment between the chest examination and probe scanning areas when the patient is bent over in a seated-style echocardiography robot. To guide the patient to an appropriate body position in which their chest is within the examination range of the chest examination unit while minimizing the patient's physical load, the posture of the patient must satisfy the following three conditions: (i) the breech must be in contact with the seat surface, (ii) the legs must be perpendicular to the floor, and (iii) the chest and mechanism must be parallel while the probe scanning and chest examination ranges match. The human body was modeled to derive a posture that satisfies the aforementioned conditions for the height of each individual, and a seat support mechanism with four degrees of freedom was installed to guide the user to the derived posture. With this mechanism, the body load of the left biceps brachii, right biceps brachii, left latissimus dorsi, and right latissimus dorsi was reduced to 64.7%, 52.7%, 86.4%, and 80.2%, respectively, and the sharpness of the image contours was improved to 103.8%.
A framework with sets of attacking arguments ($\textit{SETAF}$) is an extension of the well-known Dung’s Abstract Argumentation Frameworks ($\mathit{AAF}$s) that allows joint attacks on arguments. In this paper, we provide a translation from Normal Logic Programs ($\textit{NLP}$s) to $\textit{SETAF}$s and vice versa, from $\textit{SETAF}$s to $\textit{NLP}$s. We show that there is pairwise equivalence between their semantics, including the equivalence between $L$-stable and semi-stable semantics. Furthermore, for a class of $\textit{NLP}$s called Redundancy-Free Atomic Logic Programs ($\textit{RFALP}$s), there is also a structural equivalence as these back-and-forth translations are each other’s inverse. Then, we show that $\textit{RFALP}$s are as expressive as $\textit{NLP}$s by transforming any $\textit{NLP}$ into an equivalent $\textit{RFALP}$ through a series of program transformations already known in the literature. We also show that these program transformations are confluent, meaning that every $\textit{NLP}$ will be transformed into a unique $\textit{RFALP}$. The results presented in this paper enhance our understanding that $\textit{NLP}$s and $\textit{SETAF}$s are essentially the same formalism.
This paper explains how the geometric notions of local contractibility and properness are related to the $\Sigma$-type and $\Pi$-type constructors of dependent type theory. We shall see how every Grothendieck fibration comes canonically with such a pair of notions—called smooth and proper maps—and how this recovers the previous examples and many more. This paper uses category theory to reveal a common structure between geometry and logic, with the hope that the parallel will be beneficial to both fields. The style is mostly expository, and the main results are proved in external references.
This study used a mixed-methods approach to evaluate the efficacy of mobile-assisted language learning (MALL) in teaching English phrasal verbs (PVs) in a 12-week study. The participants were 122 EFL college students divided equally into an experimental and a control group. The experimental group was assigned PV learning on an iOS-based application (henceforth referred to as “app”) for eight weeks; the control group learned the same PVs through paper-based material. Pre-tests, post-tests, and weekly class tests were conducted, and one-way ANOVAs were performed to evaluate the differences between the two groups using their pre-test and post-test scores, with repeated measures ANOVA used to analyse the learning gains in weekly tests. The results revealed that the experimental group significantly outperformed the control group in the post-test (F = 6.09, p = .015, Cohen’s d = 0.45) and weekly tests (F = 31.68, p < .001). A Likert-scale-based e-questionnaire consisting of 19 items was administered to the experimental group to obtain their perceptions of the app’s usefulness for learning English PVs. The overall results suggest that MALL, particularly with this specific mobile app, may enhance students’ ability to understand and use English PVs, a key aspect of vocabulary skills. The findings can be used to encourage instructors to employ MALL for teaching the English lexicon for better learning outcomes in EFL settings.
This study investigated how multimedia glossing affects incidental vocabulary learning from a listening task on mobile devices. A total of 118 English language learners were asked to listen to a story with 25 glossed target words on their mobile phones. In order to examine the effects of different types of glossing, participants were divided into four groups with access to four glosses during their listening: L1 textual, L2 textual, L1 textual and pictorial, and L2 textual and pictorial. Two vocabulary tests (i.e. a definition-supply test and a meaning-recognition test) were administered immediately after treatment and two weeks later to measure vocabulary gain for target words. The results indicated that participants who had access to L1 textual and pictorial glosses had significantly higher vocabulary gains than those in the other conditions, especially in meaning-recall word knowledge. Finally, a detailed discussion of the findings was provided to explain the results based on the theoretical framework of the study.
This paper presents an eight-wire-driven parallel robot (WDPR-8) designed to serve as a suspension manipulator for aircraft models during wind tunnel testing. The precision of these tests is significantly influenced by the system’s stability and workspace, both of which are shaped by the geometric configuration of the structure and the tension in the wires. To derive efficient design principles for the model suspension scheme, a kinematics model of the WDPR-8 was established. Based on this kinematics model, the stiffness of the WDPR-8 was studied theoretically, and an analytical expression for the stiffness matrix of the WDPR was deduced. The stiffness matrix comprises two terms: one determined by the configuration of the suspension system and the other by the wire tension. Based on this analysis, a suspension scheme was discussed using the stiffness matrix calculation and a workspace analysis. In this discussion, in addition to maximizing stiffness, a force-closure criterion is presented, which is useful for increasing both the stiffness and the workspace of the robot. Finally, a prototype was built according to the analysis results, and workspace experiments were conducted. Test results indicate that the workspace meets the design requirements, validating the proposed suspension design method, which accounts for system stiffness and workspace, for aircraft model suspension in wind tunnel tests.
In environmental science, where information from sensor devices is sparse, data fusion for mapping purposes is often based on geostatistical approaches. We propose a methodology called adaptive distance attention that enables us to fuse sparse, heterogeneous, and mobile sensor devices and predict values at locations with no previous measurement. The approach allows for automatically weighting the measurements according to a priori quality information about the sensor device without using complex and resource-demanding data assimilation techniques. Both ordinary kriging and the general regression neural network (GRNN) are integrated into this attention mechanism, with their parameters learned using deep learning architectures. We evaluate this method on three static phenomena of different complexities: a simplistic phenomenon, topography over an area of 196 $ {km}^2 $, and the annual hourly $ {NO}_2 $ concentration in 2019 over the Oslo metropolitan region (1026 $ {km}^2 $). We simulate networks of 100 synthetic sensor devices with six characteristics related to measurement quality and measurement spatial resolution. Overall, the outcomes are promising: we significantly improve the metrics over baseline geostatistical models. Moreover, distance attention using the Nadaraya–Watson kernel provides metrics as good as attention based on the kriging system, opening the possibility of alleviating the processing cost of fusing sparse data. These encouraging results motivate us to keep adapting distance attention to space–time phenomena evolving in complex and isolated areas.
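A minimal sketch of Nadaraya–Watson-style distance attention, with an a priori quality score scaling each sensor's weight, is given below. The sensor locations, readings, quality values, and Gaussian kernel bandwidth are invented for illustration; the paper's method additionally learns such parameters with deep architectures:

```python
import math

# Known sensor readings: (x, y, value, quality in (0, 1]).
# Locations, values, and quality scores are invented toy data.
sensors = [
    (0.0, 0.0, 10.0, 1.0),
    (1.0, 0.0, 12.0, 0.5),
    (0.0, 1.0, 14.0, 1.0),
]

def nw_predict(px, py, sensors, bandwidth=1.0):
    """Nadaraya-Watson estimate: kernel-weighted average of readings,
    with the a priori quality score scaling each sensor's weight."""
    num = den = 0.0
    for x, y, value, quality in sensors:
        d2 = (px - x) ** 2 + (py - y) ** 2
        w = quality * math.exp(-d2 / (2 * bandwidth ** 2))
        num += w * value
        den += w
    return num / den

pred = nw_predict(0.0, 0.0, sensors)
print(round(pred, 2))
```

The prediction is a convex combination of the readings, so it always stays between the minimum and maximum observed values; down-weighting the lower-quality sensor pulls the estimate toward the trusted measurements.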
Motion assistance for elderly people is a field of application for service robotic systems that is characterized by the requirements and constraints of human–machine interaction and by the specificity of the user’s conditions. The main aspects of characterization and constraints are examined for the application of service systems that can be specifically conceived or adapted for elderly motion assistance, considering conditions of motion deficiency and weakened muscular strength as well as the psychological aptitudes of users. The analysis is discussed in general terms with reference to elderly people who may not even suffer from specific pathologies. Therefore, the discussion focuses on the need for motion exercise in proper environments, including domestic ones and settings familiar to the user. The challenges of such applications oriented toward elderly users are discussed as requiring research and design of solutions in terms of specific portability, user-oriented operation, low cost, and clinical-physiotherapeutic functionality. Results of the author’s team experiences are presented as an example of problems and attempted solutions to meet the new challenges of service systems for motion assistance applications for elderly people.
To maximize its value, the design, development and implementation of structural health monitoring (SHM) should focus on its role in facilitating decision support. In this position paper, we offer perspectives on the synergy between SHM and decision-making. We propose a classification of SHM use cases aligning with various dimensions that are closely linked to the respective decision contexts. The types of decisions that have to be supported by the SHM system within these settings are discussed along with the corresponding challenges. We provide an overview of different classes of models that are required for integrating SHM in the decision-making process to support the operation and maintenance of structures and infrastructure systems. Fundamental decision-theoretic principles and state-of-the-art methods for optimizing maintenance and operational decision-making under uncertainty are briefly discussed. Finally, we offer a viewpoint on the appropriate course of action for quantifying, validating, and maximizing the added value generated by SHM. This work aspires to synthesize the different perspectives of the SHM, Prognostic Health Management, and reliability communities, and provide directions to researchers and practitioners working towards more pervasive monitoring-based decision support.
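One fundamental decision-theoretic quantity in this context is the value of information: the expected cost reduction that monitoring data enable. The following minimal sketch uses an invented prior damage probability and invented costs, and assumes a perfect observation for simplicity:

```python
# Value-of-information sketch for a repair-or-not decision.
# The prior, the costs, and the perfect-information assumption
# are all illustrative, not taken from the paper.
p_damaged = 0.2
cost_repair = 1.0
cost_failure = 10.0  # incurred if damage is left unaddressed

# Prior decision: pick the action with the lower expected cost.
expected = {"repair": cost_repair, "do_nothing": p_damaged * cost_failure}
prior_cost = min(expected.values())

# With a perfect SHM observation, we repair only when damage is present.
posterior_cost = p_damaged * cost_repair

value_of_information = prior_cost - posterior_cost
print(value_of_information)
```

A realistic analysis would replace the perfect observation with an imperfect one (preposterior analysis over possible monitoring outcomes) and compare the resulting value against the cost of deploying the SHM system.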
In homotopy type theory, few constructions have proved as troublesome as the smash product. While its definition is just as direct as in classical mathematics, one quickly realises that in order to define and reason about functions over iterations of it, one has to verify an exponentially growing number of coherences. This has led to crucial results concerning smash products remaining open. One particularly important such result is the fact that the smash product forms a (1-coherent) symmetric monoidal product on the universe of pointed types. This fact was used, without a complete proof, by, for example, Brunerie ((2016) PhD thesis, Université Nice Sophia Antipolis) to construct the cup product on integral cohomology and is, more generally, a fundamental result in traditional algebraic topology. In this paper, we salvage the situation by introducing a simple informal heuristic for reasoning about functions defined over iterated smash products. We then use the heuristic to verify, for example, the hexagon and pentagon identities, thereby obtaining a proof of symmetric monoidality. We also provide a formal statement of the heuristic in terms of an induction principle concerning the construction of homotopies of functions defined over iterated smash products. The key results presented here have been formalised in the proof assistant Cubical Agda.