Cyberbullying is the wilful and repeated infliction of harm on an individual using the Internet and digital technologies. Similar to face-to-face bullying, cyberbullying can be captured formally using the Routine Activities Model (RAM), whereby the potential victim and bully are brought into proximity of one another via interaction on online social networking (OSN) platforms. Although the impact of the COVID-19 (SARS-CoV-2) restrictions on the online presence of minors has yet to be fully grasped, studies have reported that 44% of pre-adolescents encountered more cyberbullying incidents during the COVID-19 lockdown. Transparency reports shared by OSN companies indicate an increase in take-downs of cyberbullying-related comments, posts or content by artificially intelligent moderation tools. However, in order to efficiently and effectively detect or identify whether a social media post or comment qualifies as cyberbullying, a number of factors based on the RAM must be taken into account, including the identification of cyberbullying roles and forms. This demands the acquisition of large amounts of fine-grained annotated data, which is costly and ethically challenging to produce. In addition, where fine-grained datasets do exist, they may be unavailable in the target language. Manual translation is costly; however, state-of-the-art neural machine translation offers a workaround. This study presents a first-of-its-kind experiment in leveraging machine translation to automatically translate a unique pre-adolescent cyberbullying gold-standard dataset in Italian with fine-grained annotations into English for training and testing a native binary classifier for pre-adolescent cyberbullying. In addition to contributing a high-quality English reference translation of the source gold standard, our experiments indicate that the performance of our target binary classifier, when trained on machine-translated English output, is on par with the source (Italian) classifier.
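As a concrete illustration of the translate-then-train pipeline, the sketch below machine-translates a toy annotated Italian corpus into English and trains a simple binary classifier on the output. The MT model (Helsinki-NLP/opus-mt-it-en via Hugging Face transformers), the TF-IDF plus logistic-regression classifier, and the toy comments are illustrative assumptions, not the paper's actual dataset or models.

```python
# Minimal sketch: translate an annotated Italian corpus to English with an
# off-the-shelf MT model, then train a binary cyberbullying classifier on the
# machine-translated output.  All data and model choices are illustrative.
from transformers import pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical fine-grained source data: (Italian comment, binary label)
italian = ["Sei proprio stupido, nessuno ti vuole qui.",   # bullying -> 1
           "Sparisci, nessuno ti sopporta.",               # bullying -> 1
           "Bella foto, complimenti!",                     # benign   -> 0
           "Ci vediamo domani a scuola!"]                  # benign   -> 0
labels = [1, 1, 0, 0]

# Step 1: machine-translate the annotated corpus (Italian -> English).
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-it-en")
english = [out["translation_text"] for out in translator(italian)]

# Step 2: train a binary classifier on the translated text.
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(english), labels)

# Step 3: classify a new, machine-translated comment.
new = translator("Nessuno vuole essere tuo amico.")[0]["translation_text"]
print(new, "->", clf.predict(vec.transform([new]))[0])
```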
Dielectric elastomers (DEs) find applications in many areas, particularly in the field of soft robotics. When modeling and simulating DE-based actuators and sensors, a substantial portion of the literature assumes the selected DE material to behave in some perfectly hyperelastic manner, and the vast majority have assumed invariant permittivity. However, studies on simple planar DEs have revealed instabilities and hastened breakdowns when a variable permittivity is allowed. This is partly due to the intertwined electromechanical properties of DEs rooted in their labyrinthine polymeric microstructures. This work focuses on studying the effects of a varying (with stretch) permittivity on the out-of-plane deformation of a circular DE, using a model derived from principles of strain-induced polymer birefringence. In addition, we utilize the Edward–Vilgis model, which attempts to account for effects related to crosslinking, length extension, slippage, and entanglement of polymer chains. Our approach reveals the presence of “stagnation” regions in the electromechanical behavior of the DE actuator material. These stagnation regions are characterized by both electrical and mechanical critical electrostrictive coefficient ratios. Mechanically, certain values of the electrostrictive coefficient ratio predict cases where deformation does not occur in response to a change in voltage. Electrically, certain cases are predicted where changes in capacitance cannot be measured in response to changes in deformation. Thus, some combined conditions of loading and material properties could limit the effectiveness of DE membranes in either actuation or sensing. Therefore, our results reveal mechanisms that could be useful to designers of actuators and sensors and unveil an opportunity for exploring new theoretical materials with potential novel applications. Furthermore, since there are known analogous formulations between electrical and optical properties, criticality principles studied in this article could be extended to optomechanical coupling.
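The electrical “stagnation” condition lends itself to a compact illustration. Assuming an incompressible membrane under equi-biaxial stretch $L$ (thickness scales as $1/L^2$, area as $L^2$) and a toy linear electrostriction law for the relative permittivity (an assumption for illustration only, not the paper's birefringence-derived model), the capacitance scales as $C(L) \propto \varepsilon_r(L)\,L^4$, and a stagnation stretch is one where $dC/dL = 0$:

```python
# Illustrative sketch (not the paper's Edward-Vilgis model): find the stretch
# at which capacitance stops responding to deformation, for an assumed
# linear stretch-dependent permittivity eps_r(L) = 1 + g*(L - 1).
import sympy as sp

L = sp.symbols("L", positive=True)   # equi-biaxial stretch
g = sp.symbols("g", real=True)       # assumed electrostriction coefficient

eps_r = 1 + g * (L - 1)              # assumed permittivity law
C = eps_r * L**4                     # capacitance up to the constant eps0*A0/t0

# Electrical "stagnation": dC/dL = 0, i.e. capacitance insensitive to stretch.
print(sp.solve(sp.diff(C, L), L))    # nonzero root 4*(g - 1)/(5*g); a physical
                                     # stretch > 1 exists e.g. for g < 0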
In the field of robot reinforcement learning (RL), the reality gap has always been a problem that restricts the robustness and generalization of algorithms. We propose Simulation Twin (SimTwin): a deep RL framework that can help directly transfer a model from simulation to reality without any real-world training. SimTwin consists of an RL module and an adaptive correction module. We train the policy using the soft actor-critic algorithm only in a simulator, with demonstrations and domain randomization. In the adaptive correction module, we design and train a neural network to simulate the human error-correction process using force feedback. Subsequently, we combine the two modules through a digital twin to control real-world robots, automatically correct simulator parameters by comparing the simulator with reality, and then generalize the corrected action through the trained policy network without additional training. We demonstrate the proposed method on a cabinet-opening task; the experiments show that our framework can reduce the reality gap without any real-world training.
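A minimal sketch of the domain-randomization step described above: physics parameters are resampled every episode so a policy trained purely in simulation covers the range into which reality may fall. The parameter names, ranges, and the `sim`/`policy` hooks below are hypothetical placeholders, not SimTwin's actual API; `policy.update()` stands in for the soft actor-critic gradient step.

```python
# Domain randomization sketch: resample simulator physics each episode.
# All interfaces here are assumed placeholders, not SimTwin's real API.
import random

PARAM_RANGES = {                     # assumed plausible ranges
    "friction": (0.5, 1.5),
    "cabinet_mass_kg": (1.0, 5.0),
    "joint_damping": (0.01, 0.1),
}

def randomize(sim):
    """Resample simulator physics parameters at the start of an episode."""
    for name, (lo, hi) in PARAM_RANGES.items():
        sim.params[name] = random.uniform(lo, hi)

def train(sim, policy, episodes=1000):
    for _ in range(episodes):
        randomize(sim)                   # new dynamics each episode
        obs, done = sim.reset(), False
        while not done:
            action = policy.act(obs)
            obs, reward, done = sim.step(action)
            policy.store(obs, action, reward, done)
        policy.update()                  # soft actor-critic gradient step(s)
```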
Sensor placement optimization (SPO) is usually applied during the structural health monitoring sensor system design process to collect effective data. However, the failure of a sensor may significantly affect the expected performance of the entire system. Therefore, it is necessary to study the optimal sensor placement considering the possibility of sensor failure. In this article, the research focusses on an SPO giving a fail-safe sensor distribution, whose sub-distributions still have good performance. The performance of the fail-safe sensor distribution with multiple sensors placed in the same position will also be studied. The adopted data sets include the mode shapes and corresponding labels of structural states from a series of tests on a glider wing. A genetic algorithm is used to search for sensor deployments, and the partial results are validated by an exhaustive search. Two types of optimization objectives are investigated, one for modal identification and the other for damage identification. The results show that the proposed fail-safe sensor optimization method is beneficial for balancing the system performance before and after sensor failure.
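The fail-safe criterion lends itself to a compact genetic-algorithm sketch: score each candidate layout by its worst objective over all single-sensor failures, so the search favours layouts whose sub-distributions stay informative. The random mode-shape matrix and the smallest-singular-value objective below are illustrative stand-ins for the glider-wing data and the modal/damage-identification objectives used in the article.

```python
# Elitist GA sketch for fail-safe sensor placement on toy mode-shape data.
import numpy as np
rng = np.random.default_rng(0)

N_LOCATIONS, N_MODES, N_SENSORS = 30, 5, 8
Phi = rng.normal(size=(N_LOCATIONS, N_MODES))   # placeholder mode shapes

def objective(sensors):
    # informativeness of a layout: smallest singular value of selected rows
    return np.linalg.svd(Phi[sorted(sensors)], compute_uv=False).min()

def failsafe_fitness(sensors):
    # fail-safe score: worst objective over every single-sensor failure
    return min(objective(sensors - {s}) for s in sensors)

def mutate(sensors):
    # swap one chosen location for an unused one
    out = set(sensors)
    free = sorted(set(range(N_LOCATIONS)) - out)
    out.remove(sorted(out)[rng.integers(len(out))])
    out.add(free[rng.integers(len(free))])
    return out

pop = [set(rng.choice(N_LOCATIONS, N_SENSORS, replace=False).tolist())
       for _ in range(40)]
for _ in range(100):            # keep the best half, refill with mutants
    pop.sort(key=failsafe_fitness, reverse=True)
    pop = pop[:20] + [mutate(p) for p in pop[:20]]
best = max(pop, key=failsafe_fitness)
print(sorted(best), failsafe_fitness(best))
```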
We show that the $4$-state anti-ferromagnetic Potts model with interaction parameter $w\in (0,1)$ on the infinite $(d+1)$-regular tree has a unique Gibbs measure if $w\geq 1-\dfrac{4}{d+1}$ for all $d\geq 4$. This is tight since it is known that there are multiple Gibbs measures when $0\leq w\lt 1-\dfrac{4}{d+1}$ and $d\geq 4$. We moreover give a new proof of the uniqueness of the Gibbs measure for the $3$-state Potts model on the $(d+1)$-regular tree for $w\geq 1-\dfrac{3}{d+1}$ when $d\geq 3$ and for $w\in (0,1)$ when $d=2$.
Given the complexity of cyber-physical systems (CPS), such as swarms of drones, deviations from a planned mission or protocol often occur and may in some cases lead to harm and losses. To increase the robustness of such systems, it is necessary to detect when deviations happen and to diagnose the cause(s) of a deviation. We build on our previous work on soft agents, a formal framework based on rewriting logic for specifying and reasoning about distributed CPS, to develop methods for the diagnosis of CPS at design time. We accomplish this by (1) extending the soft agents framework with Fault Models; (2) proposing a protocol specification language and defining protocol deviations; and (3) developing workflows/algorithms for the detection and diagnosis of protocol deviations. Our approach is partially inspired by existing work using counterfactual reasoning for fault ascription. We demonstrate our machinery with a collection of experiments.
This article considers the link removal problem in a strongly connected directed network, with the goal of minimizing the dominant eigenvalue of the network’s adjacency matrix while maintaining its strong connectivity. Due to the complexity of the problem, this article focuses on computing a suboptimal solution. Furthermore, it is assumed that knowledge of the overall network topology is not available. This calls for distributed algorithms that rely solely on the local information available to each individual node and on information exchange between each node and its neighbors. Two different strategies based on matrix perturbation analysis are presented, namely simultaneous and iterative link removal strategies. Key ingredients in implementing both strategies include novel distributed algorithms for estimating the dominant eigenvectors of an adjacency matrix and for verifying strong connectivity of a directed network under link removal. It is shown via numerical simulations on different types of networks that, in general, the iterative link removal strategy yields a better suboptimal solution. However, it comes at the price of higher communication cost in comparison to the simultaneous link removal strategy.
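As background to the perturbation-based strategies, first-order eigenvalue perturbation gives that removing edge $(u,v)$ changes the dominant eigenvalue by approximately $-w_u v_v / (w^{\mathsf T} v)$, where $v$ and $w$ are the right and left dominant eigenvectors. The centralized sketch below is an illustration only (the article's algorithms are distributed): it greedily removes the link that most reduces the dominant eigenvalue while preserving strong connectivity, with networkx and numpy standing in for the distributed eigenvector estimation and connectivity verification.

```python
# Centralized greedy sketch of spectral-radius reduction via link removal.
import networkx as nx
import numpy as np

def dominant_eigenvalue(G):
    # spectral radius of the adjacency matrix
    return max(abs(np.linalg.eigvals(nx.to_numpy_array(G))))

def remove_one_link(G):
    """Remove the single link that most reduces the dominant eigenvalue
    while keeping the network strongly connected."""
    best, best_val = None, dominant_eigenvalue(G)
    for u, v in list(G.edges):
        G.remove_edge(u, v)
        if nx.is_strongly_connected(G):
            val = dominant_eigenvalue(G)
            if val < best_val:
                best, best_val = (u, v), val
        G.add_edge(u, v)
    if best is not None:
        G.remove_edge(*best)
    return best, best_val

# toy network: a directed ring (strongly connected) plus random chords
G = nx.cycle_graph(12, create_using=nx.DiGraph)
rng = np.random.default_rng(1)
for _ in range(30):
    u, v = rng.integers(12, size=2)
    if u != v:
        G.add_edge(int(u), int(v))
print(remove_one_link(G))
```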
The transition to open data practices is straightforward in principle, albeit surprisingly challenging to implement, largely due to cultural and policy issues. A general data sharing framework is presented, along with two case studies that highlight these challenges and offer practical solutions that can be adjusted depending on the type of data collected, the country in which the study is initiated, and the prevailing research culture. Embracing the constraints imposed by data privacy considerations, especially for biomedical data, must be emphasized for data outside of the United States until data privacy law(s) are established at the Federal and/or State level.
Numerical estimators of differential entropy and mutual information can be slow to converge as sample size increases. The offset Kozachenko–Leonenko (KLo) method described here implements an offset version of the Kozachenko–Leonenko estimator that can markedly improve convergence. Its use is illustrated in applications to the comparison of trivariate data from successive scene color images and the comparison of univariate data from stereophonic music tracks. Publicly available code for KLo estimation of both differential entropy and mutual information is provided for R, Python, and MATLAB computing environments at https://github.com/imarinfr/klo.
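For orientation, a minimal sketch of the standard (non-offset) Kozachenko–Leonenko estimator follows; it is not the authors' KLo implementation (see the linked repository for that). The estimator is $\hat H = \psi(N) - \psi(k) + \log V_d + \frac{d}{N}\sum_i \log r_{i,k}$, where $r_{i,k}$ is the distance from sample $i$ to its $k$-th nearest neighbour and $V_d$ is the volume of the $d$-dimensional unit ball.

```python
# Standard Kozachenko-Leonenko differential-entropy estimator (nats).
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def kl_entropy(x, k=1):
    """Differential entropy of samples x with shape (N, d)."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    # distance to the k-th nearest neighbour (excluding the point itself)
    r = cKDTree(x).query(x, k=k + 1)[0][:, -1]
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(r))

# Sanity check: a standard normal in 1D has entropy 0.5*log(2*pi*e) ~ 1.4189
rng = np.random.default_rng(0)
print(kl_entropy(rng.normal(size=(5000, 1)), k=3))
```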
Today’s conflicts are becoming increasingly complex, fluid, and fragmented, often involving a host of national and international actors with multiple and often divergent interests. This development poses significant challenges for conflict mediation, as mediators struggle to make sense of conflict dynamics, such as the range of conflict parties and the evolution of their political positions, the distinction between relevant and less relevant actors in peace-making, or the identification of key conflict issues and their interdependence. International peace efforts appear ill-equipped to successfully address these challenges. While technology is already being experimented with and used in a range of conflict-related fields, such as conflict prediction or information gathering, less attention has been given to how technology can contribute to conflict mediation. This case study contributes to emerging research on the use of state-of-the-art machine learning technologies and techniques in conflict mediation processes. Using dialogue transcripts from peace negotiations in Yemen, this study shows how machine learning can effectively support mediating teams by providing them with tools for knowledge management, extraction, and conflict analysis. Apart from illustrating the potential of machine learning tools in conflict mediation, the article also emphasizes the importance of an interdisciplinary and participatory co-creation methodology for the development of context-sensitive and targeted tools and for ensuring meaningful and responsible implementation.
This paper explores participatory and socially engaged practices in ubiquitous music (ubimus). We discuss recent advances that target timbre as their focus while incorporating semantic strategies for knowledge transfer among participants. Creative Semantic Anchoring (ASC, from the original in Portuguese) is a creative-action metaphor that shows promising preliminary results in collaborative asynchronous activities. Given its grounding in local resources and its support for explicit knowledge, ASC has good potential to boost socially distributed knowledge. We discuss three strategies that consolidate and expand this approach within ubiquitous music and propose the label Radical ASC. We investigate the implications of this framework through the analysis of two artistic projects: Atravessamentos and Ntrallazzu.
An earth-contact mechanism (ECM) is a type of mechanism that keeps a system in contact with the earth and moves with terrain changes. This paper uses the virtual equivalent parallel mechanism (VEPM) to convert terrain data into the kinematic variables of the moving platform in the VEPM, and further analyzes the performance of the VEPM at each terrain point. Then, the comprehensive performance of the VEPM is chosen as the optimization goal, and a task-oriented dimensional optimization approach combining the particle swarm algorithm and a neural network algorithm is proposed. A comparative experiment is conducted to verify the superiority of the new approach in optimizing the ECM’s comprehensive performance, and the performance analysis can also be applied to the layout design of the ECM. Finally, an analysis method is proposed to construct the ECM’s performance map from the digital terrain map, which helps the control system and the operator make optimal control decisions.
The lack of specialized personnel and assistive technology to assist in rehabilitation therapies is one of the challenges facing the health sector today, and it is projected to increase. For researchers and engineers, it represents an opportunity to innovate and develop devices that improve and optimize rehabilitation services for the benefit of society. Among the different types of injuries, hand injuries occur most frequently. These injuries require a rehabilitation process in order for the hand to regain its functionality. This article presents the fabrication and instrumentation of an end-effector prototype, based on a five-bar configuration, for finger rehabilitation that executes a natural flexion-extension movement. The dimensions were obtained through gradient-method optimization and evaluated in MATLAB. Experimental tests were carried out to demonstrate the prototype’s functionality and the effectiveness of a five-bar mechanism acting in a vertical plane, where gravity influences the mechanism’s performance. Position control using fifth-order polynomials with via points was implemented in the joint space. The design of the end-effector was also evaluated through a theoretical comparison, calculated as a function of a real flexion-extension trajectory of the fingers and the angle of rotation obtained through an IMU. As a result, controlling the two degrees of freedom of the mechanism at several points of the trajectory ensures the end-effector trajectory and therefore the fingers’ range of motion, which aids full patient recovery.
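The fifth-order polynomial control mentioned above admits a compact sketch: six boundary conditions (position, velocity, and acceleration at the start and end of a segment) determine the six polynomial coefficients, giving smooth joint motion between via points. The joint angles and timing below are illustrative, not taken from the prototype.

```python
# Quintic (fifth-order) joint-space trajectory segment between two via points.
import numpy as np

def quintic(q0, qf, t0, tf, v0=0, vf=0, a0=0, af=0):
    """Coefficients of q(t) = c0 + c1*t + ... + c5*t**5 on [t0, tf]."""
    M = np.array([[1, t, t**2,   t**3,    t**4,    t**5] for t in (t0, tf)] +
                 [[0, 1, 2*t,  3*t**2,  4*t**3,  5*t**4] for t in (t0, tf)] +
                 [[0, 0, 2,      6*t,  12*t**2, 20*t**3] for t in (t0, tf)])
    return np.linalg.solve(M, [q0, qf, v0, vf, a0, af])

# e.g. move a finger joint from 0 rad to 1.2 rad in 2 s, at rest at both ends
c = quintic(0.0, 1.2, 0.0, 2.0)
t = np.linspace(0, 2, 5)
print(np.polyval(c[::-1], t))   # sampled joint positions along the segment
```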
Productivity can be increased when manipulators track desired trajectories under constraints. Humans as moving obstacles in a shared workspace pose one of the most challenging problems for cable-driven parallel mechanisms (CDPMs) and are the focus of this research. One of the primary issues in CDPMs is collision avoidance between cables and humans in the shared workspace. This paper presents a model and simulation of a reconfigurable, fully constrained CDPM enabling detection and avoidance of cable–human collisions. In this method, unlike conventional CDPMs where the attachment points are fixed, the attachment points can be moved up and down on their rails, adapting the geometric configuration. A Karush–Kuhn–Tucker (KKT) method is proposed that focuses on estimating the shortest distance between moving obstacles (human limbs) and all cables. When a cable and a limb are close to colliding, reconfiguration is performed by moving the cable’s attachment point on the rail to increase the distance between the cables and the human limbs while both are moving, leaving the trajectory of the end effector unchanged. Simulation results of the reconfiguration approach are shown for an eight-cable-driven parallel manipulator, including the workspace boundary variation. According to the simulation results, the proposed method can find a collision-free predefined path.
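The underlying distance computation can be sketched compactly: modeling each cable and each limb as a line segment, the shortest distance is the bound-constrained minimization $\min_{0\le s,t\le 1} \lVert a_0 + s(a_1-a_0) - b_0 - t(b_1-b_0)\rVert$, whose optimum satisfies the KKT conditions. Below, scipy’s L-BFGS-B solves it numerically; the segment endpoints are illustrative.

```python
# Shortest distance between two line segments (cable vs. limb), posed as a
# bound-constrained minimization whose optimum satisfies the KKT conditions.
import numpy as np
from scipy.optimize import minimize

def segment_distance(a0, a1, b0, b1):
    a0, a1, b0, b1 = map(np.asarray, (a0, a1, b0, b1))
    def dist(x):
        s, t = x
        return np.linalg.norm(a0 + s * (a1 - a0) - b0 - t * (b1 - b0))
    res = minimize(dist, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)],
                   method="L-BFGS-B")
    return res.fun

# cable from winch to end effector vs. a (hypothetical) forearm segment
print(segment_distance([0, 0, 3], [1, 1, 1], [0.5, 0.8, 1.2], [0.9, 0.2, 1.5]))
```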
Given a plane graph $G=(V,E)$, a Petrie tour of G is a tour P of G that alternately turns left and right at each step. A Petrie tour partition of G is a collection ${\mathscr P}=\{P_1,\ldots,P_q\}$ of Petrie tours so that each edge of G is in exactly one tour $P_i \in {\mathscr P}$. A Petrie tour P is called a Petrie cycle if all its vertices are distinct. A Petrie cycle partition of G is a collection ${\mathscr C}=\{C_1,\ldots,C_p\}$ of Petrie cycles so that each vertex of G is in exactly one cycle $C_i \in {\mathscr C}$. In this paper, we study the properties of 3-regular plane graphs that have Petrie cycle partitions and 4-regular plane multi-graphs that have Petrie tour partitions. Given a 4-regular plane multi-graph $G=(V,E)$, a 3-regularization of G is a 3-regular plane graph $G_3$ obtained from G by splitting every vertex $v\in V$ into two degree-3 vertices. G is called Petrie partitionable if it has a 3-regularization that has a Petrie cycle partition. The general version of this problem is motivated by a data compression method, tristrip, used in computer graphics. In this paper, we present a simple characterization of Petrie partitionable graphs and show that the problem of determining if G is Petrie partitionable is NP-complete.
With the excellent characteristic of intrinsic compliance, pneumatic artificial muscles can improve the interaction comfort of wearable robotic devices. This paper addresses the safe tracking control problem of a pneumatically actuated lower-limb exoskeleton system. A single-parameter adaptive fuzzy control strategy with high control precision and full state constraints is proposed for safe gait training tasks. Based on a barrier Lyapunov function, all signals in the closed-loop system can be bounded in finite time, which guarantees that the deviation of the exoskeleton’s moving trajectory remains within a bounded range. Furthermore, with the proposed single-parameter adaptive law, the computational burden and the complexity of the controller are reduced significantly. Finally, numerical simulations, no-load tracking experiments, and passive and active gait training experiments with healthy subjects validate the effectiveness of the proposed method.
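For readers unfamiliar with the tool, a standard log-type barrier Lyapunov function (a common choice in the constrained-control literature; the paper’s exact construction may differ) for a tracking error $e$ constrained by $|e| \lt k_b$ is

$$V(e) = \frac{1}{2}\log\frac{k_b^2}{k_b^2 - e^2},$$

which is zero at $e = 0$, positive for $0 \lt |e| \lt k_b$, and grows unbounded as $|e| \to k_b$; hence keeping $V$ bounded along trajectories keeps the error strictly inside the constraint.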
We consider the simultaneous propagation of two contagions over a social network. We assume a threshold model for the propagation of the two contagions and use the formal framework of discrete dynamical systems. In particular, we study an optimization problem where the goal is to minimize the total number of new infections subject to a budget constraint on the total number of available vaccinations for the contagions. While this problem has been considered in the literature for a single contagion, our work considers the simultaneous propagation of two contagions. This optimization problem is NP-hard. We present two main solution approaches for the problem, namely an integer linear programming (ILP) formulation to obtain optimal solutions and a heuristic based on a generalization of the set cover problem. We carry out a comprehensive experimental evaluation of our solution approaches using many real-world networks. The experimental results show that our heuristic algorithm produces solutions that are close to the optimal solution and runs several orders of magnitude faster than the ILP-based approach for obtaining optimal solutions. We also carry out sensitivity studies of our heuristic algorithm.
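As background for the heuristic, the classic greedy set-cover algorithm that such generalizations typically build on repeatedly picks the set covering the most still-uncovered elements. In the (assumed, purely illustrative) vaccination analogy below, elements are potential new infections and each candidate vaccination “covers” the infections it would prevent; the paper’s actual heuristic additionally handles two contagions and threshold dynamics.

```python
# Classic budgeted greedy set cover: pick the set covering the most
# still-uncovered elements until the budget is spent or nothing improves.
def greedy_set_cover(universe, sets, budget):
    covered, chosen = set(), []
    while len(chosen) < budget and covered != universe:
        best = max(sets, key=lambda s: len(sets[s] - covered))
        if not sets[best] - covered:
            break                       # nothing new can be covered
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# toy instance: 10 potential infections, 5 candidate vaccinations
universe = set(range(1, 11))
sets = {"v1": {1, 2, 3, 8}, "v2": {1, 2, 3, 4, 5}, "v3": {4, 5, 7},
        "v4": {5, 6, 7}, "v5": {6, 7, 8, 9, 10}}
print(greedy_set_cover(universe, sets, budget=3))
```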
There has been much recent interest in developing data-driven models for weather and climate predictions. However, there are open questions regarding their generalizability and robustness, highlighting a need to better understand how they make their predictions. In particular, it is important to understand whether data-driven models learn the underlying physics of the system against which they are trained, or simply identify statistical patterns without any clear link to the underlying physics. In this paper, we describe a sensitivity analysis of a regression-based model of ocean temperature, trained against simulations from a 3D ocean model set up in a very simple configuration. We show that the regressor heavily bases its forecasts on, and is dependent on, variables known to be key to the physics such as currents and density. By contrast, the regressor does not make heavy use of inputs such as location, which have limited direct physical impacts. The model requires nonlinear interactions between inputs in order to show any meaningful skill, in line with the highly nonlinear dynamics of the ocean. Further analysis interprets the ways certain variables are used by the regression model. We see that information about the vertical profile of the water column reduces errors in regions of convective activity, and information about the currents reduces errors in regions dominated by advective processes. Our results demonstrate that even a simple regression model is capable of learning much of the physics of the system being modeled. We expect that a similar sensitivity analysis could be usefully applied to more complex ocean configurations.
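One common way to perform this kind of sensitivity analysis is permutation importance: shuffle one input variable and measure how much the trained regressor’s error grows. The sketch below applies it to toy data in which two inputs interact nonlinearly (standing in for currents and density) and a third is nearly inert (standing in for location); it illustrates the technique, not the paper’s exact procedure.

```python
# Permutation-importance sketch: how much does the error grow when each
# input column is shuffled?  Toy data only; the target depends nonlinearly
# on the first two inputs and barely on the third.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3))                 # current, density, location
y = 2.0 * X[:, 0] * X[:, 1] + 0.01 * X[:, 2]   # nonlinear; location near-inert
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["current", "density", "location"],
                     result.importances_mean):
    print(f"{name:10s} importance: {imp:.3f}")
```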
Humans are naturally endowed with the ability to write in a particular style. They can, for instance, rephrase a formal letter in an informal way, convey a literal message with the use of figures of speech or edit a novel by mimicking the style of some well-known authors. Automating this form of creativity constitutes the goal of style transfer. As a natural language generation task, style transfer aims at rewriting existing texts, and specifically, it creates paraphrases that exhibit some desired stylistic attributes. From a practical perspective, it envisions beneficial applications, like chatbots that modulate their communicative style to appear empathetic, or systems that automatically simplify technical articles for a non-expert audience.
Several style-aware paraphrasing methods have attempted to tackle style transfer. A handful of surveys give a methodological overview of the field, but they do not help researchers focus on specific styles. With this paper, we aim at providing a comprehensive discussion of the styles that have received attention in the transfer task. We organize them in a hierarchy, highlighting the challenges for the definition of each of them and pointing out gaps in the current research landscape. The hierarchy comprises two main groups. One encompasses styles that people modulate arbitrarily, along the lines of registers and genres. The other group corresponds to unintentionally expressed styles, due to an author’s personal characteristics. Hence, our review shows how these groups relate to one another and where specific styles, including some that have not yet been explored, belong in the hierarchy. Moreover, we summarize the methods employed for different stylistic families, pointing researchers towards those that would be the most fitting for future research.
In this article, we introduce an extended, freely available resource for the Romanian language, named RoLEX. The dataset was developed mainly for speech processing applications, yet its applicability extends beyond this domain. RoLEX includes over 330,000 curated entries with information regarding lemma, morphosyntactic description, syllabification, lexical stress and phonemic transcription. The process of selecting the list of word entries and semi-automatically annotating the complete lexical information associated with each of the entries is thoroughly described.
The dataset’s inherent knowledge is then evaluated in a task of concurrent prediction of syllabification, lexical stress marking and phonemic transcription. The evaluation looked into several dataset design factors, such as the minimum viable number of entries for correct prediction, the optimisation of the minimum number of required entries through expert selection, the augmentation of the input with morphosyntactic information, and the influence of each task on the overall accuracy. The best results were obtained when the orthographic form of the entries was augmented with the complete morphosyntactic tags: a word error rate of 3.08% and a character error rate of 1.08%. We show that using a carefully selected subset of entries for training can yield performance similar to that obtained with a larger set of randomly selected entries (twice as many). In terms of prediction complexity, lexical stress marking posed the most problems, accounting for around 60% of the errors in the predicted sequence.