This article considers the intersection of remembering and imagining vis-à-vis individual and cultural amnesia. It focuses on two artists’ films: Shona Illingworth’s video installation Time Present (2016) and Trinh T. Minh-ha’s film Forgetting Vietnam (2015). Time Present portrays the experience of an individual living with amnesia and relates it to the immobility that marks the cultural representation of the island of St Kilda (Outer Hebrides). Forgetting Vietnam questions the problematic legacy of the Vietnam War and its recollection by bridging personal and shared experiences through a portrait of Vietnam itself. Both Illingworth and Trinh use film’s features of frame and movement to convey the emotional and affective resonances of the experiences and places presented, and so to generate the possibility of presence. The article closely examines Time Present and Forgetting Vietnam with a focus on the films’ respective structures and thematic developments, reading them through the cultural intersection of remembering and imagining and its potential for engaging with absence and silenced histories through decentralized approaches.
Forests play a crucial role in the Earth’s system processes and provide a suite of social and economic ecosystem services, but are significantly impacted by human activities, leading to a pronounced disruption of the equilibrium within ecosystems. Advancing forest monitoring worldwide offers advantages in mitigating human impacts and enhancing our comprehension of forest composition, alongside the effects of climate change. While statistical modeling has traditionally found applications in forest biology, recent strides in machine learning and computer vision have reached important milestones using remote sensing data, such as tree species identification, tree crown segmentation, and forest biomass assessments. Open-access data remain essential for enhancing such data-driven algorithms and methodologies. Here, we provide a comprehensive and extensive overview of 86 open-access forest datasets across spatial scales, encompassing inventories, ground-based, aerial- and satellite-based recordings, and country or world maps. These datasets are grouped in OpenForest, a dynamic catalog open to contributions that strives to reference all available open-access forest datasets. Moreover, in the context of these datasets, we aim to inspire research in machine learning applied to forest biology by establishing connections between contemporary topics, perspectives, and challenges inherent in both domains. We hope to encourage collaborations among scientists, fostering the sharing and exploration of diverse datasets through the application of machine learning methods for large-scale forest monitoring. OpenForest is available at the following URL: https://github.com/RolnickLab/OpenForest.
Data for Policy (dataforpolicy.org), a trans-disciplinary community of research and practice, has emerged around the application and evaluation of data technologies and analytics for policy and governance. Research in this area has involved cross-sector collaborations, but the areas of emphasis have previously been unclear. Within the Data for Policy framework of six focus areas, this report offers a landscape review of Focus Area 2: Technologies and Analytics. Taking stock of recent advancements and challenges can help shape research priorities for this community. We highlight four commonly used technologies for prediction and inference that leverage datasets from the digital environment: machine learning (ML) and artificial intelligence systems, the internet-of-things, digital twins, and distributed ledger systems. We review innovations in research evaluation and discuss future directions for policy decision-making.
Stochastic generators are essential to produce synthetic realizations that preserve target statistical properties. We propose GenFormer, a stochastic generator for spatio-temporal multivariate stochastic processes. It is constructed using a Transformer-based deep learning model that learns a mapping between a Markov state sequence and time series values. The synthetic data generated by the GenFormer model preserve the target marginal distributions and approximately capture other desired statistical properties even in challenging applications involving a large number of spatial locations and a long simulation horizon. The GenFormer model is applied to simulate synthetic wind speed data at various stations in Florida to calculate exceedance probabilities for risk management.
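The final step the abstract mentions, computing exceedance probabilities from synthetic realizations, can be sketched independently of the GenFormer model itself. In the minimal illustration below, Weibull noise merely stands in for GenFormer output, and the station count, threshold, and variable names are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for generator output: synthetic wind speeds (m/s) at 3 stations
# over 1000 simulated hours (a trained model would produce these samples).
synthetic = rng.weibull(2.0, size=(1000, 3)) * 10.0

threshold = 20.0  # wind speed of interest (m/s), chosen for illustration

# Exceedance probability per station: fraction of simulated hours in which
# the synthetic wind speed exceeds the threshold.
p_exceed = (synthetic > threshold).mean(axis=0)
print(p_exceed)
```

With more synthetic realizations, these empirical fractions converge to the exceedance probabilities implied by the generator, which is what makes long simulation horizons valuable for risk management.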
This study compares the design practices and performance of ChatGPT 4.0, a large language model (LLM), against graduate engineering students in a 48-hour prototyping hackathon, based on a dataset comprising more than 100 prototypes. The LLM participated through two human participants who executed its instructions and provided it with objective feedback; it generated ideas autonomously and made all design decisions without human intervention. The LLM exhibited prototyping practices similar to those of the human participants and finished second among six teams, successfully designing and providing building instructions for functional prototypes. The LLM’s concept generation capabilities were particularly strong. However, the LLM prematurely abandoned promising concepts when facing minor difficulties, added unnecessary complexity to designs, and experienced design fixation. Communication between the LLM and participants was challenging due to vague or unclear descriptions, and the LLM had difficulty maintaining continuity and relevance in answers. Based on these findings, six recommendations for implementing an LLM like ChatGPT in the design process are proposed, including leveraging it for ideation, ensuring human oversight for key decisions, implementing iterative feedback loops, prompting it to consider alternatives, and assigning specific and manageable tasks at a subsystem level.
Experience in teaching functional programming (FP) on a relational basis has led the author to focus on a graphical style of expression and reasoning in which a geometric construct shines: the (semi) commutative square. In the classroom this is termed the “magic square” (MS), since virtually everything that we do in logic, FP, database modeling, formal semantics and so on fits in some MS geometry. The sides of each magic square are binary relations and the square itself is a comparison of two paths, each involving two sides. MSs compose and have a number of useful properties. Among several examples given in the paper ranging over different application domains, free-theorem MSs are shown to be particularly elegant and productive. Helped by a little bit of Galois connections, a generic, induction-free theory for $\mathsf{foldr}$ and $\mathsf{foldl}$ is given, showing in particular that $\mathsf{foldl}\;s \mathrel{=} \mathsf{foldr}\;(\mathsf{flip}\;s)$ holds under conditions milder than usually advocated.
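The fold identity at the end of the abstract can be checked concretely for a simple case. The sketch below (in Python, with `functools.reduce` playing the role of `foldl` and a hand-rolled right fold) illustrates that `foldl s = foldr (flip s)` holds for a commutative, associative operator such as addition; the paper's point is that strictly weaker conditions than these already suffice:

```python
from functools import reduce

def foldl(f, e, xs):
    """Left fold: ((e `f` x1) `f` x2) `f` ... -- reduce with an initial value."""
    return reduce(f, xs, e)

def foldr(f, e, xs):
    """Right fold: x1 `f` (x2 `f` (... `f` e)) -- fold from the right."""
    acc = e
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

def flip(f):
    """Swap the argument order of a binary function."""
    return lambda x, y: f(y, x)

# A commutative, associative step function: one easy case of the identity.
s = lambda acc, x: acc + x
xs = [1, 2, 3, 4]
assert foldl(s, 0, xs) == foldr(flip(s), 0, xs)  # both evaluate to 10
```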
We say that a Kripke model is a GL-model (Gödel and Löb model) if the accessibility relation $\prec $ is transitive and converse well-founded. We say that a Kripke model is a D-model if it is obtained by attaching infinitely many worlds $t_1, t_2, \ldots $, and $t_\omega $ to a world $t_0$ of a GL-model so that $t_0 \succ t_1 \succ t_2 \succ \cdots \succ t_\omega $. A non-normal modal logic $\mathbf {D}$, which was studied by Beklemishev [3], is characterized as follows. A formula $\varphi $ is a theorem of $\mathbf {D}$ if and only if $\varphi $ is true at $t_\omega $ in any D-model. $\mathbf {D}$ is an intermediate logic between the provability logics $\mathbf {GL}$ and $\mathbf {S}$. A Hilbert-style proof system for $\mathbf {D}$ is known, but there has been no sequent calculus. In this paper, we establish two sequent calculi for $\mathbf {D}$, and show the cut-elimination theorem. We also introduce new Hilbert-style systems for $\mathbf {D}$ by interpreting the sequent calculi. Moreover, we show that D-models can be defined using an arbitrary limit ordinal as well as $\omega $. Finally, we show a general result as follows. Let X and $X^+$ be arbitrary modal logics. If the relationship between semantics of X and semantics of $X^+$ is equal to that of $\mathbf {GL}$ and $\mathbf {D}$, then $X^+$ can be axiomatized based on X in the same way as the new axiomatization of $\mathbf {D}$ based on $\mathbf {GL}$.
The cumulative residual extropy has recently been proposed as an alternative measure of extropy, based on the cumulative distribution function of a random variable. In this paper, the concept of cumulative residual extropy is extended to cumulative residual extropy inaccuracy (CREI) and dynamic cumulative residual extropy inaccuracy (DCREI). Some lower and upper bounds for these measures are provided. A characterization problem for the DCREI measure under the proportional hazard rate model is studied. Nonparametric estimators for the CREI and DCREI measures based on kernel and empirical methods are suggested. A simulation study is also presented to evaluate the performance of the suggested measures. Simulation results show that the kernel-based estimator performs better than the empirical-based estimator. Finally, applications of the DCREI measure to model selection are provided using two real data sets.
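As a rough illustration of the empirical estimation strategy the abstract mentions, the sketch below estimates the plain cumulative residual extropy, taken here to be $-\tfrac{1}{2}\int_0^\infty \bar{F}(x)^2\,dx$, from the empirical survival function. This definition and all names are assumptions for illustration; the paper's CREI and DCREI measures extend this basic construction:

```python
import numpy as np

def empirical_cre(sample):
    """Empirical cumulative residual extropy: -(1/2) * integral of S(x)^2 dx,
    with the survival function S estimated from the sorted sample."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    # Empirical survival probability just after the i-th order statistic.
    s = 1.0 - np.arange(1, n + 1) / n
    # Integrate S(x)^2 piecewise over the intervals between order statistics.
    widths = np.diff(x)
    return -0.5 * np.sum((s[:-1] ** 2) * widths)

rng = np.random.default_rng(1)
val = empirical_cre(rng.exponential(1.0, size=5000))
print(val)  # should be close to the exact value -1/4 for the Exp(1) law
```

For an Exp(1) variable the exact value is $-\tfrac{1}{2}\int_0^\infty e^{-2x}\,dx = -\tfrac{1}{4}$, so the printed estimate provides a quick sanity check of the estimator.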
The virtual model control (VMC) method establishes a direct correlation model between the end-effector and the main body by selecting appropriate virtual mechanical components. This approach facilitates direct force control while circumventing the necessity for complex dynamic modeling. However, the simplification inherent in this modeling can result in inaccuracies in the calculation of joint driving torques, ultimately diminishing control precision. Moreover, VMC typically depends on predefined models for control, which constrains its adaptability in dynamically complex environments and under varying movement conditions. To address these limitations, this paper proposes the BP-VMC method, which is based on a backpropagation neural network (BPNN). Initially, a quadruped robot model was established through kinematic analysis. Subsequently, a decomposed VMC model was developed, and BPNN was introduced to facilitate the adaptive tuning of virtual parameters. This approach resulted in the creation of a virtual mechanical component model with adaptive capabilities, compensating for errors arising from simplified modeling. Finally, a simulation control system was constructed based on the BP-VMC control framework to validate the optimization of control performance. Simulation experiments demonstrated that, in comparison to traditional VMC methods, the BP-VMC method exhibits enhanced control accuracy and stability, achieving a 30% reduction in trajectory tracking error, a 40% reduction in velocity tracking error, and a 20–30% improvement in instability indices across various working conditions. This evidence underscores the BP-VMC method’s robust adaptability in dynamic environments.
Recursive types and bounded quantification are prominent features in many modern programming languages, such as Java, C#, Scala, or TypeScript. Unfortunately, the interaction between recursive types, bounded quantification, and subtyping has proven problematic in the past. Consequently, defining a simple foundational calculus that combines those features and has desirable properties, such as decidability, transitivity of subtyping, conservativity, and a sound and complete algorithmic formulation, has been a long-standing challenge.
This paper shows how to extend $F_{\le}$ with iso-recursive types in a new calculus called $F_{\le}^{\mu}$. $F_{\le}$ is a well-known polymorphic calculus with bounded quantification. In $F_{\le}^{\mu}$, we add iso-recursive types and correspondingly extend the subtyping relation with iso-recursive subtyping using the recently proposed nominal unfolding rules. In addition, we use so-called structural folding/unfolding rules for typing iso-recursive expressions, inspired by the structural unfolding rule proposed by Abadi et al. (1996). The structural rules add expressive power to the more conventional folding/unfolding rules in the literature, and they enable additional applications. We present several results, including: type soundness; transitivity; the conservativity of $F_{\le}^{\mu}$ over $F_{\le}$; and a sound and complete algorithmic formulation of $F_{\le}^{\mu}$. We study two variants of $F_{\le}^{\mu}$. The first one uses an extension of the $\textrm{kernel}~F_{\le}$ (a well-known decidable variant of $F_{\le}$). This extension accepts equivalent rather than equal bounds and is shown to preserve decidable subtyping. The second variant employs the $\textrm{full}~F_{\le}$ rule for bounded quantification and has undecidable subtyping. Moreover, we also study an extension of the kernel version of $F_{\le}^{\mu}$, called $F_{\le\ge}^{\mu\wedge}$, with a form of intersection types and lower bounded quantification. All the properties from the kernel version of $F_{\le}^{\mu}$ are preserved in $F_{\le\ge}^{\mu\wedge}$. All the results in this paper have been formalized in the Coq theorem prover.
This paper provides a consistent first-order theory solving the knower paradoxes of Kaplan and Montague, with the following main features: 1. It solves the knower paradoxes by providing a faithful formalization of the principle of veracity (that knowledge requires truth), using both a knowledge and a truth predicate. 2. It is genuinely untyped, i.e., it is untyped not only in the sense that it uses a single knowledge predicate applying to all sentences in the language (including sentences in which this predicate occurs), but also in the sense that its axioms quantify over all sentences in the language, thus supporting comprehensive reasoning with untyped knowledge ascriptions. 3. Common knowledge predicates can be defined in the system using self-reference. These facts, together with a technique based on Löb’s theorem, enable it to support comprehensive reasoning with untyped common knowledge ascriptions (without any axiom directly addressing common knowledge).
We propose the Rényi information generating function (RIGF) and discuss its properties. A connection between the RIGF and the diversity index is proposed for discrete-type random variables. The relation between the RIGF and Shannon entropy of order q > 0 is established and several bounds are obtained. The RIGF of the escort distribution is derived. Furthermore, we introduce the Rényi divergence information generating function (RDIGF) and discuss its behavior under monotone transformations. We present nonparametric and parametric estimators of the RIGF. A simulation study is carried out and a real data set relating to the failure times of electronic components is analyzed. A comparison study between the nonparametric and parametric estimators is made in terms of standard deviation, absolute bias, and mean square error, and we observe superior performance for the newly proposed estimators. Some applications of the proposed RIGF and RDIGF are provided. For three coherent systems, we calculate the values of the RIGF and other well-established uncertainty measures, and similar behavior of the RIGF is observed. Further, a study regarding the usefulness of the RDIGF and RIGF as model selection criteria is conducted. Finally, three chaotic maps are considered and used to validate the proposed information generating functions.
This work studies the reliability function of K-out-of-N systems with a general repair time distribution and a single repair facility. It introduces a new repair mechanism using an effort function, described by a nonlinear ordinary differential equation. Three theoretical results are obtained: regularity properties preventing simultaneous failures and repairs, derivation of a Kolmogorov forward system for micro-state and macro-state probabilities, and comparison of reliability functions of two K-out-of-N systems. An additional hypothesis on the model’s parameters allows us to obtain an ordering relation between the reliability functions. A numerical example demonstrates the model’s practical application and confirms the theoretical results.
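For orientation, the basic K-out-of-N structure can be simulated directly before any repair mechanism is introduced. The sketch below gives a Monte Carlo estimate of the reliability of a k-out-of-n system with i.i.d. exponential components and no repair facility; it does not implement the paper's effort-function repair model, and all parameter values are hypothetical:

```python
import numpy as np

def k_out_of_n_reliability(k, n, failure_rate, t, n_sim=100_000, seed=0):
    """Monte Carlo estimate of P(at least k of n i.i.d. exponential
    components survive to time t) -- no repair, for illustration only."""
    rng = np.random.default_rng(seed)
    # Component lifetimes: exponential with mean 1/failure_rate.
    lifetimes = rng.exponential(1.0 / failure_rate, size=(n_sim, n))
    # Count how many components outlive the mission time in each replication.
    survivors = (lifetimes > t).sum(axis=1)
    return (survivors >= k).mean()

# A 2-out-of-3 system, component failure rate 0.1 per hour, mission time 5 h.
print(k_out_of_n_reliability(2, 3, 0.1, 5.0))
```

For these numbers each component survives with probability $e^{-0.5} \approx 0.607$, so the exact 2-out-of-3 reliability is $3p^2(1-p) + p^3 \approx 0.657$, which the simulation should approach; the paper's repair dynamics would raise this figure.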
The various global refugee and migration events of the last few years underscore the need for advancing anticipatory strategies in migration policy. The struggle to manage large inflows (or outflows) highlights the demand for proactive measures based on a sense of the future. Anticipatory methods, ranging from predictive models to foresight techniques, emerge as valuable tools for policymakers. These methods, now bolstered by advancements in technology and leveraging nontraditional data sources, can offer a pathway to develop more precise, responsive, and forward-thinking policies.
This paper seeks to map out the rapidly evolving domain of anticipatory methods in the realm of migration policy, capturing the trend toward integrating quantitative and qualitative methodologies and harnessing novel tools and data. It introduces a new taxonomy designed to organize these methods into three core categories: Experience-based, Exploration-based, and Expertise-based. This classification aims to guide policymakers in selecting the most suitable methods for specific contexts or questions, thereby enhancing migration policies.
This study is predicated on the limited scholarly exploration of the connection between brand logos and the architectural spaces associated with those brands. The primary objective of this paper is to investigate the relationship between a brand’s corporate identity and its architectural structures through a holistic approach, leveraging artificial intelligence (AI) as a design tool. To achieve this, the study conducts an interdisciplinary literature review, synthesizing existing works in both architecture and branding. The research methodology follows a qualitative, exploratory framework, focusing on the formal and aesthetic evaluation of AI-driven visual outputs. In this context, the central aim of the study is to explore the use of contemporary technologies as a design instrument within the architectural domain. Another key objective is to examine the application of AI as a methodological tool for architectural design within the context of corporate identity. To this end, architectural forms were visually generated using text-to-image and image-to-image techniques, with the resulting products assessed in terms of architectural presentation techniques, visual quality, and aesthetic strategies. For the study’s empirical component, brands ranked at the top of the 2023 Best Global Brands report were selected as the sample, and AI-driven architectural productions were created based on their logos. The findings suggest that AI, with its diverse styles and capabilities, can serve as a design parameter within architectural practice. This study contributes to the discourse on the evolving intersection of AI, branding, and architectural design, proposing new perspectives on the integration of these domains in the design process.
Language is central to issues of displacement and education. This paper examines how English language teachers in refugee settings negotiated and exercised autonomy in teaching and learning in the context of the COVID-19 pandemic. It draws on the notion of autonomy and its dynamics in language classrooms in refugee settings. The paper focuses on one displacement context – Jordan’s refugee settings – to offer a fine-grained analysis of teachers’ accounts to synthesise how teachers negotiated the transition to online teaching and developed practices and relations across different sites. The study recognises teachers’ rights in contributing their own experience and expertise and draws on the Participatory Ethnographic Evaluation Research (PEER) methodology, which involved working closely with a group of six language teachers as peer researchers, who conducted in-depth interviews with two of their peers. The analysis examines the ways in which autonomy was exercised, mobilised, resourced, constrained and shaped by contextual factors during the pandemic and thus provides a nuanced understanding of teachers’ experiences. The study points to the importance of understanding teacher autonomy in the context of language teaching in technology-poor environments. By providing critical insights into the dynamics of teacher autonomy in unique professional settings, it contributes to the broader discourse on digital language learning and agency, roles and skills needed by teachers to support crisis preparedness for the future.
In this paper, a cellular robot for space trusses is structured so that it can perform tasks such as moving and assembling the truss. Spatial operating mechanisms on the truss, especially other mobile mechanical devices at work, can act as dynamic obstacles to the robot’s movement, so suitable path planning for the robot is needed. The A-star algorithm offers efficient search speed and good optimization performance but cannot handle path planning with dynamic obstacles, so this paper improves the Lifelong Planning A-star (LPA-star) algorithm so that it can accomplish the dynamic path planning task. A three-dimensional truss mathematical model is then established, a dynamic obstacle environment is set up, and the improved LPA-star algorithm is used for path planning, with the unimproved LPA-star algorithm and an improved A-star algorithm used for comparison. The simulation results show that, in the environment set up in this paper, the optimal path length of the improved LPA-star algorithm is about 25% shorter and its search time about 55% shorter than those of the improved A-star algorithm, while the unimproved LPA-star algorithm is unable to accomplish the dynamic path planning task. The improved LPA-star algorithm can therefore reduce the robot’s moving distance and time consumption.
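As background for the comparison above, the baseline A-star search can be sketched on a small grid. This is the standard static algorithm, not the paper's improved LPA-star (which additionally reuses earlier search results when obstacles move); the grid, heuristic, and return convention are choices made here for illustration:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; cells equal to 1 are obstacles.
    Returns the length of a shortest path in moves, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]  # entries: (f = g + h, g, node)
    best_g = {start: 0}
    while open_heap:
        f, g, node = heapq.heappop(open_heap)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue  # stale heap entry, a cheaper path was found already
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # path must detour around the wall: 6 moves
```

LPA-star keeps the same priority-queue structure but stores per-node consistency information so that, when an obstacle appears or disappears, only the affected portion of the search is recomputed rather than the whole plan.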
Onboard localization for multi-robot systems stands as a critical area of research with wide-ranging applications. This paper introduces an innovative framework for multi-robot localization, uniquely characterized by its onboard capability, thereby negating the dependency on external infrastructure. Our approach harnesses the inherent capabilities of each robot, enabling them to localize and synchronize their movements independently. The integration of cooperative localization algorithms with formation control mechanisms empowers a group of robots to sustain a predefined formation while following a linear trajectory. The efficacy of our framework is substantiated through comprehensive simulations and real-world experimental validations. We rigorously assess the system’s resilience to localization inaccuracies and external disturbances, demonstrating its adaptability and consistency in maintaining formation under diverse conditions. Furthermore, we explore the scalability of our approach, highlighting its potential to manage varying numbers of robots and its applicability in tasks such as collaborative transportation.
Walking mechanisms offer advantages over wheels or tracks for locomotion but often require complex designs. This paper presents the kinematic design and analysis of a novel overconstrained spatial single-degree-of-freedom leg mechanism for walking robots. The mechanism is generated by combining spherical four-bar linkages into two interconnecting loops, resulting in an overconstrained design with compact scalability. Kinematic analysis is performed using recurrent unit vector methods. Dimensional synthesis is carried out using the Firefly optimization algorithm to achieve a near-straight trajectory during the stance phase for efficient walking. Constraints on mobility, singularity avoidance, and transmission angle are also implemented. The optimized design is manufactured using 3D printing and experimentally tested. Results verify the kinematic properties, including near-straight-line motion during stance, and the velocity profile shows low perpendicular vibrations. Advantages of the mechanism include compact scalability allowing variable stride lengths, smooth motion from overconstraint, and the simplicity of a single actuator. The proposed overconstrained topology provides an effective option for the leg design of walking robots and mechanisms.
Machine learning has become a dominant problem-solving technique in the modern world, with applications ranging from search engines and social media to self-driving cars and artificial intelligence. This lucid textbook presents the theoretical foundations of machine learning algorithms, and then illustrates each concept with its detailed implementation in Python to allow beginners to effectively implement the principles in real-world applications. All major techniques, such as regression, classification, clustering, deep learning, and association mining, have been illustrated using step-by-step coding instructions to help inculcate a 'learning by doing' approach. The book has no prerequisites, and covers the subject from the ground up, including a detailed introductory chapter on the Python language. As such, it is going to be a valuable resource not only for students of computer science, but also for anyone looking for a foundation in the subject, as well as professionals looking for a ready reckoner.