We present an opinion dynamics model framework that discards two common assumptions in the literature: (a) that there is direct influence between the beliefs of neighboring agents, and (b) that agent belief is static in the absence of social influence. Agents in our framework learn from random experiences that may reinforce their belief. Agents determine whether they switch opinions by comparing their belief to a threshold. Consequently, an alter influences an ego not by the direct incorporation of the alter’s belief into the ego’s, but by adjusting the ego’s decision-making criteria. We provide an instance of the framework in which social influence between agents generalizes majority-rule updating. We conduct a sensitivity analysis as well as a pair of experiments concerning heterogeneous population parameters. We conclude that the framework is capable of producing consensus, polarization, and fragmentation with only assimilative forces between agents, which in other models typically lead exclusively to consensus.
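A minimal sketch of these dynamics, assuming simple linear update rules and random mixing (neither rate nor interaction structure is specified by the abstract), might look as follows:

```python
# Illustrative sketch only: belief reinforced by random experience,
# social influence acting on the decision threshold rather than the belief.
import random

N, STEPS = 100, 5000
belief = [random.random() for _ in range(N)]           # belief in a proposition
threshold = [0.5] * N                                  # decision criterion
opinion = [int(b > t) for b, t in zip(belief, threshold)]

for _ in range(STEPS):
    i = random.randrange(N)
    # (a) a random experience may reinforce the agent's belief (no social copying)
    experience = random.random()
    belief[i] += 0.1 * (experience - belief[i])
    # (b) social influence adjusts the ego's threshold, not its belief:
    # an alter holding opinion 1 lowers the ego's bar for adopting opinion 1
    j = random.randrange(N)
    threshold[i] += 0.05 * (0.5 - opinion[j])
    threshold[i] = min(max(threshold[i], 0.0), 1.0)
    # the agent switches opinion by comparing belief to its adjusted threshold
    opinion[i] = int(belief[i] > threshold[i])

print(sum(opinion), "of", N, "agents hold opinion 1")
```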
Over the past two decades, knowledge graphs (KGs) have evolved significantly, inspiring developers to build ever more context-related KGs. Thanks to this development, artificial intelligence (AI) applications can now access open domain-specific information in a format that is both semantically rich and machine-comprehensible. This article introduces a framework for the functional design of indoor workspaces and adaptive urban spaces, intended to help architects, artists, and interior designers design and construct an urban or indoor workspace based on the emotions of human individuals. For the creation of online adaptive environments, the framework may incorporate emotional, physiological, visual, and textual measures. Additionally, an information retrieval mechanism is presented that extracts critical information from the framework to assist these practitioners. The framework provides access to commonsense knowledge about the (re-)design of an urban area or an indoor workspace by suggesting objects to be placed and other modifications that can be applied to the location in order to elicit positive emotions. The emotions in question are those experienced by an individual when in the indoor or urban area, and they serve as pointers to the functionality, memorability, and admiration of the location. The framework also performs semantic matching between entities from the web KG ConceptNet and those in the framework’s own KG, using semantic knowledge from ConceptNet and WordNet. The paper provides a set of predefined SPARQL templates tailored to the ontology upon which the knowledge retrieval system is based. The framework additionally offers an argumentation function that allows users to challenge the findings of the knowledge retrieval component; if the user prevails in the argumentation, the framework learns new knowledge.
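As a hedged illustration of the SPARQL-template idea, a query of this kind could be issued with rdflib; the prefix, property names, and file name below are hypothetical placeholders, not the framework’s actual ontology:

```python
# Hypothetical sketch of one predefined SPARQL template; schema names are
# placeholders, not the framework's actual ontology.
from rdflib import Graph

TEMPLATE = """
PREFIX ex: <http://example.org/design#>
SELECT ?object ?emotion WHERE {
    ?object ex:placedIn ?space ;
            ex:evokes   ?emotion .
    FILTER (?emotion = ex:PositiveEmotion)
}
"""

g = Graph()
g.parse("framework_kg.ttl")   # hypothetical serialization of the framework KG
for row in g.query(TEMPLATE):
    print(row.object, row.emotion)
```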
While the number of international students attending UK universities has been increasing in recent years, the 2021/22 and 2022/23 academic years saw a decline in applications from EU-domiciled students. However, the extent and varying impact of this decline remain to be estimated and disentangled from the impacts of the COVID-19 pandemic. Using difference-in-differences (DID) in a hierarchical regression framework and Universities and Colleges Admissions Service (UCAS) data, we quantify the decline in student applications post-Brexit. We find evidence of an overall 65% decline in successful applications from EU students in the 2021 academic year as a result of Brexit. This decline is more pronounced for non-Russell Group institutions, as well as for Health and Life Sciences and Arts and Languages. Furthermore, we explore the spatial heterogeneity of the impact of Brexit across EU countries of origin, observing the greatest effects for Poland and Germany, though these vary by institution type and subject. We also show that greater COVID-19 stringency in the country of origin led to more applications to UK higher education institutions. Our results are important for government and institutional policymakers seeking to understand where losses occur and how international students respond to external shocks and policy changes. Our study quantifies the distinct impacts of Brexit and COVID-19 and offers valuable insights to guide strategic interventions to sustain the UK’s attractiveness as a destination for international students.
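A minimal sketch of the DID logic on synthetic data (omitting the paper’s hierarchical structure and COVID-19 stringency controls) could look like this:

```python
# Hedged sketch of a basic two-group, two-period DID specification on
# synthetic application counts; not the UCAS microdata or the paper's model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "eu":   np.tile([1, 0], 200),      # EU-domiciled vs. control group
    "post": np.repeat([0, 1], 200),    # pre- vs. post-Brexit cohort
})
# synthetic counts with a negative treatment effect baked in
df["applications"] = rng.poisson(100 * np.exp(-1.0 * df.eu * df.post))

# The coefficient on eu:post estimates the effect on log applications;
# exp(coef) - 1 converts it to a proportional change.
model = smf.ols("np.log(applications) ~ eu + post + eu:post", data=df).fit()
print(np.exp(model.params["eu:post"]) - 1)
```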
Causal machine learning tools are beginning to see use in real-world policy evaluation tasks to flexibly estimate treatment effects. One issue with these methods is that the machine learning models used are generally black boxes, that is, there is no globally interpretable way to understand how a model makes estimates. This is a clear problem for governments who want to evaluate policy, as it is difficult to understand whether such models are functioning in ways that are fair, based on the correct interpretation of evidence, and transparent enough to allow for accountability if things go wrong. However, there has been little discussion of transparency problems in the causal machine learning literature and how these might be overcome. This article explores why transparency issues are a problem for causal machine learning in public policy evaluation applications and considers ways these problems might be addressed through explainable AI tools and by simplifying models in line with interpretable AI principles. It then applies these ideas to a case study using a causal forest model to estimate conditional average treatment effects for a returns-to-education study. It shows that existing tools for understanding black-box predictive models are not as well suited to causal machine learning and that simplifying the model to make it interpretable leads to an unacceptable increase in error (in this application). It concludes that new tools are needed to properly understand causal machine learning models and the algorithms that fit them.
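For concreteness, a causal forest CATE estimation of the kind used in the case study might be sketched as follows, assuming econml’s CausalForestDML API and purely synthetic data:

```python
# Illustrative sketch only: a causal forest estimating conditional average
# treatment effects (CATEs); data and variable meanings are synthetic
# assumptions, not the paper's returns-to-education dataset.
import numpy as np
from econml.dml import CausalForestDML

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))                    # individual covariates
T = rng.binomial(1, 0.5, size=n)               # treatment, e.g. degree attainment
tau = 0.5 + 0.3 * X[:, 0]                      # heterogeneous true effect
Y = tau * T + X[:, 1] + rng.normal(size=n)     # outcome, e.g. log earnings

est = CausalForestDML(discrete_treatment=True, random_state=0)
est.fit(Y, T, X=X)
print(est.effect(X)[:5])                       # per-individual CATE estimates
```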
Controller synthesis offers a correct-by-construction methodology to ensure the correctness and reliability of safety-critical cyber-physical systems (CPS). Controllers are classified based on the types of controls they employ, which include reset controllers, feedback controllers and switching logic controllers. Reset controllers steer the behavior of a CPS to achieve system objectives by restricting its initial set and redefining its reset map associated with discrete jumps. Although the synthesis of feedback controllers and switching logic controllers has received considerable attention, research on reset controller synthesis is still in its early stages, despite its theoretical and practical significance. This paper outlines our recent efforts to address this gap. Our approach reduces the problem to computing differential invariants and reach-avoid sets. For polynomial CPS, the resulting problems can be solved by further reduction to convex optimizations. Moreover, considering the inevitable presence of time delays in CPS design, we further consider synthesizing reset controllers for CPS that incorporate delays.
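For concreteness, one standard formulation of these two ingredients (an illustrative assumption, not necessarily the paper’s exact definitions) is:

```latex
% Reach-avoid set for trajectories \xi_{x_0} of \dot{x} = f(x),
% with target set T and unsafe set U:
\[
  \mathrm{RA}(\mathcal{T},\mathcal{U}) = \bigl\{\, x_0 \;\big|\;
    \exists\, t \ge 0 :\ \xi_{x_0}(t) \in \mathcal{T}
    \ \wedge\ \forall\, s \in [0,t] :\ \xi_{x_0}(s) \notin \mathcal{U} \,\bigr\}
\]
% A differential invariant certified by a barrier-style condition:
% the sublevel set \{x \mid B(x) \le 0\} is invariant whenever
\[
  B(x) = 0 \;\Longrightarrow\; \nabla B(x) \cdot f(x) \le 0 .
\]
```

A reset controller then restricts the initial set and redefines the reset map so that every post-jump state lands inside such a reach-avoid set; for polynomial dynamics, the search for a certificate like $B$ can be reduced to convex (e.g., sum-of-squares) optimization.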
The walk matrix associated to an $n\times n$ integer matrix $\mathbf{X}$ and an integer vector $b$ is defined by ${\mathbf{W}} \,:\!=\, (b,{\mathbf{X}} b,\ldots, {\mathbf{X}}^{n-1}b)$. We study limiting laws for the cokernel of $\mathbf{W}$ in the scenario where $\mathbf{X}$ is a random matrix with independent entries and $b$ is deterministic. Our first main result provides a formula for the distribution of the $p^m$-torsion part of the cokernel, as a group, when $\mathbf{X}$ has independent entries from a specific distribution. The second main result relaxes the distributional assumption and concerns the ${\mathbb{Z}}[x]$-module structure.
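A small computational sketch of the objects involved (the entry distribution here is a toy choice, not the specific distribution treated in the paper):

```python
# Build the walk matrix W = (b, Xb, ..., X^{n-1} b) for a random integer
# matrix X, then read off coker(W) from the Smith normal form.
import random
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

n = 5
X = Matrix(n, n, lambda i, j: random.randint(-3, 3))  # independent entries (toy)
b = Matrix([1] * n)                                   # deterministic vector b

cols = [b]
for _ in range(n - 1):
    cols.append(X * cols[-1])
W = Matrix.hstack(*cols)

# Diagonal entries d_1 | d_2 | ... of the Smith normal form give
# coker(W) = Z^n / W Z^n  ≅  Z/d_1 ⊕ ... ⊕ Z/d_n.
D = smith_normal_form(W, domain=ZZ)
print([D[i, i] for i in range(n)])
```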
The motivation for this work arises from an open problem in spectral graph theory, which asks to show that random graphs are often determined up to isomorphism by their (generalised) spectrum. Namely, sufficient conditions for generalised spectral determinacy can be stated in terms of the cokernel of a walk matrix. Extensions of our results could potentially be used to determine how often those conditions are satisfied. Some remaining challenges for such extensions are outlined in the paper.
This paper explores the dynamic interplay between advanced technological developments in AI and Big Data and the sustained relevance of theoretical frameworks in scientific inquiry. It questions whether the abundance of data in the AI era reduces the necessity for theory or, conversely, enhances its importance. Arguing for a synergistic approach, the paper emphasizes the need to integrate computational capabilities with theoretical insight to uncover deeper truths within extensive datasets. The discussion extends into computational social science, where elements from sociology, psychology, and economics converge. The application of these interdisciplinary theories in the context of AI is critically examined, highlighting the need for methodological diversity and addressing the ethical implications of AI-driven research. The paper concludes by identifying future trends and challenges in AI and computational social science, offering a call to action for the scientific community, policymakers, and society. Positioned at the intersection of AI, data science, and social theory, this paper illuminates the complexities of our digital era and inspires a re-evaluation of the methodologies and ethics guiding our pursuit of knowledge.
Urban logistics has emerged as a priority to improve goods distribution and mobility within urban centers worldwide. Brazil presents a unique set of challenges in this regard due to issues such as excessive reliance on road transportation, lack of regulations, inadequate infrastructure, cargo theft, and the intricate interplay of cargo transportation with urban traffic. These challenges collectively exert a substantial influence on the economic, urban, and environmental performance of cities. This article introduces a novel approach for assessing and benchmarking urban logistics performance across Brazilian cities, with potential applicability to other contexts. The methodology is based on data envelopment analysis (DEA), evaluating efficiency using key indicators, including gross domestic product (GDP), population size, commercial establishments, urban area coverage, cargo fleet size, and travel time. By applying this methodology to 12 Brazilian cities, the study improves the understanding of their relative efficiency levels concerning urban logistics and provides key insights for policymaking. The results also show the relevance of the proposed methodology and help provide a perspective on different administrative and logistical facets through the lens of macroeconomic indicators, contributing to a holistic understanding of urban logistics dynamics.
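A minimal sketch of the input-oriented CCR DEA model underlying such an analysis, solved as a linear program with placeholder indicators, could read:

```python
# Hedged sketch: input-oriented CCR DEA as a linear program per city (DMU).
# The inputs/outputs below are toy placeholders, not the article's indicators.
import numpy as np
from scipy.optimize import linprog

X_in = np.array([[5.0, 8.0, 3.0, 6.0],     # e.g., cargo fleet size
                 [7.0, 4.0, 6.0, 5.0]])    # e.g., travel time
Y_out = np.array([[9.0, 6.0, 4.0, 8.0]])   # e.g., GDP served

n = X_in.shape[1]

def efficiency(o):
    c = np.zeros(n + 1)
    c[0] = 1.0                                          # minimize theta
    A, b = [], []
    for i in range(X_in.shape[0]):                      # inputs scaled by theta
        A.append(np.concatenate(([-X_in[i, o]], X_in[i])))
        b.append(0.0)
    for r in range(Y_out.shape[0]):                     # outputs at least DMU o's
        A.append(np.concatenate(([0.0], -Y_out[r])))
        b.append(-Y_out[r, o])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * (n + 1))
    return res.fun                                      # theta = efficiency score

print([round(efficiency(o), 3) for o in range(n)])      # 1.0 = efficient frontier
```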
The design of motion control systems for legged robots has always been a challenge. This article first proposes a motion control method for legged robots based on the gradient central pattern generator (GD-CPG). The periodic signals output by the GD-CPG neural network are used as the drive signals of each thigh joint, and are then converted into driving signals for the knee and ankle joints by the thigh–knee and knee–ankle mapping functions. The proposed control algorithm is adapted to quadruped and hexapod robots. To improve the ability of legged robots to cope with complex terrains, this article further proposes a responsive gradient-CPG motion control method. From the perspective of bionics, a biological vestibular sensory feedback mechanism is established in the control system. The mechanism adjusts the robot’s motion state in real time using the attitude angle of the body measured during motion, keeping the robot’s body stable as it moves over rugged terrain. Compared with the traditional feedback model, which only balances body pitch, this article adds balancing of body roll and yaw, balancing the legged robot’s motion in more dimensions and improving its linear motion capability. This article also introduces a differential evolution algorithm and designs a fitness function to adaptively optimize the vestibular sensory feedback parameters. The validity, robustness, and transferability of the method are verified through simulations and physical experiments.
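As a hedged illustration of the CPG idea, a Hopf oscillator (a common CPG building block, though not necessarily the paper’s GD-CPG equations) can generate the thigh drive, with simple hypothetical mapping functions deriving the knee and ankle drives:

```python
# Illustrative sketch only: a Hopf oscillator as the thigh drive, with
# hypothetical thigh-knee and knee-ankle mapping functions; the gains and
# mappings are assumptions, not the paper's GD-CPG model.
import math

def hopf_step(x, y, dt=0.01, mu=1.0, omega=2 * math.pi):
    # Hopf oscillator: converges to a stable limit cycle of radius sqrt(mu)
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

def thigh_to_knee(hip):      # hypothetical thigh-knee mapping function
    return 0.5 * max(hip, 0.0)

def knee_to_ankle(knee):     # hypothetical knee-ankle mapping function
    return -0.3 * knee

x, y = 1.0, 0.0
for _ in range(300):
    x, y = hopf_step(x, y)
    hip = x                             # periodic drive for the thigh joint
    knee = thigh_to_knee(hip)
    ankle = knee_to_ankle(knee)
```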
Cybersecurity has emerged as a paramount concern in today’s digital age, especially when considering the vast range of digital assets now in circulation, among which non-fungible tokens (NFTs) hold significant prominence. This chapter delves deeply into the intricate landscape of cybersecurity as it pertains to NFTs. By meticulously analyzing the multifaceted technical challenges and potential vulnerabilities inherent to NFTs from a cybersecurity perspective, this chapter seeks to provide an overview of the landscape as of this writing. Furthermore, this chapter explores how existing laws, policies, and societal norms have addressed these issues thus far, and speculates on how they might evolve in the future to more effectively bridge the governance gaps and safeguard these unique digital assets.
This chapter delves into the intricate relationship between digital assets, specifically non-fungible tokens (NFTs), and the regulatory landscape of anti-money laundering (AML) and countering the financing of terrorism (CFT). With the rapid emergence of NFTs, new challenges and opportunities have arisen, necessitating an exploration of evolving regulatory frameworks and enforcement measures to combat AML and CFT risks associated with digital assets. This chapter focuses on the unique characteristics of NFTs; AML and CFT risks within the NFT market; global regulatory developments; compliance challenges; technological solutions; enforcement actions; collaborative efforts; and future trends. By analyzing these aspects, this chapter aims to provide insights for policy-makers, regulators, scholars, and industry participants in effectively addressing financial crime risks in the digital asset landscape.
In the evolving landscape of technological discourse, non-fungible tokens (NFTs) have risen as pivotal instruments, notably within gaming and digital art. However, their implications are broader, touching upon real-world applications such as land titles and supply chain management. As the Web 3.0 architecture evolves, the role of NFTs in domain nomenclature and email addresses is increasingly significant. Yet, with the existence of alternate methods for these operations, a pertinent question emerges: Why opt for NFTs or blockchain-based solutions? Despite uncertainties surrounding adoption, many early adopters are zealously securing addresses on these avant-garde networks. This chapter delves into the conditions and reasons for considering this nascent technology.
This chapter highlights the dangers of linguistic inaccuracies and misunderstandings that permeate discussions on blockchain technology and non-fungible tokens (NFTs), impacting policy and legal outcomes. It identifies two critical issues hindering effective legislation: a lack of comprehension of blockchain technology’s technical nuances and a failure to appreciate the link between blockchain-related terminology and the intricacies of varying blockchain protocols. By borrowing frequently misused terms without questioning their technical accuracy, policy-makers may unwittingly stifle innovation and develop legal regimes that are ill-suited for their intended purpose. This chapter explores six specific language landmines prevalent in blockchain and NFT discussions, urging researchers, lawmakers, industry members, and other stakeholders to bridge the understanding gap. By addressing these linguistic pitfalls, the chapter advocates for informed and comprehensive policy-making that keeps pace with the evolving landscape of blockchain technology and its applications, including NFTs.
Non-fungible tokens (NFTs) introduce unique concerns related to the privacy of personal data. To create an NFT, users upload data to publicly accessible and searchable databases. This data can encompass information essential for the creation, transfer, and storage of the NFT, as well as personal details pertaining to the creator. Additionally, users might inadvertently engage with technology crafted to gather personal data. Traditional paradigms of privacy have not evolved in tandem with advancements in NFT and blockchain technology. To pinpoint where current privacy paradigms falter, this chapter begins with an introduction to NFTs, elucidating their foundational technical mechanisms and processes. Subsequently, the chapter juxtaposes current and historical privacy frameworks with NFTs, underscoring how these models may be either overly expansive or excessively restrictive for this emerging technology. This chapter suggests that Helen Nissenbaum’s concept of “contextual integrity” might offer the requisite flexibility to cater to the distinct attributes of NFTs. In conclusion, while there is a pronounced societal drive to safeguard citizen data and privacy, the overarching aim remains the enhancement of the collective good. In balancing these objectives, governments should be afforded the latitude to weigh society’s privacy interests against its imperative for transparency.