Society’s best-intentioned efforts to solve sustainability challenges have not yet achieved the expected gains due to rebound effects (i.e., negative consequences of interventions arising from induced changes in system behaviour). Rebound effects offset about 40% of potential sustainability gains, yet the potential of design as a key leverage point for preventing them remains largely untapped. In this position paper, three fundamental scientific gaps hampering the prevention of rebound effects are discussed: (1) knowledge about the rebound effects triggered by efficiency–effectiveness–sufficiency strategies is limited; (2) the role of the counterintuitive behaviour of complex socio-technical systems in giving rise to rebound effects is not yet understood; and (3) the bounded rationality within design limits the understanding of rebound effects at a broader systemic level. To address these gaps, novel methodologies, simulation models and strategies are required to enable the design of reboundless interventions (i.e., products, product/service-systems and socio-technical systems that are resilient to rebound effects). Building on the strong foundation of systems and design theory, this position paper argues for the need to bridge the interdisciplinary gap in the interplay of design and rebound effects, qualitative and quantitative models, engineering and social sciences, and theory and practice.
Nigeria has a significant gender gap in financial inclusion, with women disproportionately represented among the financially excluded. Artificial intelligence (AI)-powered financial technologies (fintech) present distinctive advantages for enhancing women’s inclusion, including efficiency gains, reduced transaction costs, and personalized services tailored to women’s needs. Nonetheless, AI harbours a paradox: while it promises to address financial inclusion, it can also inadvertently perpetuate and amplify gender bias. The critical question is thus: how can AI effectively address the challenges of women’s financial exclusion in Nigeria? Using publicly available data, this research undertakes a qualitative analysis of AI-powered fintech services in Nigeria. Its objective is to understand how innovations in financial services correspond to the needs of potential users such as unbanked or underserved women. The research finds that introducing innovative financial services and technology is insufficient to ensure inclusion. Financial inclusion requires the availability, accessibility, affordability, appropriateness, and sustainability of services, their alignment with the needs of potential users, and policy-driven strategies that aid inclusion.
After a series of O(log n)-approximation algorithms for the Asymmetric TSP, the first algorithm to beat the classical cycle cover algorithm by more than a constant factor was found in 2009 by Asadpour, Goemans, Mądry, Oveis Gharan, and Saberi. Their approach is based on finding a "thin" (oriented) spanning tree and then adding edges to obtain a tour. A major open question is how thin a spanning tree can always be guaranteed to exist.
The O(log n/loglog n)-approximation algorithm by Asadpour et al. samples a random spanning tree from the maximum entropy distribution. To show how this works, we discuss interesting connections between random spanning trees and electrical networks. Some results of this chapter will be used again in Chapters 10 and 11.
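To make the connection concrete, here is a small, self-contained Python sketch (an illustration of standard facts, not code from the book): it samples uniform spanning trees with Wilson’s loop-erased random-walk algorithm and checks, on a 4-cycle, the classical identity that the probability of an edge appearing in a uniformly random spanning tree equals its effective resistance. The maximum entropy distributions used by Asadpour et al. are weighted (λ-uniform) generalizations of this uniform case.

```python
import random
import numpy as np

def wilson_spanning_tree(adj, root=0, rng=random):
    """Sample a uniformly random spanning tree of a connected undirected
    graph via Wilson's algorithm (loop-erased random walks).
    `adj` maps each vertex to a list of its neighbours."""
    parent, in_tree = {}, {root}
    for start in adj:
        # Walk from `start` until the current tree is hit, remembering only
        # the last exit taken from each vertex; this erases loops implicitly.
        u, last_exit = start, {}
        while u not in in_tree:
            last_exit[u] = rng.choice(adj[u])
            u = last_exit[u]
        # Retrace the loop-erased path and attach it to the tree.
        u = start
        while u not in in_tree:
            parent[u] = last_exit[u]
            in_tree.add(u)
            u = last_exit[u]
    return parent  # tree edges stored as child -> parent

def effective_resistance(adj, u, v):
    """Effective resistance between u and v (unit conductances), computed
    from the Moore-Penrose pseudoinverse of the graph Laplacian."""
    n = len(adj)
    L = np.zeros((n, n))
    for i, nbrs in adj.items():
        for j in nbrs:
            L[i, j] -= 1.0
            L[i, i] += 1.0
    Lp = np.linalg.pinv(L)
    return Lp[u, u] - 2.0 * Lp[u, v] + Lp[v, v]

if __name__ == "__main__":
    # 4-cycle: every edge lies in a uniform spanning tree with probability
    # 3/4, which equals its effective resistance.
    cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    trees = [wilson_spanning_tree(cycle) for _ in range(20000)]
    freq = sum(t.get(1) == 0 for t in trees) / len(trees)
    print(f"empirical Pr[edge (0,1) in tree] ~ {freq:.3f}")
    print(f"effective resistance R(0,1)      = {effective_resistance(cycle, 0, 1):.3f}")
```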
This chapter is about the proof of the main payment theorem for hierarchies by Karlin, Klein, and Oveis Gharan, a key piece of their better-than-3/2-approximation algorithm for Symmetric TSP. Because the proof is very long and technical, we will not give a complete proof here but rather focus on explaining the key combinatorial ideas.
This chapter is structured as follows. First, we describe the general proof strategy and prove the theorem in an idealized setting. Then we discuss a few crucial properties of λ-uniform distributions. The following sections focus on the main ideas needed to address the hurdles we ignored in the idealized setting described initially.
Finally, we show how the Karlin–Klein–Oveis Gharan algorithm can be derandomized.
In this chapter and Chapter 8, we describe a constant-factor approximation algorithm for the Asymmetric TSP. Such an algorithm was first devised by Svensson, Tarnawski, and Végh. We present the improved version by Traub and Vygen, with an additional improvement that has not been published before.
The overall algorithm consists of four main components, three of which we will present in this chapter. First, we show that we can restrict attention to instances whose cost function is given by a solution to the dual LP with laminar support and an additional strong connectivity property. Second, we reduce such instances to so-called vertebrate pairs. Third, we will adapt Svensson’s algorithm from Chapter 6 to deal with vertebrate pairs. The remaining piece, an algorithm for subtour cover, will be presented in Chapter 8.
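For orientation, here is a sketch of the standard (Held–Karp) LP relaxation of the Asymmetric TSP, the linear program whose dual is referred to above (a textbook formulation, not quoted from this chapter):

\[
\begin{aligned}
\min\;& \sum_{e \in E} c(e)\, x_e \\
\text{s.t.}\;& x(\delta^+(v)) = x(\delta^-(v)) = 1 && \text{for all } v \in V,\\
& x(\delta^+(U)) \ge 1 && \text{for all } \emptyset \ne U \subsetneq V,\\
& x_e \ge 0 && \text{for all } e \in E.
\end{aligned}
\]

Its dual has a variable for every vertex and every vertex set; by standard uncrossing arguments one may assume that the sets carrying nonzero dual values form a laminar family, which is the laminar support exploited in the reduction.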
By combining the removable pairing technique presented in Chapter 12 with a new approach based on ear-decompositions and matroid intersection, Sebő and Vygen improved the approximation ratio for Graph TSP from 13/9 to 7/5. We will present this algorithm, which is still the best-known approximation algorithm for Graph TSP, in this chapter.
An interesting feature of this algorithm is that it is purely combinatorial, does not need to solve a linear program, and runs in O(n³) time. To describe the algorithm, we review some matching theory, including a theorem of Frank that links ear-decompositions to T-joins. A slight variant of the Graph TSP algorithm yields a 4/3-approximation algorithm for finding a smallest 2-edge-connected spanning subgraph, which was the best known ratio for many years. The proofs will also imply corresponding upper bounds on the integrality ratios.
Improved health data governance is urgently needed due to the increasing use of digital technologies that facilitate the collection of health data and growing demand to use that data in artificial intelligence (AI) models that contribute to improving health outcomes. While most of the discussion around health data governance is focused on policy and regulation, we present a practical perspective. We focus on the context of low-resource government health systems, using first-hand experience of the Zanzibar health system as a specific case study, and examine three aspects of data governance: informed consent, data access and security, and data quality. We discuss the barriers to obtaining meaningful informed consent, highlighting the need for more research to determine how to effectively communicate about data and AI and to design effective consent processes. We then report on the process of introducing data access management and information security guidelines into the Zanzibar health system, demonstrating the gaps in capacity and resources that must be addressed during the implementation of a health data governance policy in a low-resource government system. Finally, we discuss the quality of service delivery data in low-resource health systems such as Zanzibar’s, highlighting that a large quantity of data does not necessarily ensure its suitability for AI development. Poor data quality can be addressed to some extent through improved data governance, but the problem is inextricably linked to the weakness of a health system, and therefore AI-quality data cannot be obtained through technological or data governance measures alone.
In the literature, there are polarized views regarding the capability of technology to embed societal values. One side of the debate contends that technical artifacts are value-neutral, since values cannot be attributed to inanimate objects. Scholars on the other side argue that technologies tend to be value-laden. With the call to embed ethical values in technology, this article explores how AI and other adjacent technologies can be designed and developed to foster social justice. Drawing insights from prior studies, this paper identifies seven African moral values considered central to actualizing social justice; of these, two stand out: respect for diversity and ethnic neutrality. By introducing use case analysis along with the Discovery, Translation, and Verification (DTV) framework and validating it via focus group discussion, this study reveals three novel findings. First, ethical value analysis is best carried out alongside software system analysis. Second, embedding ethics in technology requires interdisciplinary expertise. Third, the DTV approach combined with software engineering methodology provides a promising way to embed moral values in technology. Against this backdrop, the two highlighted ethical values, respect for diversity and ethnic neutrality, help ground the pursuit of social justice.
This article constructs the moduli stack of torsion-free $G$-jet-structures in homotopy type theory with one monadic modality. This yields a construction of the moduli stack for any $\infty$-topos equipped with a stable factorization system.
In the intended applications of this theory, the factorization system is given by the de Rham stack construction. Homotopy type theory allows a formulation of this abstract theory with surprisingly low complexity, as witnessed by the accompanying formalization of large parts of this work.
The EUMigraTool (EMT) provides short-term and mid-term predictions of asylum seekers arriving in the European Union, drawing on multiple sources of public information and with a focus on human rights. After 3 years of development, it has been tested in real environments by 17 NGOs working with migrants in Spain, Italy, and Greece.
This paper will first describe the functionalities, models, and features of the EMT. It will then analyze the main challenges and limitations of developing a tool for non-profit organizations, focusing on issues such as (1) the validation process and accuracy, and (2) the main ethical concerns, including the challenging exploitation plan when the main target group is NGOs.
The overall purpose of this paper is to share the results and lessons learned from the creation of the EMT, and to reflect on the main elements that need to be considered when developing a predictive tool for assisting NGOs in the field of migration.
In the mid to late 19th century, much of Africa was under colonial rule, with the colonisers exercising power over the labour and territory of Africa. However, although Africa has largely gained independence from traditional colonial rule, another form of colonial rule still dominates the African landscape. The similarity between these different forms of colonialism lies in the power dominance exhibited by Western technological corporations, much like that of the traditional colonialists. In this digital age, digital colonialism manifests in Africa through the control and ownership of critical digital infrastructure by foreign entities, leading to unequal data flows and asymmetrical power dynamics. This usually occurs under the guise of foreign corporations providing technological assistance to the continent.
Drawing references from across the African continent, this article examines the manifestations of digital colonialism and the factors that aid its occurrence. It further explores how digital colonialism manifests in technologies such as Artificial Intelligence (AI), analysing data exploitation on the continent and the need for African ownership in cultivating the continent’s digital future. The paper also recognises the benefits linked to the use of AI and urges a cautious approach to the deployment of AI tools in Africa. It concludes by recommending the implementation of laws, regulations, and policies that guarantee the inclusiveness, transparency, and ethical values of new technologies, together with strategies for achieving a decolonised digital future on the African continent.
Precise pose estimation is crucial for various robots. In this paper, we present a localization method using the correlative scan matching (CSM) technique for indoor mobile robots equipped with 2D-LiDAR, providing precise and fast pose estimation on a common occupancy map. Our method comprises a pose tracking module and a global localization module. On the one hand, the pose tracking module corrects accumulated odometry errors by CSM within the classical Bayesian filtering framework. A low-pass filter that fuses the pose predicted from odometry with the pose corrected by CSM is applied to improve the precision and smoothness of pose tracking. On the other hand, our localization method can autonomously detect localization failures using several designed trigger criteria. Once a localization failure occurs, the global localization module can quickly recover the correct robot pose by leveraging a branch-and-bound method that minimizes the number of candidates evaluated by CSM. Our localization method has been validated extensively in simulated, public dataset-based, and real environments. The experimental results reveal that the proposed method achieves high-precision, real-time pose estimation and quick pose recovery, and outperforms the compared methods.
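As a purely illustrative aside on the preceding abstract (the function names, pose representation, and blending constant below are hypothetical, not taken from the paper), a low-pass fusion of an odometry-predicted pose with a CSM-corrected pose can be sketched in Python as follows, taking care to blend the heading on the circle:

```python
import math
import numpy as np

def blend_angle(a, b, alpha):
    """Interpolate a fraction `alpha` of the way from heading a to heading b
    (radians), along the shorter arc, and renormalize to (-pi, pi]."""
    diff = math.atan2(math.sin(b - a), math.cos(b - a))
    blended = a + alpha * diff
    return math.atan2(math.sin(blended), math.cos(blended))

def low_pass_fuse(odom_pose, csm_pose, alpha=0.7):
    """Hypothetical low-pass fusion of two (x, y, theta) poses:
    weight `alpha` on the CSM-corrected pose, 1 - alpha on odometry."""
    x = (1.0 - alpha) * odom_pose[0] + alpha * csm_pose[0]
    y = (1.0 - alpha) * odom_pose[1] + alpha * csm_pose[1]
    theta = blend_angle(odom_pose[2], csm_pose[2], alpha)
    return np.array([x, y, theta])

# Example: odometry has drifted slightly; CSM pulls the estimate back.
print(low_pass_fuse((1.02, 0.48, 0.10), (1.00, 0.50, 0.08)))
```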
This is a foundation for algebraic geometry, developed internal to the Zariski topos, building on the work of Kock and Blechschmidt (Kock (2006) [I.12], Blechschmidt (2017)). The Zariski topos consists of sheaves on the site opposite to the category of finitely presented algebras over a fixed ring, with the Zariski topology, that is, generating covers are given by localization maps for finitely many elements $f_1,\dots, f_n$ that generate the ideal $(1)=A\subseteq A$. We use homotopy-type theory together with three axioms as the internal language of a (higher) Zariski topos. One of our main contributions is the use of higher types – in the homotopical sense – to define and reason about cohomology. Actually computing cohomology groups seems to need a principle along the lines of our “Zariski local choice” axiom, which we justify as well as the other axioms using a cubical model of homotopy-type theory.
In this study, a novel kinematic modeling method for parallel mechanisms is proposed. It can obtain the position and posture spaces simultaneously in a single model. Compared with the traditional method based only on inverse kinematics, the novel method can significantly improve computational performance. An original evaluation metric $\mathfrak{R}$ is proposed to evaluate the performance of the two modeling methods. Three groups of experiments with different calculation times are carried out for the classical PPU-3RUS parallel mechanism and the new RS-3UPRU parallel mechanism, after which the effectiveness and wide applicability of the novel modeling method are demonstrated. The calculation time and output rate are recorded, and the respective $\mathfrak{R}$ values are then obtained by weighting. The results show that the novel modeling method has better performance.
Actor languages realize concurrency via message passing, which is easy to use most of the time. Empirical code inspection provides evidence, however, that programmers occasionally wish to have an actor share some of its state with others. The dataspace model adds a tightly controlled state-exchange mechanism, dubbed the dataspace, to the actor model for just this purpose. Experience with dataspaces suggests that this form of sharing calls for linguistic constructs that allow programmers to state temporal aspects of actor conversations. In response, this paper presents the facet notation: its theory, its type system, its behavioral type system, and some first experiences with an implementation.
A graph $G$ is $q$-Ramsey for another graph $H$ if in any $q$-edge-colouring of $G$ there is a monochromatic copy of $H$, and the classic Ramsey problem asks for the minimum number of vertices in such a graph. This was broadened in the seminal work of Burr, Erdős, and Lovász to the investigation of other extremal parameters of Ramsey graphs, including the minimum degree.
It is not hard to see that if $G$ is minimally $q$-Ramsey for $H$ we must have $\delta (G) \ge q(\delta (H) - 1) + 1$, and we say that a graph $H$ is $q$-Ramsey simple if this bound can be attained. Grinshpun showed that this is typical of rather sparse graphs, proving that the random graph $G(n,p)$ is almost surely $2$-Ramsey simple when $\frac{\log n}{n} \ll p \ll n^{-2/3}$. In this paper, we explore this question further, asking for which pairs $p = p(n)$ and $q = q(n,p)$ we can expect $G(n,p)$ to be $q$-Ramsey simple.
We first extend Grinshpun’s result by showing that $G(n,p)$ is not just $2$-Ramsey simple, but is in fact $q$-Ramsey simple for any $q = q(n)$, provided $p \ll n^{-1}$ or $\frac{\log n}{n} \ll p \ll n^{-2/3}$. Next, when $p \gg \left ( \frac{\log n}{n} \right )^{1/2}$, we find that $G(n,p)$ is not $q$-Ramsey simple for any $q \ge 2$. Finally, we uncover some interesting behaviour for intermediate edge probabilities. When $n^{-2/3} \ll p \ll n^{-1/2}$, we find that there is some finite threshold $\tilde{q} = \tilde{q}(H)$, depending on the structure of the instance $H \sim G(n,p)$ of the random graph, such that $H$ is $q$-Ramsey simple if and only if $q \le \tilde{q}$. Aside from a couple of logarithmic factors, this resolves the qualitative nature of the Ramsey simplicity of the random graph over the full spectrum of edge probabilities.
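For orientation, the minimum degree bound $\delta(G) \ge q(\delta(H) - 1) + 1$ quoted above follows from a standard argument along these lines (a sketch, not quoted from the paper): suppose $G$ is minimally $q$-Ramsey for $H$ but some vertex $v$ has $\deg_G(v) \le q(\delta(H) - 1)$. By minimality, $G - v$ admits a $q$-colouring with no monochromatic copy of $H$; extend it to $G$ by splitting the edges at $v$ into $q$ colour classes, each of size at most $\delta(H) - 1$. Then $v$ sees fewer than $\delta(H)$ edges of each colour, so it cannot play the role of any vertex of $H$ in a monochromatic copy, and no such copy avoids $v$ either. Hence $G$ would not be $q$-Ramsey, a contradiction.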
Self-instructional media in education has the potential to address educational challenges such as accessibility, flexible and personalised learning, real-time assessment and resource efficiency. The objectives of this study are to (1) develop programmed instructions to teach design thinking concepts and (2) investigate their effects on secondary school students’ understanding of these concepts. A design thinking workshop was conducted with secondary school students; subsequently, their understanding of design thinking concepts gained through digital programmed instructions was evaluated. The study involved 33 novice secondary school students from grades 6 to 9 in India, who worked in teams to find and solve real-life, open-ended, complex problems during the workshop using the design thinking process. Data on (i) individual performance in understanding design thinking concepts and (ii) team performance in design problem finding and solving were collected using individual tests and evaluations of team outcomes, respectively. Students’ perceptions of the effectiveness of the programmed instructions in supporting understanding of the concepts were also captured. The results show positive effects on students’ understanding of design thinking concepts as well as on their problem-finding and problem-solving skills, and justify the use of programmed instructions in secondary school curricula to advance design thinking. The current version of the programmed instruction has limitations, including the absence of branching mechanisms, a detailed feedback system, multimodal content and backend functionalities. Future work will aim to address these shortcomings.
Discussions of the development and governance of data-driven systems have, of late, come to revolve around questions of trust and trustworthiness. However, the connections between the two remain relatively understudied, as do, more importantly, the conditions under which the quality of trustworthiness might reliably lead to the placing of ‘well-directed’ trust. In this paper, we argue that this challenge of creating ‘rich’ trustworthiness, which we term the Trustworthiness Recognition Problem (TRP), can usefully be approached as a problem of effective signalling, and we suggest that its resolution can be informed by a multidisciplinary approach drawing on insights from economics and behavioural ecology. We suggest, overall, that the domain specificity inherent to the signalling theory paradigm offers an effective solution to the TRP, which we believe will be foundational to whether and how rapidly improving technologies are integrated in the healthcare space. Solving the TRP will not be possible without such an interdisciplinary approach, and we suggest further avenues of inquiry that we believe will be fruitful.
Generative artificial intelligence (GenAI) has gained significant popularity in recent years. It is being integrated into a variety of sectors for its abilities in content creation, design, research, and many other functionalities. The capacity of GenAI to create new content—ranging from realistic images and videos to text and even computer code—has caught the attention of both the industry and the general public. The rise of publicly available platforms that offer these services has also made GenAI systems widely accessible, contributing to their mainstream appeal and dissemination. This article delves into the transformative potential and inherent challenges of incorporating GenAI into the domain of judicial decision-making. The article provides a critical examination of the legal and ethical implications that arise when GenAI is used in judicial rulings and their underlying rationale. While the adoption of this technology holds the promise of increased efficiency in the courtroom and expanded access to justice, it also introduces concerns regarding bias, interpretability, and accountability, thereby potentially undermining judicial discretion, the rule of law, and the safeguarding of rights. Around the world, judiciaries in different jurisdictions are taking different approaches to the use of GenAI in the courtroom. Through case studies of GenAI use by judges in jurisdictions including Colombia, Mexico, Peru, and India, this article maps out the challenges presented by integrating the technology in judicial determinations, and the risks of embracing it without proper guidelines for mitigating potential harms. Finally, this article develops a framework that promotes a more responsible and equitable use of GenAI in the judiciary, ensuring that the technology serves as a tool to protect rights, reduce risks, and ultimately, augment judicial reasoning and access to justice.
Expert drivers possess the ability to execute high sideslip angle maneuvers, commonly known as drifting, during racing to navigate sharp corners and make rapid turns. However, existing model-based controllers struggle with the highly nonlinear dynamics associated with drifting along general paths. While reinforcement learning-based methods alleviate the reliance on explicit vehicle models, training a policy directly for autonomous drifting remains difficult due to its multiple objectives. In this paper, we propose a control framework for autonomous drifting in the general case, based on curriculum reinforcement learning. The framework empowers the vehicle to follow paths with varying curvature at high speeds, while executing drifting maneuvers through sharp corners. Specifically, we exploit the vehicle’s dynamics to decompose the overall task and employ curriculum learning to break the training process into three stages of increasing complexity. Additionally, to enhance the generalization ability of the learned policies, we introduce randomization into sensor observation noise, actuator action noise, and physical parameters. The proposed framework is validated using the CARLA simulator, encompassing various vehicle types and parameters. Experimental results demonstrate the effectiveness and efficiency of our framework in achieving autonomous drifting along general paths. The code is available at https://github.com/BIT-KaiYu/drifting.
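As an illustration of the randomization step described above (the wrapper interface, method names such as `set_friction`, and the noise scales are hypothetical placeholders, not the CARLA or repository API), a minimal domain-randomization sketch in Python:

```python
import numpy as np

class DomainRandomizationWrapper:
    """Wraps a gym-style environment: perturbs observations and actions with
    Gaussian noise and resamples a physical parameter (here, tyre friction)
    at the start of every episode."""

    def __init__(self, env, obs_noise=0.01, act_noise=0.02,
                 friction_range=(0.8, 1.2), seed=0):
        self.env = env
        self.obs_noise = obs_noise
        self.act_noise = act_noise
        self.friction_range = friction_range
        self.rng = np.random.default_rng(seed)

    def reset(self):
        # Resample the physical parameter once per episode (hypothetical setter).
        self.env.set_friction(self.rng.uniform(*self.friction_range))
        obs = np.asarray(self.env.reset())
        return obs + self.rng.normal(0.0, self.obs_noise, size=obs.shape)

    def step(self, action):
        noisy_action = np.asarray(action) + self.rng.normal(
            0.0, self.act_noise, size=np.shape(action))
        obs, reward, done, info = self.env.step(noisy_action)
        obs = np.asarray(obs)
        return (obs + self.rng.normal(0.0, self.obs_noise, size=obs.shape),
                reward, done, info)
```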