Public administrations are increasingly deploying algorithmic systems to facilitate the application, execution, and enforcement of regulation, a practice that can be denoted as algorithmic regulation. While their reliance on digital technology is not new, both the scale at which they automate administrative acts and the importance of the decisions they delegate to algorithmic tools are on the rise. In this chapter, I contextualize this phenomenon and discuss the implementation of algorithmic regulation across several public sector domains. I then assess some of the ethical and legal conundrums that public administrations face when outsourcing their tasks to such systems and provide an overview of the legal framework that governs this practice, with a particular focus on the European Union. This framework encompasses not only constitutional and administrative law but also data protection law and AI-specific law. Finally, I offer some take-aways for public administrations to consider when seeking to deploy algorithmic regulation.
Firms use algorithms for important decisions in areas from pricing strategy to product design. Increased price transparency and availability of personal data, combined with ever more sophisticated machine learning algorithms, have turbocharged their use. Algorithms can be a procompetitive force, such as when used to undercut competitors or to improve recommendations. But algorithms can also distort competition, as when firms use them to collude or to exclude competitors. EU competition law, in particular its provisions on restrictive agreements and abuse of dominance (Articles 101–102 TFEU), prohibits such practices, but novel anticompetitive practices – when algorithms collude autonomously for example – may escape its grasp. This chapter assesses to what extent anticompetitive algorithmic practices are covered by EU competition law, examining horizontal agreements (collusion), vertical agreements (resale price maintenance), exclusionary conduct (ranking), and exploitative conduct (personalized pricing).
The actors that are active in the financial world process vast amounts of information, starting from customer data and account movements over market trading data to credit underwriting or money-laundering checks. It is one thing to collect and store these data, yet another challenge to interpret and make sense of them. AI helps with both, for example, by checking databases or crawling the Internet in search of relevant information, by sorting it according to predefined categories, or by finding its own sorting parameters. It is hence unsurprising that AI has started to fundamentally change many aspects of finance. This chapter takes AI scoring and creditworthiness assessments as an example of how AI is employed in financial services (Section 16.2), of the ethical challenges this raises (Section 16.3), and of the legal tools that attempt to adequately balance the advantages and challenges of this technique (Section 16.4). It closes with a look at scoring beyond the credit situation (Section 16.5).
The main goal of this chapter is to introduce one type of AI used for law enforcement, namely predictive policing, and to discuss the main legal, ethical, and social concerns it raises. In the last two decades, police forces in Europe and in North America have increasingly invested in predictive policing applications. Two types of predictive policing will be discussed: predictive mapping and predictive identification. After describing these two practices and what is known about their effectiveness, I discuss the legal, ethical, and social issues they raise, covering aspects relating to their efficacy, governance, and organizational use, as well as the impact they have on citizens and society.
Adaptation to dynamic environments and variable task sequences is a critical ability for robot navigation and task execution. The Cyclic Networking Rapidly-exploring Random Tree (CNRRT) method is proposed to obtain the optimal path in real time and to provide long-term path-planning ability in complex dynamic environments. Cyclic branches are introduced into the acyclic graph of the Rapidly-exploring Random Tree (RRT) method, forming a decentralized path network in the configuration space. An iterative searching strategy is built to find the optimal path in this network. Branch pruning, reconnection, and regrowth processes enable the decentralized network to respond efficiently to dynamic changes in the environment. The CNRRT can search for the real-time optimal path in a dynamic environment, handling configuration and task changes robustly. Moreover, the CNRRT performs consistently in scenarios with long-term task sequences, without significant performance fluctuation. Simulations and real-world comparative experiments verify the effectiveness of the proposed method.
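To make the core idea concrete, the sketch below illustrates what "adding cyclic branches to an RRT and searching the resulting network" can look like: a tree is grown by random sampling, extra edges are added between nearby nodes so the tree becomes a network, and path queries are answered by a shortest-path search. The sampling routine, radii, and cost function are illustrative assumptions, not the authors' CNRRT implementation (which additionally handles pruning, reconnection, and regrowth online).

```python
import heapq, math, random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_network(start, n_samples=500, step=0.5, cycle_radius=0.8, bounds=(0.0, 10.0)):
    """Grow an RRT-style tree, then add 'cyclic' edges between nearby nodes."""
    nodes = [start]
    edges = {0: {}}                                    # node index -> {neighbour: edge cost}
    for _ in range(n_samples):
        q_rand = (random.uniform(*bounds), random.uniform(*bounds))
        nearest = min(range(len(nodes)), key=lambda i: dist(nodes[i], q_rand))
        d = dist(nodes[nearest], q_rand)
        if d == 0.0:
            continue
        q_new = q_rand if d <= step else tuple(
            nodes[nearest][k] + step * (q_rand[k] - nodes[nearest][k]) / d for k in (0, 1))
        new_idx = len(nodes)
        nodes.append(q_new)
        edges[new_idx] = {}
        edges[nearest][new_idx] = edges[new_idx][nearest] = dist(nodes[nearest], q_new)  # tree edge
        for i in range(new_idx):                       # cyclic edges: the tree becomes a network
            if i != nearest and dist(nodes[i], q_new) < cycle_radius:
                edges[i][new_idx] = edges[new_idx][i] = dist(nodes[i], q_new)
    return nodes, edges

def shortest_path(edges, source, target):
    """Dijkstra over the path network; stands in for the iterative search strategy."""
    queue, seen = [(0.0, source, [source])], set()
    while queue:
        cost, u, path = heapq.heappop(queue)
        if u == target:
            return cost, path
        if u in seen:
            continue
        seen.add(u)
        for v, w in edges[u].items():
            if v not in seen:
                heapq.heappush(queue, (cost + w, v, path + [v]))
    return math.inf, []
```

Because the roadmap contains cycles, an obstacle that invalidates one branch does not disconnect the goal; alternative routes through the network remain available, which is the property the CNRRT exploits in dynamic environments.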
The increasing size and severity of wildfires across the western United States have generated dangerous levels of PM2.5 concentrations in recent years. In a changing climate, expanding the use of prescribed fires is widely considered to be the most robust fire mitigation strategy. However, reliably forecasting the potential air quality impact from prescribed fires at hourly to daily time scales, which is critical for planning the location and timing of prescribed fires, remains a challenging problem. In this paper, we introduce a spatio-temporal graph neural network (GNN)-based forecasting model for hourly PM2.5 predictions across California. Utilizing a two-step approach, we use our forecasting model to predict the net and ambient PM2.5 concentrations, which are used to estimate wildfire contributions. Integrating the GNN-based PM2.5 forecasting model with simulations of historically prescribed fires, we propose a novel framework to forecast their air quality impact. This framework determines that March is the optimal month for implementing prescribed fires in California and quantifies the potential air quality trade-offs involved in conducting more prescribed fires outside the peak of the fire season.
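As a rough illustration of the two-step attribution described above, the following sketch assumes the forecasting model produces both a net and an ambient (smoke-free) hourly PM2.5 forecast per node, and derives the fire contribution as their non-negative difference; the arrays and values are placeholders, not the paper's data.

```python
import numpy as np

def fire_contribution(net_pm25: np.ndarray, ambient_pm25: np.ndarray) -> np.ndarray:
    """Estimated fire share of hourly PM2.5 per node: net minus ambient, floored at zero."""
    return np.clip(net_pm25 - ambient_pm25, 0.0, None)

# Illustrative hourly forecasts (ug/m^3) for 2 monitoring nodes over 4 hours.
net = np.array([[12.0, 18.5, 30.2, 22.1],
                [25.0, 40.7, 55.9, 33.4]])
ambient = np.array([[10.0, 11.2, 12.0, 11.5],
                    [12.3, 13.0, 14.2, 13.1]])
print(fire_contribution(net, ambient))
```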
Simulated data from the Coupled Model Intercomparison Project Phase 6 (CMIP6) have been very important for climate science research, as they can provide wide spatio-temporal coverage to address data deficiencies in both present and future scenarios. However, these physics-based models require a huge amount of high-performance computing (HPC) resources. As an alternative approach, researchers are exploring whether such simulated data can be generated by generative machine learning models. In this work, we develop a model based on the Pix2Pix conditional Generative Adversarial Network (cGAN), which can generate high-resolution spatial maps of global sea surface temperature (SST) using comparatively little computing power and time. We show that the maps generated by these models have statistical characteristics similar to those of the CMIP6 model simulations. Notably, we trained and validated our cGAN model on completely distinct time periods across all ensemble members of the EC-Earth3-CC and CMCC-CM2-SR5 CMIP6 models, demonstrating satisfactory results and confirming the generalizability of our proposed model.
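For orientation, the sketch below shows the shape of a Pix2Pix-style cGAN training step (adversarial loss plus an L1 reconstruction term on conditioned image pairs). The tiny networks are toy stand-ins for the U-Net generator and PatchGAN discriminator used in Pix2Pix, and all shapes, channels, and hyperparameters are illustrative assumptions rather than the paper's architecture.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))           # condition map -> generated SST map
D = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, padding=1))           # (condition, SST) pair -> patch scores

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))

def train_step(cond, real, lambda_l1=100.0):
    """One cGAN update: discriminator on real vs. fake pairs, then generator."""
    fake = G(cond)

    # Discriminator: push real pairs toward 1 and fake pairs toward 0.
    d_real = D(torch.cat([cond, real], dim=1))
    d_fake = D(torch.cat([cond, fake.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the target map (L1).
    d_fake = D(torch.cat([cond, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, real)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# cond and real are batches of shape (N, 1, H, W); call train_step(cond, real) per batch.
```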
This research proposes a novel conceptual framework that combines the concepts of Human-Computer Interaction (HCI) and Ambient Intelligence (AmI). The proposed framework aims to shed light on the importance of considering the needs and the social interactions of various building occupants in different types of buildings and designing HBI strategies accordingly. Specifically, we take educational buildings as a case that is less explored in the HBI research and apply the proposed framework, investigating how HBI strategies and interactions should be designed to address the needs of students, as primary occupants. Focus groups and semi-structured interviews were conducted among students in a flagship smart engineering building at Virginia Tech. Qualitative coding and concept mapping were used to analyze the qualitative data and determine the impact of occupant-specific needs on the learning experience of students. “Finding study space” was found to have the highest direct impact on the learning experience of students, and “Indoor Environment Quality (IEQ)” was found to have the highest indirect impact. The results show a clear need to integrate occupant needs in designing HBI strategies in different types of buildings. Finally, we discuss new ideas for designing potential Intelligent User Interfaces (IUI) to address the identified needs.
This study explores the relationship between alter centrality in various social domains and the perception of linguistic similarity within personal networks. Linguistic similarity perception is defined as the extent to which individuals perceive others to speak similarly to themselves. A survey of 126 college students and their social connections (n = 1035) from the French-speaking region of Switzerland was conducted. We applied logistic multilevel regressions to account for the hierarchical structure of dyadic ties. The results show that alters holding central positions in supportive networks are positively associated with perceived linguistic similarity, while those who are central in conflict networks show a negative association. The role of ambivalence yielded mixed results, with a positive and significant association emerging when ambivalence was linked to family members.
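As an illustration of the kind of model referred to here, the hedged sketch below fits a random-intercept (multilevel) logistic regression over dyadic ties nested within egos using statsmodels. The file, column names, and formula are hypothetical placeholders, not the study's actual specification.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

# Hypothetical data: one row per ego-alter tie, 'perceived_similar' coded 0/1.
df = pd.read_csv("dyads.csv")

model = BinomialBayesMixedGLM.from_formula(
    "perceived_similar ~ support_centrality + conflict_centrality + is_family",
    vc_formulas={"ego": "0 + C(ego_id)"},   # random intercept per ego (level-2 unit)
    data=df,
)
result = model.fit_vb()                      # variational Bayes estimation
print(result.summary())
```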
Digital twins are a new paradigm for our time, offering the possibility of interconnected virtual representations of the real world. The concept is very versatile and has been adopted by multiple communities of practice, policymakers, researchers, and innovators. A significant part of the digital twin paradigm is about interconnecting digital objects, many of which have previously not been combined. As a result, members of the newly forming digital twin community are often talking at cross-purposes, based on different starting points, assumptions, and cultural practices. These differences are due to the philosophical world-views adopted within specific communities. In this paper, we explore the philosophical context which underpins the digital twin concept. We offer the building blocks for a philosophical framework for digital twins, consisting of 21 principles that are intended to help facilitate their further development. Specifically, we argue that the philosophy of digital twins is fundamentally holistic and emergentist. We further argue that in order to enable emergent behaviors, digital twins should be designed to reconstruct the behavior of a physical twin by “dynamically assembling” multiple digital “components”. We also argue that digital twins naturally include aspects relating to the philosophy of artificial intelligence, including learning and exploitation of knowledge. We discuss the following four questions: (i) What is the distinction between a model and a digital twin? (ii) What previously unseen results can we expect from a digital twin? (iii) How can emergent behaviors be predicted? (iv) How can we assess the existence and uniqueness of digital twin outputs?
This article establishes a data-driven modeling framework for lean hydrogen ($ {\mathrm{H}}_2 $)-air reaction rates for the Large Eddy Simulation (LES) of turbulent reactive flows. This is particularly challenging since $ {\mathrm{H}}_2 $ molecules diffuse much faster than heat, leading to large variations in burning rates, thermodiffusive instabilities at the subfilter scale, and complex turbulence-chemistry interactions. Our data-driven approach leverages a Convolutional Neural Network (CNN), trained to approximate filtered burning rates from emulated LES data. First, five different lean premixed turbulent $ {\mathrm{H}}_2 $-air flame Direct Numerical Simulations (DNSs) are computed, each with a unique global equivalence ratio. Second, DNS snapshots are filtered and downsampled to emulate LES data. Third, a CNN is trained to approximate the filtered burning rates as a function of LES scalar quantities: progress variable, local equivalence ratio, and flame thickening due to filtering. Finally, the performance of the CNN model is assessed on test solutions never seen during training. The model retrieves burning rates with very high accuracy. It is also tested on two filter and downsampling parameter sets and on two global equivalence ratios lying between those used during training. For these interpolation cases, the model approximates burning rates with low error even though the cases were not included in the training dataset. This a priori study shows that the proposed data-driven machine learning framework is able to address the challenge of modeling lean premixed $ {\mathrm{H}}_2 $-air burning rates. It paves the way for a new modeling paradigm for the simulation of carbon-free hydrogen combustion systems.
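The sketch below illustrates the third step in spirit: a small convolutional network that maps three filtered LES scalar fields (progress variable, local equivalence ratio, flame thickening) to a filtered burning-rate field, trained with a mean-squared-error loss. The layer sizes, 3D tensor layout, and training loop are illustrative assumptions, not the architecture used in the article.

```python
import torch
import torch.nn as nn

class BurningRateCNN(nn.Module):
    """Toy CNN: 3 filtered scalar fields in, 1 filtered burning-rate field out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1, kernel_size=1),   # one burning-rate value per LES cell
        )

    def forward(self, x):                      # x: (batch, 3, nx, ny, nz)
        return self.net(x)

model = BurningRateCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(filtered_fields, filtered_burning_rate):
    """filtered_fields: (N, 3, nx, ny, nz); filtered_burning_rate: (N, 1, nx, ny, nz)."""
    optimizer.zero_grad()
    loss = loss_fn(model(filtered_fields), filtered_burning_rate)
    loss.backward()
    optimizer.step()
    return loss.item()
```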
Artificial intelligence (AI) requires new ways of evaluating national technology use and strategy for African nations. We conduct a survey of existing “readiness” assessments, both for general digital adoption and for AI policy in particular. We conclude that existing global readiness assessments do not fully capture African states’ progress in AI readiness and lay the groundwork for how assessments can be better used for the African context. We consider the extent to which these indicators map to the African context and what these indicators miss in capturing African states’ on-the-ground work in building AI capability. Through case studies of four African nations of diverse geographic and economic dimensions, we identify nuances missed by global assessments and offer high-level policy considerations for how states can best improve their AI readiness standards and prepare their societies to capture the benefits of AI.
We present PCFTL (Probabilistic CounterFactual Temporal Logic), a new probabilistic temporal logic for the verification of Markov Decision Processes (MDP). PCFTL introduces operators for causal inference, allowing us to express interventional and counterfactual queries. Given a path formula ϕ, an interventional property is concerned with the satisfaction probability of ϕ if we apply a particular change I to the MDP (e.g., switching to a different policy); a counterfactual formula allows us to compute, given an observed MDP path τ, what the outcome of ϕ would have been had we applied I in the past and under the same random factors that led to observing τ. Our approach represents a departure from existing probabilistic temporal logics that do not support such counterfactual reasoning. From a syntactic viewpoint, we introduce a counterfactual operator that subsumes both interventional and counterfactual probabilities as well as the traditional probabilistic operator. This makes our logic strictly more expressive than PCTL⋆. The semantics of PCFTL rely on a structural causal model translation of the MDP, which provides a representation amenable to counterfactual inference. We evaluate PCFTL in the context of safe reinforcement learning using a benchmark of grid-world models.
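The sketch below is not the PCFTL semantics themselves; it is a toy illustration of the abduction-action-prediction recipe that structural-causal-model translations make possible for a one-step MDP. The transition is written as a deterministic function of the chosen action and an exogenous noise term U, so we can estimate "would the goal have been reached under a different action, given the same randomness that produced the observed outcome?" All names and probabilities are invented for illustration.

```python
import random

def transition(action, u):
    """Deterministic mechanism: success probability 0.8 for 'safe', 0.5 for 'risky'."""
    threshold = 0.8 if action == "safe" else 0.5
    return "goal" if u < threshold else "fail"

def counterfactual(observed_action, observed_outcome, alt_action, n=100_000):
    """P(goal under alt_action | observed action and outcome), via rejection sampling over U."""
    hits = total = 0
    for _ in range(n):
        u = random.random()                                  # prior over the exogenous noise
        if transition(observed_action, u) != observed_outcome:
            continue                                         # abduction: keep only U consistent with evidence
        total += 1
        hits += transition(alt_action, u) == "goal"          # action + prediction with the same U
    return hits / total if total else float("nan")

# We took the risky action and failed; would the safe action have reached the goal?
print(counterfactual("risky", "fail", "safe"))   # approximately 0.6 under this toy mechanism
```

An interventional query, by contrast, would ignore the observed outcome entirely and simply evaluate the success probability of the alternative action over the prior on U, which is what standard probabilistic operators already express.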
Federal and local agencies have identified a need to create building databases to help ensure that critical infrastructure and residential buildings are accounted for in disaster preparedness and to aid the decision-making processes in subsequent recovery efforts. To respond effectively, we need to understand the built environment—where people live, work, and the critical infrastructure they rely on. Yet, a major discrepancy exists in the way data about buildings are collected across the United States. There is no harmonization in what data are recorded by city, county, or state governments, let alone at the national scale. We demonstrate how existing open-source datasets can be spatially integrated and subsequently used as training data for machine learning (ML) models to predict building occupancy type, a major component needed for disaster preparedness and decision-making. Multiple ML algorithms are compared. We address strategies to handle significant class imbalance and introduce Bayesian neural networks to handle prediction uncertainty. The 100-year flood in North Carolina is provided as a practical application in disaster preparedness.
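As a minimal sketch of the prediction task and the class-imbalance issue mentioned above, the example below compares two standard classifiers with class weighting on synthetic placeholder features; the feature set, class proportions, and models are illustrative assumptions, not the study's dataset or final model (and it omits the Bayesian neural network used for uncertainty).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split

# Placeholder features (e.g., footprint area, height, parcel value, ...) and a
# heavily imbalanced occupancy-type label, mimicking the harmonized open data.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))
y = rng.choice(["residential", "commercial", "critical"], size=5000, p=[0.90, 0.08, 0.02])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000, class_weight="balanced"),
    "random_forest": RandomForestClassifier(n_estimators=200, class_weight="balanced"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, balanced_accuracy_score(y_te, model.predict(X_te)))
```

Class weighting (or resampling) and balanced accuracy are typical choices when, as here, the rare classes (e.g., critical infrastructure) are exactly the ones that matter most for disaster preparedness.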
The Cambridge Handbook of Emerging Issues at the Intersection of Commercial Law and Technology is a timely and interdisciplinary examination of the legal and societal implications of nascent technologies in the global commercial marketplace. Featuring contributions from leading international experts in the field, this volume offers fresh and diverse perspectives on a range of topics, including non-fungible tokens, blockchain technology, the Internet of Things, product liability for defective goods, smart readers, liability for artificial intelligence products and services, and privacy in the era of quantum computing. This work is an invaluable resource for academics, policymakers, and anyone seeking a deeper understanding of the social and legal challenges posed by technological innovation, as well as the role of commercial law in facilitating and regulating emerging technologies.
Analysing hierarchical design processes is difficult due to the technical and organizational dependencies spanning multiple levels. The V-Model of Systems Engineering considers multiple levels. It is, however, not quantitative. We propose a model for simulating hierarchical product design processes based on the V-Model. It includes, first, a product model which structures physical product properties in a hierarchical dependency graph; second, an organizational model which formalizes the assignment of stakeholder responsibility; third, a process model which describes the top-down and bottom-up flow of design information; fourth, an actor model which simulates the combination of product, organization, and process by using computational agents. The quantitative model is applied to a simple design problem with three stakeholders and three separate areas of responsibility. The results show the following phenomena observed in real-world product design: design iterations occur naturally as a consequence of the designers’ individual behaviour; inconsistencies in designs emerge and are resolved. The simple design problem is used to compare point-based and interval-based requirement decomposition quantitatively. It is shown that development time can be reduced significantly by using interval-based requirements if requirements are always broken down immediately.
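A minimal illustration of two of the model's building blocks is sketched below: a hierarchical dependency graph of product properties with stakeholder responsibilities, and the difference between point-based and interval-based requirement decomposition. All property names, numbers, and the equal-split rule are invented for illustration and are not the authors' formalization.

```python
# property -> (responsible stakeholder, sub-properties it depends on)
product = {
    "vehicle_mass": ("system lead",  ["body_mass", "chassis_mass"]),
    "body_mass":    ("body team",    []),
    "chassis_mass": ("chassis team", []),
}

def decompose_point(total_target, children):
    """Point-based decomposition: each child receives a single fixed target value."""
    return {c: total_target / len(children) for c in children}

def decompose_interval(total_target, children, slack=0.1):
    """Interval-based decomposition: each child receives a target range,
    leaving the responsible stakeholder room to negotiate within it."""
    share = total_target / len(children)
    return {c: (share * (1 - slack), share * (1 + slack)) for c in children}

owner, children = product["vehicle_mass"]
print(decompose_point(1200.0, children))      # {'body_mass': 600.0, 'chassis_mass': 600.0}
print(decompose_interval(1200.0, children))   # {'body_mass': (540.0, 660.0), 'chassis_mass': (540.0, 660.0)}
```

The point-based variant fixes sub-targets immediately, which tends to trigger iterations when they later prove infeasible; the interval-based variant keeps flexibility at the lower levels, which is the mechanism behind the reported reduction in development time.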
A modeling method for the multi-objective optimization design of parallel mechanisms (PMs) is proposed, and its implementation is illustrated with the 2RPU-RPS mechanism as an example. The orientation of the biased output axis on the moving platform is described by spherical attitude angles, and the kinematic model is derived using the vector method. With screw theory as the mathematical tool, a comprehensive evaluation method of kinematic performance for the PM is established. On this basis, the expensive constrained multi-objective optimization model of the dimensional parameters of the discussed mechanism is constructed. The NSDE-II algorithm, formed by replacing the genetic algorithm operators in the non-dominated sorting genetic algorithm II (NSGA-II) with differential evolution (DE) operators, is utilized to solve this multi-objective optimization problem, thus obtaining multiple Pareto optimal solutions of engineering significance, which demonstrates the feasibility and effectiveness of the proposed modeling method and algorithm. Moreover, the normalization coverage space and the minimum adjacent vector angle are proposed to evaluate the computational performance of NSDE-II. Finally, the potential engineering application value of the optimized 2RPU-RPS PM is presented.
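For readers unfamiliar with the substitution that defines NSDE-II, the sketch below shows a standard DE/rand/1/bin variation operator, the kind of operator that replaces SBX crossover and polynomial mutation, while NSGA-II's non-dominated sorting and crowding-distance selection are kept unchanged. The scaling factor, crossover rate, and bounds are illustrative defaults, not the values used in the article.

```python
import numpy as np

def de_rand_1_bin(population, i, F=0.6, CR=0.9, lower=0.0, upper=1.0, rng=None):
    """Create one DE/rand/1/bin trial vector for individual i from population (N, D)."""
    rng = rng or np.random.default_rng()
    N, D = population.shape
    r1, r2, r3 = rng.choice([k for k in range(N) if k != i], size=3, replace=False)
    mutant = population[r1] + F * (population[r2] - population[r3])     # DE mutation
    cross = rng.random(D) < CR
    cross[rng.integers(D)] = True                                       # keep at least one mutant gene
    trial = np.where(cross, mutant, population[i])                      # binomial crossover
    return np.clip(trial, lower, upper)                                 # respect design-variable bounds

# In an NSDE-II loop, trial vectors for all i are pooled with the parents, and the next
# population is selected by NSGA-II's non-dominated sorting plus crowding distance.
```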