We prove a new lower bound for the almost 20-year-old problem of determining the smallest possible size of an essential cover of the $n$-dimensional hypercube $\{\pm 1\}^n$, that is, the smallest possible size of a collection of hyperplanes that forms a minimal cover of $\{\pm 1\}^n$ and such that, furthermore, every variable appears with a non-zero coefficient in at least one of the hyperplane equations. We show that such an essential cover must consist of at least $10^{-2}\cdot n^{2/3}/(\log n)^{2/3}$ hyperplanes, improving previous lower bounds of Linial–Radhakrishnan, of Yehuda–Yehudayoff, and of Araujo–Balogh–Mattos.
With the promise of greater efficiency and effectiveness, public authorities have increasingly turned to algorithmic systems to regulate and govern society. In Algorithmic Rule By Law, Nathalie Smuha examines this reliance on algorithmic regulation and shows how it can erode the rule of law. Drawing on extensive research and examples, Smuha argues that outsourcing important administrative decisions to algorithmic systems undermines core principles of democracy. Smuha further demonstrates that this risk is far from hypothetical or one that can be relegated to authoritarian regimes, as many of her examples are drawn from public authorities in liberal democracies that are already making use of algorithmic regulation. Focusing on the European Union, Smuha argues that the EU's digital agenda is misaligned with its aim to protect the rule of law. Novel and timely, this book should be read by anyone interested in the intersection of law, technology, and government. This title is also available as open access on Cambridge Core.
Introduction to Probability and Statistics for Data Science provides a solid course in the fundamental concepts, methods and theory of statistics for students in statistics, data science, biostatistics, engineering, and physical science programs. It teaches students to understand, use, and build on modern statistical techniques for complex problems. The authors develop the methods from both an intuitive and mathematical angle, illustrating with simple examples how and why the methods work. More complicated examples, many of which incorporate data and code in R, show how the methods are used in practice. Through this guidance, students get the big picture of how statistics works and can be applied. This text covers modern topics such as regression trees, large-scale hypothesis testing, bootstrapping, MCMC, and time series, while placing less emphasis on theoretical topics like the Cramér–Rao lower bound and the Rao–Blackwell theorem. It features more than 250 high-quality figures, 180 of which involve actual data. Data and R code are available on our website so that students can reproduce the examples and do hands-on exercises.
The seminal Krajewski–Kotlarski–Lachlan theorem (1981) states that every countable recursively saturated model of $\mathsf {PA}$ (Peano arithmetic) carries a full satisfaction class. This result implies that the compositional theory of truth over $\mathsf {PA}$ commonly known as $\mathsf {CT}^{-}[\mathsf {PA}]$ is conservative over $\mathsf {PA}$. In contrast, Pakhomov and Enayat (2019) showed that the addition of the so-called axiom of disjunctive correctness (that asserts that a finite disjunction is true iff one of its disjuncts is true) to $\mathsf {CT}^{-}[\mathsf {PA}]$ axiomatizes the theory of truth $\mathsf {CT}_{0}[\mathsf {PA}]$ that was shown by Wcisło and Łełyk (2017) to be nonconservative over $\mathsf {PA}$. The main result of this paper (Theorem 3.12) provides a foil to the Pakhomov–Enayat theorem by constructing full satisfaction classes over arbitrary countable recursively saturated models of $\mathsf {PA}$ that satisfy arbitrarily large approximations of disjunctive correctness. This shows that in the Pakhomov–Enayat theorem the assumption of disjunctive correctness cannot be replaced with any of its approximations.
Soft robots show an advantage when conducting tasks in complex environments due to their enormous flexibility and adaptability. However, soft robots undergo complex interactions and nonlinear deformation when they come into contact with soft and fluid materials. The underlying cause is free boundary interactions, that is, undetermined contact between soft materials, which manifests as nonlinear deformation in air and nonlinear interactions in fluid in soft robot simulation. We therefore propose a new approach based on the material point method (MPM), which can solve the free boundary interaction problem, to simulate soft robots in such environments. The proposed approach can autonomously predict the flexible and versatile behaviors of soft robots. We incorporate automatic differentiation into the MPM algorithm to simplify the computation and to implement an efficient implicit time integration scheme. We perform two groups of experiments with an ordinary pneumatic soft finger under different free boundary interactions. The results indicate that soft robots with nonlinear interactions and deformation can be simulated, and that such environmental effects on soft robots can be reproduced.
With the widespread application of proton exchange membrane fuel cells (PEMFCs), ensuring their safe and reliable operation is becoming more and more important. Timely diagnosis of fault types and the implementation of targeted interventions are crucial for addressing these challenges. In this study, a simulated PEMFC model is first built in Fluent, and its effectiveness is validated through experiments covering membrane dry faults, water flooding faults, normal states, and unknown states. Then, a data-driven deep learning convolutional neural network, YOLOv5-CG-AS, is developed, which employs the EfficientViT network as the backbone and incorporates lightweight improvements through the proposed CG-AS attention layer. The results demonstrate that YOLOv5-CG-AS can automatically extract fault features from images for offline fault diagnosis and can perform real-time online diagnosis on multiple parameter curves of PEMFCs. Moreover, the experimental results validate the feasibility and effectiveness of the proposed method: the mean average precision (mAP) of the trained model reaches 99.50%, superior to other conventional strategies. This has significant implications for advancing fault diagnosis methods, enhancing the reliability and durability of PEMFC systems, and promoting further development in the field.
The performance of, and confidence in, fault detection and diagnostic systems can be undermined by data pipelines that feature multiple compounding sources of uncertainty. These issues further inhibit the deployment of data-based analytics in industry, where variable data quality and lack of confidence in model outputs are already barriers to adoption. The methodology proposed in this paper supports trustworthy data pipeline design and transfers knowledge gained from one fully observed data pipeline to a similar, under-observed case. The transfer of uncertainties provides insight into uncertainty drivers without repeating the computational or cost overhead of fully redesigning the pipeline. A SHAP-based, human-readable explainable AI (XAI) framework is used to rank and explain the impact of each choice in a data pipeline, decoupling positive and negative performance drivers to facilitate the selection of highly performing pipelines. This empirical approach is demonstrated in bearing fault classification case studies using well-understood open-source data.
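To illustrate the SHAP-based ranking at the core of such a framework, the sketch below fits a model that maps pipeline design choices to a performance score and ranks each choice by its mean absolute SHAP value; the feature names, random data, and random-forest model are hypothetical placeholders rather than the paper's actual setup.

```python
# Minimal sketch: ranking data-pipeline design choices by SHAP impact.
# Hypothetical setup: each row encodes one pipeline configuration
# (e.g., window length, filter order, feature count) together with a
# measured fault-classification score; the paper's features and model differ.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
configs = pd.DataFrame({
    "window_length": rng.choice([256, 512, 1024], 200),
    "filter_order":  rng.choice([2, 4, 8], 200),
    "n_features":    rng.choice([10, 20, 40], 200),
})
score = rng.random(200)  # stand-in for measured F1 per pipeline run

# Surrogate mapping from design choices to performance.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(configs, score)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(configs)

# Mean |SHAP| per choice: a human-readable ranking of which pipeline
# decisions drive performance up or down.
ranking = pd.Series(np.abs(shap_values).mean(axis=0),
                    index=configs.columns).sort_values(ascending=False)
print(ranking)
```

Signed SHAP values (rather than their absolute means) would additionally separate positive from negative performance drivers, as the abstract describes.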
In our digitalized modern society where cyber-physical systems and internet-of-things (IoT) devices are increasingly commonplace, it is paramount that we are able to assure the cybersecurity of the systems that we rely on. As a fundamental policy, we join the advocates of multilayered cybersecurity measures, where resilience is built into IoT systems by relying on multiple defensive techniques. While existing legislation such as the General Data Protection Regulation (GDPR) also takes this stance, the technical implementation of these measures is left open. This invites research into the landscape of multilayered defensive measures, and within this problem space, we focus on two defensive measures: obfuscation and diversification. In this study, through a literature review, we situate these measures within the broader IoT cybersecurity landscape and show how they operate with other security measures built on the network and within IoT devices themselves. Our findings highlight that obfuscation and diversification show promise in contributing to a cost-effective robust cybersecurity ecosystem in today’s diverse cyber threat landscape.
Gas furnaces are the prevalent heating systems in Europe, but efforts to decarbonize the energy sector advocate for their replacement with heat pumps. However, this transition poses challenges for power grids due to increased electricity consumption. Estimating this consumption relies on the seasonal performance factor (SPF) of heat pumps, a metric that is complex to model and hard to measure accurately. We propose using an unpaired dataset of smart meter data at the building level to model the heat consumption and the SPF. We compare the distributions of the annual gas and heat pump electricity consumption by applying either the Jensen–Shannon divergence or the Kolmogorov–Smirnov test. Through evaluation on a real-world dataset, we demonstrate the ability of the methodology to predict the electricity consumption of future heat pumps replacing existing gas furnaces, with a focus on single- and two-family buildings. Our results indicate anticipated SPFs ranging between 2.8 and 3.4, based on the Kolmogorov–Smirnov test. However, it is essential to note that the analysis reveals challenges in interpreting results when there are single-sided shifts in the input data, such as those induced by external factors like the European gas crisis in 2022. In summary, this extended version of a conference paper shows the viability of utilizing smart meter data to model the heat consumption and seasonal performance factor of future retrofitted heat pumps.
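As a rough sketch of the distribution comparison described above, the following snippet selects a candidate SPF by minimizing the Kolmogorov–Smirnov distance between the implied electricity demand (annual gas use divided by SPF) and observed heat pump consumption, then reports the Jensen–Shannon distance; the synthetic data, the SPF grid, and the simplification that gas use approximates heat demand are assumptions for illustration, not the paper's actual procedure.

```python
# Minimal sketch: unpaired comparison of annual gas vs. heat-pump
# electricity consumption distributions to estimate an SPF.
import numpy as np
from scipy.stats import ks_2samp
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
gas_kwh = rng.lognormal(mean=9.6, sigma=0.4, size=1000)  # annual gas use per building
hp_kwh = rng.lognormal(mean=8.4, sigma=0.4, size=1000)   # annual heat-pump electricity use

best_spf, best_stat = None, np.inf
for spf in np.arange(2.0, 4.01, 0.05):
    # Under a candidate SPF, heat demand (approximated here by gas use)
    # implies an equivalent electricity demand of heat / SPF.
    implied_electricity = gas_kwh / spf
    stat, _ = ks_2samp(implied_electricity, hp_kwh)  # KS distance between samples
    if stat < best_stat:
        best_spf, best_stat = spf, stat
print(f"SPF minimising KS distance: {best_spf:.2f}")

# The Jensen-Shannon distance works on binned probability vectors.
bins = np.histogram_bin_edges(np.concatenate([gas_kwh / best_spf, hp_kwh]), bins=50)
p, _ = np.histogram(gas_kwh / best_spf, bins=bins)
q, _ = np.histogram(hp_kwh, bins=bins)
print("JSD:", jensenshannon(p / p.sum(), q / q.sum()))
```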
In this paper we consider positional games where the winning sets are edge sets of tree-universal graphs. Specifically, we show that in the unbiased Maker-Breaker game on the edges of the complete graph $K_n$, Maker has a strategy to claim a graph which contains copies of all spanning trees with maximum degree at most $cn/\log (n)$, for a suitable constant $c$ and sufficiently large $n$. We also prove an analogous result for Waiter-Client games. Both of our results show that the building player can perform at least as well as suggested by the random graph intuition. Moreover, they improve on a special case of earlier results by Johannsen, Krivelevich, and Samotij as well as Han and Yang for Maker-Breaker games.
We present a practical verification method for safety analysis of autonomous driving systems (ADSs). The main idea is to build a surrogate model that quantitatively depicts the behavior of an ADS in a specified traffic scenario. The safety properties proved on the resulting surrogate model apply to the original ADS with a probabilistic guarantee. Given the complexity of traffic scenarios in autonomous driving, our approach further partitions the parameter space of a traffic scenario into safe sub-spaces with varying levels of guarantee and unsafe sub-spaces with confirmed counterexamples. The partitioning is based on a novel branching algorithm that incorporates explainable AI methods. We demonstrate the utility of the proposed approach by evaluating safety properties on the state-of-the-art ADS Interfuser across a variety of simulated traffic scenarios, and we show that our approach and existing ADS testing work complement each other. We certify five safe scenarios from the verification results and uncover three subtle behavioral discrepancies in Interfuser that can hardly be detected by safety testing approaches.
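The following sketch illustrates the partitioning idea on a one-dimensional scenario parameter: a region is labeled safe when all surrogate samples are safe, unsafe when none are, and is otherwise bisected. The stand-in surrogate, the unsafe pocket, and plain bisection are assumptions for illustration; the paper's branching additionally uses explainable AI methods and attaches probabilistic guarantees to safe sub-spaces.

```python
# Minimal sketch: recursively split a 1-D traffic-scenario parameter
# range into sub-spaces labeled safe, unsafe, or undecided, based on
# a surrogate model's safety predictions.
import numpy as np

def surrogate_safe(x):
    """Stand-in surrogate: True if the ADS behaves safely at parameter x."""
    return not (0.62 < x < 0.68)  # hypothetical unsafe pocket

def partition(lo, hi, depth=0, max_depth=8, n_samples=64):
    xs = np.linspace(lo, hi, n_samples)
    safe = np.array([surrogate_safe(x) for x in xs])
    if safe.all():
        return [((lo, hi), "safe")]       # candidate for certification
    if not safe.any():
        return [((lo, hi), "unsafe")]     # confirmed counterexamples
    if depth == max_depth:
        return [((lo, hi), "undecided")]
    mid = (lo + hi) / 2                   # plain bisection in this sketch
    return partition(lo, mid, depth + 1) + partition(mid, hi, depth + 1)

for (lo, hi), label in partition(0.0, 1.0):
    print(f"[{lo:.3f}, {hi:.3f}] -> {label}")
```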
Transfer learning has been highlighted as a promising framework for increasing the accuracy of data-driven models under data sparsity, specifically by transferring pretrained knowledge to the training of the target model. The objective of this study is to evaluate whether the number of requisite training samples can be reduced with the use of various transfer learning models for predicting, for example, the chemical source terms of a data-driven reduced-order model (ROM) that represents the homogeneous ignition of a hydrogen/air mixture. Principal component analysis is applied to reduce the dimensionality of the hydrogen/air mixture in composition space. Artificial neural networks (ANNs) are used to regress the reaction rates of the principal components, and subsequently a system of ordinary differential equations is solved. As the number of training samples decreases in the target task, the ROM fails to predict the ignition evolution of a hydrogen/air mixture. Three transfer learning strategies are then applied to the training of the ANN model with a sparse dataset. The performance of the ROM with a sparse dataset is remarkably enhanced if the training of the ANN model is constrained by a regularization term that controls the degree of knowledge transfer from the source to the target task. To this end, a novel transfer learning method is introduced, Parameter control via Partial Initialization and Regularization (PaPIR), whereby the amount of knowledge transferred is systematically adjusted in terms of the initialization and regularization schemes of the ANN model in the target task.
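A minimal sketch of the regularized-transfer idea, in the spirit of PaPIR: the target network is initialized from the source network, and an L2 penalty on deviation from the source weights controls how much pretrained knowledge is retained. The network shapes, data, and penalty weight below are placeholders, not the paper's configuration.

```python
# Minimal sketch: transfer learning with a penalty that keeps the
# target ANN close to the pretrained source ANN.
import torch
import torch.nn as nn

source_net = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 4))
# ... assume source_net was trained on the data-rich source task ...

target_net = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 4))
target_net.load_state_dict(source_net.state_dict())  # initialization transfer
source_params = [p.detach().clone() for p in source_net.parameters()]

lam = 1e-3  # regularization weight: controls the degree of knowledge transfer
opt = torch.optim.Adam(target_net.parameters(), lr=1e-3)
X = torch.randn(64, 4)  # sparse target inputs (e.g., principal-component scores)
Y = torch.randn(64, 4)  # targets (e.g., reaction rates of principal components)

for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(target_net(X), Y)
    # Penalize deviation from the source weights: a larger lam keeps
    # the target model closer to the pretrained solution.
    for p, p_src in zip(target_net.parameters(), source_params):
        loss = loss + lam * (p - p_src).pow(2).sum()
    loss.backward()
    opt.step()
```

Setting lam to zero recovers plain fine-tuning, while a very large lam effectively freezes the network at the source solution; sweeping it trades off between the two regimes.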
As its name indicates, algorithmic regulation relies on the automation of regulatory processes through algorithms. Examining the impact of algorithmic regulation on the rule of law hence first requires an understanding of how algorithms work. In this chapter, I therefore start by focusing on the technical aspects of algorithmic systems (Section 2.1), and complement this discussion with an overview of their societal impact, emphasising their societal embeddedness and the consequences thereof (Section 2.2). Next, I examine how and why public authorities rely on algorithmic systems to inform and take administrative acts, with special attention to the historical adoption of such systems, and their impact on the role of discretion (Section 2.3). Finally, I draw some conclusions for subsequent chapters (Section 2.4).
In this book, I examined how public authorities’ reliance on algorithmic regulation can affect the rule of law and erode its protective role. I conceptualised this threat as algorithmic rule by law and evaluated the EU legal framework’s safeguards to counter it. In this chapter, I summarise my findings, conclude that this threat is insufficiently addressed (Section 6.1) and provide a number of recommendations (Section 6.2). Finally, I offer some closing remarks (Section 6.3). Algorithmic regulation promises simplicity and a route to avoid the complex tensions of legal rules that are continuously open to multiple interpretations. Yet the same promise also threatens liberal democracy today, as illiberal and authoritarian tendencies seek to eliminate plurality in favour of simplicity. The threat of algorithmic rule by law is hence the same that also threatens liberal democracy: the elimination of normative tensions by essentialising a single view. The antidote is hence to accept not only the normative tensions that are inherent in law but also the tensions inherent in a pluralistic society. We should not essentialise the law’s interpretation, but embrace its normative complexity.
This chapter introduces the main research themes of this book, which explores two current global developments. The first concerns the increased use of algorithmic systems by public authorities in a way that raises significant ethical and legal challenges. The second concerns the erosion of the rule of law and the rise of authoritarian and illiberal tendencies in liberal democracies, including in Europe. While each of these developments is worrying as such, in this book, I argue that the combination of their harms is currently underexamined. By analysing how the former development might reinforce the latter, this book seeks to provide a better understanding of how algorithmic regulation can erode the rule of law and lead to algorithmic rule by law instead. It also evaluates the current EU legal framework, arguing that it is inadequate to counter this threat, and identifies new pathways forward.
In Chapter 3, I developed this book’s normative analytical framework by concretising the six principles that can be said to constitute the rule of law in the EU legal order. Drawing on this framework, in this chapter I now revisit each of these principles and carry out a systematic assessment of how public authorities’ reliance on algorithmic regulation can adversely affect them (Section 4.1). I then propose a theory of harm that conceptualises this threat, by juxtaposing the rule of law to algorithmic rule by law (Section 4.2). Finally, I summarise my findings and outline the main elements that should be considered when evaluating the aptness of the current legal framework to address this threat (Section 4.3).
Robot pick-and-place for unknown objects is still a very challenging research topic. This paper proposes a multi-modal learning method for robot one-shot imitation of pick-and-place tasks. This method aims to enhance the generality of industrial robots while reducing the amount of data and training costs that one-shot imitation methods rely on. The method first categorizes human demonstration videos into different tasks, which are classified into six types to represent as many kinds of pick-and-place tasks as possible. Second, the method generates multi-modal prompts and finally predicts the action of the robot, completing the symbolic pick-and-place task in industrial production. A carefully curated dataset is created to complement the method. The dataset consists of human demonstration videos and instance images focused on real-world scenes and industrial tasks, which fosters adaptable and efficient learning. Experimental results demonstrate favorable success rates and losses both in simulation environments and real-world experiments, confirming the method's effectiveness and practicality.