We suggest that foundation models are general-purpose solutions similar to general-purpose programmable microprocessors, where fine-tuning and prompt engineering are analogous to coding for microprocessors. Evaluating general-purpose solutions is not like hypothesis testing. We want to know how well the machine will perform on an unknown program with unknown inputs for unknown users with unknown budgets and unknown utility functions. This paper is based on an invited talk by John Mashey, “Lessons from SPEC,” at an ACL-2021 workshop on benchmarking. Mashey started by describing the Standard Performance Evaluation Corporation (SPEC), a benchmark that has had more impact than benchmarks in our field because SPEC addresses an important commercial question: which CPU should I buy? In addition, SPEC can be interpreted to show that CPUs are 50,000 times faster than they were 40 years ago. It is remarkable that we can make such statements without specifying the program, users, task, dataset, etc. It would be desirable to make similar quantitative statements about improvements of general-purpose foundation models over years/decades without specifying tasks, datasets, use cases, etc.
The risks emanating from algorithmic rule by law lie at the intersection of two regulatory domains: regulation pertaining to the rule of law’s protection (the EU’s rule of law agenda), and regulation pertaining to the protection of individuals against the risks of algorithmic systems (the EU’s digital agenda). Each of these domains consists of a broad range of legislation, including not only primary and secondary EU law, but also soft law. In what follows, I confine my investigation to those areas of legislation that are most relevant for the identified concerns. After addressing the EU’s competences to take legal action in this field (Section 5.1), I respectively examine safeguards provided by regulation pertaining to the rule of law (Section 5.2), to personal data (Section 5.3) and to algorithmic systems (Section 5.4), before concluding (Section 5.5).
In this chapter, I first examine how the rule of law has been defined in legal theory, and how it has been distinguished from the rule by law, which is a distortion thereof (Section 3.1). Second, I assess how the rule of law has been conceptualised in the context of the European Union, as this book focuses primarily on the EU legal order (Section 3.2). In this regard, I also draw on the acquis of the Council of Europe. The Council of Europe is a distinct jurisdictional order, yet it heavily influenced the ‘EU’ conceptualisation of the rule of law, and the EU regularly relies on Council of Europe sources in its own legal practices. Finally, I draw on these findings to identify the rule of law’s core principles and to distil the concrete requirements that public authorities must fulfil to comply therewith (Section 3.3). Identifying these requirements – and the inherent challenges to achieve them – will subsequently allow me to build a normative analytical framework that I can use as a benchmark in Chapter 4 to assess how algorithmic regulation impacts the rule of law.
In numerous applications, extracting a single rotation component (termed “planar rotation”) from a 3D rotation is of significant interest. In biomechanics, for example, the analysis of joint angles within anatomical planes offers better clinical interpretability than spatial rotations. Moreover, in parallel-kinematics robotic machines, undesired rotations about an axis, termed “parasitic motions,” need to be excluded. However, due to the non-Abelian nature of spatial rotations, these components cannot be extracted by simple projections as in a vector space. Despite extensive discussion in the literature of the non-uniqueness and distortion of the results due to the nonlinearity of the SO(3) group, such projection-based methods continue to be used in the absence of alternatives. This paper reviews the existing methods for planar-rotation extraction from 3D rotations, exposing their similarities, differences, and inconsistencies through mathematical analysis and two application cases, one of them from biomechanics (flexural knee angle in the sagittal plane). Moreover, a novel, simple, and efficient method based on a pseudo-projection of the quaternion rotation vector is introduced, which circumvents the ambiguity and distortion problems of existing approaches. In this respect, a novel method for determining the orientation of a box from camera recordings based on a two-plane projection is also proposed, which yields more precise results than existing solutions to the Perspective-3-Point problem from the literature. This paper focuses exclusively on the case of finite rotations, as infinitesimal rotations within a single plane are non-holonomic and, through integration, produce rotation components orthogonal to the plane.
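The abstract does not spell out the pseudo-projection itself, but the general idea of extracting the rotation component of a quaternion about a fixed axis can be illustrated with the classic swing–twist decomposition, a related (not identical) technique. The function name and the (w, x, y, z) component ordering below are illustrative assumptions, not the paper's method:

```python
import math

def twist_about_axis(q, axis):
    """Extract the rotation ("twist") of a unit quaternion q = (w, x, y, z)
    about a unit axis by projecting the quaternion's vector part onto the
    axis and renormalising. This is the classic swing-twist decomposition,
    a relative of (but not the same as) the paper's pseudo-projection."""
    w, x, y, z = q
    ax, ay, az = axis
    d = x * ax + y * ay + z * az        # projection coefficient onto the axis
    tw, tx, ty, tz = w, d * ax, d * ay, d * az
    n = math.sqrt(tw * tw + tx * tx + ty * ty + tz * tz)
    if n == 0.0:                        # 180-degree rotation orthogonal to axis
        return (1.0, 0.0, 0.0, 0.0)     # twist is the identity
    return (tw / n, tx / n, ty / n, tz / n)

# A 90-degree rotation about z: its twist about z is the whole rotation,
# while its twist about x is the identity quaternion.
q90z = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
```

For `q90z`, the twist about the z-axis reproduces the full rotation, while the twist about the x-axis collapses to the identity, matching the intuition of "projecting" a 3D rotation onto a single plane.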
The admissibility of the rule of proof $\gamma $ has played a significant historical role in the development of relevant logics. For first-order relevant logics, however, there have been only a handful of $\gamma $-admissibility proofs, for a select few logics. Here we show that, for each logic L in a wide range of propositional relevant logics for which excluded middle is valid (with fusion and the Ackermann truth constant), the first-order extensions QL and LQ admit $\gamma $. Specifically, these are particular “conventionally normal” extensions of the logic $\mathbf {G}^{g,d}$, which is the least propositional relevant logic (with the usual relational semantics) that admits $\gamma $ by the method of normal models. We also note the circumstances in which our results apply to logics without fusion and the Ackermann truth constant.
An equivalence relation can be constructed from a given (homogeneous, binary) relation in two steps: first, construct the smallest reflexive and transitive relation containing the given relation (the “star” of the relation) and, second, construct the largest symmetric relation that is included in the result of the first step. The fact that the final result is also reflexive and transitive (as well as symmetric), and thus an equivalence relation, is not immediately obvious, although straightforward to prove. Rather than prove that the defining properties of reflexivity and transitivity are satisfied, we establish reflexivity and transitivity constructively by exhibiting a starth root—in a way that emphasises the creative process in its construction. The resulting construction is fundamental to algorithms that determine the strongly connected components of a graph as well as the decomposition of a graph into its strongly connected components together with an acyclic graph connecting such components.
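For finite relations represented as sets of pairs, the two-step construction can be sketched directly. This is a minimal illustration of the two defining steps (a naive closure iteration), not the constructive starth-root argument the paper develops:

```python
def star(r, elements):
    """Smallest reflexive and transitive relation containing r
    (the reflexive-transitive closure, computed by naive iteration)."""
    closure = {(x, x) for x in elements} | set(r)
    while True:
        new = {(x, z)
               for (x, y) in closure
               for (y2, z) in closure if y == y2}
        if new <= closure:
            return closure
        closure |= new

def largest_symmetric_part(r):
    """Largest symmetric relation included in r:
    keep (x, y) exactly when (y, x) is also present."""
    return {(x, y) for (x, y) in r if (y, x) in r}

# Example: a directed graph with a cycle 1 -> 2 -> 3 -> 1 and an edge 3 -> 4.
# The equivalence classes of the result are the strongly connected
# components: {1, 2, 3} and {4}.
elements = {1, 2, 3, 4}
r = {(1, 2), (2, 3), (3, 1), (3, 4)}
equiv = largest_symmetric_part(star(r, elements))
```

In the example, every pair within the cycle {1, 2, 3} survives in both directions, while the one-way edge into 4 is discarded, leaving only the reflexive pair (4, 4) for that vertex, exactly the strongly-connected-component decomposition the abstract mentions.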
We show that the twin-width of every $n$-vertex $d$-regular graph is at most $n^{\frac{d-2}{2d-2}+o(1)}$ for any fixed integer $d \geq 2$ and that almost all $d$-regular graphs attain this bound. More generally, we obtain bounds on the twin-width of sparse Erdős–Rényi and regular random graphs, complementing the bounds in the denser regime due to Ahn, Chakraborti, Hendrey, Kim, and Oum.
Some top-down problem specifications, if executed, may compute sub-problems repeatedly. Instead, we may want a bottom-up algorithm that stores solutions of sub-problems in a table to be reused. How the table can be represented and efficiently maintained, however, can be tricky. We study a special case: computing a function ${\mathit{h}}$ taking lists as inputs such that ${\mathit{h}\;\mathit{xs}}$ is defined in terms of all immediate sublists of ${\mathit{xs}}$. Richard Bird studied this problem in 2008 and presented a concise but cryptic algorithm without much explanation. We give this algorithm a proper derivation and discover a key property that allows it to work. The algorithm builds trees of a certain shape: the sizes along the left spine form a prefix of a diagonal in Pascal’s triangle. The crucial function we derive transforms one diagonal to the next.
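As a toy illustration of the problem class (not Bird's algorithm, whose bottom-up tree representation is what the paper derives), here is the top-down form of such an h with memoisation standing in for the table. The `base` and `combine` instantiation at the end is hypothetical:

```python
from functools import lru_cache

def immediate_sublists(xs):
    """All tuples obtained from xs by deleting exactly one element."""
    return [xs[:i] + xs[i + 1:] for i in range(len(xs))]

def make_h(base, combine):
    """Top-down specification of h: on singletons use `base`; otherwise
    combine the values of h on all immediate sublists. Memoisation ensures
    each sub-list's value is computed once, playing the role of the table
    that a bottom-up algorithm would maintain explicitly."""
    @lru_cache(maxsize=None)
    def h(xs):  # xs is a tuple so results can be cached
        if len(xs) <= 1:
            return base(xs)
        return combine(xs, [h(ys) for ys in immediate_sublists(xs)])
    return h

# Hypothetical instance: with base 1 and summation as combine, h counts the
# orders in which elements can be deleted one at a time down to a singleton,
# i.e. n! for a list of length n (a length-n list has n immediate sublists).
h = make_h(base=lambda xs: 1, combine=lambda xs, subs: sum(subs))
```

Without the memoisation, this specification recomputes shared sub-lists exponentially often, which is precisely the inefficiency the abstract's bottom-up table is designed to avoid.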
Social impact has been widely discussed by the engineering community, but studies show that there is currently little systematic consideration of the social impact of products in both academia and industry beyond social impacts on health and safety. While Failure Mode and Effect Analysis (FMEA) is useful for evaluating health and safety risks, new developments are needed to create an FMEA-style evaluation that can be applied to a wide range of social impacts for engineered products. The authors describe necessary modifications to traditional FMEA that transform it into a tool for social impact analysis. The modification of FMEA involves the introduction of positive and negative impacts, the inclusion of discrete and continuous impacts, the consideration of various stakeholder types, and the inclusion of uncertainty in place of detectability. This modified FMEA is referred to in this paper as Social Impact Effects Analysis (SIEA). The paper describes how SIEA is performed and articulates the potential benefits of SIEA.
The advent of generative artificial intelligence (AI) models holds potential for aiding teachers in the generation of pedagogical materials. However, numerous knowledge gaps concerning the behavior of these models obfuscate the generation of research-informed guidance for their effective usage. Here, we assess trends in prompt specificity, variability, and weaknesses in foreign language teacher lesson plans generated by zero-shot prompting in ChatGPT. Iterating a series of prompts that increased in complexity, we found that output lesson plans were generally high quality, though additional context and specificity to a prompt did not guarantee a concomitant increase in quality. Additionally, we observed extreme cases of variability in outputs generated by the same prompt. In many cases, this variability reflected a conflict between outdated (e.g. reciting scripted dialogues) and more current research-based pedagogical practices (e.g. a focus on communication). These results suggest that the training of generative AI models on classic texts concerning pedagogical practices may bias generated content toward teaching practices that have been long refuted by research. Collectively, our results offer immediate translational implications for practicing and training foreign language teachers on the use of AI tools. More broadly, these findings highlight trends in generative AI output that have implications for the development of pedagogical materials across a diversity of content areas.
Let $T$ be a tree on $t$ vertices. We prove that for every positive integer $k$ and every graph $G$, either $G$ contains $k$ pairwise vertex-disjoint subgraphs each having a $T$ minor, or there exists a set $X$ of at most $t(k-1)$ vertices of $G$ such that $G-X$ has no $T$ minor. The bound on the size of $X$ is best possible and improves on an earlier $f(t)k$ bound proved by Fiorini, Joret, and Wood (2013) with some fast-growing function $f(t)$. Moreover, our proof is short and simple.
Our emotions do not always surface into our awareness, making it difficult to manage them and communicate them to others. Even when emotions do not reach our awareness, they still express themselves as physiological changes, often unperceived by ourselves and others. To aid in emotion self-regulation and increase the bandwidth of emotion communication, I designed a programmable affective sleeve that translates physiological aspects of emotions into material haptic action. The affective sleeve has been developed as a case study for Affective Matter. Affective Matter suggests a method for human-material interaction that enhances health and wellbeing.
I first discuss the three foundations of Affective Matter underlying the design of the affective sleeve: Embodiment, Entrainment, and Material Intelligence. I then proceed to the methods and results of an exploratory study I developed and conducted that tests the psychophysiological impact of the sleeve on 36 participants. The study results suggest that the pace of the affective sleeve’s haptic action can be programmed to regulate the wearer’s breathing pace, producing either a calming or a stimulating effect on the wearer. The results also show varied affective responses to distinct haptic stimuli. Discussion of the results suggests future research directions and therapeutic applications for the benefit of individuals with mental health and neurodevelopmental disorders.