In Chapter 3, I developed this book’s normative analytical framework by concretising the six principles that can be said to constitute the rule of law in the EU legal order. Drawing on this framework, in this chapter I now revisit each of these principles and carry out a systematic assessment of how public authorities’ reliance on algorithmic regulation can adversely affect them (Section 4.1). I then propose a theory of harm that conceptualises this threat, by juxtaposing the rule of law to algorithmic rule by law (Section 4.2). Finally, I summarise my findings and outline the main elements that should be considered when evaluating the aptness of the current legal framework to address this threat (Section 4.3).
Robot pick-and-place for unknown objects remains a very challenging research topic. This paper proposes a multi-modal learning method for robot one-shot imitation of pick-and-place tasks. The method aims to enhance the generality of industrial robots while reducing the amount of data and the training cost that one-shot imitation relies on. It first categorizes human demonstration videos into tasks, which are classified into six types intended to represent as many kinds of pick-and-place tasks as possible. Second, the method generates multi-modal prompts and finally predicts the robot's actions, completing the symbolized pick-and-place tasks in industrial production. A carefully curated dataset is created to complement the method, consisting of human demonstration videos and instance images focused on real-world scenes and industrial tasks, which fosters adaptable and efficient learning. Experimental results demonstrate favorable success rates and losses in both simulation environments and real-world experiments, confirming the method's effectiveness and practicality.
We suggest that foundation models are general purpose solutions similar to general purpose programmable microprocessors, where fine-tuning and prompt-engineering are analogous to coding for microprocessors. Evaluating general purpose solutions is not like hypothesis testing. We want to know how well the machine will perform on an unknown program with unknown inputs for unknown users with unknown budgets and unknown utility functions. This paper is based on an invited talk by John Mashey, “Lessons from SPEC,” at an ACL-2021 workshop on benchmarking. Mashey started by describing the Standard Performance Evaluation Corporation (SPEC), a benchmark that has had more impact than benchmarks in our field because SPEC addresses an important commercial question: which CPU should I buy? In addition, SPEC can be interpreted to show that CPUs are 50,000 times faster than they were 40 years ago. It is remarkable that we can make such statements without specifying the program, users, task, dataset, etc. It would be desirable to make quantitative statements about improvements of general purpose foundation models over years/decades without specifying tasks, datasets, use cases, etc.
The risks emanating from algorithmic rule by law lie at the intersection of two regulatory domains: regulation pertaining to the rule of law’s protection (the EU’s rule of law agenda), and regulation pertaining to the protection of individuals against the risks of algorithmic systems (the EU’s digital agenda). Each of these domains consists of a broad range of legislation, including not only primary and secondary EU law, but also soft law. In what follows, I confine my investigation to those areas of legislation that are most relevant for the identified concerns. After addressing the EU’s competences to take legal action in this field (Section 5.1), I respectively examine safeguards provided by regulation pertaining to the rule of law (Section 5.2), to personal data (Section 5.3) and to algorithmic systems (Section 5.4), before concluding (Section 5.5).
In this chapter, I first examine how the rule of law has been defined in legal theory, and how it has been distinguished from the rule by law, which is a distortion thereof (Section 3.1). Second, I assess how the rule of law has been conceptualised in the context of the European Union, as this book focuses primarily on the EU legal order (Section 3.2). In this regard, I also draw on the acquis of the Council of Europe. The Council of Europe is a distinct jurisdictional order, yet it heavily influenced the ‘EU’ conceptualisation of the rule of law, and the EU regularly relies on Council of Europe sources in its own legal practices. Finally, I draw on these findings to identify the rule of law’s core principles and to distil the concrete requirements that public authorities must fulfil to comply therewith (Section 3.3). Identifying these requirements – and the inherent challenges to achieve them – will subsequently allow me to build a normative analytical framework that I can use as a benchmark in Chapter 4 to assess how algorithmic regulation impacts the rule of law.
In numerous applications, extracting a single rotation component (termed a ‘planar rotation’) from a 3D rotation is of significant interest. In biomechanics, for example, the analysis of joint angles within anatomical planes offers better clinical interpretability than spatial rotations. Moreover, in parallel-kinematics robotic machines, undesired rotations about an axis – termed ‘parasitic motions’ – need to be excluded. However, due to the non-Abelian nature of spatial rotations, these components cannot be extracted by simple projections as in a vector space. Although the non-uniqueness and distortion of the results caused by the nonlinearity of the group SO(3) have been discussed extensively in the literature, such projections continue to be used in the absence of alternatives. This paper reviews the existing methods for planar-rotation extraction from 3D rotations, exposing their similarities, differences, and inconsistencies through mathematical analysis and two application cases, one of them from biomechanics (flexural knee angle in the sagittal plane). Moreover, a novel, simple, and efficient method based on a pseudo-projection of the quaternion rotation vector is introduced, which circumvents the ambiguity and distortion problems of existing approaches. In this respect, a novel method for determining the orientation of a box from camera recordings based on a two-plane projection is also proposed, which yields more precise results than existing solutions to the Perspective 3-Point problem from the literature. This paper focuses exclusively on the case of finite rotations, as infinitesimal rotations within a single plane are non-holonomic and, through integration, produce rotation components orthogonal to the plane.
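As a rough illustration of the kind of extraction at stake (a sketch only: the function name `twist_angle` and the swing-twist-style projection below are illustrative assumptions, not the paper's actual pseudo-projection method), the rotation angle about a fixed axis can be read off a unit quaternion by projecting its vector part onto that axis:

```python
import math

def twist_angle(q, axis):
    """Extract the rotation angle about the unit vector `axis` from a
    unit quaternion q = (w, x, y, z), by projecting the quaternion's
    vector part onto the axis (swing-twist-style decomposition)."""
    w, x, y, z = q
    ax, ay, az = axis
    # Project the vector part of q onto the rotation axis.
    dot = x * ax + y * ay + z * az
    # The twist quaternion is (w, dot * axis), renormalised.
    norm = math.hypot(w, dot)
    if norm < 1e-12:
        # Pure "swing": the rotation has no component about this axis.
        return 0.0
    return 2.0 * math.atan2(dot / norm, w / norm)

# A 90-degree rotation about z, extracted about the z axis:
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
angle = twist_angle(q, (0.0, 0.0, 1.0))  # approximately pi/2
```

This sketch exhibits exactly the degeneracy the abstract mentions: when the quaternion's vector part is orthogonal to the chosen axis, the projection collapses and the extracted angle is no longer informative.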
The admissibility of the rule of proof $\gamma $ has played a significant historical role in the development of relevant logics. For first-order logics, however, there have been only a handful of $\gamma $-admissibility proofs, covering a select few logics. Here we show that, for each logic L in a wide range of propositional relevant logics for which excluded middle is valid (with fusion and the Ackermann truth constant), the first-order extensions QL and LQ admit $\gamma $. Specifically, these are particular “conventionally normal” extensions of the logic $\mathbf {G}^{g,d}$, which is the least propositional relevant logic (with the usual relational semantics) that admits $\gamma $ by the method of normal models. We also note the circumstances in which our results apply to logics without fusion and the Ackermann truth constant.
An equivalence relation can be constructed from a given (homogeneous, binary) relation in two steps: first, construct the smallest reflexive and transitive relation containing the given relation (the “star” of the relation) and, second, construct the largest symmetric relation that is included in the result of the first step. The fact that the final result is also reflexive and transitive (as well as symmetric), and thus an equivalence relation, is not immediately obvious, although straightforward to prove. Rather than prove that the defining properties of reflexivity and transitivity are satisfied, we establish reflexivity and transitivity constructively by exhibiting a starth root—in a way that emphasises the creative process in its construction. The resulting construction is fundamental to algorithms that determine the strongly connected components of a graph as well as the decomposition of a graph into its strongly connected components together with an acyclic graph connecting such components.
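The two-step construction can be sketched directly over finite sets (a minimal illustration; the name `equivalence_from` is hypothetical, and the Warshall-style loop is just one way to compute the star):

```python
def equivalence_from(relation, universe):
    """Construct an equivalence relation from a binary relation in two steps:
    (1) the smallest reflexive-transitive relation containing it (its "star"),
    (2) the largest symmetric relation included in the result of step 1."""
    # Step 1: star, via a Warshall-style reflexive-transitive closure.
    star = {(a, a) for a in universe} | set(relation)
    for k in universe:
        for i in universe:
            for j in universe:
                if (i, k) in star and (k, j) in star:
                    star.add((i, j))
    # Step 2: the largest symmetric subrelation of star keeps (a, b)
    # exactly when (b, a) is also present.
    return {(a, b) for (a, b) in star if (b, a) in star}

# 1 -> 2 -> 3 -> 1 forms a cycle; 4 hangs off it.
R = {(1, 2), (2, 3), (3, 1), (3, 4)}
E = equivalence_from(R, {1, 2, 3, 4})
# 1, 2, 3 end up mutually equivalent (one strongly connected component);
# 4 is equivalent only to itself.
```

The example shows the connection the abstract draws: the equivalence classes of the result are precisely the strongly connected components of the graph of `R`.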
We show that the twin-width of every $n$-vertex $d$-regular graph is at most $n^{\frac{d-2}{2d-2}+o(1)}$ for any fixed integer $d \geq 2$ and that almost all $d$-regular graphs attain this bound. More generally, we obtain bounds on the twin-width of sparse Erdős–Rényi and regular random graphs, complementing the bounds in the denser regime due to Ahn, Chakraborti, Hendrey, Kim, and Oum.
Some top-down problem specifications, if executed, may compute sub-problems repeatedly. Instead, we may want a bottom-up algorithm that stores solutions of sub-problems in a table to be reused. How the table can be represented and efficiently maintained, however, can be tricky. We study a special case: computing a function ${\mathit{h}}$ taking lists as inputs such that ${\mathit{h}\;\mathit{xs}}$ is defined in terms of all immediate sublists of ${\mathit{xs}}$. Richard Bird studied this problem in 2008 and presented a concise but cryptic algorithm without much explanation. We give this algorithm a proper derivation and discover a key property that allows it to work. The algorithm builds trees of a particular shape: the sizes along the left spine form a prefix of a diagonal in Pascal’s triangle. The crucial function we derive transforms one diagonal to the next.
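Bird's setting is functional programming, but the notion of ‘immediate sublists’ (all lists obtained by deleting exactly one element) is easy to sketch; the name `immediate_sublists` is illustrative, not from the paper:

```python
def immediate_sublists(xs):
    """All lists obtained from xs by deleting exactly one element.
    A length-n list has exactly n immediate sublists."""
    return [xs[:i] + xs[i + 1:] for i in range(len(xs))]

immediate_sublists([1, 2, 3])
# [[2, 3], [1, 3], [1, 2]]
```

The repeated sub-problems are visible already here: the immediate sublists of `[2, 3]` and `[1, 3]` both include `[3]`, so a naive top-down computation of ${\mathit{h}}$ recomputes shared sublists, which is what motivates the bottom-up tabulating algorithm.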