Chapter 7 argues that denialist speech can, and should, be classified as a sui generis form of hate speech in the legal sense of the term. Section 7.2 looks at the many faces of denialist speech, including forms of synchronous denialism that are often overlooked in these debates. Sections 7.3 and 7.4 address the thorny problem of classification and attempt to explain why it matters. In Sections 7.5 and 7.6 we turn to examine in more detail the spread of denialism laws at the domestic level and try to uncover the many different functions or purposes served by such laws. Finally, in Section 7.7 we address two sceptical challenges to our main thesis. First, if denialist speech is rightly classifiable as hate speech, then why were denialism provisions absent from the early landmark international laws dealing with incitement to genocide and hate speech in general? And second, if denialist speech can be considered hate speech despite its absence from landmark international laws, then what about other things, such as defamation of religion, and what prevents our characterisation of hate speech in the legal sense from becoming absurdly capacious?
Chapter 6 explores the distinction between the legal concepts of hate speech and hate crime. Our purpose is not only to shed light on the ambiguity but also to resolve it, and thereby to further illustrate and stress-test our analyses. Sections 6.2 and 6.3 propose that the legal concept hate speech, formally speaking, refers only to laws which create bespoke crimes or other sorts of offences that do not have corresponding or parallel basic or base versions, whereas the legal concept hate crime refers only to laws which identify aggravated crimes that do have corresponding or parallel basic or base versions. Section 6.4 makes several key comparisons and contrasts between the concepts, beyond the merely formal analysis, while Section 6.5 develops an account of why the distinction between hate speech and hate crime matters legally speaking, both for victims and defendants. Finally, Section 6.6 discusses four potential grey areas of hate speech law, namely using threatening words or behaviour to stir up hatred; incitement to commit genocide; incitement to discrimination or violence; and torts and delicts involving racist abuse.
Chapter 1 establishes the context of our project and defends its theoretical and practical importance. Section 1.2 outlines the basic conceptual framework employed in the book, including the distinction between two concepts of hate speech and our twin-track approach to analysing them. We also highlight some of the pay-offs that flow from this conceptual framework. Section 1.3 explains what we mean by ‘grey areas of hate speech’ including identifying three underlying reasons or explanations why certain phenomena might end up falling into these areas, namely moral, semantic, and conceptual. We also try to motivate the significance and value of working to clear up the grey areas. Finally, Section 1.4 introduces and attempts to respond to the sceptical challenge that says, because the term ‘hate speech’ is linked to conceptual ambiguities, misleading connotations, an explosion of applications, and politicisation, it would be better to dispense with both the term and its concepts. We critically examine five main ways of responding to this sceptical challenge: rehabilitation, downsizing, abandonment, replacement, and enhanced understanding. We defend the final response as being the most promising and the overarching goal of the book.
Chapter 2 identifies prototypical examples of hate speech and seeks to explain what makes them such. Section 2.2 lists the original examples of hate speech cited in Mari Matsuda’s seminal article on the legal concept. We then explain how, even though the ordinary and legal concepts of hate speech share paradigmatic examples, the ordinary concept now has its own extended body of exemplars. Section 2.3 attempts to plot the complex pattern of overlapping and criss-crossing similarities among these exemplars. Section 2.4 looks in more depth at one of the paradigmatic examples of hate speech, namely racial slurs such as ‘nigger’. We highlight similarities it shares with other prototypical examples of hate speech. Finally, Section 2.5 defends a particular account of what it means for a new example to have enough similarities with exemplars to count as hate speech. If there are enough similarities across at least four out of five of the distinguishing qualities of target, style, message, act, and effect, then this conceptually justifies applying the phrase ‘x is also hate speech’ to the new example. We dub this the global resemblance test.
Chapter 5 seeks to orient the ordinary and legal concepts of hate speech relative to one another. Section 5.2 uncovers various ways in which the ordinary and legal concepts of hate speech come together, including in terms of the kinds of speech they both count as hate speech. In Section 5.3, however, we turn to consider the potential sources of divergence between the ordinary and legal concepts of hate speech, including the differing social functions or purposes played by the two concepts. Section 5.4 addresses the nature of the relationship and interaction between the ordinary and legal concepts of hate speech. Finally, in Section 5.5 we try to show why theoretical disagreements about the relationship between the ordinary and legal concepts of hate speech matter. In particular, we argue that uncovering these deeper disagreements can help to explain both the source of some academic controversies about the legitimacy of hate speech laws and the source of some wider public debates about the rights and wrongs of social media platform content policies on hate speech.
Chapter 4 defends classifying a further five grey area examples as hate speech in the ordinary sense of the term under the global resemblance test. We shall also critically examine Facebook’s community standard on hate speech in relation to its handling of these kinds of attacks, and make specific recommendations to address relevant weaknesses. Section 4.2 looks at what we call identity attacks. Section 4.3 investigates existential denials, namely statements denying the very existence of people identified by a protected characteristic. Section 4.4 scrutinises identity denials, by which we mean statements denying that certain people are who they take themselves to be, based on protected characteristics. Section 4.5 examines identity miscategorisations, which go one step further and attribute identities to people that do not match the identities they take themselves to possess, based on protected characteristics. Finally, Section 4.6 assesses identity appropriations, wherein people adopt elements of the identities of other people, based on protected characteristics, but without claiming to possess the relevant identities.
The Java programming language has been one of the most powerful tools available to computer programmers since its inception in 1995. It has also evolved continuously since then, making it a vast and powerful resource for object-oriented programming today. This lucid textbook introduces the student not only to the nuances of object-oriented programming but also to the syntax and semantics of the modern Java language. Each programming concept is explained and then illustrated with small but effective ready-to-run programs. Important points are emphasized, and hints are given at the end of each discussion so that programmers can avoid common pitfalls. Finally, a number of practice problems drawn from real-world scenarios encourage the student to think in terms of problem solving, consolidating the knowledge gained.
Automated planning is a prominent area of Artificial Intelligence and an important component of intelligent autonomous agents. A cornerstone of domain-independent planning is the separation between the planning logic, that is, the automated reasoning side, and the knowledge model, which encodes a formal representation of the domain knowledge needed to reason about a given problem and synthesize a solution plan. This separation enables the use of reformulation techniques, which transform how a model is represented in order to improve the efficiency of plan generation. Over the past decades, significant research effort has been devoted to the design of such techniques. In this paper, we present a systematic review of the large body of work on reformulation techniques for classical planning, aiming to provide a holistic view of the field and to foster future research in the area. As a tangible outcome, we provide a qualitative comparison of the existing classes of techniques, which can help researchers gain an overview of their strengths and weaknesses.
Efficient knowledge extraction from Big Data is a challenging topic. Recognizing relevant concepts in unannotated data while accounting for both context and domain knowledge is critical to successful knowledge extraction. In this research, we present Active Learning Integrated with Knowledge Extraction (ALIKE), a novel platform that overcomes the challenges of context awareness and concept extraction which have impeded knowledge extraction from Big Data. We propose a method to extract related concepts from unorganized data with different contexts using multiple agents, synergy, reinforcement learning, and active learning.
We test ALIKE on the datasets of the COVID-19 Open Research Dataset Challenge. The experimental results suggest that the ALIKE platform distinguishes inherent concepts across different papers more efficiently than a non-agent-based method (without active learning), and that our approach is better placed to address the challenges of knowledge extraction from heterogeneous datasets. Moreover, the techniques used in ALIKE are transferable to any domain with multidisciplinary activity.
In 2003, Bohman, Frieze, and Martin initiated the study of randomly perturbed graphs and digraphs. For digraphs, they showed that for every $\alpha \gt 0$, there exists a constant $C$ such that for every $n$-vertex digraph of minimum semi-degree at least $\alpha n$, if one adds $Cn$ random edges then asymptotically almost surely the resulting digraph contains a consistently oriented Hamilton cycle. We generalize their result, showing that the hypothesis of this theorem actually asymptotically almost surely ensures the existence of every orientation of a cycle of every possible length, simultaneously. Moreover, we prove that we can relax the minimum semi-degree condition to a minimum total degree condition when considering orientations of a cycle that do not contain a large number of vertices of indegree $1$. Our proofs make use of a variant of an absorbing method of Montgomery.
Since the 1960s, Mastermind has been studied for the combinatorial and information-theoretical interest the game has to offer. Many results have been discovered, starting with Erdős and Rényi, who determined the optimal number of queries needed for two colours. For $k$ colours and $n$ positions, Chvátal found asymptotically optimal bounds when $k \le n^{1-\varepsilon }$. Following a sequence of gradual improvements for $k\geq n$ colours, the central open question is to resolve the gap between $\Omega (n)$ and $\mathcal{O}(n\log \log n)$ for $k=n$. In this paper, we resolve this gap by presenting the first algorithm for solving $k=n$ Mastermind with a linear number of queries. As a consequence, we are able to determine the query complexity of Mastermind for any parameters $k$ and $n$.
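To make the query model concrete, here is a naive "consistent guessing" baseline for Mastermind in plain Python. This is *not* the linear-query algorithm of the paper, only an illustration of what a query, its black/white-peg feedback, and the query count mean; all function names are ours.

```python
import itertools

def feedback(secret, guess):
    # Black pegs: exact position matches; white pegs: colour matches elsewhere.
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum(min(secret.count(c), guess.count(c))
                 for c in set(secret) | set(guess))
    return black, common - black

def solve(secret, k, n):
    # Consistent guessing: always query a code consistent with all feedback so far.
    # Exponential in n, so only usable for tiny instances, unlike the paper's algorithm.
    candidates = list(itertools.product(range(k), repeat=n))
    queries = 0
    while True:
        guess = candidates[0]
        queries += 1
        fb = feedback(secret, guess)
        if fb == (n, 0):
            return queries
        candidates = [c for c in candidates if feedback(c, guess) == fb]
```

Because the secret is always consistent with its own feedback, the candidate list never becomes empty and each wrong guess is eliminated, so the loop terminates.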
The increasing intensity and frequency of floods is one of the many consequences of our changing climate. In this work, we explore ML techniques that improve the flood detection module of an operational early flood warning system. Our method exploits an unlabeled dataset of paired multi-spectral and synthetic aperture radar (SAR) imagery to reduce the labeling requirements of a purely supervised learning method. Prior work has used unlabeled data by creating weak labels from it; however, our experiments showed that a model trained this way still ends up learning the label mistakes in those weak labels. Motivated by knowledge distillation and semi-supervised learning, we explore the use of a teacher to train a student with the help of a small hand-labeled dataset and a large unlabeled dataset. Unlike the conventional self-distillation setup, we propose a cross-modal distillation framework that transfers supervision from a teacher trained on a richer modality (multi-spectral images) to a student model trained on SAR imagery. The trained models are then tested on the Sen1Floods11 dataset. Our model outperforms the Sen1Floods11 baseline model trained on weakly labeled SAR imagery by an absolute margin of $6.53\%$ intersection over union (IoU) on the test split.
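The teacher-student idea behind this kind of distillation can be sketched as a loss function in plain Python. This is a generic minimal sketch, not the paper's implementation (which trains deep segmentation networks); the function names, temperature `T`, and mixing weight `alpha` are our own illustrative choices.

```python
import math

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; subtracting the max avoids overflow.
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def soft_cross_entropy(student_logits, teacher_logits, T=2.0):
    # Cross-entropy of the student's softened prediction against the teacher's.
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return -sum(pt * math.log(ps + 1e-12) for pt, ps in zip(p_t, p_s))

def distillation_loss(student_logits, teacher_logits, label, T=2.0, alpha=0.5):
    # Mix soft (teacher) supervision with the hard ground-truth label,
    # mirroring the small hand-labeled set plus large unlabeled set in the abstract.
    p_s = softmax(student_logits)
    hard = -math.log(p_s[label] + 1e-12)
    return alpha * soft_cross_entropy(student_logits, teacher_logits, T) + (1 - alpha) * hard
```

In the cross-modal setting described above, the teacher's logits would come from a model seeing multi-spectral input while the student sees only the paired SAR image of the same scene.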
We propose a compliant leg configuration that enhances the conventional pantograph design with leaf springs. Two facts characterize the proposed configuration: (1) due to the pantograph structure, the mass is centralized around the hip joint, reducing the lower-leg inertia; (2) leaf springs are chosen as the elastic elements to increase energy efficiency and to estimate foot-end contact forces. Compared with coil springs, leaf springs require no guide rails, and their stiffness can easily be adjusted through shape cutting. Analytical models are introduced to analyze the leg’s stiffness and to estimate the contact forces solely from the deflections of the leaf springs. A one-legged robot based on the proposed design is built, and various experiments are conducted. Experiments on stiffness calibration and contact force estimation show acceptable agreement with the analytical models. Drop experiments demonstrate the leg’s ability to exhibit spring-like behaviour. Periodic hopping experiments demonstrate the feasibility of using spring deflections to detect touch-down events. Regarding energy efficiency, the elastic leg also achieves a 20% increase in jumping height during the flight phase compared with a counterpart in which the leaf springs are replaced with rigid parts.
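Why leaf-spring stiffness is easy to adjust by shape cutting, and how deflection yields a force estimate, can be illustrated with the textbook end-loaded cantilever model. This is a deliberate simplification of the paper's analytical models; the symbols (modulus E, width b, thickness t, length L) and function names are our own.

```python
def cantilever_stiffness(E, b, t, L):
    # Second moment of area of a rectangular cross-section: I = b*t^3 / 12.
    I = b * t**3 / 12.0
    # Tip stiffness of an end-loaded cantilever beam: k = 3*E*I / L^3.
    # Cutting the spring's shape changes b (and effectively t), and hence k.
    return 3.0 * E * I / L**3

def contact_force(k, deflection):
    # Linear spring model: measured deflection maps directly to contact force.
    return k * deflection

# Example: a steel leaf spring, 20 mm wide, 2 mm thick, 100 mm long.
k = cantilever_stiffness(E=2.0e11, b=0.02, t=0.002, L=0.1)   # N/m
```

Note that stiffness scales linearly with width but with the cube of thickness and inversely with the cube of length, which is what makes shape cutting an effective tuning knob.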
Hyper-redundant robots have good prospects for applications in confined spaces due to their high flexibility and slim bodies. However, the hyper-redundant structure poses great challenges for inverse kinematics under shape constraints. The traditional Jacobian pseudo-inverse-based inverse kinematics method and the forward and backward reaching inverse kinematics (FABRIK) method both struggle to constrain the arm shape and to realize trajectory tracking in confined spaces. To solve this problem, we propose a shape-controllable FABRIK method that satisfies given path and shape constraints. First, the kinematic model of the hyper-redundant robot is established and the canonical FABRIK method is introduced. Building on these preliminaries, a single-layer improved FABRIK method is developed that solves the position and pointing inverse kinematics under path-environment and joint-angle constraints, avoiding two-layer geometric iterations. To track the desired end roll angles, a polygonal virtual arm is designed; the real arm’s roll angle is realized by controlling its winding on the virtual arm. In this way, the arm shape can be controlled. Finally, we compare the proposed method with three other approaches in simulation. The results show that the proposed method is more efficient and that the arm shape is controllable.
Choices made by individuals have widespread impacts—for instance, people choose between political candidates to vote for, between social media posts to share, and between brands to purchase—moreover, data on these choices are increasingly abundant. Discrete choice models are a key tool for learning individual preferences from such data. Additionally, social factors like conformity and contagion influence individual choice. Traditional methods for incorporating these factors into choice models do not account for the entire social network and require hand-crafted features. To overcome these limitations, we use graph learning to study choice in networked contexts. We identify three ways in which graph learning techniques can be used for discrete choice: learning chooser representations, regularizing choice model parameters, and directly constructing predictions from a network. We design methods in each category and test them on real-world choice datasets, including county-level 2016 US election results and Android app installation and usage data. We show that incorporating social network structure can improve the predictions of the standard econometric choice model, the multinomial logit. We provide evidence that app installations are influenced by social context, but we find no such effect on app usage among the same participants, which instead is habit-driven. In the election data, we highlight the additional insights a discrete choice framework provides over classification or regression, the typical approaches. On synthetic data, we demonstrate the sample complexity benefit of using social information in choice models.
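The multinomial logit baseline referenced above can be sketched in a few lines of plain Python: each alternative gets a linear-in-parameters utility, and choice probabilities are a softmax over utilities. This is a generic textbook sketch, not the paper's graph-learning extension; all names are illustrative.

```python
import math

def utility(features, beta):
    # Linear-in-parameters utility: u_j = x_j . beta.
    return sum(f * b for f, b in zip(features, beta))

def mnl_probs(utilities):
    # Multinomial logit: P(j) = exp(u_j) / sum_k exp(u_k).
    m = max(utilities)                      # subtract max for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical choice set: two alternatives described by two features each.
alts = [[1.0, 0.0], [0.0, 1.0]]
beta = [2.0, 0.0]                           # chooser prefers the first feature
probs = mnl_probs([utility(x, beta) for x in alts])
```

The graph-learning approaches in the abstract then either learn per-chooser representations that feed into `beta`, regularize `beta` across neighbouring choosers, or predict choices directly from the network, rather than changing this basic probability form.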