Genuinely broad in scope, each handbook in this series provides a complete state-of-the-field overview of a major sub-discipline within language study, law, education and psychological science research.
This chapter reflects on the emerging uses of Emotional Artificial Intelligence (EAI) systems in cars and the future regulatory implications for human–machine interactions in vehicles. The car has sociocultural importance as an everyday context of technology deployment, and we begin by reflecting on the ways the automotive sector is changing to incorporate EAI more widely. This change is driven primarily by shifts in safety legislation, alongside advances in biometrics and vehicle automation, and it is creating hybrid environments of human/nonhuman interaction, in which emotion-sensing systems foster increasingly entangled relationships between humans and cars. New laws, such as the proposed EU Artificial Intelligence Act and the EU Vehicle Safety Regulation, are driving the design of more trustworthy, privacy-preserving, and safe AI systems. We reflect on the legal implications of sensing human in-cabin behaviors and expressions, alongside the risks posed by AI systems. We conclude by reflecting on the challenges of designing for trustworthy, human-centered interactions.
In 2017 Microsoft co-founder Bill Gates recommended taxing robots to slow the pace of automation. It has been estimated that up to 47 percent of U.S. jobs are at risk from advancements in artificial intelligence that have increased the rate of automation. While employment changes due to automation are not new, advances in artificial intelligence embedded within robots threaten many more jobs much more quickly than historic automation did. The chapter discusses how accelerated automation presents a revenue problem for governments. The revenue problem exists because the tax system is designed to tax labor more heavily, as labor is less able to avoid taxation. Capital investment, on the other hand, is taxed more lightly because capital is mobile and can escape taxation. When capital becomes labor, as in robotic automation, the bottom falls out of the system. With this background in mind, the Tax Cuts and Jobs Act (TCJA), enacted in 2017, significantly cut the U.S. corporate tax rate, from 35 percent to 21 percent. In addition, the TCJA increased tax benefits for purchasing equipment (which would include automation in the form of robots), significantly enhancing bonus depreciation. The 2017 tax legislation thus continued and deepened the existing tax bias toward automation. This chapter explores policy options for solving the revenue problem.
This chapter provides a thorough, up-to-date review of the literature on the phonetics and phonology of early bilinguals. It pulls together studies from a range of bilingual settings, including bilingual societies and heritage language contexts. While the chapter mostly reviews evidence from adolescent and adult participants, it also makes reference to the child bilingualism literature where appropriate. The chapter first reviews studies on the accents of early versus late bilinguals, followed by a discussion of the various explanatory accounts for the observed differences between these two groups. Subsequently, the critical significance of early linguistic experience for bilingual speech patterns is considered, with particular reference to the evidence from childhood overhearers and international adoptees. The following sections then review studies comparing simultaneous and early sequential bilinguals, studies exploring the roles of language dominance, continued use, and the language of the environment in bilinguals’ pronunciation patterns, and studies on the role of sociolinguistic factors in early bilingual speech patterns. The chapter concludes with suggestions for future research.
This chapter reviews evidence that the orthographic forms (spellings) of L2 sounds and words affect L2 phonological representation and processing. Orthographic effects are found in speech perception, speech production, phonological awareness, and the learning of words and sounds. Orthographic forms facilitate L2 speakers/listeners – for instance in lexical learning – but also have negative effects, resulting in sound additions, deletions, and substitutions. This happens because L2 speakers’ knowledge of the L2 orthography differs from the actual workings of the L2 writing system. Orthographic effects are established after little exposure to orthographic forms, are persistent, can be reinforced by factors other than orthography (including spoken input), and are modulated by individual-level and sound/word-level variables. Future research should address gaps in current knowledge, for instance by investigating the effects of teaching interventions, and aim at producing a coherent framework.
Humanlike robots, so called on the basis of their behavior and physical appearance, are becoming an increasingly important part of society, often interacting with individuals in a wide variety of social contexts. One emerging class of robots that socialize with humans comprises robots that are capable of expressing emotions, are humanoid in appearance, and are anthropomorphized by users. For such robots I propose that human interaction can be represented as a four-way process depending on the following: (1) the context of the situation surrounding the interaction; (2) the effort that users make to comprehend the robotic technology, especially in a particular context; (3) the process by which users or groups of users adapt robotic technology to incorporate robots into their lives, practices, and work routines; and (4) the transformation of the technology and its subsequent meaning to the user. One aspect of the above points is that humans receive significant benefits from interacting with robots in different contexts. For example, by applying (or appropriating) the robot’s abilities to perform various tasks, humans may increase their own physical and intellectual abilities. However, unlike human creativity and innovation, which are a product of human cognition, robot creativity is based on algorithms and software through which robots appropriate data to perform tasks. Further, a robot’s ability to express emotion and a personality may influence humans in a variety of ways – for example, the conditions under which they appropriate the robot’s labor or creative output for a particular purpose. As I argue in this chapter, “robot appropriation” can lead to legal constraints and regulations between human and robot. On this point, I note that the continental legal order, to which the Russian legal system belongs, has developed a robust method for the conscious exercise of law.
Based on the Russian legal system and the approach taken by other jurisdictions, an AI-enabled robot can only receive what are described as independent rights that are different in substance from the rights granted to natural persons. Taking a broad scope in this chapter, I propose that regulations are required for various human–robot interactions, and I discuss several examples of this approach.
This chapter focuses on the growing inclusion of social robots in therapy from the perspective of unresolved legal and ethical issues. These include risks to patient autonomy, human dignity, and trust; the potentially life-threatening effects of inaccurate or malfunctioning technology; diminished privacy due to the reliance on enormous amounts of personal (sensitive health) data; new challenges to data security due to the cyber–physical nature of robots; and the problem of how to obtain informed consent to medical treatments that depend on opaque AI decision-making. From this broad spectrum, the chapter focuses on the protection of the health and safety of patients and care recipients under EU law. A more detailed analysis shows that neither the Medical Device Regulation nor the proposal for an Artificial Intelligence Act adequately addresses the risks to patient health and safety that arise from human–machine interaction. Against this backdrop, the chapter provides recommendations as to which aspects should be regulated in the future and argues for a public discussion about the extent to which we, as a society, should replace human therapists with AI-enabled robotic technology.
In this chapter we review the status of human–robot interaction (HRI) including current research directions within robotics that may impact issues of law, policy, and regulations. While the focus of this book is on HRI experienced in social contexts, to provide a broad review of the legal and policy issues impacted by HRI, we discuss different areas of robotics that require various levels of human interaction and supervisory control of robots. We note that robots have evolved from continuous human-controlled master–slave servomechanisms for handling nuclear waste to a broad range of robots incorporating artificial intelligence (AI), which are under human supervisory control but becoming more autonomous. Further, we note that research on human interaction with robots is a rapidly evolving field and specialized robots under human teleoperation have proven successful in hazardous environments and for medical and other applications. There is also a noticeable trend for more humanoid-appearing and AI-enabled robots interacting with humans in social contexts, and for this class of robots we discuss emerging issues of law, regulations, and policy.
Could robots be recognized as legal persons? Should they be? Much of the discussion of these topics is distorted by fictional representations of what form true artificial intelligence (AI) might take – in particular, that it would be of human-level intellect and embodied in humanoid form. Such robots are the focus of this volume, raising the possibility that external appearance and its echoes in science fiction may shape debate over their “rights.” Most legal systems would be able to grant some form of personality, yet early considerations of whether they should do so conflate two discrete rationales. The first is instrumental, analogous to the economic reasons why corporations are granted personality. The second is inherent, linked to the manner in which human personality is recognized. Neither is sufficient to justify legal personality for robots today. A third reason, which may become more pressing in the medium term, is tied to the possibility of AI systems that far surpass humans in ability. In the event that such entities are created, the question may shift from whether we recognize them under the law to whether they recognize us.
Generative phonologists share the goal of modeling the internalized grammars that allow members of a linguistic community to produce and understand utterances they have not previously encountered. But while most generativists assume that the internalized grammar maps lexical to surface representations, they may disagree on the nature of that mapping, the makeup of the mental representations of phonological structure, and the role of universal well-formedness constraints in grammar. This chapter surveys analyses of data from multilinguals, foreign language learners, and loanword adapters within different generative models, exploring both strengths and limitations of competing approaches. Issues addressed include the role of phonological vs. phonetic structure, the relationship between the production grammar and the perception grammar, and the role of putative innate learning biases vs. factors such as input frequency and perceptual salience.
Many have expressed concerns regarding the replicability of scientific research. However, little of this ongoing discussion has focused on research examining the production of vowels and consonants or the many important choices that researchers must make in pre-analysis phases of speech production research. The literature reviewed here indicates that not all speech production studies have been replicated, and that how speech is elicited may affect the results that are obtained. Many different elicitation techniques are in current use, but none represents a gold standard. The new Characteristic Speech Production (CSP) technique presented here aims to augment replicability by obviating the need for participants to accommodate their speech to that of others or adopt a particular speaking style as they give meaningful answers to meaningful questions. Given the novelty of the CSP technique, the chapter provides a protocol that is designed to test its efficacy. If the CSP technique can be shown to yield speech samples that are more representative of individuals’ speech than a standard list-reading technique, a change in how speech is elicited for production research will be warranted.
Modern law has developed on the basis of the fundamental principle of respecting each person as an autonomous individual. However, the meaning of “autonomous” is not unambiguous even in jurisprudence, and the term is used in various contexts. In this chapter, I examine the differences between personal autonomy and machine autonomy. I then attempt to reconstruct the concept by exploring legal issues concerning privacy and freedom of expression when people use robots. As this chapter discusses, personal autonomy and machine autonomy differ considerably from each other, even though they invoke the same concept of autonomy. Given the state of human–robot interaction (HRI) technology, it is appropriate to understand machine autonomy as having instrumental value as a means of serving personal autonomy. Furthermore, it is argued that the development of HRI will likely call for the implementation of a system of checks and balances among multiple robots – ensuring autonomy as collective self-regulation for groups of robots connected to each other through information communication networks, rather than ensuring the autonomy of individual robots.
This chapter outlines studies within the domain of speech perception by bilingual adult listeners. I first discuss studies that have examined bilinguals’ perception of L1 and/or L2 speech segments, as well as those that have tested perception of unfamiliar, non-native speech segments. In turn, I examine each of the factors that are known to affect bilinguals’ perception of speech, which include age of L2 acquisition, effects of L1:L2 usage as they pertain to language dominance and proficiency, and short-term contextual effects on speech perception. I also provide an overview of the literature on bilinguals’ perception of suprasegmentals. Finally, I explore what I think are some of the crucial questions facing the field of bilingual speech perception.
This chapter discusses the use of AI ethics standards for robot governance. Specifically, the chapter considers challenges to the regulation of AI-enabled technology posed by slow legislative processes that have not been able to keep pace with the rapid speed of technological advances. In addition to considering the regulation of critical AI technologies, the chapter argues for a regulatory framework that relies on nonbinding and flexible AI ethics standards to ensure that stakeholders manage the ethical, legal, and social implications (ELSI) risks inherent in daily human–robot interactions. By incorporating AI ethics standards into the development process for humanoid and expressive robots, robot developers will be able to apply principles of responsible research and innovation without conflicting with “hard laws” enacted for robot regulation. In this chapter, through two case studies, I explore the approach of ethical robot design, examine its potential and limitations, and demonstrate the utility of “ethically aligned design” and “social system design” frameworks in implementing legal human–robot interaction (L-HRI).
This chapter presents an overview of what is currently known about phonetic and phonological first language (L1) attrition and drift in bilingual speech and introduces a new theory of bilingual speech, Attrition & Drift in Access, Production, and Perception Theory (ADAPPT). Attrition and drift are defined and differentiated along several dimensions, including duration of change, source in second language (L2) experience, consciousness, agency, and scope. We address why findings of attrition and drift are important for our overall understanding of bilingual speech and draw links between ADAPPT and well-known theories of L2 speech, such as the revised Speech Learning Model (SLM-r), the Perceptual Assimilation Model-L2 (PAM-L2), and the Second Language Linguistic Perception model (L2LP). The significance of findings revealing attrition and drift is discussed in relation to different linguistic subfields. The chapter raises the question of how attrition and drift potentially interact to influence speech production and perception in the bilingual’s L1 over the life span; additional directions for future research are pointed out as well.
This chapter provides an introduction to the Cambridge Handbook of Bilingual Phonetics and Phonology, and emphasizes the interdisciplinarity of the scholarship included in the Handbook, which contributes to the diversity of approaches, to theory-building, and to the collaborative connections that are enhancing the field. The abstracts of each of the thirty-five chapters are also included and are followed by concluding remarks providing a roadmap for the future of research on bilingual phonetics and phonology.
This chapter introduces issues of law, policy, and regulation for human interaction with robots that are AI enabled, expressive, humanoid in appearance, and anthropomorphized by users. These features are leading to a class of robots that pose unique challenges to courts, legislators, and the robotics industry as they consider how the behavior of robots operating with sophisticated social skills and increasing levels of intelligence should be regulated. In this chapter we introduce basic terms, definitions, and concepts relating to human interaction with AI-enabled and social robots, and we review some of the regulations, statutes, and case law that apply to such robots, specifically in the context of human–robot interaction. Our goal is to provide a conceptual framework for the chapters that follow, which focus on human interaction with robots that are becoming more like us in form and behavior.