This chapter describes dark patterns: interface features designed to be deceptive, which may covertly manipulate users within task flows. Dark patterns tweak user interfaces or present choices in a persuasive or deceptive way, with the goal of getting users to give up personal information or make purchasing decisions they would not make if the information were presented more neutrally. The chapter presents case law, federal statutes and regulations, and state statutes and regulations, and describes the relationship between dark patterns and consumer rights law.
The rise of new telecommunication technologies, such as GPS, satellite phones, and ubiquitous Internet of Things (IoT) devices, has raised new regulatory challenges, particularly around security, privacy, and interoperability. The chapter discusses specific examples of challenges at the intersection of telecommunications and HCI: over-the-top (OTT) services, which operate over telecommunication networks but are independent of telecommunications carrier services; obstacles to accessing broadband services, which in turn affect the user experience; and multi-factor authentication. The ongoing debate about net neutrality is also discussed.
This chapter provides an introduction to the core concepts of U.S. law for those with an HCI background but not a legal background. It covers the history of U.S. law; the basic constructs of the U.S. legal system; the core sources of legal rules (constitutions, statutes, regulations, and case law); the differences between civil and criminal law; the differences between law and policy at the federal versus state level; searching for and using legal resources; and how to apply basic legal principles to HCI research.
The core topics at the intersection of human-computer interaction (HCI) and US law -- privacy, accessibility, telecommunications, intellectual property, artificial intelligence (AI), dark patterns, human subjects research, and voting -- can be hard to understand without a deep foundation in both law and computing. Every member of the author team of this unique book brings expertise in both law and HCI to provide an in-depth yet understandable treatment of each topic area for professionals, researchers, and graduate students in computing and/or law. Two introductory chapters explaining the core concepts of HCI (for readers with a legal background) and U.S. law (for readers with an HCI background) are followed by in-depth discussions of each topic.
Machine-readable humanity is an evocative idea, and it is this idea which Hanley et al. spell out and critically discuss in their contribution. They are interested in exploring the technological as well as the moral side of the meaning of machine-readability. They start by differentiating between various ways to collect (and read) data and to develop classification schemes. They argue that traditional top-down data collection (first the pegs and then the collection according to the pegs) is less efficient than more recent machine readability, which is dynamic, because of the successive advances of data and predictive analytics (“big data”), machine learning, deep learning, and AI. Discussing the advantages as well as the dangers of this new way to read humans, they conclude that we should be especially cautious vis-à-vis the growing field of digital biomarkers since in the end they could not only endanger privacy and entrench biases, but also obliterate our autonomy. Seen in this light, apps (like AdNauseam) that restrict data collection as a form of protest against behavioral profiling also constitute resistance to the inexorable transformation of humanity into a standing reserve: humans on standby, to be immediately at hand for consumption by digital machines.
Steeves revisits empirical data about young people’s experiences on social media to provide a snapshot of what happens to the interaction between self and others when community is organized algorithmically. She then uses Meadian notions of sociality to offer a theoretical framing that can explain the meaning of self, other, and community found in the data. She argues that young people interact with algorithms as if they were another social actor, and reflexively examine their own performances from the perspective of the algorithm as a specific form of generalized other. In doing so, they pay less attention to the other people they encounter in online spaces and instead orient themselves to action by emulating the values and goals of this algorithmic other. Their performances can accordingly be read as a concretization of these values and goals, making visible the agenda of those who mobilize the algorithm for their own purposes.
In her contribution, Roessler asks what digitalization means for the concept of human beings: is there a specific, identifiable concept that defies digitalization? A conceptual clarification, she argues, shows that a largely uncontested definition of a human being includes their vulnerability, their finiteness, and their rational self-consciousness. She then discusses the difference between robots and humans, engaging with novels by Ian McEwan and Kazuo Ishiguro which imagine this difference. Finally, she contends that a world in which the difference between robots and humans was no longer recognizable would be an uncanny world in which we would not want to live.
Roessler and Steeves, in their introduction, underscore the urgency of the debate about being human in an increasingly digitalized society. In a further step, they outline the theoretical background with regard to the concept of a human being, as well as with regard to the theoretical approaches of postmodernism and transhumanism, to situate the volume within earlier discussions about the digital human. They conclude with a helpful overview of the volume’s contributions.
Akbari describes what it means to have a human body in the digital age and argues that datafication has transformed the materiality of the body in its very flesh and bone. This transformation is especially dangerous in uncertain spaces, such as borders and refugee camps, where identity becomes crucial and only certain categories of human bodies can pass. The consequences to those experiencing datafication of their bodies at the border are harsh and severe. However, the deliberate unruliness of the border paves the way for these spaces to become technological testing grounds, as evidenced by the development of technologies to track fleeing populations for the purposes of contact tracing during the COVID-19 pandemic. Akbari’s text oscillates deliberately between academic thinking, autobiographical accounts, pictures, and poetry, thus clearly denoting the discomfort of the human being living in a Code|Body.
Susser provides a thoughtful examination of what we mean by (digital) exploitation and suggests that regulation should constrain platform activities that instrumentalize people or treat them unfairly. Using a diverse set of examples, he argues that the language of exploitation helps make visible forms of injustice overlooked or only partially captured by dominant concerns about, for example, surveillance, discrimination, and related platform abuses. He provides valuable conceptual and normative resources for challenging efforts by platforms to obscure or legitimate those abuses.
Cohen adapts the doughnut model of sustainable economic development to suggest ways for policymakers to identify regulatory policies that can better serve the humans who live in digital spaces. She does this in two steps. First, she demonstrates that a similarly doughnut-shaped model can advance the conceptualization of the appropriate balance(s) between surveillance and privacy. Second, she demonstrates how taking the doughnut model of privacy and surveillance seriously can help us think through important questions about the uses, forms, and modalities of legitimate surveillance.
Pasquale draws from the world of literature and film to explore the role of emotions in being human and the ways that affective computing seeks both to duplicate and to constrain caring as a fundamental human quality. Focusing on digital culture, he discusses various films (e.g. Ich bin dein Mensch), novels (e.g. by Rachel Cusk), and TV series (e.g. Westworld) in order to unpack the alienation and loneliness which robots and AI promise to cure. He argues that cultural products ostensibly decrying the lack of humanity in an age of alexithymia work to create and sustain a particular culture, one that makes it difficult to recognize or describe human emotions by creating affective relationships between humans and technology. He concludes with critical reflections on the politico-economic context of those professed emotional attachments to AI and robotics.