In the previous chapters, and in Chapters 4 and 6 in particular, we already encountered methods for testing hypotheses. We used these statistical tests to determine whether a given empirical correlation corresponds to the real key or to an incorrect key. This chapter takes a more systematic look at statistical testing and derives methods that are—in some particular sense—best possible.
This chapter discusses the terms overlap and similarity between quantum states and introduces the important swap test, as well as the Hadamard test and the inversion test. The mathematical derivations in this chapter are still very detailed.
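The swap test mentioned above estimates the overlap |⟨ψ|φ⟩|² between two states: a Hadamard on an ancilla qubit, a controlled-SWAP of the two state registers, a second Hadamard, then a measurement of the ancilla, which yields 0 with probability P(0) = (1 + |⟨ψ|φ⟩|²)/2. The following is a minimal state-vector sketch for single-qubit states, written in plain NumPy with dense matrices; it is an illustrative simulation, not code from the chapter.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)

# Controlled-SWAP on 3 qubits (ancilla = most-significant qubit):
# the identity, except that |101> and |110> are exchanged.
CSWAP = np.eye(8)
CSWAP[[5, 6]] = CSWAP[[6, 5]]

def swap_test_p0(psi, phi):
    """Probability of measuring the ancilla in |0> after a swap test
    on two single-qubit states; theory predicts (1 + |<psi|phi>|^2)/2."""
    state = np.kron(np.array([1.0, 0.0]), np.kron(psi, phi))  # ancilla in |0>
    HII = np.kron(H, np.kron(I2, I2))                          # H on ancilla only
    state = HII @ (CSWAP @ (HII @ state))
    return float(np.sum(np.abs(state[:4]) ** 2))               # ancilla-bit = 0

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)

print(swap_test_p0(ket0, ket0))  # identical states  -> 1.0
print(swap_test_p0(ket0, ket1))  # orthogonal states -> 0.5
print(swap_test_p0(ket0, plus))  # overlap 1/2       -> 0.75
```

Note that P(0) never drops below 1/2: the test estimates the magnitude of the overlap, not its phase, which is one reason the chapter also treats the Hadamard and inversion tests.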
Roessler and Steeves, in their introduction, underscore the urgency of the debate about being human in an increasingly digitalized society. They then outline the theoretical background, both on the concept of a human being and on the theoretical approaches of postmodernism and transhumanism, to situate the volume within earlier discussions about the digital human. They conclude with a helpful overview of the volume’s contributions.
Akbari describes what it means to have a human body in the digital age and argues that datafication has transformed the materiality of the body in its very flesh and bone. This transformation is especially dangerous in uncertain spaces, such as borders and refugee camps, where identity becomes crucial and only certain categories of human bodies can pass. The consequences for those experiencing datafication of their bodies at the border are harsh and severe. However, the deliberate unruliness of the border paves the way for these spaces to become technological testing grounds, as evidenced by the development of technologies to track fleeing populations for the purposes of contact tracing during the COVID-19 pandemic. Akbari’s text oscillates deliberately between academic thinking, autobiographical accounts, pictures, and poetry, thus clearly denoting the discomfort of the human being living in a Code|Body.
Susser provides a thoughtful examination of what we mean by (digital) exploitation and suggests that regulation should constrain platform activities that instrumentalize people or treat them unfairly. Using a diverse set of examples, he argues that the language of exploitation helps make visible forms of injustice overlooked or only partially captured by dominant concerns about, for example, surveillance, discrimination, and related platform abuses. He provides valuable conceptual and normative resources for challenging efforts by platforms to obscure or legitimate those abuses.
In this chapter, we rebuild the theory of linear cryptanalysis one last time. One of the reasons for doing this was already mentioned in Chapter 9: there are various combinatorial properties that might be useful, but for which there are no analytic methods. However, before attempting to address this issue, we must take a step back and try to improve our understanding of linear cryptanalysis.
Cohen adapts the doughnut model of sustainable economic development to suggest ways for policymakers to identify regulatory policies that can better serve the humans who live in digital spaces. She does this in two steps. First, she demonstrates that a similarly doughnut-shaped model can advance the conceptualization of the appropriate balance(s) between surveillance and privacy. Second, she demonstrates how taking the doughnut model of privacy and surveillance seriously can help us think through important questions about the uses, forms, and modalities of legitimate surveillance.
This chapter discusses quantum noise and techniques for quantum error correction, a necessity for quantum computing. It discusses bit-flip errors, phase-flip errors, and their combination. The formalism of quantum operations is introduced, along with the operator-sum representation and the Kraus operators. With this in mind, the chapter discusses the depolarization channel and imprecise gates, as well as (briefly) amplitude and phase damping. For error correction, repetition codes are introduced to motivate Shor’s 9-qubit error correction technique.
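The repetition code named above is the simplest entry point into the chapter's material: a logical qubit a|0⟩ + b|1⟩ is encoded as a|000⟩ + b|111⟩, and a single bit-flip error is located by measuring the parities Z₀Z₁ and Z₁Z₂ and corrected by majority vote. The sketch below simulates this classically-flavored quantum code with NumPy state vectors; it is an illustration under simplifying assumptions (perfect syndrome extraction, X errors only), not the chapter's own code.

```python
import numpy as np

def encode(a, b):
    """Three-qubit repetition code: a|0> + b|1>  ->  a|000> + b|111>."""
    v = np.zeros(8, dtype=complex)
    v[0b000], v[0b111] = a, b
    return v

def bit_flip(state, k):
    """Apply an X (bit-flip) error to qubit k (0 = leftmost)."""
    mask = 1 << (2 - k)
    out = np.zeros_like(state)
    for i, amp in enumerate(state):
        out[i ^ mask] = amp
    return out

def syndrome(state):
    """Parities Z0Z1 and Z1Z2, read off any basis state in the support.
    (Both support indices give the same parities, so the encoded
    amplitudes are not disturbed by this 'measurement'.)"""
    i = int(np.argmax(np.abs(state)))
    b0, b1, b2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    return b0 ^ b1, b1 ^ b2

def correct(state):
    """Majority-vote correction: the syndrome pinpoints the flipped qubit."""
    table = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}
    k = table[syndrome(state)]
    return state if k is None else bit_flip(state, k)

psi = encode(0.6, 0.8)
for k in range(3):
    print(np.allclose(correct(bit_flip(psi, k)), psi))  # True for each flip
```

Shor's nine-qubit technique nests this idea: three such blocks protect against bit flips, and a Hadamard-transformed repetition code across the blocks protects against phase flips.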
We have introduced a compact infrastructure for exploration and experimentation, but all at the level of individual gates. Higher levels of abstraction are needed to scale to larger programs. The chapter discusses several quantum programming languages, including their specific tooling, such as hierarchical program representations or entanglement analysis. General challenges for compilation are discussed as well as compiler optimization techniques.
Pasquale draws from the world of literature and film to explore the role of emotions in being human and the ways that affective computing both seeks to duplicate and constrain caring as a fundamental human quality. Focusing on digital culture, he discusses various films (e.g. Ich bin dein Mensch), novels (e.g. by Rachel Cusk), and TV series (e.g. Westworld) in order to unpack the alienation and loneliness which robots and AI promise to cure. He argues that cultural products ostensibly decrying the lack of humanity in an age of alexithymia work to create and sustain a particular culture, one that makes it difficult to recognize or describe human emotions by creating affective relationships between humans and technology. He concludes with critical reflections on the politico-economic context of those professed emotional attachments to AI and robotics.
This study presents results about two-level type-theoretic notions in such a way that the proofs are fully formalizable in a proof assistant implementing two-level type theory, such as Agda. In contrast to prior work, these proofs do not rely on any abuse of notation, which makes the formalization more direct. Some new notions, such as function extensionality for cofibrant exo-types, are also introduced; the need for them arises during the formalization itself. In addition, we provide novel results about inductive types using cofibrant exo-nat, the natural-number type at the non-fibrant level. While emphasizing the necessity of this axiom by citing new applications as justification, we also touch upon the semantic aspect of the theory by presenting various models that satisfy it.
Georas analyzes different dilemmas that arise when we use robots to serve humans living in the digital age. She focuses on the design and deployment of carebots in particular, to explore how they are embedded in more general multifaceted material and discursive configurations, and how they are implicated in the construction of humanness in socio-technical spaces. In doing so, she delves into the "fog of technology," arguing that this fog is always also a fog of inequality since the emerging architectures of our digitized lives will connect with pre-existing forms of domination. In this context, resistive struggles are premised upon our capacity to dissent, which is what ultimately enables us to express our humanity and at the same time makes us unpredictable. What it means to be human in the digital world is thus never fixed, but, Georas argues, must always be strategically reinvented and reclaimed, since there always will be people living on the “wrong side of the digital train tracks” who will be unjustly treated.
This article proposes the Function–Behavior–Structure–Failure Modes (FBSFM) framework, a novel ontological framework for an enhanced representation of system knowledge, to address the integration gap between system models and design risk analysis activities during the early product development phase. As a theoretical contribution, the FBSFM extends the well-established function–behavior–structure ontology for representing system design information in terms of functions, intended behaviors, and structure, with an ontology schema for representing actual behavior as function failure modes, enriched with linkages to causes and effects across multiple levels of system abstraction. This integrated representation improves design risk analysis by facilitating traceability between design decisions captured in system models and potential failure scenarios documented in Failure Mode and Effects Analyses (FMEAs). The framework was developed using formal ontology engineering methods and implemented in the Web Ontology Language using Protégé. A real-world automotive case study was conducted in collaboration with practicing engineers and domain experts from a global automotive manufacturer to demonstrate the framework’s applicability and its ability to support structured failure knowledge representation. The case study illustrates the capability of the ontology to consolidate multisource engineering knowledge, specifically design data derived from system modeling and structured risk artifacts from FMEA, into a coherent, machine-readable repository, supporting enhanced traceability from user goals to potential system failures. The use of ontological reasoning and structured querying facilitates the systematic review and validation of FMEA information against system models, with a positive impact on product development practice.