In these first few chapters, our aim is to establish a firm grounding so that we can address some fundamental questions regarding information transmission over quantum channels. This area of study has become known as “quantum Shannon theory” in the broader quantum information community, in order to distinguish this topic from other areas of study in quantum information science. In this text, we will use the terms “quantum Shannon theory” and “quantum information theory” somewhat interchangeably. We will begin by briefly overviewing several fundamental aspects of the quantum theory. Our study of the quantum theory, in this chapter and future ones, will be at an abstract level, without giving preference to any particular physical system such as a spin-1/2 particle or a photon. This approach will be more beneficial for the purposes of our study, but, here and there, we will make some reference to actual physical systems to ground us in reality.
You may be wondering, what is quantum Shannon theory, and why do we give this area of study such a name? In short, quantum Shannon theory is the study of the ultimate capability of noisy physical systems, governed by the laws of quantum mechanics, to preserve information and correlations. Quantum information theorists have chosen the name quantum Shannon theory to honor Claude Shannon, who single-handedly founded the field of classical information theory with a groundbreaking 1948 paper (Shannon, 1948).
All physical systems register bits of information, whether it be an atom, an electrical current, the location of a billiard ball, or a switch. Information can be classical, quantum, or a hybrid of both, depending on the system. For example, an atom or an electron or a superconducting system can register quantum information because the quantum theory applies to each of these systems, but we can safely argue that the location of a billiard ball registers classical information only. These atoms or electrons or superconducting systems can also register classical bits because it is always possible for a quantum system to register classical bits.
The term information, in the context of information theory, has a precise meaning that is somewhat different from our prior “everyday” experience with it. Recall that the notion of the physical bit refers to the physical representation of a bit, and the information bit is a measure of how much we learn from the outcome of a random experiment. Perhaps the word “surprise” better captures the notion of information as it applies in the context of information theory.
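The "surprise" interpretation has a simple quantitative form: the information learned from an outcome of probability p is log2(1/p) bits, so rarer outcomes carry more information. A minimal sketch (the function name is ours, chosen for illustration):

```python
import math

def surprisal(p: float) -> float:
    """Information (in bits) learned from observing an outcome of probability p."""
    return math.log2(1 / p)

# A certain outcome teaches us nothing; rarer outcomes teach us more.
print(surprisal(1.0))    # a certain outcome: 0 bits
print(surprisal(0.5))    # a fair coin flip: 1 bit
print(surprisal(1 / 8))  # one of eight equally likely outcomes: 3 bits
```

The expected surprisal over all outcomes of an experiment is precisely the Shannon entropy studied later in the text.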
This chapter begins our formal study of classical information. Recall that Chapter 2 overviewed some of the major operational tasks in classical information theory. Here, our approach is somewhat different because our aim is to provide an intuitive understanding of information measures, in terms of the parties who have access to the classical systems.
Environmental sustainability has become an increasingly important consideration in the design of any system. Computing systems contribute to this drive for sustainability from two different perspectives: (i) the energy perspective and (ii) the equipment-recycling perspective. Sections 6.1 and 6.2 describe these perspectives of sustainability for computing systems in general. All subsequent sections focus on how to ensure sustainability for body area networks (BANs) from the energy perspective.
The energy perspective
Sustainability from the energy perspective, also referred to as energy-sustainability, has two main objectives: (i) reducing the carbon footprint from the power grid and (ii) reducing the need for battery replacement (for computing equipment running on limited-energy batteries). To ensure that both these objectives are attained, energy-sustainability can be described as the balance between the power required for computation and the power available from renewable or green energy sources (e.g., sources in the environment such as solar power). Ideally, if the power available from external renewable energy sources is more than the power required for computation then a power grid (or battery) might not be needed, and computation can be said to be energy-sustainable. However, in reality, both the available and the required power may vary over time. For example, solar power is available only during the day, but power may be required during the night (depending on the time-varying computing operations performed). In such a case, power may need to be extracted from a power grid (or battery) during the night, thus making computing operations unsustainable.
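The balance described above can be sketched numerically. In this hypothetical example (all power figures are illustrative, not drawn from the text), computation is energy-sustainable only if renewable supply covers demand in every time slot; any shortfall must be drawn from the grid or a battery:

```python
def grid_energy_needed(required, available):
    """Per-slot energy that must come from the grid (or battery):
    the positive part of demand minus renewable supply."""
    return [max(0.0, r - a) for r, a in zip(required, available)]

# Hypothetical hourly power over one day (watts): solar supply peaks at
# midday, while the computing load runs around the clock.
solar = [0, 0, 0, 0, 0, 5, 20, 40, 60, 70, 75, 75,
         70, 60, 40, 20, 5, 0, 0, 0, 0, 0, 0, 0]
demand = [30] * 24

shortfall = grid_energy_needed(demand, solar)
sustainable = all(s == 0 for s in shortfall)
print(f"grid energy required: {sum(shortfall)} Wh, sustainable: {sustainable}")
```

With these numbers the nighttime hours force grid draw, so the operation is not energy-sustainable; raising daytime supply alone would not fix this without storage.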
How designers communicate within design teams, and with users, suppliers, and customers, differs in formality both between industries and between different situations within one project. This paper identifies three layers of structure in design communication, each of which can be more or less formal: the design process, the interaction between participants, and the representations of design information that are constructed and used. These layers can be formal across a spectrum from explicit rules to habitual conventions. The paper draws on a range of contrasting case studies in mechanical engineering and knitwear design, as well as a larger corpus of cases comparing design domains more generally, to analyze how formality affects design interaction in different situations and process contexts. Mismatches in the understanding of formality can lead to misunderstandings, in particular across expertise boundaries and between designers and their clients or customers. Formality can be modulated in the manner of communication, the rhetoric employed, and how representations are constructed, to make communication more effective. The effort and skill put into modulating formality is greater in domains where designers work with end users, like architecture, than it is in companies where designers interact mainly with other professionals.
We discussed the major noiseless quantum communication protocols, such as teleportation, super-dense coding, their coherent versions, and entanglement distribution, in detail in Chapters 6, 7, and 8. Each of these protocols relies on the assumption that noiseless resources are available. For example, the entanglement distribution protocol assumes that a noiseless qubit channel is available to generate a noiseless ebit. This idealization allowed us to develop the main principles of the protocols without having to think about more complicated issues, but in practice, the protocols do not work as expected in the presence of noise.
Given that quantum systems suffer noise in practice, we would like to have a way to determine how well a protocol is performing. The simplest way to do so is to compare the output of an ideal protocol to the output of the actual protocol using a distance measure of the two respective output quantum states. That is, suppose that a quantum information-processing protocol should ideally output some quantum state ∣ψ⟩, but the actual output of the protocol is a quantum state with density operator ρ. Then a performance measure P(∣ψ⟩,ρ) should indicate how close the ideal output is to the actual output. Figure 9.1 depicts the comparison of an ideal protocol with another protocol that is noisy.
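As an illustration (a sketch, not the text's own development), one widely used choice for such a performance measure, when the ideal output is a pure state |ψ⟩ and the actual output is a density operator ρ, is the fidelity F(|ψ⟩, ρ) = ⟨ψ|ρ|ψ⟩, which equals 1 exactly when ρ = |ψ⟩⟨ψ|. A minimal NumPy sketch with an illustrative noise level:

```python
import numpy as np

def fidelity_pure_mixed(psi: np.ndarray, rho: np.ndarray) -> float:
    """Fidelity <psi|rho|psi> between a pure state |psi> and a density operator rho."""
    return float(np.real(np.conj(psi) @ rho @ psi))

# Ideal output: |+> = (|0> + |1>)/sqrt(2).
psi = np.array([1.0, 1.0]) / np.sqrt(2)

# Actual output: the ideal state mixed with the maximally mixed state
# (an illustrative noise model; p is a hypothetical noise parameter).
p = 0.1
rho = (1 - p) * np.outer(psi, psi.conj()) + p * np.eye(2) / 2

print(fidelity_pure_mixed(psi, rho))  # close to 1 means close to ideal
```

For this noise model the fidelity works out to (1 − p) + p/2 = 0.95, quantifying how little the noise has degraded the ideal output.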
This chapter introduces two distance measures that allow us to determine how close two quantum states are to each other.
The final chapter of our development of the quantum theory gives perhaps the most powerful viewpoint, by providing a mathematical tool, the purification theorem, which offers a completely different way of thinking about noise in quantum systems. This theorem states that our lack of information about a set of quantum states can be thought of as arising from entanglement with another system to which we do not have access. The system to which we do not have access is known as a purification. In this purified view of the quantum theory, noisy evolution arises from the interaction of a quantum system with its environment. The interaction of a quantum system with its environment leads to correlations between the quantum system and its environment, and this interaction leads to a loss of information because we cannot access the environment. The environment is thus the purification of the output of the noisy quantum channel.
In Chapter 3, we introduced the noiseless quantum theory. The noiseless quantum theory is a useful theory to learn so that we can begin to grasp an intuition for some uniquely quantum behavior, but it is an idealized model of quantum information processing. In Chapter 4, we introduced the noisy quantum theory as a generalization of the noiseless quantum theory. The noisy quantum theory can describe the behavior of imperfect quantum systems that are subject to noise.
This chapter demonstrates the power of both coherent communication from Chapter 7 and the particular protocol for entanglement-assisted classical coding from the previous chapter. Recall that coherent dense coding is a version of the dense coding protocol in which the sender and receiver perform all of its steps coherently. Since our protocol for entanglement-assisted classical coding from the previous chapter is really just a glorified dense coding protocol, the sender and receiver can perform each of its steps coherently, generating a protocol for entanglement-assisted coherent coding. Then, by exploiting the fact that two coherent bits are equivalent to a qubit and an ebit, we obtain a protocol for entanglement-assisted quantum coding that consumes far less entanglement than a naive strategy would in order to accomplish this task. We next combine this entanglement-assisted quantum coding protocol with entanglement distribution (Section 6.2.1) and obtain a protocol for which the channel's coherent information (Section 12.5) is an achievable rate for quantum communication. This sequence of steps demonstrates an alternate proof of the direct part of the quantum channel coding theorem stated in Chapter 23.
Entanglement-assisted classical communication is one generalization of super-dense coding, in which the noiseless qubit channel becomes an arbitrary noisy quantum channel while the noiseless ebits remain noiseless. Another generalization of super-dense coding is a protocol named noisy super-dense coding, in which the shared entanglement becomes a shared noisy state ρAB and the noiseless qubit channels remain noiseless.
We study the diffusion of an idea, a product, a disease, a cultural fad, or a technology among agents in a social network that exhibits segregation or homophily (the tendency of agents to associate with others similar to themselves). Individuals are distinguished by their types—e.g., race, gender, age, wealth, religion, profession—which, together with biased interaction patterns, induce heterogeneous rates of adoption or infection. We identify the conditions under which a behavior or disease diffuses and becomes persistent in the population. These conditions relate to the level of homophily in a society and the underlying proclivities of various types for adoption or infection. In particular, we show that homophily can facilitate diffusion from a small initial seed of adopters.
To find out, we measure co-voting similarity networks in the US Senate and trace individual careers over time. Standard network visualization tools fail on dense, highly clustered networks, so we used two aggregation strategies to clarify positional mobility over time. First, clusters of Senators who often vote the same way capture coalitions and allow us to measure polarization quantitatively through modularity (Newman, 2006; Waugh et al., 2009; Poole, 2012). Second, we use role-based blockmodels (White et al., 1976) to identify role positions: sets of Senators with highly similar tie patterns. Our partitioning threshold for roles is stringent, generating many roles occupied by single Senators. This combination allows us to identify movement between positions over time (specifically, we used the Kernighan–Lin improvement of a Louvain-method greedy partitioning algorithm for modularity [Blondel et al., 2008], and CONCOR with an internal similarity threshold for roles; see Supplementary materials for details).
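The modularity score referenced above (Newman, 2006) can be computed directly from an edge list and a community assignment: for each community, the fraction of edges falling inside it, minus the fraction expected if edges were placed at random given the degrees. A minimal pure-Python sketch on a toy two-bloc graph (illustrative data, not the Senate networks):

```python
from collections import Counter

def modularity(edges, community):
    """Newman modularity Q of a node partition of an undirected simple graph.
    Q = sum over communities c of [e_c/m - (d_c/(2m))^2], where e_c counts
    intra-community edges, d_c sums member degrees, and m is the edge count."""
    m = len(edges)
    intra = Counter()   # edges with both endpoints in the same community
    degree = Counter()  # total degree per community
    for u, v in edges:
        degree[community[u]] += 1
        degree[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

# Two tight voting blocs joined by a single cross-bloc tie.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
blocs = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
print(round(modularity(edges, blocs), 3))  # 0.357: clearly polarized
```

Higher Q means the partition concentrates edges within blocs, which is how modularity serves as a quantitative polarization measure here.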
This is the beginning of Network Science. The journal has been created because network science is exploding. As is typical for a field in formation, the discussions about its scope, contents, and foundations are intense. On these first few pages of the first issue of our new journal, we would like to share our own vision of the emerging science of networks.