The electroweak unification appears mainly in the neutral-current processes. The transition probabilities of all of them are predicted in terms of the weak mixing angle. Measurements of the weak mixing angle.
Theory predicts the existence of three vector bosons, W+, W− and Z0. It does not predict their masses, but precisely states how they are related to two measured quantities: the Fermi constant and the weak mixing angle. The UA1 experiment and the discovery of the vector bosons.
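The mass relations referred to above can be made concrete at lowest order. A minimal sketch, assuming the tree-level formulas and approximate illustrative values for the fine-structure constant, the Fermi constant and sin²θ_W (radiative corrections, which raise the results to the measured ~80.4 GeV and ~91.2 GeV, are deliberately ignored):

```python
import math

# Approximate inputs (illustrative values, not precision data):
alpha = 1 / 137.036        # fine-structure constant
G_F = 1.1664e-5            # Fermi constant, in GeV^-2
sin2_theta_w = 0.2312      # sin^2 of the weak mixing angle

# Tree-level relations:
#   M_W^2 = pi * alpha / (sqrt(2) * G_F * sin^2(theta_W))
#   M_Z   = M_W / cos(theta_W)
M_W = math.sqrt(math.pi * alpha / (math.sqrt(2) * G_F * sin2_theta_w))
M_Z = M_W / math.sqrt(1 - sin2_theta_w)

print(round(M_W, 1), round(M_Z, 1))  # roughly 77.5 and 88.4 GeV at tree level
```

The gap between these tree-level numbers and the measured masses is precisely what the loop-level precision tests at LEP and the Tevatron probe.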
The precision tests of the electroweak theory performed at the LEP electron–positron collider and at the Tevatron proton–antiproton collider.
The last missing element of the SM, the Higgs boson. The spontaneous symmetry breaking and the boson. The searches at LEP and at the Tevatron. The Large Hadron Collider and the ATLAS and CMS experiments. The discovery of 2012. Checking Higgs physics: measuring its mass and width, its spin and parity, its couplings to the bosons and to the fermions. All agree with the predictions of the Standard Model, thus completing the experimental verification of its basic building blocks.
As the chapter points out, standard cosmology textbooks offer a triumphant account of the Hot Big Bang model as the orthodoxy that was almost instantly accepted with the discovery of the CMB by Penzias and Wilson. Yet this historical account does not reveal the whole story of the orthodoxy’s gradual ascent and the all but forgotten, but at the time important, side roads. We argue that historical analysis must shun the triumphant partial account, yet resist dismissing current understanding as irrelevant for understanding the past. If approached in a balanced manner, the story of the CMB evidence often diverges from measured values that would have unequivocally supported the orthodoxy, thus opening the space for alternative interpretations.
Due to the low angular resolution of early observations after the discovery of the CMB, there was a possibility that the radiation’s uniformity and diffuse emission were produced by multiple unresolved, very distant sources. This possibility was one of many similar dilemmas in other areas of astronomy and physics, and it was fairly quickly resolved. In the late 1960s, Gold and Pacini suggested that the idea of unresolved sources was plausible, while Wolfe and Burbidge, working across the orthodoxy-alternatives divide, addressed the unexplained density of the radiation and its spectral shape by pointing to the possibility of unresolved sources potentially being observable at radio frequencies. However, observational tests demonstrated a lack of suitable objects radiating at the predicted frequencies. The epistemic motivation for these and related models was to introduce minimal astrophysical assumptions to explain the nature of the CMB. There was also an anticipation of yet unknown astrophysical objects lurking in the background. Finally, although quickly refuted observationally, a clear and comprehensive model by Rowan-Robinson exhibited all the key features of such models while also anticipating the key role of Active Galactic Nuclei in future research.
Sciences studying the deep past, including cosmology, reconstruct past phenomena from often meager remnants available in the present. This leads to prolonged periods of indecisiveness about discrepant theoretical explanations and models. The chapter argues that the notion of the so-called underdetermination of competing theories by evidence captures the epistemic situation that characterizes modern cosmology. The logical notion of underdetermination of competing theories, predicated on total possible evidence, is of little use for understanding and tracking the details of actual historical episodes. A historical notion of underdetermination more realistically assumes only partially equivalent evidential bases of competing theories. Protracted periods of underdetermination also call into question idealized notions of observational facts as opposed to speculative theories, as Bondi pointed out in the 1950s. Prematurely establishing certain observations as immutable facts, which in turn eliminates various theoretical accounts, impedes a field that operates at the observational limit. The chapter argues that a qualified notion of these concepts is needed to approach historical analysis of cosmology properly.
The chapter draws some epistemic lessons from forgotten alternatives, theoretical conjectures that led to them, observational refutations, and the roles they played in building orthodox consensus. Most alternatives have never been fully developed, and some examples suggest that the potential for their improvement should not be underestimated. Moreover, viable alternatives and criticisms in cosmology can arrive piecemeal, not necessarily as fully worked out models. Finally, less apparent general theoretical assumptions may always lurk in the background of any model, a reflection on which may improve it or lead to a new one. Features of old models can once again become attractive, as the field interrelates various observations and theoretical presuppositions. The chapter offers some examples of this. It then ranks the alternative explanations in terms of plausibility, persuasiveness, and possible fruitfulness of some of their features.
Narlikar and Wickramasinghe’s 1968 version of steady state theory made use of the dust grains hypothesis, while arguing that precise measurements of the CMB spectral shape deviate from the black-body shape. Critics pointed out an unrealistic consequence of the model: the CMB would exceed visible radiation by a factor of 100. Wickramasinghe’s 1975 version included a very detailed account of elongated whiskers as thermalizers that compose cosmic dust. The 1990s version of steady state constructed by a larger group of authors – the usual proponents of the steady state – introduced the discrete creation of matter (“mini bangs”) in the cellular form of observed galactic structures. The motivation was to eliminate the epistemically problematic singularity of the Hot Big Bang, explain creation of matter as an inherent feature of physical laws, and introduce strong gravitational fields of galactic nuclei as the sites of synthesis of atomic nuclei (with a detailed description of the mechanisms of nucleosynthesis). The chapter discusses a number of combinations of steady state, Population III sources, and dust grain thermalizers devised until the early 2000s.
While the problem governing Stokes flow about a single particle that is subject to an external force is ill posed in two dimensions (the ‘Stokes paradox’), the related problem of two mutually repellent particles is well posed. Motivated by self-assembly phenomena in thin viscous membranes, we consider this problem in the limit of remote particles. Such limits are typically handled in the literature using reflection techniques, which provide successive approximations to the mutual hydrodynamic interactions. Since their starting point is a single particle in an unbounded fluid domain, these techniques are futile in the present two-dimensional problem. We show how this apparent contradiction is resolved via use of singular perturbations. We obtain a two-term approximation for the velocity acquired by circular disks, considering both rigid and free particle surfaces. We also illustrate our perturbation scheme for elliptic disks, deriving a renormalised single-particle velocity. The utility of our asymptotic scheme is illustrated in the general problem of hydrodynamic interaction between a cluster of remote disks.
The bubbly shock-driven partial cavitation in an axisymmetric venturi is studied with time-resolved two-dimensional X-ray densitometry. The bubbly shock waves are characterised using the vapour fraction and pressure changes across them, their propagation velocity, and Mach number. The sharp changes in vapour fraction measured with X-ray densitometry, combined with high-frequency dynamic pressure measurements, reveal that the interaction of the pressure wave with the vapour cavity dictates the shedding dynamics. At the lowest cavitation number ($\sigma \sim 0.47$), the condensation shock front is the predominant shedding mechanism. However, as $\sigma$ increases ($\sigma \sim 0.78$), we observe an upstream-travelling pressure discontinuity that changes into a condensation shock as it approaches the venturi throat. This coincides with the increasing strength of the bubbly shock wave as it propagates upstream, manifested by the increasing velocity of the shock front and the pressure rise across it. Consequently, the Mach number of the shock front increases and surpasses the critical value of 1, favouring condensation shocks. Further, at higher $\sigma$ (${\sim }0.84\unicode{x2013}0.9$), both the re-entrant jet and pressure wave can cause cavity detachment. However, at such $\sigma$, the pressure wave likely remains subsonic, and hence cavity condensation is not readily favoured. This leads to the re-entrant jet causing the cavity detachment at higher $\sigma$. The shock front is accelerated as it propagates upstream through the variable cross-section of the venturi. This enhances its strength, favouring cavity condensation and eventual shedding. These observations explain the existence of shock fronts in an axisymmetric venturi for a large range of $\sigma$.
Networks can get big. Really big. Examples include web crawls, online social networks, and knowledge graphs. Networks from these domains can have billions of nodes and hundreds of billions of edges. Systems biology is yet another area where networks will continue to grow. As sequencing methods continue to advance, more networks and larger, denser networks will need to be analyzed. This chapter discusses some of the challenges you face and solutions you can try when scaling up to massive networks. These range from implementation details to new algorithms and strategies to reduce the burden of such big data. Various tools, such as graph databases, probabilistic data structures, and local algorithms, are at our disposal, especially if we can accept sampling effects and uncertainty.
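One concrete way to "accept sampling effects" on an edge stream too large to hold in memory is reservoir sampling: a single pass that keeps a uniform random subset of fixed size. A minimal sketch in plain Python (the function name and the synthetic edge stream are illustrative assumptions, not from any particular graph library):

```python
import random

def reservoir_sample(stream, k, seed=42):
    """Keep a uniform random sample of k items from a stream of unknown length."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)     # item i survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

# A synthetic stream of a million edges, never materialized in memory at once.
edges = ((u, u + 1) for u in range(1_000_000))
subset = reservoir_sample(edges, 100)
```

Statistics computed on `subset` (e.g. degree estimates) then trade exactness for memory, which is the bargain this chapter explores.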
Every network has a corresponding matrix representation. This is powerful. We can leverage tools from linear algebra within network science, and doing so brings great insights. The branch of graph theory concerned with such connections is called spectral graph theory. This chapter will introduce some of its central principles as we explore tools and techniques that use matrices and spectral analysis to work with network data. Many matrices appear in different cases when studying networks, including the modularity matrix, nonbacktracking matrix, and the precision matrix. But one matrix stands out—the graph Laplacian. Not only does it capture dynamical processes unfolding over a network’s structure; its spectral properties also have deep connections to that structure. We show many relationships between the Laplacian’s eigendecomposition and network problems, such as graph bisection and optimal partitioning tasks. Combining the dynamical information and the connections with partitioning also motivates spectral clustering, a powerful and successful way to find groups of data in general. This kind of technique is now at the heart of machine learning, which we’ll explore soon.
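The Laplacian-bisection connection can be shown in a few lines. A small sketch with plain NumPy (the toy graph, two triangles joined by one bridge edge, is an assumption chosen for illustration): the sign pattern of the Fiedler vector, the eigenvector of the second-smallest Laplacian eigenvalue, recovers the two natural communities.

```python
import numpy as np

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by the bridge (2,3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

D = np.diag(A.sum(axis=1))
L = D - A                        # the (combinatorial) graph Laplacian

vals, vecs = np.linalg.eigh(L)   # eigh: L is symmetric; eigenvalues ascending
fiedler = vecs[:, 1]             # eigenvector of the second-smallest eigenvalue

# Splitting nodes by the sign of the Fiedler vector bisects the graph
# into its two triangles.
partition = fiedler > 0
```

The smallest eigenvalue is (numerically) zero for a connected graph, and the second-smallest measures how easily the graph can be cut — the ideas that spectral clustering builds on.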
The fundamental practices and principles of network data are presented in this book, and the preface serves as a starting point for understanding the goals and objectives of the text. The preface explains how the practical and fundamental aspects of network data are intertwined, and how they can be used to solve real-world problems. It also gives advice on how to use the book, including the boxes featured throughout to highlight key concepts and provide practical examples of working with network data.
In this chapter, we begin our dive into the fundamentals of network data. We delve deep into the strange world of networks by considering the friendship paradox, the apparently contradictory finding that most people (nodes) have friends (neighbors) who are more popular than themselves. How can this be? Where are all these friends coming from? We introduce network thinking to resolve this paradox. As we will see, it is due to constraints induced by the network structure: pick a node at random and you are much more likely to land next to a high-degree node than on one, because high-degree nodes have many neighbors. This is unexpected, almost profoundly so: a local (node-level) view of a network will not accurately reflect the global network structure. This paradox highlights the care we need to take when thinking about networks and network data, mathematically and practically.
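The paradox can be checked directly on a toy graph. A minimal sketch, assuming a star graph (one hub with five leaves) as the illustrative example: the average, over all nodes, of each node's mean neighbor degree exceeds the plain average degree.

```python
# Star graph: hub node 0 connected to leaves 1..5.
neighbors = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}

degree = {v: len(nbrs) for v, nbrs in neighbors.items()}
mean_degree = sum(degree.values()) / len(degree)          # (5 + 5*1)/6 = 10/6

# For each node, the mean degree of its neighbors, averaged over all nodes.
mean_neighbor_degree = sum(
    sum(degree[u] for u in nbrs) / len(nbrs) for nbrs in neighbors.values()
) / len(neighbors)                                        # (1 + 5*5)/6 = 26/6

print(mean_degree, mean_neighbor_degree)  # ~1.67 vs ~4.33: the paradox
```

Every leaf's only friend is the hub, with degree 5, so five of the six nodes have friends strictly more popular than themselves — the sampling bias toward high-degree nodes in action.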
Network studies follow an explicit form, from framing questions and gathering data to processing those data and drawing conclusions. And data processing leads to new questions, leading to new data, and so forth: network studies follow a repeating lifecycle. Yet along the way, many different choices confront the researcher, who must be mindful of the choices they are making with their data and of the tools and techniques they are using to study it. In this chapter, we describe how studies of networks begin and proceed: the lifecycle of a network study.
Problems on the calculation of Berry properties in real and reciprocal space, and on physical characteristics in which Berry properties are manifested, are included.