The field of gravitational lensing has evolved from a theoretical fantasy to a robust astrophysical tool in the past few decades. In this chapter, we introduce the basics of gravitational lensing. We explain what gravitational lensing is in Section 1.1 and briefly recount the history of lensing in Section 1.2. We continue with the basic theory for lensing in Section 1.3 and work through properties of simple lens mass distributions in Section 1.4.
Introduction
A perhaps familiar example of lensing is the bending of light by optics, such as the glasses that some people wear, or binoculars that some people use for viewing wildlife or events. These two examples of optical lenses are linear in the sense that one sees only a single (perhaps magnified) image of the object of interest. In contrast, the base of a wine glass is a non-linear lens, so that through the glass one can see multiple images of the background object. Figure 1.1 is an illustration of this phenomenon. In the top-left panel, we see the original source of light that emanates from the candle. In the top-right panel, we see four images of the source as lensed by the wine glass: one in the lower left, a close pair on the lower right, and one behind the stem of the wine glass. By tilting the base of the wine glass, we change the properties of the optical lens and thus the light paths of the images we see. For example, in the bottom-right panel, there are only two images of the background source. In the case where the stem of a symmetric wine glass is aligned perfectly along the line of sight to the source, the source is lensed into a ring, as shown in the bottom-left panel.
In gravitational lensing, a massive object takes on the role of the lens, similar to the wine glass in optical lensing. According to Einstein's General Theory of Relativity, mass curves spacetime. Light, taking the shortest path in this curved spacetime, thus ‘bends’ around massive objects. Anything that has mass (e.g. planets, stars, galaxies, and clusters of galaxies) can thus act as a gravitational lens.
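For orientation, it is worth quoting the standard general-relativistic result (a textbook formula, added here rather than taken from this chapter): a light ray passing a point mass M at impact parameter ξ is deflected by the angle

α̂ = 4GM / (c2 ξ),

twice the value a naive Newtonian calculation would give. When source, point-mass lens, and observer are perfectly aligned, as with the wine-glass stem above, the image is a ring whose angular radius, the Einstein radius, is

θE = √( (4GM / c2) · Dls / (Dl Ds) ),

where Dl, Ds, and Dls denote the (angular diameter) distances to the lens, to the source, and from lens to source, respectively.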
Turbulence is generally associated with the formation of vortices in a fluid. There are many everyday situations in which one can observe turbulence: the movements of a river downstream of an obstacle, the smoke escaping from a chimney, vortical motions of the air, or the turbulence zones that we sometimes cross by plane. Since no powerful microscope or telescope is needed to study turbulence, one might conclude that it is not difficult to understand. Unfortunately, that is not the case! Although significant progress has been made since the middle of the twentieth century, several important questions remain unanswered, and it is clear that at the beginning of the twenty-first century turbulence remains a central research topic in physics.
The first theoretical building blocks of turbulence were laid from the moment physicists started to tackle the non-linearities of the hydrodynamic equations. As we will see, it is in this context that the first fundamental law of turbulence was established: the statistical law of Kolmogorov (1941). Nowadays the theoretical treatment of turbulence is partly based on numerical simulations which, accompanied by very powerful visualization tools, allow us to tackle this difficult problem from a different angle and stimulate new questions. The purpose of this chapter is to present concepts and fundamental results on fully developed turbulence. This chapter is devoted to hydrodynamics, from which some foundations of the theory of turbulence have emerged. The other two chapters in this part of the book will be devoted to MHD turbulence.
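For reference (quoting the standard published results rather than this book's own equation numbers): the law in question is Kolmogorov's exact four-fifths law for the third-order longitudinal structure function in the inertial range,

S3(ℓ) = ⟨δu(ℓ)3⟩ = −(4/5) ε ℓ,

where δu(ℓ) is the longitudinal velocity increment across a separation ℓ and ε is the mean energy dissipation rate per unit mass, together with the associated self-similar prediction for the energy spectrum, E(k) ∝ ε^(2/3) k^(−5/3).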
What is Turbulence?
Unpredictability and Turbulence
It is not easy to define turbulence quantitatively, because doing so requires a number of concepts that will be introduced over the course of this chapter.
Without going into the details, we can notice that the disordered – or chaotic – aspect seems to be the main characteristic of turbulent flows. It is often said that a system is chaotic when two points originally very close to each other in phase space separate exponentially over time. As we will see later, this definition can be extended to the case of fluids.
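As a minimal numerical illustration of this definition (my own sketch, not from the book, using the Lorenz system as a standard chaotic toy model), one can integrate two trajectories whose initial conditions differ by 1e-8 and watch their phase-space separation grow exponentially before saturating at the size of the attractor:

import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps = 0.01, 3000
a = np.array([1.0, 1.0, 1.0])       # reference trajectory
b = a + np.array([1e-8, 0.0, 0.0])  # perturbed by 1e-8 in x

for n in range(n_steps):
    a, b = rk4_step(lorenz, a, dt), rk4_step(lorenz, b, dt)
    if n % 300 == 0:
        # distance in phase space; log-linear growth signals chaos
        print(f"t = {n*dt:5.1f}   |b - a| = {np.linalg.norm(b - a):.3e}")

On a logarithmic scale the separation grows linearly with time, at a rate set by the largest Lyapunov exponent (about 0.9 for these classic parameter values), until it saturates: this exponential sensitivity to initial conditions is exactly the criterion stated above.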
This chapter discusses computational and statistical methods for fitting models to strong lens data. It centres on parametric models constrained by point-like images but includes extensions to composite models, free-form models, and extended sources. It describes how to use statistical tools, including Markov chain Monte Carlo and nested sampling, to explore the range of models that are consistent with the data.
Introduction
Strong lensing is a versatile tool for astrophysics that can be used to study the physical properties and environments of lensing galaxies, to dissect the structure of source quasars and galaxies, to constrain cosmological parameters, and much more. Other chapters in this volume review the theory of strong lensing, the status of observations, and the variety of astrophysical applications that result. The goal of this chapter is to outline methods for fitting models to strong lens data. Since modelling is required for most applications of strong lensing, understanding the strengths and weaknesses of the analysis is key for drawing robust conclusions.
When discussing methodology, we need to distinguish between point-like and extended images. Point-like images (in a lensed quasar, for example) provide constraints on the potential and its derivatives at discrete positions, which can be described with a modest number of constraint equations or a straightforward χ2 goodness of fit statistic. Established statistical methods can then be used to find the best fit and explore the range of allowed models. In this case the barrier to entry is low in the sense that fitting basic models does not require tremendous expertise, yet the potential for growth is high in the sense that advanced analyses can combine lensing with other astrophysical probes to draw conclusions that have broad reach. Extended images, by contrast, provide many more pixels of data but require many more free parameters (associated with the unknown shape of the source). Specialized methods must be used to simultaneously fit a mass model for the lens and a light model for the source. For pedagogical purposes, I focus on analysis methods that are applicable to point-like sources but include an overview of methods for modelling extended images (Section 7.5.4).
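As a deliberately simplified sketch of the point-image case (my own illustration, with invented numbers, not a pipeline from this chapter), consider a singular isothermal sphere (SIS) with source, lens, and images on a common axis. The SIS lens equation then places the two images of a source at angular position β (with |β| < θE) at θ = β ± θE, so a χ2 over the observed image positions constrains the Einstein radius θE and the source position β:

import numpy as np
from scipy.optimize import minimize

# Observed image positions (arcsec) on the axis through an SIS lens,
# with measurement uncertainties. These numbers are invented for the demo.
theta_obs = np.array([1.35, -0.82])
sigma = np.array([0.01, 0.01])

def predicted_images(theta_E, beta):
    """On-axis SIS lens equation: images at beta + theta_E and beta - theta_E."""
    return np.array([beta + theta_E, beta - theta_E])

def chi2(params):
    theta_E, beta = params
    if theta_E <= 0 or abs(beta) >= theta_E:  # two images require the source inside theta_E
        return 1e10
    resid = (theta_obs - predicted_images(theta_E, beta)) / sigma
    return np.sum(resid**2)

result = minimize(chi2, x0=[1.0, 0.2], method="Nelder-Mead")
theta_E_fit, beta_fit = result.x
print(f"best fit: theta_E = {theta_E_fit:.3f} arcsec, "
      f"beta = {beta_fit:.3f} arcsec, chi2 = {result.fun:.2e}")

With two measured positions and two parameters the model is exactly determined and the best fit drives χ2 to zero; real analyses add flux ratios, time delays, and more flexible mass models, and then map out the allowed parameter space with Markov chain Monte Carlo or nested sampling rather than a single minimization.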
In our familiar environment, matter appears in solid, liquid, or gaseous form. This triptych vision of the world was shaken in the twentieth century when astronomers revealed that most extraterrestrial matter – namely more than 99% of the ordinary matter in the Universe – is actually in an ionized state called plasma, whose physical properties differ fundamentally from those of a neutral gas. The study of this fourth state of matter developed mainly in the second half of the twentieth century and is now considered a major branch of modern physics. A decisive step was taken in 1942 when the Swedish astrophysicist Hannes Alfvén (1908–1995) proposed the theory of magnetohydrodynamics (MHD) by connecting Maxwell's electrodynamics with Navier–Stokes hydrodynamics. In this framework, plasmas are described macroscopically as a fluid and the corpuscular aspect of ions and electrons is ignored. Nowadays, MHD has emerged as the central theory for understanding the machinery of the Sun, stars, stellar winds, accretion disks around super-massive objects such as black holes (with the associated formation of extragalactic jets), interstellar clouds, and planetary magnetospheres. Fittingly, when H. Alfvén was awarded the Nobel Prize in Physics in 1970, the Committee congratulated him “for fundamental work and discoveries in magnetohydrodynamics with fruitful applications in different parts of plasma physics.”
The MHD description is not limited to astrophysical plasmas, but is also widely used in the framework of laboratory experiments or industrial developments for which plasmas and conducting liquid metals are used. In the first case, the emblematic example is certainly controlled nuclear fusion with the International Thermonuclear Experimental Reactor (ITER) in Cadarache. Indeed, the control of a magnetically confined plasma requires an understanding of the large-scale equilibrium and the solution of stability problems whose theoretical framework is basically MHD. Liquid metals are also used, for example, in experiments to investigate the mechanism of magnetic field generation – the dynamo effect – that occurs naturally in the liquid outer core of our planet via turbulent motions of a mixture of liquid metals. Most of the natural MHD flows cited above are far from thermodynamic equilibrium, with highly turbulent dynamics.
When a static equilibrium has been found (see Chapter 8), the next question to address concerns the stability of this equilibrium. Part of the answer is given by linear perturbation theory, which consists of analyzing the effect of a small (i.e. linear) perturbation of the equilibrium. If the equilibrium is stable, the perturbation behaves as a wave that propagates in the medium; if it is unstable, the perturbation grows exponentially.
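A mechanical analogue (added here for concreteness; it anticipates the potential-well picture below) makes the dichotomy explicit. For a particle of unit mass in a potential V(x), perturbed about an equilibrium x0 where V′(x0) = 0, the linearized equation of motion for the displacement x1 is

d2x1/dt2 = −V″(x0) x1,

whose solutions behave as exp(±iωt) with ω2 = V″(x0). If ω2 > 0 the perturbation oscillates, i.e. it propagates as a wave; if ω2 < 0 then ω is imaginary and the perturbation grows exponentially, with growth rate γ = √(−ω2).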
Instabilities
Classification
In Figure 9.1, we present some unstable and stable situations arising from the example of a sphere placed in an external potential field. In case 1 a sphere is at the bottom of an infinitely deep potential well. In this position the sphere can only perform oscillations around its equilibrium position. These oscillations, once generated, are damped by friction until the sphere reaches a static equilibrium position at the bottom of the potential well. This is a situation of stable equilibrium. In case 2 a sphere is placed at the top of a potential hill. In this case, a small displacement of the sphere is sufficient to move it to much lower potentials: this is an unstable situation that is often associated with a linear instability. The third case is that of a metastable state where the sphere is placed initially on a locally flat potential (a plateau): a small displacement around the initial position does not change the potential of the sphere. Finally, the last case (case 4) is that of a sphere placed in a hollow. This is an example of non-linear instability: the sphere is stable against small perturbations but becomes unstable for larger disturbances.
In plasma physics, the sphere in the previous paragraph corresponds to a particular mode of a wave and the shape of the potential can be a source of free energy. There are many energy sources in space plasmas. For example, the solar wind is a continuous source of energy for the Earth's magnetospheric plasma
which is never in a static equilibrium. The consequences of this energy input are the generation of large-scale gradients and the deformation of the distribution functions of particles at small scales.
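Returning to the four cases of Figure 9.1, a small numerical sketch (my own, with invented model potentials chosen to mimic the four situations) classifies an equilibrium at x0 = 0 from the local curvature V″(0), then probes finite displacements to detect the non-linear case:

import numpy as np

# Four invented potentials, each with an equilibrium at x0 = 0:
potentials = {
    "case 1 (well)":    lambda x: x**2,             # stable
    "case 2 (hill)":    lambda x: -x**2,            # linearly unstable
    "case 3 (plateau)": lambda x: x**4,             # flat to quadratic order
    "case 4 (hollow)":  lambda x: x**2 - 0.5*x**4,  # non-linearly unstable
}

def curvature(V, x0, h=1e-4):
    """Second derivative of V at x0 by central differences."""
    return (V(x0 + h) - 2*V(x0) + V(x0 - h)) / h**2

for name, V in potentials.items():
    c = curvature(V, 0.0)
    # Non-linear test: does some finite displacement lower the potential?
    escapes = any(V(dx) < V(0.0) - 1e-12 for dx in np.linspace(0.1, 2.0, 40))
    if c > 1e-6:
        verdict = "stable to small perturbations"
        if escapes:
            verdict += ", but unstable to large ones (non-linear instability)"
    elif c < -1e-6:
        verdict = "linearly unstable"
    else:
        verdict = "marginal at linear order (metastable)"
    print(f"{name:18s} V''(0) = {c:+.3f} -> {verdict}")

Only case 4 combines positive curvature with an escape route at finite amplitude, which is the defining signature of a non-linear instability.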
Active galactic nuclei (AGN) and quasi-stellar radio sources (quasars) are very luminous compact objects at cosmological distances. Right after their discovery in the 1960s, Sjur Refsdal realized that these properties made them ideal targets for determining the Hubble constant with a measurement of the time delay in a gravitationally lensed quasar system. The discovery of the first double quasar Q0957+561 in 1979 (Walsh, Carswell and Weymann 1979) paved the way for monitoring of multiple quasars. Chang and Refsdal (1979) immediately realized that individual stars in a lensing galaxy can act as microlenses and modify the magnification on time scales of years or months. Today, a few hundred gravitationally lensed quasars are known. Time delays have been determined with an accuracy of a few per cent in a few dozen systems. Averaged over an ensemble of lenses, the Hubble constant H0 can be determined with an uncertainty of about 5%, the error budget being usually dominated by the mass model of the lensing galaxy. Uncorrelated fluctuations in the multiple images of a lensed quasar originate from microlensing and contain information on the lensing objects as well as on the quasar luminosity profile and size. Originally, quasar microlensing studies focused on visible light; more recently, microlensing fluctuations in the broad emission lines have been analysed as well. Microlensing is a natural explanation for the flux-ratio anomaly in some of the quadruply imaged quasars: A smooth dark matter component produces an asymmetric magnification distribution between the two images in a close pair, with a relatively high probability of strong demagnification of the saddle point (negative parity) image. Comparison of the observed flux ratios with microlensing simulations even allows us to quantify the most likely dark matter fraction in such systems.
This chapter summarizes the four lectures that the author presented at the XXIV Canary Islands Winter School of Astrophysics in Puerto de La Cruz, Tenerife, which took place over November 4–16, 2012. A very brief introduction to AGN/quasars is followed by a section on the basics of (micro)lensing and the relevant length and time scales. The two main sections then present results on time delay measurements in multiple quasar systems and subsequent determinations of the Hubble constant on the one hand and on various applications of quasar microlensing on the other.
In its primitive form the Kolmogorov theory states that the four-fifths law can be generalized to higher-order structure functions according to relation (11.33) by assuming self-similarity. Experiments and numerical simulations clearly show a discrepancy from this prediction (see Figure 11.10): this is what is commonly called intermittency. Although intermittency remains a poorly understood property of turbulence, in that it still challenges any attempt at a rigorous analytical description from first principles (i.e. the Navier–Stokes equations), several models have been proposed to reproduce the statistical measurements. The simplest is probably the fractal model, also called the β model, which was introduced in 1978 (Frisch et al., 1978). As we shall see, this model is based on the idea of a fractal (incompressible) cascade and is therefore inherently a self-similar model. However, because its structure-function exponents are not those predicted by the Kolmogorov theory, one speaks of intermittency and anomalous exponents. Refined models have also been proposed, and in this chapter we will present the two most famous: the log-normal and log-Poisson models.
Intermittency
Fractals and Multi-fractals
The idea underlying the β fractal model is Richardson's cascade (Figure 11.7): at each step of the cascade the number of child vortices is chosen so that the volume (or the surface in the two-dimensional case) occupied by these eddies decreases by a factor β (0 < β < 1) compared with the volume (or surface) of the parent vortex. The factor β is a model parameter, less than one, that reflects the fact that the filling factor varies with the scale considered: the smallest eddies occupy less space than the largest.
We denote by ℓn the discrete scales of our system: the fractal cascade is characterized by jumps from the scale ℓn to the scale ℓn+1. We show an example of a fractal cascade in Figure 13.1: at each step of the cascade the elementary scale is divided by two.
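To make the comparison between these models concrete, the sketch below (mine, not from the text) tabulates the structure-function exponents ζp using the standard published formulas: K41 self-similarity, ζp = p/3; the β model, ζp = p/3 + (3 − D)(1 − p/3) with fractal dimension D; the log-normal model, ζp = p/3 + (μ/18) p (3 − p) with intermittency parameter μ; and the She–Lévêque log-Poisson model, ζp = p/9 + 2(1 − (2/3)^(p/3)). The values D = 2.8 and μ = 0.2 are typical illustrative choices, not values taken from this chapter:

import numpy as np

D, mu = 2.8, 0.2  # illustrative fractal dimension and intermittency parameter

def zeta_k41(p):        return p / 3.0
def zeta_beta(p):       return p / 3.0 + (3.0 - D) * (1.0 - p / 3.0)
def zeta_lognormal(p):  return p / 3.0 + (mu / 18.0) * p * (3.0 - p)
def zeta_logpoisson(p): return p / 9.0 + 2.0 * (1.0 - (2.0 / 3.0) ** (p / 3.0))

# Tabulate the exponents for p = 1..8
print(f"{'p':>2} {'K41':>7} {'beta':>7} {'log-N':>7} {'log-P':>7}")
for p in range(1, 9):
    print(f"{p:>2} {zeta_k41(p):7.3f} {zeta_beta(p):7.3f} "
          f"{zeta_lognormal(p):7.3f} {zeta_logpoisson(p):7.3f}")

All the models agree at p = 3, where the four-fifths law fixes ζ3 = 1 exactly, and the growing departure from the K41 line ζp = p/3 at large p is precisely the anomalous scaling, i.e. the intermittency, discussed above.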