So far we have treated the model parameters as continuous functions in three-dimensional space, e.g. ρ(r) for the density at location r. Sooner or later, however, we must represent the model by a finite set of numbers in order to perform the direct and inverse calculations. One could, of course, simply discretize the model by sampling it at a sufficiently dense set of pixels (sometimes called ‘voxels’ in 3D). This has the advantage that one does not restrict the smoothness of the model, but the price to be paid is a significant loss of computational efficiency, and this is something we can ill afford. The proper approach is to parametrize the model – taking care, however, that the imposed smoothness does not rule out viable classes of models. In addition, the model parametrization should allow the data to be fit to the error level attributed to them. Note that these two conditions are not identical! In practice, one does well to overparametrize and allow for more parameters than can be resolved. This reduces the risk that the limitations of the parameter space appreciably influence the inversion. Overparametrization poses some problems for the inverse problem, but these can be overcome; we shall deal with that in Chapter 14. If one is forced to underparametrize, effects of bias can be suppressed by using an ‘anti-leakage’ operator such as that proposed by Trampert and Snieder.
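The trade-off between dense voxel sampling and a smooth parametrization can be illustrated with a small numerical sketch (not from the text; the density profile, grid sizes, and polynomial basis below are all illustrative choices):

```python
# Contrast two finite representations of a continuous model rho(r):
# piecewise-constant voxel averages versus a small smooth basis expansion.
import numpy as np

def voxel_parametrize(model, r, n_voxels):
    """Average the model over n_voxels equal cells (piecewise-constant)."""
    edges = np.linspace(r.min(), r.max(), n_voxels + 1)
    idx = np.clip(np.searchsorted(edges, r, side="right") - 1, 0, n_voxels - 1)
    return np.array([model(r[idx == k]).mean() for k in range(n_voxels)]), idx

def basis_parametrize(model, r, degree):
    """Least-squares fit of a low-degree polynomial basis to the model."""
    return np.polyfit(r, model(r), degree)

rho = lambda r: 3.0 + 0.5 * np.sin(2 * np.pi * r)   # hypothetical smooth density profile
r = np.linspace(0.0, 1.0, 200)

voxels, idx = voxel_parametrize(rho, r, 20)         # 20 numbers
coeffs = basis_parametrize(rho, r, 5)               # 6 numbers

voxel_err = np.abs(voxels[idx] - rho(r)).max()
basis_err = np.abs(np.polyval(coeffs, r) - rho(r)).max()
print(voxel_err, basis_err)
```

For a model that is genuinely smooth, the six basis coefficients already represent it more accurately than twenty voxels; the voxel grid wins only when the true model contains structure the basis cannot express, which is exactly the smoothness restriction the text warns about.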
Until now, we have considered the Earth to be isotropic, even though minerals are anisotropic at the scale of a single crystal. For a planet to behave like an anisotropic solid these crystals must ‘line up’ and be oriented in the same direction over length scales comparable to that of the Fresnel zone of a seismic wave, i.e. over tens or hundreds of kilometres. Surprisingly, there is now ample evidence that such lattice-preferred orientation (LPO) occurs in nature and that the Earth is at least weakly anisotropic near the surface and in its inner core. Since the magnetic field influences the acoustic wave speed in the Sun's convection zone, and the magnetic field has a distinct direction, anisotropy must affect helioseismic observations as well. However, the strong magnetic field associated with sunspots couples acoustic and magneto-acoustic waves and still poses significant problems of interpretation. Away from such anomalies, the averaging of Doppler measurements over annuli (see Chapter 6) destroys any azimuthal anisotropy, and magnetic anisotropy plays no role in the interpretation of solar travel times.
The first indication that anisotropy measurably affects terrestrial seismic waves came in 1964, when Hess discovered that the horizontally travelling Pn-waves in the oceans travel with a velocity that depends on direction, indicating that the fast direction of olivine crystals is aligned in the direction of spreading.
One of the most important tasks of the seismic tomographer is to make sure he or she knows the limitations of the final model, and is able to convey that knowledge to others in a digestible form. This is not an easy problem: even within a narrow band of acceptable χ² values, there will be infinitely many models that satisfy the data at this misfit level. Yet some features will change little among those models. Such features are ‘resolved’ if the change is less than some pre-specified variance. Of course, one cannot calculate infinitely many models and usually resigns oneself to presenting one possible inversion outcome with an assessment of its resolution and uncertainty.
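The notion of an acceptable misfit band can be made concrete with a short sketch (not from the text; the data, errors, and random seed are invented for illustration):

```python
# Chi-squared misfit of predicted against observed data. A model is
# "acceptable" if chi2 is near N (the number of data), roughly within
# N +/- sqrt(2N); chi2 = 0 would mean fitting the noise exactly.
import numpy as np

def chi2(d_obs, d_pred, sigma):
    """Chi-squared misfit between observed and predicted data with errors sigma."""
    return np.sum(((d_obs - d_pred) / sigma) ** 2)

rng = np.random.default_rng(42)
N = 50
d_true = np.linspace(1.0, 2.0, N)             # hypothetical noise-free predictions
sigma = 0.1 * np.ones(N)                      # assigned observational errors
d_obs = d_true + rng.normal(0.0, 0.1, N)      # simulated observations

print(chi2(d_obs, d_true, sigma), N)          # misfit of an acceptable model vs. N
```

Any model whose χ² falls in that band is as consistent with the data as any other, which is why resolution statements must refer to the whole family of acceptable models rather than to one inversion outcome.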
To estimate resolution and uncertainty is a major task that will usually consume far more time than the actual inversion. As we shall see, all of the methods we currently know have shortcomings. Our means to present the results in an accessible form are equally poorly developed. There exists also some confusion about the meaning of damping parameters and their role in resolution and sensitivity tests. Many tomographers do not distinguish clearly between Bayesian constraints (damping parameters based on somewhat objective information) and damping parameters used to obtain a smooth model, which are inherently subjective if not based on prior information.
To find the correct geometry of a ray in realistic models of the Earth or Sun, we need to solve (2.27) numerically. This is comparatively easy in the case of layered or spherically symmetric media. On the other hand, if the seismic velocity is also a function of one or two horizontal coordinates, it may be very difficult. Fortunately, Fermat's Principle often allows us to use background models with lateral homogeneity, as I discussed in Section 2.9. In extreme cases, however, the seismic velocities may change sufficiently fast that the ray computed for a layered Earth is too far away from the ray in the true, heterogeneous Earth. In that case we must use full 3D ray tracing. In this chapter we take a look at the most promising algorithms available for both cases but warn the reader that accurate ray tracing in 3D is still an active area of research that has not yet converged to one ‘ideal’ method. In fact, all methods still have shortcomings.
The shooting method
To find the correct ray geometry between a given source and receiver location we not only need an accurate solver for the differential equations such as (2.34) and (2.32), but also a way to determine which initial condition (ray orientation at the source) satisfies the end condition (ray arriving at the receiver).
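The idea can be sketched in a few lines (a minimal illustration, not the text's algorithm: a 2D medium with constant velocity gradient v(z) = v0 + g z is assumed, the ray equations are integrated with RK4, and a bisection on the takeoff angle plays the role of the shooting iteration):

```python
import numpy as np

def trace_ray(theta0, v0=5.0, g=0.1, ds=0.5):
    """Trace a ray through v(z) = v0 + g*z (z positive down; km, km/s),
    starting at the surface with takeoff angle theta0 from the vertical.
    Returns the horizontal distance at which the ray re-emerges."""
    def rhs(z, th):
        v = v0 + g * z
        return np.sin(th), np.cos(th), g * np.sin(th) / v   # dx/ds, dz/ds, dtheta/ds

    x, z, th = 0.0, 0.0, theta0
    while True:                                  # one RK4 step per pass
        k1 = rhs(z, th)
        k2 = rhs(z + 0.5 * ds * k1[1], th + 0.5 * ds * k1[2])
        k3 = rhs(z + 0.5 * ds * k2[1], th + 0.5 * ds * k2[2])
        k4 = rhs(z + ds * k3[1], th + ds * k3[2])
        dx = ds * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        dz = ds * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        dth = ds * (k1[2] + 2 * k2[2] + 2 * k3[2] + k4[2]) / 6
        if z + dz < 0.0:                         # ray back at the surface:
            return x + dx * z / (z - (z + dz))   # interpolate x to z = 0
        x, z, th = x + dx, z + dz, th + dth

def shoot(x_target, lo=0.2, hi=1.5, n_iter=60):
    """Bisection on the takeoff angle until the ray lands at x_target.
    In this medium the distance decreases monotonically with takeoff angle."""
    for _ in range(n_iter):
        mid = 0.5 * (lo + hi)
        if trace_ray(mid) > x_target:
            lo = mid        # ray overshoots: take off closer to horizontal
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(shoot(100.0))   # takeoff angle (rad) whose ray lands 100 km away
```

For this constant-gradient medium the rays are circular arcs with surface-to-surface distance 2(v0/g)·cot θ0, so the bisection can be checked against the analytic answer; in a laterally heterogeneous model the distance is no longer monotonic in the takeoff angle and the bracketing becomes the hard part of the shooting method.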
Body wave amplitudes are influenced by three factors: the loss of energy by attenuative effects, the focusing or defocusing of rays and the local impedance (the product of density and velocity). The effect of perturbations in density or impedance can easily influence the amplitude of a wave, but it is essentially a local effect, independent of the larger-scale structure of the Earth, and we shall treat impedance variations with the method of corrections detailed in Chapter 13. Here we concentrate on focusing and attenuation.
In contrast to the numerous studies of delay times, body wave amplitude studies are rare. Early regional studies, resulting in 1D models for the attenuation beneath a particular province, were done by Solomon and Toksöz, Jordan and Sipkin, and Lay and Helmberger. Such studies made use of the fact that the low-Q asthenosphere below the source and receiver causes a major part of the body wave attenuation. Since teleseismic rays in the upper mantle travel close to the vertical, differences in amplitude observed at stations for the same event can be attributed to differences in the strength and/or thickness of the asthenosphere approximately beneath the station. This allows us to see strong differences in attenuation between different regions. Though studies of this kind continue to provide insight (e.g. Warren and Shearer), this method is not really tomographic. Sanders et al. and Ho-Liu et al. were the first to apply the methods of tomography to attenuation data.
Binary star systems serve as laboratories for the measurement of star masses through the gravitational effects of the two stars on each other. Three observational types of binaries – namely, visual, eclipsing, and spectroscopic – yield different combinations of parameters describing the binary orbit and the masses of the two stars. We consider an example of each type – respectively, α Centauri, β Persei (Algol), and φ Cygni.
Kepler described the orbits of solar planets with his three laws. They are grounded in Newton's laws. The equation of motion from Newton's second and gravitational force laws may be solved to obtain the elliptical motions described by Kepler for the case of a very large central mass, M ≫ m. The results can then be extended to the case of two arbitrary masses orbiting their common barycenter (center of mass). The result is a generalized Kepler's third law, a relation between the masses, period, and relative semimajor axis. We also obtain expressions for the system angular momentum and energy. Kepler's laws are useful in determining the orbital elements of a binary system.
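The generalized third law is simple enough to state as a worked example (an illustrative sketch, not from the text; constants are rounded):

```python
# Generalized Kepler third law: m1 + m2 = 4 pi^2 a^3 / (G P^2),
# with a the relative semimajor axis and P the orbital period.
import numpy as np

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2

def total_mass(a, P):
    """Total system mass (kg) from semimajor axis a (m) and period P (s)."""
    return 4 * np.pi**2 * a**3 / (G * P**2)

# Sanity check with the Earth-Sun system (Earth's mass is negligible):
M = total_mass(1.496e11, 3.156e7)
print(M)   # close to one solar mass, ~2e30 kg
```

Run the other way, with P and a measured for a visual binary, the same relation yields the sum of the two stellar masses.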
The generalized third law can be restated so that the measurable quantities for a star in a spectroscopic binary yield the mass function, a combination of the two masses and inclination. This provides a lower limit to the partner mass. […]
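The mass function is directly computable from the two observables of a single-lined spectroscopic binary (a sketch with invented example values; P is the period and K the radial-velocity semi-amplitude):

```python
# Spectroscopic mass function: f = P K^3 / (2 pi G) = (m2 sin i)^3 / (m1 + m2)^2.
# Since sin i <= 1 and m1 + m2 >= m2, f is a lower limit on the companion mass m2.
import numpy as np

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg

def mass_function(P, K):
    """Mass function (kg) from period P (s) and semi-amplitude K (m/s)."""
    return P * K**3 / (2 * np.pi * G)

# hypothetical single-lined binary: 10-day period, 100 km/s semi-amplitude
f = mass_function(10 * 86400.0, 100e3)
print(f / M_SUN)     # lower limit on the unseen companion, in solar masses
```

A measured f of about a solar mass thus guarantees a companion at least that massive, whatever the inclination; this is the logic behind dynamical arguments for compact companions.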
This volume is based on notes that evolved during my teaching of astrophysics classes for junior and senior physics students at MIT beginning in 1973 and thereafter on and off until 1997. The course focused on a physical, analytical approach to underlying processes in astronomy and astrophysics. In each class, I would escort the students through a mathematical and physical derivation of some process relevant to astrophysics in the hope of giving them a firm comprehension of the underlying principles.
The approach in the text is meant to be accessible to undergraduates who have completed the fundamental calculus-based physics courses in mechanics and electromagnetic theory. Additional physics courses such as quantum mechanics, thermodynamics, and statistics would be helpful but are not necessary for large parts of this text. Derivations are developed step by step – frequently with brief reviews or reminders of the basic physics being used because students often feel they do not remember the material from an earlier course. The derivations are sufficiently complete to demonstrate the key features but do not attempt to include all the special cases and finer details that might be needed for professional research.
This text presents twelve “processes” with derivations and focused, limited examples. It does not try to acquaint the student with all the associated astronomical lore. It is quite impossible in a reasonable-sized text to give both the physical derivations of fundamental processes and to include all the known applications and lore relating to them across the field of astronomy.
A normal star is basically a ball of hot gas held together by gravity. Processes that underlie the stability of a star begin when the stellar matter is still part of the diffuse interstellar medium (ISM). A portion of the ISM cannot begin condensation to higher densities unless its size exceeds the Jeans length. Its gravitational potential must be sufficient to prevent the escape of individual atoms with thermal kinetic energies.
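One common form of the Jeans length follows from balancing thermal and gravitational energy; the sketch below uses that form with invented cloud parameters (the exact numerical prefactor varies by derivation):

```python
# Jeans length, lambda_J = sqrt(pi k T / (G rho mu m_H)), for an isothermal gas
# of temperature T and mass density rho = n mu m_H.
import numpy as np

K_B = 1.381e-23    # Boltzmann constant, J/K
G = 6.674e-11      # m^3 kg^-1 s^-2
M_H = 1.67e-27     # hydrogen-atom mass, kg
PC = 3.086e16      # metres per parsec

def jeans_length(T, n, mu=2.0):
    """Jeans length (m) for temperature T (K), number density n (m^-3),
    and mean molecular weight mu (mu = 2 for molecular hydrogen)."""
    rho = n * mu * M_H
    return np.sqrt(np.pi * K_B * T / (G * rho * mu * M_H))

# hypothetical cold molecular cloud: T = 10 K, n = 1e8 m^-3 (100 cm^-3)
lam = jeans_length(10.0, 1e8)
print(lam / PC)    # a few parsecs
```

Regions smaller than this cannot condense: the thermal motions of individual atoms exceed the escape speed set by the region's own gravity.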
A star is in hydrostatic equilibrium when the inward pull of gravity on each mass element of the star is balanced by the upward force due to the pressure gradient at the location of the element. The potential and kinetic energies of the mass elements summed over an entire star in hydrostatic equilibrium yield the virial theorem. The theorem states that the sum of twice the kinetic energy and the (negative) potential energy equals zero. Its application to clusters of galaxies indicates they are bound by a preponderance of dark matter.
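The dark-matter argument rests on a virial mass estimate; a minimal sketch (the factor of 5, the dispersion, and the radius below are illustrative, as the prefactor depends on the assumed mass profile):

```python
# Virial mass estimate from 2K + U = 0: for a roughly uniform, bound system,
# M ~ 5 sigma_r^2 R / G, with sigma_r the line-of-sight velocity dispersion.
import numpy as np

G = 6.674e-11       # m^3 kg^-1 s^-2
M_SUN = 1.989e30    # kg
MPC = 3.086e22      # metres per megaparsec

def virial_mass(sigma_r, R):
    """Order-of-magnitude virial mass (kg) from dispersion sigma_r (m/s)
    and radius R (m)."""
    return 5.0 * sigma_r**2 * R / G

# hypothetical rich cluster: sigma_r = 1000 km/s, R = 3 Mpc
M = virial_mass(1.0e6, 3.0 * MPC)
print(M / M_SUN)    # of order 1e15 solar masses
```

A mass of this order far exceeds what the visible galaxies supply, which is the classic virial argument for dark matter in clusters.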
Several time constants characterize a star. A star would radiate away its current thermal content at its current luminosity in the Kelvin–Helmholtz or thermal time. In the dynamical time, a mass element at radius r without pressure support would fall inward a distance r under the influence of the (fixed) gravitational force at r. A photon will travel from the center of the star to its surface through many random scatters in the diffusion time. […]
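The three time scales can be evaluated for the Sun in a few lines (an order-of-magnitude sketch, not from the text; each expression drops factors of order unity, and the photon mean free path of ~1 mm is an assumed round number):

```python
# Characteristic stellar time scales, evaluated with solar values.
import numpy as np

G = 6.674e-11    # m^3 kg^-1 s^-2
C = 3.0e8        # speed of light, m/s
YR = 3.156e7     # seconds per year

M, R, L = 1.989e30, 6.96e8, 3.846e26    # solar mass (kg), radius (m), luminosity (W)

t_kh = G * M**2 / (R * L)          # Kelvin-Helmholtz (thermal) time: thermal content / L
t_dyn = np.sqrt(R**3 / (G * M))    # dynamical (free-fall) time
ell = 1.0e-3                       # assumed photon mean free path, ~1 mm
t_diff = R**2 / (ell * C)          # random-walk photon diffusion time, ~ (R/ell)^2 steps

print(t_kh / YR, t_dyn, t_diff / YR)
```

The hierarchy is striking: minutes for the dynamical time, tens of thousands of years for photon diffusion, and tens of millions of years for the thermal time, which is why each process can be treated as quasi-static with respect to the faster ones.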
A hot plasma of ionized atoms emits thermal bremsstrahlung radiation through the Coulomb collisions of the electrons and ions. The electrons experience large accelerations in the collisions and thus efficiently radiate photons, which escape the plasma if it is optically thin. The energy Q radiated in a single collision is obtained from Larmor's formula. The characteristic frequency of the emitted radiation is estimated from the duration of the collision, which, in turn, depends on the electron speed and its impact parameter (projected distance of closest approach to the ion). Multiplication of Q by the electron flux and ion density and integration over the range of speeds in the Maxwell–Boltzmann distribution yield the volume emissivity jν(ν) (W m−3 Hz−1), the power emitted from unit volume into unit frequency interval at frequency ν as a function of frequency. It is proportional to the product of the electron and ion densities and is approximately exponential with frequency. A slowly varying Gaunt factor modifies the spectral shape somewhat. Most of the power is emitted at frequencies near that specified by hν ≈ kT.
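The shape and scaling of the emissivity can be sketched numerically (an illustrative version of the common approximate formula, with the Gaunt factor set to a constant and invented plasma parameters):

```python
# Approximate thermal bremsstrahlung emissivity:
# j_nu ~ 6.8e-51 Z^2 n_e n_i T^(-1/2) exp(-h nu / k T) * gaunt   [W m^-3 Hz^-1]
import numpy as np

K_B = 1.381e-23    # J/K
H = 6.626e-34      # J s

def j_nu(nu, T, n_e, n_i, Z=1, gaunt=1.0):
    """Volume emissivity (W m^-3 Hz^-1) at frequency nu (Hz) for a plasma of
    temperature T (K) with electron/ion densities n_e, n_i (m^-3)."""
    return 6.8e-51 * Z**2 * n_e * n_i * T**-0.5 * np.exp(-H * nu / (K_B * T)) * gaunt

# hypothetical cluster plasma: T = 1e8 K, n_e = n_i = 1e3 m^-3
T, n = 1e8, 1e3
nu = np.linspace(1e12, 40 * K_B * T / H, 20000)   # flat part plus exponential tail
dnu = nu[1] - nu[0]
L_per_vol = np.sum(j_nu(nu, T, n, n)) * dnu       # W m^-3, integrated over frequency
print(L_per_vol)
```

Because the spectrum is flat up to the exponential cutoff at hν ≈ kT, the frequency integral is essentially the flat level times kT/h, giving a total emissivity that scales as n_e n_i T^(1/2).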
Integration of the volume emissivity over all frequencies and over the volume of a plasma cloud results in the luminosity of the cloud. […]
Albert Einstein postulated that the speed of light has the same value in any inertial frame of reference or, equivalently, that there is no preferred frame of reference. The consequence of this postulate is the special theory of relativity, which yields nonintuitive relations between measurements in different inertial frames of reference. We demonstrate the Lorentz transformations for space and time (x, t) and the compact and invariant four-vector formulation. From this, the four-vectors for momentum-energy (p, U) and wave propagation-frequency (k, ω) are formed, and these in turn yield the associated Lorentz transformations. The transformations for electric and magnetic field vectors are also presented. Examples of each type of transformation are given. The relativistic Doppler shift of wavelength or frequency is derived from time dilation and also directly from the k, ω transformations. The latter yield the transformation of radiation direction (aberration) from one inertial frame to another. Stellar aberration explains the displaced celestial positions of stars due to the Earth's motion about the Sun.
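The Doppler and aberration relations are compact enough to verify numerically (a sketch using the standard formulas; the example speed is arbitrary):

```python
# Relativistic Doppler factor and aberration for a source moving at beta*c.
import numpy as np

def doppler_factor(beta, theta):
    """nu_obs / nu_emit for a source at angle theta between its velocity
    and the line of sight (theta = 0: approaching head-on)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

def aberration(beta, cos_theta):
    """cos(theta) of a light ray in the frame where the emitter moves at
    beta*c, given cos(theta) in the emitter's rest frame."""
    return (cos_theta + beta) / (1.0 + beta * cos_theta)

# Head-on approach reduces to sqrt((1+beta)/(1-beta)); at 90 degrees only
# time dilation remains (the transverse redshift, 1/gamma).
print(doppler_factor(0.9, 0.0), doppler_factor(0.9, np.pi / 2))
```

The aberration formula also shows the beaming effect: a ray emitted sideways in the source frame (cos θ = 0) appears at cos θ′ = β in the observer frame, so half the radiation is squeezed into a forward cone of half-angle ~1/γ.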
Astrophysical jets often emerge from objects that are accreting matter such as protostars, stellar black holes in binary systems, and active galactic nuclei (AGN) of galaxies. With our special-relativity tools, we study three aspects of the jet phenomenon: the beaming of radiation from objects traveling near the speed of light, the associated Doppler boosting of intensity, and superluminal motion. […]
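Superluminal motion in particular follows from pure geometry; a short sketch (the jet speed below is an arbitrary illustrative value):

```python
# Apparent transverse speed of a jet blob moving at beta*c at angle theta
# to the line of sight: beta_app = beta sin(theta) / (1 - beta cos(theta)).
# The maximum, at cos(theta) = beta, is beta*gamma and can exceed 1.
import numpy as np

def beta_apparent(beta, theta):
    """Apparent transverse speed in units of c."""
    return beta * np.sin(theta) / (1.0 - beta * np.cos(theta))

beta = 0.98
gamma = 1.0 / np.sqrt(1.0 - beta**2)
theta = np.linspace(1e-3, np.pi / 2, 100000)
b_app = beta_apparent(beta, theta)
print(b_app.max(), beta * gamma)   # maximum apparent speed, close to beta*gamma
```

No physical speed exceeds c; the blob nearly catches up with its own light, so successive images arrive in rapid succession and the inferred transverse speed is inflated by the factor 1/(1 − β cos θ).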