Photodissociation regions (PDRs) are regions where FUV photons dominate the energy balance or chemistry of the gas. In this chapter, we will examine the physical characteristics of dense and luminous PDRs near bright O and B stars, starting with the ionization balance (Section 9.2) and the energy balance (Section 9.3) of the gas in PDRs. We follow that with a discussion of the dust temperature in PDRs. The chemistry of PDRs (Section 9.5) is very similar to that of diffuse clouds (cf. Section 8.7), except for possible time-dependent effects. We have, then, all the ingredients to understand the structure of PDRs (Section 9.6). The remainder of this chapter focusses on the analysis and interpretation of observations of PDRs. We will start this off with back-of-the-envelope estimates of the incident FUV field, density, temperature, and mass based on a few key observations (Section 9.7). More thorough analysis tools are discussed in Section 9.8. These different techniques for estimating the physical conditions in PDRs are then compared, based on a case study of the Orion Bar (Section 9.9). Section 9.10 contrasts the physical conditions derived for various well-known PDRs. Finally, we examine the H2 IR fluorescence spectrum of PDRs in Section 9.11.
Figure 9.1 illustrates the structure of a PDR. FUV photons penetrate a molecular cloud, ionizing, dissociating, and heating the gas.
The Milky Way is largely empty. Stars are separated by some 2 pc in the solar neighborhood (ρ⋆ = 6 × 10⁻² pc⁻³). If we take our Solar System as a measure, with a heliosphere radius of ≃235 AU, stars and their associated planetary systems fill about 3 × 10⁻¹⁰ of the available space. This book deals with what is in between these stars: the interstellar medium (ISM). The ISM is filled with a tenuous hydrogen and helium gas and a sprinkling of heavier atoms. These elements can be neutral, ionized, or in molecular form and in the gas phase or in the solid state. This gas and dust is visibly present in a variety of distinct objects: HII regions, reflection nebulae, dark clouds, and supernova remnants. In a more general sense, the gas is organized in phases – cold molecular clouds, cool HI clouds, warm intercloud gas, and hot coronal gas – of which those objects are highly visible manifestations. This gas and dust is heated by stellar photons, originating from many stars (the so-called average interstellar radiation field), cosmic rays (energetic [∼GeV] protons), and X-rays (emitted by local, galactic, and extragalactic hot gas). This gas and dust cools through a variety of line and continuum processes and the spectrum will depend on the local physical conditions. Surveys in different wavelength regions therefore probe different components of the ISM. This first chapter presents an inventory of the ISM with an emphasis on prominent objects in the ISM and the global structure of the ISM.
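As a quick numerical check of the figures quoted above, the short sketch below (plain Python; the 235 AU heliosphere radius and the stellar density are simply the values given in the text) reproduces the ~2 pc mean separation and the ~3 × 10⁻¹⁰ filling factor.

```python
import math

AU_PER_PC = 206264.8            # astronomical units per parsec

rho_star = 6e-2                 # stellar density in the solar neighborhood [pc^-3]
r_helio = 235.0 / AU_PER_PC     # heliosphere radius: 235 AU expressed in pc

# Mean separation between stars: roughly one star per (1 / rho_star) pc^3
mean_separation = rho_star ** (-1.0 / 3.0)

# Fraction of space occupied by stars and their planetary systems,
# taking the heliosphere volume as representative of each system
v_system = (4.0 / 3.0) * math.pi * r_helio ** 3
filling_factor = rho_star * v_system

print(f"mean separation ~ {mean_separation:.1f} pc")   # ~2.6 pc
print(f"filling factor  ~ {filling_factor:.1e}")       # ~3.7e-10
```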
We now begin three chapters which are primarily aimed at a discussion of the main concepts of frequentist statistical inference. This is currently the prevailing approach to much of scientific inference, so a student should understand the main ideas in order to appreciate the current literature and the strengths and limitations of this approach.
In this chapter, we introduce the concept of a random variable and discuss some general properties of probability distributions before focusing on a selection of important sampling distributions and their relationships. We also introduce the very important Central Limit Theorem in Section 5.9 and examine it from a Bayesian viewpoint in Section 5.10. The chapter concludes with the topic of how to generate pseudo-random numbers drawn from any desired distribution, which plays an important role in Monte Carlo simulations.
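As a hedged illustration of that last topic (not the book's own example), the sketch below uses inverse transform sampling, one standard way of turning uniform pseudo-random numbers into draws from a desired distribution; the exponential target and its rate parameter are hypothetical choices.

```python
import numpy as np

def sample_exponential(rate, size, rng=None):
    """Inverse transform sampling: if U ~ Uniform(0, 1), then
    -ln(1 - U) / rate follows an exponential distribution with that rate."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    return -np.log(1.0 - u) / rate

samples = sample_exponential(rate=2.0, size=100_000, rng=np.random.default_rng(1))
print(samples.mean())   # should be close to 1 / rate = 0.5
```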
In Chapter 6, we address the question of what a statistic is and give some common important examples. We also consider the meaning of a frequentist confidence interval for expressing the uncertainty in parameter values. The reader should be aware that the study of different statistics is a very large field which we only touch on in this book. Some other topics normally covered in a statistics course, such as the fitting of models to data, are treated from a Bayesian viewpoint in later chapters.
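As a minimal sketch of the confidence-interval idea (an illustration with simulated, hypothetical data rather than anything from the text), the snippet below computes a 95% frequentist confidence interval for a population mean using the Student t distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=30)    # simulated measurements

mean = data.mean()
sem = data.std(ddof=1) / np.sqrt(data.size)        # standard error of the mean

# 95% confidence interval for the population mean, based on the Student t
# distribution with n - 1 degrees of freedom
lo, hi = stats.t.interval(0.95, data.size - 1, loc=mean, scale=sem)
print(f"95% confidence interval for the mean: ({lo:.2f}, {hi:.2f})")
```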
Finally, Chapter 7 concludes our brief summary of frequentist statistical inference with the important topic of frequentist hypothesis testing and discusses an important limitation known as the optional stopping problem.
One of the main objectives in science is to infer the truth of one or more hypotheses about how some aspect of nature works. Because we are always in a state of incomplete information, we can never prove that any hypothesis (theory) is true. In Bayesian inference, we can compute the probabilities of two or more competing hypotheses directly for our given state of knowledge.
In this chapter, we will explore the frequentist approach to hypothesis testing, which is considerably less direct. It involves considering each hypothesis individually and deciding whether to (a) reject the hypothesis, or (b) fail to reject the hypothesis, on the basis of the computed value of a suitably chosen statistic. This is a very large subject, and we will give only a limited selection of examples in an attempt to convey the main ideas. The decision on whether to reject a hypothesis is commonly based on a quantity called a P-value. At the end of the chapter we discuss a serious problem with frequentist hypothesis testing, called the “optional stopping problem.”
Basic idea
In hypothesis testing we are interested in making inferences about the truth of some hypothesis. Two examples of hypotheses which we analyze below are:
The radio emission from a particular galaxy is constant.
The mean concentration of a particular toxin in river sediment is the same at two locations (a short code sketch of a test of this hypothesis follows the list).
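As a hedged sketch of that second example (the sample sizes and concentrations below are entirely hypothetical, not data from the text), a two-sample t test yields the P-value on which the reject/fail-to-reject decision is based.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical toxin concentrations (ppm) in river sediment at two locations
site_a = rng.normal(loc=5.0, scale=1.0, size=12)
site_b = rng.normal(loc=5.8, scale=1.0, size=12)

# Null hypothesis: the two locations have the same mean concentration.
# The t statistic and its P-value quantify how surprising the observed
# difference in sample means would be if the null hypothesis were true.
t_stat, p_value = stats.ttest_ind(site_a, site_b)
print(f"t = {t_stat:.2f}, P-value = {p_value:.3f}")

# A common (though arbitrary) convention is to reject the null if P < 0.05
print("reject" if p_value < 0.05 else "fail to reject")
```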
Science is all about identifying and understanding organized structures or patterns in nature. In this regard, periodic patterns have proven especially important. Nowhere is this more evident than in the field of astronomy. Periodic phenomena allow us to determine fundamental properties like mass and distance, probe the interior of stars through the new techniques of stellar seismology, detect new planets, and discover exotic states of matter like neutron stars and black holes. Clearly, any fundamental advance in our ability to detect periodic phenomena will have profound consequences for our ability to unlock nature's secrets. The purpose of this chapter is to describe advances that have come about through the application of Bayesian probability theory, and to provide illustrations of its power through several examples in physics and astronomy. We also examine how non-uniform sampling can greatly reduce some signal aliasing problems.
New insights on the periodogram
Arthur Schuster introduced the periodogram in 1905 as a means for detecting a periodicity and estimating its frequency. If the data are evenly spaced, the periodogram is determined by the Discrete Fourier Transform (DFT), thus justifying the use of the DFT for such detection and measurement problems. In 1965, Cooley and Tukey introduced the Fast Fourier Transform (FFT), a very efficient method of implementing the DFT that removes certain redundancies in the computation and greatly speeds up the calculation.
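As a minimal sketch (the signal frequency, noise level, and sample size are hypothetical, and this is not an analysis from the text), the Schuster periodogram of evenly spaced data can be computed from the DFT, here evaluated with NumPy's FFT routines.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical evenly sampled time series: a sinusoid of frequency f0 plus noise
n, dt, f0 = 256, 1.0, 0.11
t = np.arange(n) * dt
y = np.cos(2 * np.pi * f0 * t) + 0.5 * rng.normal(size=n)

# Schuster periodogram: squared modulus of the DFT divided by the number of
# samples, evaluated at the non-negative Fourier frequencies via the FFT
dft = np.fft.rfft(y)
freqs = np.fft.rfftfreq(n, d=dt)
periodogram = np.abs(dft) ** 2 / n

peak = freqs[1:][np.argmax(periodogram[1:])]   # skip the zero-frequency term
print(f"periodogram peak at f = {peak:.3f} (true frequency {f0})")
```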
This book is primarily concerned with the philosophy and practice of inferring the laws of nature from experimental data and prior information. The role of inference in the larger framework of the scientific method is illustrated in Figure 1.1.
In this simple model, the scientific method is depicted as a loop which is entered through initial observations of nature, followed by the construction of testable hypotheses or theories as to the working of nature, which give rise to the prediction of other properties to be tested by further experimentation or observation. The new data lead to the refinement of our current theories, and/or development of new theories, and the process continues.
The role of deductive inference in this process, especially with regard to deriving the testable predictions of a theory, has long been recognized. Of course, any theory makes certain assumptions about nature that are taken to be true, and these assumptions form the axioms of the deductive inference process. The terms deductive inference and deductive reasoning are considered equivalent in this book. For example, Einstein's Special Theory of Relativity rests on two important assumptions, namely that the vacuum speed of light is a constant in all inertial reference frames and that the laws of nature have the same form in all inertial frames.
In the last chapter, we discussed a variety of approaches to estimate the most probable set of parameters for nonlinear models. The primary rationale for these approaches is that they circumvent the need to carry out the multi-dimensional integrals required in a full Bayesian computation of the desired marginal posteriors. This chapter provides an introduction to a very efficient mathematical tool, one that has been receiving a lot of attention recently, for estimating the desired posterior distributions of high-dimensional models. The method is known as Markov Chain Monte Carlo (MCMC). MCMC was first introduced in the early 1950s by statistical physicists (N. Metropolis, A. Rosenbluth, M. Rosenbluth, A. Teller, and E. Teller) as a method for the simulation of simple fluids. Monte Carlo methods are now widely employed in all areas of science and economics to simulate complex systems and to evaluate integrals in many dimensions. Among all Monte Carlo methods, MCMC provides an enormous scope for dealing with very complicated systems. In this chapter we will focus on its use in evaluating the multi-dimensional integrals required in a Bayesian analysis of models with many parameters.
The chapter starts with an introduction to Monte Carlo integration and examines how a Markov chain, implemented by the Metropolis–Hastings algorithm, can be employed to concentrate samples in regions of significant probability. Next, tempering improvements are investigated that prevent the MCMC from getting stuck in the region of a local peak in the probability distribution.
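As an illustrative sketch rather than the book's implementation (the target density, proposal width, and burn-in length are hypothetical choices), random-walk Metropolis, the simplest special case of Metropolis–Hastings, can be written in a few lines:

```python
import numpy as np

def log_target(x):
    """Log of an unnormalized target density: a hypothetical mixture of two
    Gaussians, chosen so the chain must move between separated peaks."""
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def metropolis(log_p, x0, n_steps, step_size, rng):
    """Random-walk Metropolis: propose x' = x + N(0, step_size^2) and accept
    with probability min(1, p(x') / p(x)); otherwise keep the current point."""
    chain = np.empty(n_steps)
    x, log_px = x0, log_p(x0)
    for i in range(n_steps):
        x_new = x + step_size * rng.normal()
        log_px_new = log_p(x_new)
        if np.log(rng.uniform()) < log_px_new - log_px:
            x, log_px = x_new, log_px_new
        chain[i] = x
    return chain

rng = np.random.default_rng(7)
samples = metropolis(log_target, x0=0.0, n_steps=50_000, step_size=1.0, rng=rng)
print("mean of samples after burn-in:", samples[5_000:].mean())   # near 0 by symmetry
```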
This chapter can be thought of as an extension of the material covered in Chapter 4 which was concerned with how to encode a given state of knowledge into a probability distribution suitable for use in Bayes' theorem. However, sometimes the information is of a form that does not simply enable us to evaluate a unique probability distribution p(Y|I). For example, suppose our prior information expresses the following constraint:
I ≡ “the mean value of cos y = 0.6.”
This information alone does not determine a unique p(Y|I), but we can use I to test whether any proposed probability distribution is acceptable. For this reason, we call this type of constraint information testable information. In contrast, consider the following prior information:
I₁ ≡ “the mean value of cos y is probably > 0.6.”
This latter information, although clearly relevant to inference about Y, is too vague to be testable because of the qualifier “probably.”
Jaynes (1957) demonstrated how to combine testable information with Claude Shannon's entropy measure of the uncertainty of a probability distribution to arrive at a unique probability distribution. This principle has become known as the maximum entropy principle or simply MaxEnt.
We will first investigate how to measure the uncertainty of a probability distribution and then find how it is related to the entropy of the distribution. We will then examine three simple constraint problems and derive their corresponding probability distributions.
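As a numerical sketch of the cos y constraint above (assuming, for illustration only, that y ranges over [0, 2π); the book treats such constraint problems analytically with Lagrange multipliers), maximizing the entropy subject to normalization and ⟨cos y⟩ = 0.6 gives p(y) ∝ exp(λ cos y), and the multiplier λ can be found numerically:

```python
import numpy as np
from scipy import integrate, optimize

def mean_cos(lam):
    """<cos y> under the MaxEnt form p(y) propto exp(lam * cos(y)) on [0, 2*pi)."""
    z, _ = integrate.quad(lambda y: np.exp(lam * np.cos(y)), 0.0, 2.0 * np.pi)
    m, _ = integrate.quad(lambda y: np.cos(y) * np.exp(lam * np.cos(y)), 0.0, 2.0 * np.pi)
    return m / z

# Solve the constraint equation <cos y> = 0.6 for the Lagrange multiplier lambda
lam = optimize.brentq(lambda l: mean_cos(l) - 0.6, 0.0, 10.0)
print(f"lambda = {lam:.3f}, check <cos y> = {mean_cos(lam):.3f}")
```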