Time is that great gift of nature which prevents everything from happening at once.
–Clarence J. Overbeck
The key position played by the field of atomic physics in the development of modern quantum theory is owed in large part to the high precision with which the energy-level structure of the atom can be measured by the methods of high wavelength-resolution optical spectroscopy. Wavelength and frequency measurement accuracies that exceed parts in 10^8 are not only obtainable, but are required if the database is to be useful for diagnostic applications. By contrast, the measurement accuracies that can be obtained for other types of atomic structure properties are much lower. For lifetimes, transition probabilities, and oscillator strengths, extraordinary effort is required to achieve accuracies better than one percent. For cross-section measurements, one must often be content with order-of-magnitude determinations, but the range of possible values makes reliable measurements even to this accuracy valuable. While great strides have been made in ab initio theoretical methods, the attainable measurement accuracies for these quantities still exceed the general reliability of calculations for cases involving complex many-electron atoms. Moreover, the accurate specification of wavelength and energy-level data does not ensure correct predictions of transition probabilities and lifetimes.
Measurements of lifetimes are particularly important, since they provide absolute rate values necessary to normalize relative transition probabilities obtained by time-integrated techniques. The availability of a comprehensive database for atomic transition probability rates has a significant impact on progress in other fields of science and technology, e.g., in fundamental physics and precise measurements; in the generation of coherent light; in atomic analysis in complex environments; in solar and astrophysics; and in plasma diagnostics.
In a complex atom or ion, the only rigorous constraints that are imposed on radiative transitions between levels are those of conservation of energy, conservation of angular momentum, and conservation of parity. For electric dipole transitions, conservation of parity leads to “Laporte's rule,” which states that the parity of the atom must change because the E1 photon carries away one unit of parity. For a single out-of-shell electron, the parity is given (nonrelativistically) by (−1)l and the angular momentum is given by j = l ±½. Thus it is not possible for two different levels with the same parity to also have the same total angular momentum. For systems with multiple out-of-shell electrons it is possible for two levels with the same parity to have the same total angular momentum, and the eigenvectors of these levels can (and in real cases always do) contain an admixture of other LS quantum numbers. In the simplest LS formulation (nonrelativistic E1), this mixing is neglected, and the spectrum consists of levels of noninteracting multiplicities (singlets and triplets for two valence electrons, doublets and quartets for three-valence-electron systems, etc.). If the exact LS-coupling assumption is relaxed, the individual multiplicity amplitudes in the admixtures lead to E1-allowed “intersystem” or “intercombination” (relativistic E1) transitions between the levels despite their nominal LS labels.
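The multiplicity counting above can be made concrete with a short script (a sketch of the spin-coupling arithmetic only, not drawn from any spectroscopy code): for n valence electrons of spin 1/2, the total spin S runs from n/2 down to 0 (n even) or 1/2 (n odd) in integer steps, and each value contributes a multiplicity 2S + 1.

```python
from fractions import Fraction

def spin_multiplicities(n_electrons):
    """Possible multiplicities 2S+1 for n valence electrons of spin 1/2.
    S runs from n/2 down to 0 (n even) or 1/2 (n odd) in integer steps."""
    s = Fraction(n_electrons % 2, 2)   # smallest total spin
    s_max = Fraction(n_electrons, 2)   # largest total spin
    mults = []
    while s <= s_max:
        mults.append(int(2 * s + 1))
        s += 1
    return mults

print(spin_multiplicities(2))  # [1, 3]: singlets and triplets
print(spin_multiplicities(3))  # [2, 4]: doublets and quartets
```

The exact-LS picture of noninteracting multiplicities corresponds to these families remaining unmixed; intersystem transitions arise once amplitudes from different multiplicities appear in the same eigenvector.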
Selection rules
The fact that an E1 photon carries away one unit of angular momentum and odd parity imposes the selection rules ΔJ = 0, ±1 (but not J = 0 → J = 0) and ΔMJ = 0, ±1 (MJ = 0 → MJ = 0 forbidden when ΔJ = 0), together with a change of parity of the atom.
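These rules lend themselves to a mechanical check. The sketch below is illustrative only (the function name and the encoding of parity as ±1 are our own choices); it returns whether an E1 transition between two levels is allowed:

```python
def e1_allowed(j1, parity1, j2, parity2, mj1=None, mj2=None):
    """E1 selection rules: parity change; dJ = 0, +-1 (not 0 -> 0);
    and, if magnetic quantum numbers are given, dMJ = 0, +-1
    with MJ 0 -> 0 forbidden when dJ = 0."""
    if parity1 == parity2:
        return False                       # Laporte's rule: parity must change
    if abs(j1 - j2) > 1 or (j1 == 0 and j2 == 0):
        return False                       # dJ = 0, +-1 but no 0 -> 0
    if mj1 is not None and mj2 is not None:
        if abs(mj1 - mj2) > 1:
            return False
        if mj1 == 0 and mj2 == 0 and j1 == j2:
            return False                   # MJ 0 -> 0 forbidden when dJ = 0
    return True

print(e1_allowed(1, +1, 0, -1))  # True
print(e1_allowed(0, +1, 0, -1))  # False: J = 0 -> 0 is forbidden
```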
Let there be light. Take the rest of the week off.
Atomic energy levels deduced from optical spectra comprise one of the most precisely known sets of physical measurements that exist. However, the relative oscillator strengths deduced from the relative intensities of these spectral lines are known much less precisely. Fortunately, time-dependent methods for the study of the dynamics of the emission process exist (and are being developed) that permit transition probability rates, oscillator strengths, and reaction rates to be determined with ever-increasing precision.
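As a deliberately simplified illustration of such a time-dependent determination, the sketch below fits an exponential decay curve by linear least squares on the logarithm of the counts. The sample lifetime and data are invented for the example; a real measurement would require proper weighting of the counting statistics.

```python
import math

def fit_lifetime(times, counts):
    """Least-squares fit of ln(counts) = ln(N0) - t/tau; returns tau.
    Assumes high-count data so a log-linear fit is adequate."""
    ys = [math.log(c) for c in counts]
    n = len(times)
    sx, sy = sum(times), sum(ys)
    sxx = sum(t * t for t in times)
    sxy = sum(t * y for t, y in zip(times, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return -1.0 / slope

tau_true = 16.2                                   # ns, an invented value
ts = [2.0 * i for i in range(20)]                 # sample times in ns
ns = [1e6 * math.exp(-t / tau_true) for t in ts]  # noise-free decay curve
print(fit_lifetime(ts, ns))                       # recovers ~16.2
```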
In most elementary quantum mechanics textbooks, the treatment of the emission of radiation is the least satisfactory part of the book. Whereas the relationships among the spatial overlap integrals between state vectors for various operators (such as the electric dipole moment) can be developed in a very elegant ab initio manner, the connection of these matrix elements to the time dependence of the system often seems driven by a posteriori assumptions that are extended beyond their justifiable range of applicability. As Fermi observed in stating his Golden Rule, “the transition probability and energy perturbation can be calculated with the help of perturbation theory (i.e., there is no better way known).” The Weisskopf–Wigner approximation offers a scheme for making precise calculations, but does not provide the rigor that has characterized so many other areas of quantum mechanics.
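For reference, the rule Fermi is describing takes the standard textbook form (quoted here in general notation, not from this book):

```latex
\Gamma_{i\to f} \;=\; \frac{2\pi}{\hbar}\,\bigl|\langle f\,|\,H'\,|\,i\rangle\bigr|^{2}\,\rho(E_f),
```

where H′ is the perturbing Hamiltonian and ρ(E_f) is the density of final states; the a posteriori character noted above enters through the assumptions needed to justify the density-of-states factor and the long-time limit.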
The main purpose of this book is to provide algorithms for direct N-body simulations, based on personal experience over many years. A brief description of the early history is included for general interest. We concentrate on developments relating to collisional direct integration methods but exclude three- and four-body scattering, which will be discussed in a separate chapter. In the subsequent section, we introduce some basic concepts which help to understand the behaviour of self-gravitating systems. The topics covered include two-body relaxation, violent relaxation, equipartition of kinetic energy and escape. Although the emphasis is on collisional dynamics, some of the theory applies in the large-N limit that is now being approached with modern hardware and improved numerical techniques. After these theoretical considerations, we turn to the problem at hand and introduce the general principles of direct integration as a beginner's exercise and also describe the first N-body method.
Historical developments
Numerical investigations of the classical N-body problem in the modern spirit can be said to have started with the pioneering effort of von Hoerner [1960]. Computational facilities at that time were quite primitive and it needed an act of faith to undertake such an uncertain enterprise. Looking back at these early results through eyes of experience, one can see that the characteristic features of binary formation and escape are already present for particle numbers as small as N = 16, later increased to 25 [von Hoerner, 1963].
N-body simulations involve a large number of decisions and the situation becomes even more complex when astrophysical processes are added. The guiding principle of efficient code design must be to provide a framework for decision-making that is sufficiently flexible to deal with a variety of special conditions at the appropriate time. Since the direct approach is based on a star-by-star treatment at frequent intervals, this prerequisite is usually satisfied. However, we need to ensure that the relevant tests are not performed unnecessarily. The development of suitable criteria for changing the integration method or identifying procedures to be carried out does in fact require a deep understanding of the interplay between many different modes of interaction. Hence building up the network for decision-making is a bootstrapping operation needing much patience and experience. The aim of a good scheme should be that this part of the calculation represents only a small proportion of the total effort.
This chapter discusses several distinct types of decisions necessary for a smooth performance. First we deal with the important task of selecting the next particle, or block of particles, to be advanced in time. The challenge is to devise an optimized strategy in order to reduce the overheads. Another aspect concerns close encounters, either between single particles or where one or more subsystems already consist of binaries.
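The particle-selection task can be sketched schematically. In the widely used block-step idea, individual time steps are quantized to powers of two so that groups of particles fall due at the same instant and can be advanced together. The fragment below is a minimal illustration of that bookkeeping only; the names and data layout are our own and are not taken from any production N-body code.

```python
import math

def quantize_step(dt):
    """Round a desired step down to the nearest power of two (block level)."""
    return 2.0 ** math.floor(math.log2(dt))

def next_block(t_next):
    """Select the block of particles due at the earliest scheduled time.
    t_next: list of times t_i + dt_i; returns (t_min, indices in the block)."""
    t_min = min(t_next)
    return t_min, [i for i, t in enumerate(t_next) if t == t_min]

# Illustrative: three particles with commensurate power-of-two steps.
t_next = [0.25, 0.5, 0.25]
t, block = next_block(t_next)
print(t, block)  # 0.25 [0, 2]: particles 0 and 2 are advanced together
```

Because quantized steps stay commensurate, due times coincide exactly and the block can be identified by equality rather than a tolerance test.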
In the last few years, the subject of dynamical planetary formation has undergone a remarkable transformation. We now have an increasing database of actual observed systems, which provides much material for theoretical and numerical work. It therefore seems appropriate to devote a small chapter to direct N-body simulations of planetary systems. The emphasis in the early days was on performing idealized calculations to ascertain whether the eccentricity growth due to weak perturbations could lead to significant accretion by collisions. With the increase of computing power, more realistic modelling has become feasible by direct methods, but powerful tree codes are an attractive alternative. The latter technique has proved effective for studying both planetesimal systems and planetary rings. Following the discovery of many extra-solar systems, the question of stability has become topical. Stability is often addressed using symplectic integrators, which are outside the main scope of this book. In the following we distinguish between planetary formation and planetesimal dynamics. This division is somewhat arbitrary, but planetesimal simulations are usually concerned with particles distributed in a thin annulus, which therefore represents larger systems.
Planetary formation
Although there are early integrations of planetary dynamics relating to Bode's Law [Hills, 1970], it took another decade for the subject proper to get under way.
The basic theory of chain regularization [Mikkola & Aarseth, 1990, 1993] is described in chapter 5, while algorithms that deal with different treatments of physical collisions are detailed in chapter 9. Here we are concerned with a number of additional features that deal with aspects relating to what might be termed the N-body interface, namely how to combine two different solution methods in a consistent way. Strong interactions in compact subsystems are usually of short duration, with the ejection of energetic particles a characteristic feature. First we give some algorithms for unperturbed triple and quadruple systems while the more extensive treatment of perturbed chain regularization is discussed in the subsequent sections. As far as the internal chain subsystem is concerned, this requires extra procedures that add to the program complexity and cost. Having selected a suitable subsystem for special treatment, we also need to consider the change of membership and possible astrophysical processes. Moreover, the question of when to terminate a given configuration requires suitable decision-making for the switch to alternative methods, as well as identification of hierarchical stability in order to prevent inefficiency. Finally, since the implementation of the time-transformed leapfrog scheme has many similarities with chain regularization, we include some relevant algorithms here.
The question of numerical accuracy has a long history and is a difficult one. We are mainly concerned with the practical matter of employing convergent Taylor series in order to obtain statistically viable results. At the simplest level, the basic integration schemes can be tested for the two-body problem, whereas trajectories in larger systems exhibit error growth on short time-scales. However, it is possible to achieve solutions of high accuracy for certain small systems when using regularization methods. There are no generally agreed test problems at present but we suggest some desirable objectives, including comparison with Monte Carlo methods. Since large simulations inevitably require the maximum available resources, due attention must be paid to the formulation of efficient procedures. The availability of different types of hardware adds another dimension to programming design, which therefore becomes very specialized. Aspects of optimization and alternative hardware are also discussed, together with some performance comparisons.
Error analysis
It is a fact of computer applications that an error is made every time two arbitrary real numbers are added. Hence the task is to control the propagation of numerical errors and, if possible, keep them below an acceptable level. Since the N-body problem constitutes a system of non-linear differential equations, the error growth tends to be exponential, as was demonstrated right at the outset of such investigations [Miller, 1964] and emphasized in a subsequent study [Miller, 1974].
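That the choice of integration scheme, and not machine round-off alone, governs how errors propagate can be seen in a toy experiment. The sketch below integrates a circular two-body orbit with forward Euler and with a leapfrog (kick-drift-kick) scheme and compares the energy errors; it is an illustration of error control only, not a method used in the codes discussed here.

```python
def accel(x, y):
    """Keplerian acceleration with GM = 1."""
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - 1.0 / (x * x + y * y) ** 0.5

def integrate(method, dt=0.01, steps=1000):
    """Return |energy error| after integrating a circular orbit (E = -1/2)."""
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0
    e0 = energy(x, y, vx, vy)
    for _ in range(steps):
        ax, ay = accel(x, y)
        if method == "euler":
            x, y = x + vx * dt, y + vy * dt
            vx, vy = vx + ax * dt, vy + ay * dt
        else:  # leapfrog, kick-drift-kick
            vx, vy = vx + 0.5 * dt * ax, vy + 0.5 * dt * ay
            x, y = x + vx * dt, y + vy * dt
            ax, ay = accel(x, y)
            vx, vy = vx + 0.5 * dt * ax, vy + 0.5 * dt * ay
    return abs(energy(x, y, vx, vy) - e0)

print(integrate("euler"), integrate("leapfrog"))  # Euler error is far larger
```

The symplectic leapfrog keeps the energy error bounded and oscillatory, while the first-order Euler error accumulates secularly; in a chaotic N-body system even the better scheme cannot prevent the exponential divergence of individual trajectories.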
A large number of algorithms are connected with regularization. Many of these concern the KS treatment which plays a key role in the N-body simulation codes. In this chapter, we derive some expressions relating to the conversion of regularized time, followed by other considerations of a practical nature. A separate section provides essential details of the Stumpff KS method as employed in an N-body code. This is followed by an algorithmic discussion of KS termination. Next we describe decision-making procedures for unperturbed two-body motion which speed up the calculation by a large factor. Another important feature with the same objective is the so-called ‘slow-down device’, where the principle of adiabatic invariance is exploited. The theory was given previously in connection with chain regularization and here we discuss the KS implementation. Special treatments of stable hierarchies also contribute significantly to enhanced efficiency while retaining the essential dynamics. Finally, the last sections deal with several processes relating to tidal interactions in close binaries that are connected through an evolutionary sequence. We discuss tidal circularization and two-body capture, as well as Roche-lobe mass transfer which all contribute to making star cluster modelling such an exciting and challenging project.
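The KS transformation itself is compact enough to state in a few lines. The sketch below writes out the standard map from KS coordinates u = (u1, u2, u3, u4) to physical coordinates and verifies the defining property that the physical separation equals |u|^2; it is illustrative only and reflects none of the code structure discussed in this chapter.

```python
def ks_to_physical(u):
    """Standard KS map from (u1, u2, u3, u4) to physical (x, y, z).
    The defining property is |r| = u1^2 + u2^2 + u3^2 + u4^2."""
    u1, u2, u3, u4 = u
    x = u1 * u1 - u2 * u2 - u3 * u3 + u4 * u4
    y = 2.0 * (u1 * u2 - u3 * u4)
    z = 2.0 * (u1 * u3 + u2 * u4)
    return (x, y, z)

u = (0.3, -0.7, 0.2, 0.5)
x, y, z = ks_to_physical(u)
r = (x * x + y * y + z * z) ** 0.5
print(abs(r - sum(c * c for c in u)))  # ~0: separation equals |u|^2
```

The regularized time variable enters separately through dt = r dτ, which is what removes the collision singularity from the equations of motion; the conversion between t and τ is one of the practical matters treated in this chapter.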
General KS considerations
We first discuss various general features that are applicable to all the KS methods and also include some aspects of the divided difference scheme, while the next section deals specifically with the Stumpff version.