We describe some relevant algorithms for initializing and terminating higher-order systems that are selected for the merger treatment discussed in chapter 11. Such configurations may consist of a single particle or binary in a bound orbit around an existing triple or quadruple, or be composed of two stable triples. There are some significant differences from the standard case, the main one being that the KS solution of the binary is not terminated at the initialization. In other words, the complexity of the structure is increased, and the whole process is reminiscent of molecular chemistry. However, once formed as a KS solution, the new hierarchy needs to be restored to its original constituents at termination, which is usually triggered by large perturbations or mass loss. Special procedures are also required for removing all the relevant components of escaping hierarchies, and here we include merged triples and quadruples since the treatment is similar to that for higher-order systems.
Initialization
Consider an existing hierarchy of arbitrary multiplicity and mass, mi, which is to be merged with the mass-point mj, representing any object in the form of a single particle or even another hierarchy. Again we adopt the convention of denoting the component masses of a KS pair by mk and ml, respectively. Some of the essential steps are listed in Algorithm C.1.
Nearly all of these steps also appear in the standard case and therefore do not require comment. We note one important difference here, in that there is no termination of the KS solution.
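The central step of such an initialization, replacing the hierarchy of mass mi and the intruder mj by a single centre-of-mass particle when the new KS pair is formed, can be sketched as follows. This is a minimal Python illustration; the function name and interface are my own, not taken from the actual codes, and the internal relative motion is understood to be carried by the KS variables.

```python
import numpy as np

def merge_cm(m_i, x_i, v_i, m_j, x_j, v_j):
    """Form the centre-of-mass particle that replaces an existing
    hierarchy (mass m_i) and an intruder (mass m_j) when the pair is
    initialized as a new KS solution.  Mass and momentum are conserved;
    the relative motion lives in the regularized variables."""
    m = m_i + m_j
    x = (m_i * x_i + m_j * x_j) / m   # centre-of-mass position
    v = (m_i * v_i + m_j * v_j) / m   # centre-of-mass velocity
    return m, x, v
```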
In the preceding chapter, we have considered several methods for dealing with the perturbed two-body problem. Formally, all these methods work for quite large perturbations, provided the regularized time-step is chosen sufficiently small. However, the selection of the dominant pair in a triple resonance interaction frequently calls for new initializations where the intruder is combined with one of the components. Likewise, one may have situations in which two hard binaries approach each other with small impact parameter. Hence a description in terms of one dominant two-body motion tends to break down during strong interactions, precisely at times when interesting outcomes are likely to occur. Since the switching of dominant components reduces the efficiency and also degrades the quality of the results, it is highly desirable to seek alternative methods for improved treatment.
In this chapter, we discuss several multiple regularization methods that have turned out to be very beneficial in practical simulations. By multiple regularization it is understood that at least two separations in a compact subsystem are subject to special treatment where the two-body singularities are removed. We begin by describing a three-body formulation that may be considered the Rosetta Stone for later developments. The generalization to more members with just one reference body is also included for completeness. A subsequent section outlines the elegant global formulation and is followed by a detailed discussion of the powerful chain regularization.
This book spans my entire life as a research worker at Cambridge. The circumstances that created this opportunity were based entirely on luck and this aspect played a vital part during subsequent developments. In the following chapters, I have tried to give details of the most relevant methods used in so-called ‘direct integration’ of the classical N-body problem, a method of attack somewhat analogous to scaling a mountain the hard way. This has been enhanced by an extensive discussion of the main algorithms implemented in the associated computer codes. A comprehensive review of related work in the field over the last 40 years is also presented. Throughout, the term N-body simulations is used exclusively for methods based on direct summation, in keeping with tradition.
Although a wide range of problems is covered, the emphasis is on the dynamics of star clusters. This involves many aspects of stellar evolution. It is fortuitous that the University of Cambridge has a long tradition in this field that dates back to Eddington and Jeans. Fred Hoyle continued this school, which eventually gave rise to the application of synthetic stellar evolution. This subject was pioneered entirely at the Institute, mainly by the sequential efforts of Peter Eggleton, Christopher Tout and Jarrod Hurley, whose work has been vital for realistic star cluster simulations.
Direct N-body simulations are of necessity expensive because of the need to evaluate all the N(N − 1)/2 force interaction terms. We have seen that the Ahmad–Cohen [1973] neighbour scheme only alleviates the problem to some extent. However, once the particle number becomes sufficiently large, the dynamical behaviour begins to change because close encounters are less important. This behaviour has inspired methods for collisionless systems to be developed, such as tree codes or multipole expansions. In this chapter, we are concerned with tree codes since some relevant aspects of the latter have already been discussed. First we review the basic features of the pioneering Barnes & Hut [1986] scheme which is widely used in a variety of applications. Since the emphasis in this book is on collisional stellar dynamics, we devote a section to describing a tree code for point-mass interactions [McMillan & Aarseth, 1993] in the hope that it might be revived. The final section deals with an independent development for flattened systems [Richardson, 1993a,b] that has been used to study different stages of planetary formation as well as ring dynamics, where collisions play an important role.
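The N(N − 1)/2 pairwise cost can be made concrete with a minimal direct-summation routine, where each pair is visited once and Newton's third law supplies the reaction term. This is an illustrative Python sketch only, not the scheme used in the production codes (which employ individual time-steps, neighbour lists and regularization):

```python
import numpy as np

def accelerations(pos, mass, G=1.0):
    """Direct-summation accelerations over all N(N-1)/2 pairs.

    pos  : (N, 3) array of positions
    mass : (N,) array of masses
    """
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):          # each pair counted once
            dr = pos[j] - pos[i]
            r3 = np.dot(dr, dr) ** 1.5
            acc[i] += G * mass[j] * dr / r3
            acc[j] -= G * mass[i] * dr / r3  # reaction term
    return acc
```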
Basic formulation
In view of the rapid growth in the computational requirements for increasing particle numbers when using direct summation, it is not surprising that several tree-based approaches have been made to speed up the expensive force calculation.
Recent years have seen impressive advances in the simulation of collisionless systems. However, a wide range of problems in galactic dynamics is still amenable to direct N-body integration and will therefore be discussed for completeness. This chapter also serves as a historical review of several topics that blossomed from a primitive beginning. All such problems are characterized by the employment of a small softening of the potential which reduces the effect of close encounters. We begin by describing an application of the grid perturbation method to a ring of interacting molecular clouds in the inner part of the Galaxy [Aarseth, 1988b]. This enables a realistic number of clouds to be considered. The tidal disruption of dwarf spheroidal galaxies orbiting the Milky Way has also been studied by direct means [Oh, Lin & Aarseth, 1995].
More extreme interactions of galaxies often result in the formation of one system, subsequently denoted as a remnant to distinguish this process from mergers used for hierarchical stellar configurations. Studies of galaxy interactions involving black holes have also become topical [Makino & Ebisuzaki, 1996; Makino, 1997]. This problem is particularly challenging because of the difficulty of scaling to realistic conditions. Small galaxy groups and clusters are ideal for N-body simulations if close encounters can be treated using a softened potential.
A variety of procedures need to be carried out before the calculation proper can begin. The prescriptions for input parameters and options are discussed in chapter 7. Here we concentrate on different types of initial conditions for star cluster simulations, whereas planetary systems are described elsewhere. The cluster models are first generated for single stars with a specified initial mass function (hereafter IMF) and scaled to internal units. Since a variety of distributions may be considered, we provide several detailed algorithms. Next we present some procedures for including a realistic distribution of primordial binaries. Modelling of star clusters also requires external effects to be added. We distinguish between the motion of open clusters in circular orbits and globular clusters in 3D, with the galactic tidal force truncating the outer parts. Interstellar clouds form another perturbing agent which may be taken into account. Finally, with these procedures completed, the force polynomials for direct solutions as well as for any dominant two-body motions can be initialized.
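As a concrete illustration of generating single-star masses from an IMF and scaling to internal units, the following sketch samples a single power law by inverse transform and normalizes the total mass to unity. The Salpeter exponent and the mass limits are illustrative choices for this example, not prescriptions from the text:

```python
import numpy as np

def sample_salpeter(n, m_lo=0.1, m_hi=100.0, alpha=2.35, rng=None):
    """Draw n stellar masses from a power-law IMF dN/dm ~ m^-alpha
    (Salpeter exponent by default) using inverse transform sampling,
    then rescale so the total mass is unity, as in standard N-body
    units."""
    rng = np.random.default_rng(rng)
    u = rng.random(n)
    g = 1.0 - alpha                         # exponent of the CDF integral
    m = (m_lo**g + u * (m_hi**g - m_lo**g)) ** (1.0 / g)
    return m / m.sum()                      # scaled masses, sum = 1
```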
Initial conditions for clusters
Although the choice of starting configurations for star cluster simulations is extremely wide, we may be guided by certain principles and observational constraints. Such models are usually represented by a smooth IMF and centrally concentrated density distribution. Depending on the objectives, the velocities may represent approximate equilibrium or initial collapse, whereas cosmological models are characterized by expansion.
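A standard example of such a centrally concentrated equilibrium model is the Plummer sphere, which can be sampled by the well-known rejection method of Aarseth, Hénon & Wielen (1974). The sketch below is a simplified Python rendering in N-body units; no outer radius cut-off is applied and the helper names are my own:

```python
import numpy as np

def plummer(n, rng=None):
    """Sample positions and velocities from a Plummer sphere (total mass
    1, scale radius 1), following the classic rejection method of
    Aarseth, Henon & Wielen (1974)."""
    rng = np.random.default_rng(rng)
    # radius from the inverted cumulative mass profile
    r = (rng.random(n) ** (-2.0 / 3.0) - 1.0) ** -0.5
    pos = r[:, None] * _isotropic(n, rng)
    # speed ratio q = v / v_esc, rejection on g(q) = q^2 (1 - q^2)^3.5
    q = np.empty(n)
    for i in range(n):
        while True:
            x, y = rng.random(), 0.1 * rng.random()   # 0.1 bounds g(q)
            if y < x * x * (1.0 - x * x) ** 3.5:
                q[i] = x
                break
    v = q * np.sqrt(2.0) * (1.0 + r * r) ** -0.25     # local escape speed
    vel = v[:, None] * _isotropic(n, rng)
    return pos, vel

def _isotropic(n, rng):
    """n random unit vectors, uniform on the sphere."""
    z = 2.0 * rng.random(n) - 1.0
    phi = 2.0 * np.pi * rng.random(n)
    s = np.sqrt(1.0 - z * z)
    return np.column_stack([s * np.cos(phi), s * np.sin(phi), z])
```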
In the preceding chapters, we have described a variety of numerical methods and their implementation. Given these tools and algorithms, it should be possible in principle to construct direct N-body codes for dynamical studies. However, the combination of methods introduces considerable complications which have taken many years to master, and it is therefore much easier to take over one of the large codes. On the other hand, it would be good programming practice to implement a stand-alone code based on direct summation with softened potential or three-body regularization, where decision-making is considerably simplified.
In the following we provide some hints to facilitate the use of one of the main codes, as well as guidelines for implementing movies. These range from practical comments on getting started to producing a meaningful description of the results. Special counters that record events of interest provide further information. Graphics packages are generally avoided in the codes for reasons of compatibility. However, two versions of stand-alone regularization codes are available with software for movie making which is a good way to study dynamical interactions. We also discuss some diagnostic aspects that may assist in the dissemination of the results and outline various strategies for identifying numerical problems.
We now make an abrupt transition to a presentation of various algorithms utilized by the direct summation codes. Before proceeding further, it will be useful to include some practical aspects in order to have a proper setting for the subsequent more technical procedures. First we introduce the main codes that have been developed for studying different gravitational N-body problems. Where possible, the same data structure has been employed, except that the most recent versions are formulated in terms of the Hermite integration scheme. Since the largest codes are quite complicated, we attempt to describe the overall organization by tables and a flowchart to provide some enlightenment. Later sections give further details concerning input parameters, variables and data structure; each of these elements plays an important role in understanding the general construction. We also discuss a variety of optional features which provide enhanced flexibility for examining different processes.
N-body codes
Before describing the characteristics of the codes, we introduce some short-hand notation to illustrate the different solution methods employed [cf. Makino & Aarseth, 1992]. Thus by ITS we denote the basic individual time-step scheme, whereas ACS defines the Ahmad–Cohen [1973] neighbour scheme. Likewise, HITS and HACS are used for the corresponding Hermite integration methods. Finally, MREG refers to the implementations of unperturbed three-body [Aarseth & Zare, 1974] and four-body chain regularization [Mikkola & Aarseth, 1990], as well as perturbed chain regularization [Mikkola & Aarseth, 1993].
In this last chapter, we return to the subject of numerical experiments in the classical sense. Using the computer as a laboratory, we consider a large number of interactions where the initial conditions are selected from a nearly continuous range of parameters. Such calculations are usually referred to as scattering experiments, in direct analogy with atomic physics. Moreover, the results are often described in terms of the physicist's cross sections, with the outcomes approximated by semi-analytical functions. The case of three or four interacting particles is of special interest. However, the parameter space is already so large that considerable simplifications are necessary. In addition to the intrinsic value, applications to stellar systems provide a strong motivation. Naturally, the conceptual simplicity of such problems has also attracted much attention.
In the following we discuss a number of selected investigations with emphasis on those that employ regularization methods. It is convenient to distinguish between simulations and scattering experiments, where in the former case all particles are bound. The aim is to obtain statistical information about average quantities such as escape times and binary elements, and determine their mass dependence. Small bound systems often display complex behaviour and therefore offer ample opportunities for testing numerical methods. On the other hand, scattering experiments are usually characterized by hyperbolic relative velocities, the simplest example being a single particle impacting a binary.
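The basic bookkeeping of a scattering experiment can be illustrated by a Monte Carlo cross-section estimate, where impact parameters are sampled uniformly over the target area. In this hedged Python sketch, the predicate `outcome` merely stands in for a full scattering integration and is purely illustrative:

```python
import numpy as np

def cross_section(outcome, b_max, n_trials, rng=None):
    """Monte Carlo estimate of a scattering cross section.

    Impact parameters are drawn uniformly in b^2 (i.e. uniformly over
    the target area), and sigma = pi * b_max^2 * (hits / trials).
    `outcome(b) -> bool` decides whether a given encounter counts; in a
    real experiment it would integrate the few-body encounter."""
    rng = np.random.default_rng(rng)
    b = b_max * np.sqrt(rng.random(n_trials))   # uniform in b^2
    hits = sum(outcome(bi) for bi in b)
    return np.pi * b_max**2 * hits / n_trials
```

For a geometric target of radius 1 the estimate converges to pi, which provides a quick sanity check of the sampling.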
In this chapter, we provide the tools needed for standard N-body integration. We first review the traditional polynomial method which leads to increased efficiency when used in connection with individual time-steps. This self-contained treatment follows closely an earlier description [Aarseth, 1985a, 1994]. Some alternative formulations are discussed briefly for completeness. We then introduce the simpler Hermite scheme [Makino, 1991a,b] that was originally developed for special-purpose computers but is equally suitable for workstations or laptops and is attractive in its simplicity. As discussed in a later section, the success of this scheme is based on the novel concept of using quantized time-steps (factor of 2 commensurate), which reduces overheads. Variants of the Hermite method were attempted in the past, such as the low-order scheme of categories [Hayli, 1967, 1974] and the full use of explicit Taylor series derivatives [Lecar, Loeser & Cherniack, 1974]. The former study actually introduced the idea of hierarchical time-steps with respect to individual force calculations using a low-order scheme, whereas the latter formulation is expensive (but accurate) even for modest particle numbers.
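The quantized time-step idea can be sketched in a few lines: an estimated step is rounded down to a power of 2 and, in block-step fashion, halved until it is commensurate with the current time. This is a schematic Python version under those stated assumptions, not the actual implementation in the codes:

```python
import math

def block_step(dt_est, t, dt_max=1.0):
    """Quantize an estimated time-step to a power of 2 (factor-of-2
    commensurate), halving further until the current time t is an exact
    multiple, as required by hierarchical block time-step schemes."""
    dt = min(2.0 ** math.floor(math.log2(dt_est)), dt_max)
    while t % dt != 0.0:      # enforce commensurability with t
        dt *= 0.5             # powers of 2 are exact in floating point
    return dt
```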
Force Polynomials
The force acting on a particle usually varies in a smooth manner throughout an orbit, provided the particle number is sufficiently large. Hence by fitting a polynomial through some past points, it is possible to extend the time interval for advancing the equations of motion and thereby reduce the number of force evaluations. In other words, we can use the past information to predict the future motion with greater confidence.
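In the Hermite formulation this prediction takes a particularly simple form: a truncated Taylor series using the stored acceleration and its first derivative (the 'jerk'). A schematic Python version, with the variable names being illustrative:

```python
def predict(x, v, a, adot, dt):
    """Hermite-style prediction: extend the motion over dt with the
    truncated Taylor series
        x_p = x + v dt + a dt^2/2 + adot dt^3/6
        v_p = v + a dt + adot dt^2/2
    where a is the acceleration and adot its time derivative."""
    xp = x + dt * (v + dt * (a / 2.0 + dt * adot / 6.0))
    vp = v + dt * (a + dt * adot / 2.0)
    return xp, vp
```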
Simulating star clusters by direct N-body integrations is the equivalent of scaling mountains the hard way. At any time the maximum particle number depends on hardware and is therefore limited by technology. Some of the methods that have been described in this book are ideally suited to studying the classical point-mass problem. In addition, a wide variety of astrophysical processes can be included for realistic modelling of actual clusters. Recently the simulations have been devoted to systems with up to N ≃ 10⁴ particles which includes rich open clusters. However, with the construction of the GRAPE-6 special-purpose computer we are now able to investigate small globular clusters as observed in the Large Magellanic Cloud (LMC) [Elson et al., 1998].
In the following we concentrate on various aspects of star cluster simulations not covered in earlier chapters. We first describe algorithms for determining the core radius and density centre which are useful tools for data analysis. For historical reasons, idealized models (i.e. isolated systems) are also considered, particularly because of their relevance for more approximate methods. After further discussions of the IMF, we return to the subject of assigning primordial binaries and illustrate their importance by some general results. External effects due to the tidal field and interstellar clouds form an important ingredient in star cluster modelling even though the latter are rarely studied. Algorithms for combining stellar evolution with the dynamical treatment have been outlined previously.
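One such tool, in the spirit of the Casertano & Hut (1985) estimator, assigns each star a local density from the mass inside its k-th nearest neighbour sphere and takes the density-weighted mean position as the density centre. The following is a simplified O(N²) Python sketch; the production algorithm differs in detail and the default k is an illustrative choice:

```python
import numpy as np

def density_centre(pos, mass, k=6):
    """Density centre from local density estimates: each particle gets
    a density from the mass inside its k-th nearest neighbour sphere,
    and the centre is the density-weighted mean position."""
    n = len(mass)
    rho = np.empty(n)
    for i in range(n):
        d = np.linalg.norm(pos - pos[i], axis=1)
        idx = np.argsort(d)[1:k + 1]          # k nearest, excluding self
        r_k = d[idx[-1]]                      # k-th neighbour distance
        rho[i] = mass[idx[:-1]].sum() / (4.0 / 3.0 * np.pi * r_k ** 3)
    return (rho[:, None] * pos).sum(axis=0) / rho.sum()
```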
Direct N-body simulations on conventional computers benefit greatly from the use of the Ahmad–Cohen [1973] or AC neighbour scheme. Algorithms for both the divided difference method and Hermite formulation will therefore be discussed in the following sections. We also consider the implementation of the code NBODY6++ on a popular type of parallel computer [Spurzem, Baumgardt & Ibold, 2003], since it seems that the future of large-N calculations is evolving in this direction, at least for those who do not use the special-purpose HARP or GRAPE machines described previously. The important problem of massive black hole binaries in galactic nuclei is very challenging and appears amenable to direct integration using parallel architecture and neighbour schemes. A direct solution method is described [Milosavljević & Merritt, 2001]. This treats the massive components by two-body regularization, whereas the formation process itself is studied by a tree code. Some of the drawbacks of this method inspired a new formulation where the massive binary is considered as part of a compact subsystem which is advanced by a time-transformed leapfrog method [Mikkola & Aarseth, 2002]. Over the years, the quest for larger particle numbers has also encouraged the construction of partially collisional methods.
Most star clusters are characterized by large memberships that make direct N-body simulations very time-consuming. In order to study such systems, it is therefore necessary to design methods that speed up the calculations while retaining the collisional approach. One good way to achieve this is to employ a neighbour procedure that requires fewer total force summations. The AC neighbour scheme [Ahmad & Cohen, 1973, 1974] has proved very effective for a variety of collisional and collisionless problems. It is particularly suitable for combining with regularization treatments, where dominant particles as well as perturbers can be selected from the corresponding neighbour lists without having to search all the particles.
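The essence of the AC scheme is the division of the force on each particle into an irregular part from neighbours inside a radius r_nb and a regular part from the distant particles, the two being recomputed on short and long time-steps respectively. A minimal Python sketch of the split (the interface is illustrative, and the production codes maintain the neighbour lists incrementally rather than rebuilding them):

```python
import numpy as np

def split_force(i, pos, mass, r_nb, G=1.0):
    """Ahmad-Cohen style force split for particle i: an irregular part
    from neighbours inside radius r_nb and a regular part from all more
    distant particles.  Returns both parts and the neighbour list."""
    dr = pos - pos[i]
    d = np.linalg.norm(dr, axis=1)
    near = (d < r_nb) & (d > 0.0)             # neighbours, excluding self
    far = d >= r_nb                           # distant particles

    def summed(sel):
        return (G * mass[sel, None] * dr[sel] / d[sel, None] ** 3).sum(axis=0)

    return summed(near), summed(far), np.flatnonzero(near)
```

By construction the two parts add up to the full direct-summation force, which is the invariant any neighbour scheme must preserve.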
The AC method can also be used for cosmological simulations. The study of galaxy clustering usually employs initial conditions where the dominant motions are due to the universal expansion. By introducing comoving coordinates, we can integrate the deviations from the smooth Hubble flow, thereby enabling much larger time-steps to be used, at least until significant density perturbations have been formed. Naturally, such a direct scheme cannot compete with more approximate methods, such as the P3M algorithm [Efstathiou & Eastwood, 1981; Couchman, 1991] or the parallel tree code [Dubinski, 1996], which have been developed for large N, but in some problems the ability to perform detailed modelling may still offer considerable advantages.