The renowned English chemist and meteorologist John Dalton (1766–1844) published A New System of Chemical Philosophy in two volumes between 1808 and 1827. Dalton's discovery of the importance of the relative weight and structure of particles of a compound for explaining chemical reactions transformed atomic theory and laid the basis for much of modern chemistry. Volume 2 was published in 1827. It contains sections examining the weights and structures of two-element compounds in five different groups: metallic oxides; earthy, alkaline and metallic sulphurets; earthy, alkaline and metallic phosphurets; carburets; and metallic alloys. An appendix contains a selection of brief notes and tables, including a new table of the relative weights of atoms. A planned second part was never published. Dalton's work is a monument of nineteenth-century chemistry. It will continue to be read and enjoyed by anybody interested in the history and development of science.
Through Space and Time is based upon the 1933 Christmas Lectures that James Jeans gave at the Royal Institution, London. Intended to appeal to a wide readership and presenting a broad understanding of the Earth, solar system and the universe, the book begins its journey with the history, structure and main features of our planet, and ends in the vast expanses of space among the nebulae.
Sir George Darwin (1845–1912) was the second son of Charles Darwin. After studying mathematics at Cambridge he read for the Bar, but soon returned to science and to Cambridge, where in 1883 he was appointed Plumian Professor of Astronomy and Experimental Philosophy. His work was concerned primarily with the effect of the sun and moon on tidal forces on Earth, and with the theoretical cosmogony which evolved from practical observation: he formulated the fission theory of the formation of the moon (that the moon was formed from still-molten matter pulled away from the Earth by solar tides). He also developed a theory of evolution for the Sun–Earth–Moon system based on mathematical analysis in geophysical theory. This volume, published in 1916 after the author's death, includes a biographical memoir by his brother Sir Francis Darwin, his inaugural lecture and his lectures on George W. Hill's lunar theory.
The authors analyze how the structure of a package determines its developmental complexity according to such measures as bug search times and documentation information content. The work presents arguments for why these issues impact solution cost and time more than does scalable performance. The final chapter explores the question of scalable execution and shows how scalable design relates to scalable execution. The book's focus is on program organization, which has received considerable attention in the broader software engineering community, where graphical description standards for modeling software structure and behavior have been developed by computer scientists. These discussions might be enriched by engineers who write scientific codes. This book aims to bring such scientific programmers into discussion with computer scientists. The authors do so by introducing object-oriented software design patterns in the context of scientific simulation.
These Personal Recollections contain the memoirs and a selection of the correspondence of the nineteenth-century polymath Mary Somerville (1780–1872). The book was first published in 1873, a year after Mary's death, by her daughter Martha, who wrote brief introductions to the text. Mary Somerville is best known for her pioneering scientific publications, which include her translation of Laplace's Mécanique Céleste (1831; also reissued in this series); On the Connexion of the Physical Sciences (1834); Physical Geography (1848); and On Molecular and Microscopic Science (1869). Through these publications, Somerville made a lasting contribution to the dissemination of scientific knowledge. Somerville's correspondence deals primarily with her public life, while the memoirs offer insight into her private sphere: the discouragement she faced in pursuit of learning; her passion for women's education and suffrage; family life; and personal faith. Her story is compelling, and her experiences may resonate with many women today.
The Home Life of Sir David Brewster, originally published in 1869, records the remarkable life of the inventor, physicist, mathematician and astronomer Sir David Brewster (1781–1868). Written by his daughter, Mrs Margaret M. Gordon, the book is aimed at a non-academic audience and details the extraordinary life and work of this gifted scientist, who began his studies at Edinburgh University at the age of just twelve, and who is best known for his invention of the kaleidoscope, for the lens apparatus that originally formed the core of lighthouse illumination, and for his related work on the polarization of light. Mrs Gordon records Brewster's many activities, including the publication of over 2,000 scientific papers, though she stresses that she has written about her father as the man, and not the scientist. The book will appeal to anyone interested in the life and career of this undoubtedly brilliant Scotsman.
“But if thought corrupts language, language can also corrupt thought.”
George Orwell
Automated Interoperability Tools
In human cultures, membership in a community often implies sharing a common language. This holds as much for computer programming languages as for natural human tongues. Certain languages dominate certain research communities. As a result, the modern drive toward multidisciplinary studies inevitably leads to mixed-language development. As noted in the preface, survey evidence suggests that approximately 85% of high-performance computing users write in some flavor of C/C++/C#, whereas 60% write in Fortran. Because these figures sum to 145%, at least 45% must write in both Fortran and C/C++/C#, so the union of these two language families likely contains the most important pairings in mixed-language scientific programming.
In many settings, the language version matters as much as the language identity. For example, interfacing other languages with object-oriented Fortran 2003 poses a much broader set of challenges than does interfacing with procedural Fortran 77 or even object-based Fortran 95. It also matters whether one language invokes code in a second language, the second invokes code in the first, or both. As suggested by Rasmussen et al. (2006), one can account for invocation directionality by considering ordered pairs of languages, where the order determines which language is the caller and which is the callee.
Consider a set {A,B,C, …} of N dialects, where each member is a language or language version distinct from each other member.
“Never be afraid to try something new. Remember, amateurs built the ark. Professionals built the Titanic.”
Miss Piggy
The Problem
While the abstract calculus and strategy patterns apply to the integration of a single physics abstraction, our chief concern lies in linking multiple abstractions. This poses at least two significant software design problems. The first involves how to facilitate interabstraction communication. The GoF addressed interabstraction communication with the mediator pattern. When N objects interact, a software architect can reduce the N(N−1) associations between the objects to 2N associations by employing a mediator.
The mediator association count stems from the requirements that the Mediator know each communicating party and those parties know the Mediator. For example, in a mediator implementation presented by Gamma et al. (1995), the sender passes a reference to itself to the mediator. The sender must be aware of the mediator in order to know where to send the message. Likewise, the mediator must be aware of the sender in order to invoke methods on the sender via the passed reference. Figure 8.1 illustrates the associations in an atmospheric boundary layer model, wherein the air, ground, and cloud ADTs might solve equation sets for the airflow, ground transpiration, and discrete droplet motion, respectively.
A second and conceptually more challenging problem concerns how one assembles quantities of an inherently global nature – that is, information that can only be determined with simultaneous knowledge of the implementation details of each of the single-physics abstractions.
“All professions are conspiracies against the laity.”
George Bernard Shaw
The Problem
The context of abstract calculus is the construction of numerical software that approximates various differential and integral forms. Two pairs of conflicting forces arise in this context. In the first pair, the low-level nature of the mathematical constructs provided by mainstream programming languages constrains the design of most scientific programs. A desire for syntax and semantics that naturally represents the much richer mathematical language of scientists and engineers opposes this constraint.
The C++ language contains native scalar and one-dimensional array variables. The C++ STL extends these with vectors that offer convenient properties such as automatic memory management, including sizing and resizing upon assignment. Fortran 2003 provides similar capabilities with its multidimensional allocatable array construct. It also provides numerous useful intrinsic procedures for determining array properties, including size, shape, and maximum and minimum elements, as well as intrinsic procedures and operators for combining arrays into sums, matrix-vector products, and other derived information. It is common in scientific and engineering work to build up from these native constructs a set of array classes with a variety of additional useful methods (Barton and Nackman 1994; Heroux et al. 2005). Nonetheless, the resulting objects model very low-level mathematical entities in the sense that one typically arrives at these entities after fairly involved derivations from, and approximations to, much higher-level constructs.
This appendix summarizes the Unified Modeling Language (UML) diagrammatic notation employed throughout this book along with the associated terminology and brief definitions of each term. We consider the elements that appear in the five types of UML diagrams used in the body of the current text: use case, class, object, package, and sequence diagrams. At the end of the appendix, we give a brief discussion of Object Constraint Language (OCL), a declarative language for describing rules for UML models.
Use Case Diagrams
A use case is a description of a system's behavior as it responds to an outside request or input. It captures at a high level who does what for the system being modeled. Use cases describe behavior, focusing on the roles of each element in the system rather than on how each element does its work.
A use case diagram models relationships between use cases and external requests, thus rendering a visual overview of system functionality. Figure B.1 reexamines the fin heat conductor analyzer diagram from Figure 2.6, adding notations to identify the elements of the use case diagram.
Use case diagrams commonly contain the following elements:
Actors: people or external systems that interact with the system being modeled. Actors live outside the system and are the users of the system. Typically actors interact with the system through use cases. In UML, actors are drawn as stick figures. In the fin analyzer system example, system architect, thermal analyst, and numerical analyst are actors.
“When sorrows come, they come not single spies but in battalions.”
William Shakespeare
Toward a Scalable Abstract Calculus
The canonical contexts sketched in Section 4.3 and employed throughout Part II were intentionally low-complexity problems. Such problems provided venues for fleshing out complete software solutions from their high-level architectural design through their implementation in source code. As demonstrated by the analyses in Chapter 3, however, the issues addressed by OOA, OOD, and OOP grow more important as a software package's complexity grows. Complexity growth inevitably arises when multiple subdisciplines converge into multiphysics models. The attendant increase in scientific complexity taxes the hardware resources of any platform employed. Thus, leading-edge research in multiphysics applications must ultimately address how best to exploit the available computing platform.
Recent trends in processor architecture make it clear that fully exploiting the available hardware on even the most modest of computing platforms necessitates mastering parallelism. Even laptop computers now contain multicore processors, and the highest-end machines contain hundreds of thousands of cores. The process of getting a code to run efficiently on parallel computers is referred to as getting a code to scale, and code designs that facilitate scaling are termed scalable. The fundamental performance question posed by this chapter is whether one can construct a scalable abstract calculus. The Sundance project (Long 2004) has already answered this question in the affirmative for C++.
Whereas code reuse played an important role in Part I of this text, design reuse plays an equally important role in Part II. The effort put into thinking abstractly about software structure and behavior pays off in high-level designs that prove useful independent of the application and implementation language. Patterns comprise reusable elements of successful designs.
The software community typically uses the terms “design patterns” and “object-oriented design patterns” interchangeably. This stems from the expressiveness of OOP languages in describing the relationships and interactions between ADTs. Patterns can improve a code's structure and readability and reduce its development costs by encouraging reuse.
Software design patterns comprise four elements (Gamma et al. 1995):
The pattern name: a handle that describes a design problem, its solution, and consequences in a word or two.
The problem: a description of when to apply the pattern and within what context.
The solution: the elements that constitute the design, the relationships between these elements, their responsibilities, and their collaborations.
The consequences: the results and trade-offs of applying the pattern.
Although there have been suggestions to include additional information in identifying a pattern, for example, sample code and known uses that validate the pattern as a proven solution, authors generally agree that elements 2–4 enumerate the three essential factors in each pattern.
“Memory is a crazy woman [who] hoards colored rags and throws away food.”
Austin O'Malley
The Problem
Large software development efforts typically require a degree of consistency across the project to ensure that each developer follows practices consistent with the critical goals of the project. In high-performance computing (HPC), for example, Amdahl's law (Chapter 1) suggests that scaling up to the tens or hundreds of thousands of processor cores available on leadership-class machines requires that every dusty corner of the code make efficient use of the available cores. Otherwise, whichever fraction of the code speeds up more slowly with increasing numbers of cores eventually determines the overall speedup of the code.
Another form of consistency proves useful when one desires some universal way to reference objects in a project. Doing so facilitates manipulating an object without knowledge of its identity. The manipulated object could equally well be an instance of any class in the project.
In HPC, communicating efficiently between local memories on distributed processors represents one of the most challenging problems. One might desire to ensure consistency in communication practices across the project. In these contexts, two broad requirements drive the desire to impose a degree of uniformity across a design: one stemming from a need for consistent functionality, and the other stemming from a need for consistent referencing.
Opposing these forces is the desire to avoid overconstraining the design. In the worst-case scenario, imposing too much uniformity stifles creativity and freezes counterproductive elements into the design.