First published in 2007, this second edition describes the computational methods used in theoretical physics. New sections were added to cover finite element methods and lattice Boltzmann simulation, density functional theory, quantum molecular dynamics, Monte Carlo simulation, and diagonalisation of one-dimensional quantum systems. The book covers many areas of physics research and a range of computational methodologies, including Monte Carlo and molecular dynamics methods, various electronic structure methodologies, methods for solving partial differential equations, and lattice gauge theory. Throughout the book the relations between the methods used in different fields of physics are emphasised. Several new programs are described and can be downloaded from www.cambridge.org/9781107677135. The book requires a background in elementary programming, numerical analysis, and field theory, as well as undergraduate knowledge of condensed matter theory and statistical physics. It will be of interest to graduate students and researchers in theoretical, computational and experimental physics.
Although computation and the science of physical systems would appear to be unrelated, there are a number of ways in which computational and physical concepts can be brought together so as to illuminate both. This volume examines fundamental questions which connect scholars from both disciplines: is the universe a computer? Can a universal computing machine simulate every physical process? What is the source of the computational power of quantum computers? Are computational approaches to solving physical problems and paradoxes always fruitful? Contributors writing from multiple perspectives, reflecting the diversity of thought regarding these interconnections, address many of the most important developments and debates within this exciting area of research. Both a reference to the state of the art and a valuable and accessible entry to interdisciplinary work, the volume will interest researchers and students working in physics, computer science, and philosophy of science and mathematics.
This Element has three main aims. First, it aims to help the reader understand the concept of computation that Turing developed, his corresponding results, and what those results indicate about the limits of computational possibility. Second, it aims to bring the reader up to speed on analyses of computation in physical systems which provide the most general characterizations of what it takes for a physical system to be a computational system. Third, it aims to introduce the reader to some different kinds of quantum computers, describe quantum speedup, and present some explanation sketches of quantum speedup. If successful, this Element will equip the reader with the basic knowledge necessary for pursuing these topics in more detail.
Thoroughly revised for its second edition, this advanced textbook provides an introduction to the basic methods of computational physics and an overview of progress in several areas of scientific computing. The book begins with basic computational tools and routines, covering approximating functions, differential equations, spectral analysis, and matrix operations. Important concepts are illustrated by relevant examples at each stage. The author also discusses more advanced topics, such as molecular dynamics, modeling continuous systems, Monte Carlo methods, genetic algorithms and programming, and numerical renormalization. This edition includes many more exercises than the first. The book can be used as a textbook for either undergraduate or first-year graduate courses on computational physics or scientific computation, and will also be a useful reference for anyone involved in computational research.
Computational mineralogy is fast becoming the most effective and quantitatively accurate method for successfully determining structures, properties and processes at the extreme pressure and temperature conditions that exist within the Earth's deep interior. It is now possible to simulate complex mineral phases using a variety of theoretical computational techniques that probe the microscopic nature of matter at both the atomic and sub-atomic levels. This introductory guide is for geoscientists as well as researchers performing measurements and experiments in a lab, those seeking to identify minerals remotely or in the field, and those seeking specific numerical values of particular physical properties. Written in a user- and property-oriented way, and illustrated with calculation examples for different mineral properties, it explains how property values are produced, how to tell if they are meaningful or not, and how they can be used alongside experimental results to unlock the secrets of the Earth.
Providing a detailed and pedagogical account of the rapidly growing field of computational statistical physics, this book covers both the theoretical foundations of equilibrium and non-equilibrium statistical physics and modern, computational applications such as percolation, random walks, magnetic systems, machine learning dynamics, and spreading processes on complex networks. A detailed discussion of molecular dynamics simulations is also included, a topic of great importance in biophysics and physical chemistry. The accessible and self-contained approach adopted by the authors makes this book suitable for teaching courses at graduate level, and numerous worked examples and end-of-chapter problems allow students to test their progress and understanding.
Since their inception, the Perspectives in Logic and Lecture Notes in Logic series have published seminal works by leading logicians. Many of the original books in the series have been unavailable for years, but they are now in print once again. In this volume, the first publication in the Perspectives in Logic series, Pour-El and Richards present the first graduate-level treatment of computable analysis within the tradition of classical mathematical reasoning. The book focuses on the computability or noncomputability of standard processes in analysis and physics. Topics include classical analysis, Hilbert and Banach spaces, bounded and unbounded linear operators, eigenvalues, eigenvectors, and equations of mathematical physics. The work is self-contained, and although it is intended primarily for logicians and analysts, it should also be of interest to researchers and graduate students in physics and computer science.
There is an increasing need for undergraduate students in physics to have a core set of computational tools. Most problems in physics benefit from numerical methods, and many of them resist analytical solution altogether. This textbook presents numerical techniques for solving familiar physical problems where a complete solution is inaccessible using traditional mathematical methods. The numerical techniques for solving the problems are clearly laid out, with a focus on the logic and applicability of the method. The same problems are revisited multiple times using different numerical techniques, so readers can easily compare the methods. The book features over 250 end-of-chapter exercises. A website hosted by the author features a complete set of programs used to generate the examples and figures, which can be used as a starting point for further investigation. A link to this can be found at www.cambridge.org/9781107034303.
Albert Einstein encapsulated a commonly held view within the scientific community when he wrote in his book Out of My Later Years (Einstein 1950, page 54):
‘When we say that we understand a group of natural phenomena, we mean that we have found a constructive theory which embraces them.’
This represents a dual challenge to the scientist: on the one hand, to explain the real world in a very basic, and if possible mathematical, way; on the other, to characterise the extent to which such explanation is even possible. Recent years have seen the mathematics of computability play an increasingly vital role in pushing forward basic science and in illuminating its limitations, within a creative coming together of researchers from different disciplines. This special issue of Mathematical Structures in Computer Science is based on the special session ‘Computability of the Physical’ at the International Conference Computability in Europe 2010, held at Ponta Delgada, Portugal, in June 2010, and, together with the individual papers it contains, it forms what we believe to be a distinctive contribution to this exciting and developing process.
Following the evolution of a star cluster is among the most computer-intensive and delicate problems in science, let alone stellar dynamics. The main challenges are to deal with the extreme discrepancy of length and time scales, the need to resolve the very small deviations from thermal equilibrium that drive the evolution of the system, and the sheer number of computations involved. Though numerical algorithms of many kinds are used, this is not an exercise in numerical analysis: the choice of algorithm and accuracy are dictated by the need to simulate the physics faithfully rather than to solve the equations of motion as exactly as possible.
Length/time scale problem
Simultaneous close encounters between three or more stars have to be modelled accurately, since they determine the exchange of energy and angular momentum between internal and external degrees of freedom (Chapter 23). The energy flow is especially important, since the generation of energy by double stars provides the heat input needed to drive the evolution of the whole system, at least in its later stages (Chapter 27). Unfortunately, the size of the stars is a factor of 10⁹ smaller than the size of a typical star cluster. If neutron stars are taken into account, the problem is worse: the discrepancy in length scales becomes a factor of 10¹⁴.
The discrepancy in time scales is even worse: a close passage between two stars takes place on a time scale of hours for normal stars and milliseconds for neutron stars (Table 3.1).
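These orders of magnitude can be checked with a back-of-the-envelope estimate: a close encounter lasts roughly the free-fall time √(R³/GM) at the stellar radius, while the cluster as a whole evolves on its much longer crossing time. The sketch below uses representative values for the masses and radii (our assumption for illustration, not figures from the text):

```python
import math

# Rough consistency check of the scale discrepancies quoted above.
# All input values are representative order-of-magnitude numbers
# chosen for illustration, not figures from the text.

G = 6.674e-11           # gravitational constant (SI)
M_SUN = 2.0e30          # solar mass (kg)
PARSEC = 3.086e16       # metres

def dynamical_time(radius_m, mass_kg):
    """Free-fall (close-encounter) time scale sqrt(R^3 / GM), in seconds."""
    return math.sqrt(radius_m ** 3 / (G * mass_kg))

r_star, r_ns, r_cluster = 7.0e8, 1.0e4, 10 * PARSEC   # star, neutron star, cluster
m_star, m_cluster = M_SUN, 1.0e5 * M_SUN

print(f"cluster / normal star size  : {r_cluster / r_star:.0e}")   # of order 1e8-1e9
print(f"cluster / neutron star size : {r_cluster / r_ns:.0e}")     # of order 1e13-1e14
print(f"close passage, normal stars : {dynamical_time(r_star, 2 * m_star):.0f} s")   # tens of minutes
print(f"close passage, neutron stars: {dynamical_time(r_ns, 2.8 * m_star):.1e} s")   # well under a millisecond
print(f"cluster crossing time       : {dynamical_time(r_cluster, m_cluster):.1e} s") # about a million years
```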
Laboratory experiments and well data serve as the main sources of controlled experimental data, in which a number of physical properties are measured on the same samples under varying conditions, such as saturation and pressure. Chapter 2 discusses how these data are used to derive theoretical models, as well as to establish the relevance of these models to rock types.
Computational rock physics, also called digital rock physics or DRP, is the third such source. The principle of this technique is “image and compute”: image the pore structure of rock and computationally simulate various physical processes in this space, including single-phase viscous fluid flow for absolute permeability; multiphase flow for relative permeability; electrical flow for resistivity; and loading and stress computation for the elastic properties.
The principle of DRP is simple but its implementation is not. It requires at least three main steps: imaging; image processing and segmentation; and physical property simulation.
Three-dimensional imaging of a rock sample is usually performed in a CT scanning machine by rotating the sample relative to an X-ray source. The actual 3D geometry is reconstructed tomographically from these raw data, and the image appears in shades of gray. The brightness of a voxel in such a 3D image is directly affected by the effective atomic number of the material and is approximately proportional to its density. For example, dense pyrite will appear bright, while less dense quartz will appear light gray. The empty pore space will be black, and parts of it filled with, for example, water or bitumen will be dark gray. To image the very small features present in shale, or in micrite in carbonates, even the sharpest CT resolution may not be enough. A different technique, so-called FIB-SEM, is used instead: a focused ion beam gradually shaves off thin slices of the sample, and the exposed 2D surface is imaged (photographed) by a scanning electron microscope to produce a stack of closely spaced 2D images.
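To make the second and third steps concrete, the sketch below segments a gray-level volume by simple thresholding and computes porosity, the most basic property one can read off a segmented image. The volume is synthetic and the threshold values are hypothetical; real segmentations are calibrated per scanner and per sample, and real property simulations (flow, electrical, elastic) are far more involved:

```python
import numpy as np

# Minimal sketch of the segmentation and property-computation steps of
# DRP on a toy volume. Thresholds are hypothetical, for illustration.

rng = np.random.default_rng(0)

# Stand-in for a reconstructed 8-bit CT volume (100^3 voxels); a real
# volume would come from tomographic reconstruction, as described above.
volume = rng.integers(0, 256, size=(100, 100, 100), dtype=np.uint8)

# Hypothetical gray-level class boundaries, darkest to brightest,
# mirroring the contrast ordering described above: empty pore (black),
# fluid-filled pore (dark gray), quartz (light gray), pyrite (bright).
PORE_MAX, FLUID_MAX, QUARTZ_MAX = 30, 80, 220
labels = np.digitize(volume, bins=[PORE_MAX, FLUID_MAX, QUARTZ_MAX])
# labels: 0 = empty pore, 1 = fluid-filled pore, 2 = quartz, 3 = pyrite

# Porosity = fraction of pore voxels (empty plus fluid-filled).
porosity = float(np.mean(labels <= 1))
print(f"porosity estimate: {porosity:.3f}")
```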
Computers are one of the most important tools available to physicists, whether for calculating and displaying results, simulating experiments, or solving complex systems of equations. Introducing students to computational physics, this textbook, first published in 2006, shows how to use computers to solve mathematical problems in physics and teaches students about choosing different numerical approaches. It also introduces students to many of the programs and packages available. The book relies solely on free software: the operating system chosen is Linux, which comes with an excellent C++ compiler, and the graphical interface is the ROOT package available for free from CERN. This broad-scope textbook is suitable for undergraduates starting computational physics courses. It includes exercises and many examples of programs. Online resources at www.cambridge.org/0521828627 feature additional reference information, solutions, and updates on new techniques, software and hardware used in physics.
In this chapter, we present the teaching initiative in Interaction Design and Physical Computing currently under development at Pontifícia Universidade Católica do Rio de Janeiro (PUC-Rio) in Brazil, within the class called Interfaces Físicas e Lógicas and in the LIFE Lab (Laboratório de Interfaces Físicas Experimentais). The emerging area of Physical Computing within the field of Interaction Design is discussed through the conceptual framework of its practice and of teaching initiatives in the Design school used for the study. In this context, thoughts on Design Thinking and Reflective Practice by theorists such as Donald Schön, Herbert Simon, and Nigel Cross are brought into the scene as part of a proposed learning methodology, exploring the relation between the theory of Design and its practice. To expand the discussion, Weiser's concept of ubiquitous computing is added as one more approach.

The idea that we live surrounded by digital interfaces embedded in objects that assist us in activities related to communication, information, entertainment, safety, health, and wellbeing becomes more present each day. Thus we are witnessing the rise of a new area of Design practice, in which designers, besides being users of digital interactive systems, are involved in developing the interfaces responsible for mediating between computer systems and people. This emerging area is called Interaction Design, and it has the discipline known as Physical Computing (in Brazil, Interfaces Físicas) as a key content. The discipline explores the relationship between the physical world and computer systems. Another author brought into the discussion is O'Sullivan, who argues that, through the use of software, hardware, electronics, sensors, microcontrollers, automation systems, and motors, Physical Computing devices are digital interactive systems that sense and react to the physical world.

Even though the Interaction Design field is undeniably growing globally, Brazilian Design courses still offer few academic initiatives concerning its teaching, especially at the undergraduate level. In this context, the Interfaces Físicas e Lógicas class from PUC-Rio's Design-Digital Media course has been, since 2010, among the first Physical Computing classes offered in a Brazilian undergraduate Design program. This teaching-learning initiative has come together with the conception of a laboratory designed as a space for the practice of Physical Computing. The LIFE Lab is an initiative of PUC-Rio's Department of Arts & Design that aims to provide students with the appropriate environment and technology for the practical development of Physical Computing projects. The lab is integrated with the other labs of the design program, such as the product design lab and the graphic production lab, favoring interdisciplinary and multidisciplinary projects, as well as the interchange between the cultures and languages of Design. Occupying 36 square meters of floor space, the lab is equipped with computers, open-source software, electronic components, Arduino boards, and a small library. Since the implementation of the LIFE Lab, a growing variety of Physical Computing projects has been developed. The projects quite often reveal a thorough design thinking process, in which reflective practice with Physical Computing tools leads to a variety of creative and innovative Design solutions.
A selection of these projects, along with a brief description and analysis of the individual design processes, is presented at the end of the essay. For more information, please visit www.life.dad.puc-rio.br
There are several traditional models of computation, such as Church's lambda calculus, the Herbrand-Gödel equational calculus, and representability in formal systems of arithmetic, that appear unrelated to the general framework of physics. The extreme example in this context is Fenstad's axiomatization of computability. On the other hand, several models that are often more popular with theoretical computer scientists and complexity theorists, such as Turing machines, register machines, and cellular automata, are physics-like in the sense that they are arguably realizable in any plausible model of physics. Of course, all these logical models are equivalent in a certain precise technical sense, but implementations of the latter class are far more direct, and they avoid tedious coding issues. The question thus arises whether any insights into the nature of computability can be gained from a careful study of logical versus physical computation. As a case in point, we consider discovered rather than constructed universal systems and the old problem of the epistemological status of intermediate recursively enumerable degrees.
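A standard concrete instance of a discovered rather than constructed universal system is the elementary cellular automaton Rule 110: its update rule is simply one of the 256 possible three-cell rules, written down with no computational intent, yet it was later proved Turing-complete (Cook, 2004). A minimal simulation sketch, purely illustrative:

```python
# Rule 110: each cell's next state is determined by its 3-bit
# neighbourhood (left, self, right); the binary expansion of the
# rule number serves as the lookup table.
RULE = 110

def step(cells):
    """One synchronous update with periodic boundary conditions."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 63 + [1]          # a single live cell on the right edge
for _ in range(30):             # print 30 generations
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```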
Computers in the future may weigh no more than 1.5 tons.
– Popular Mechanics, forecasting the relentless march of science, 1949
I think there is a world market for maybe five computers.
– Thomas Watson, chairman of IBM, 1943
Quantum computation and quantum information is a field of fundamental interest because we believe quantum information processing machines can actually be realized in Nature. Otherwise, the field would be just a mathematical curiosity! Nevertheless, experimental realization of quantum circuits, algorithms, and communication systems has proven extremely challenging. In this chapter we explore some of the guiding principles and model systems for physical implementation of quantum information processing devices and systems.
We begin in Section 7.1 with an overview of the tradeoffs in selecting a physical realization of a quantum computer. This discussion provides perspective for an elaboration, in Section 7.2, of a set of conditions sufficient for the experimental realization of quantum computation. These conditions are illustrated in Sections 7.3 through 7.7 by a series of case studies covering five different model physical systems: the simple harmonic oscillator, photons and nonlinear optical media, cavity quantum electrodynamics devices, ion traps, and nuclear magnetic resonance with molecules. For each system, we briefly describe the physical apparatus, the Hamiltonian that governs its dynamics, the means for controlling the system to perform quantum computation, and the system's principal drawbacks.
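As a toy illustration of two of these ingredients, the governing Hamiltonian and the control it affords, the sketch below evolves a single qubit under a resonant drive in the rotating frame, H = (Ω/2)σₓ with ħ = 1; driving for t = π/Ω implements a NOT gate up to a global phase. This is a generic textbook calculation, not code from the chapter, and the parameter values are arbitrary:

```python
import numpy as np
from scipy.linalg import expm

# A resonantly driven two-level system in the rotating frame:
# H = (Omega / 2) * sigma_x, with hbar = 1. The drive strength
# Omega (the Rabi frequency) is the experimenter's control knob.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
omega = 2 * np.pi * 1.0                 # Rabi frequency (arbitrary units)
H = 0.5 * omega * sigma_x

psi0 = np.array([1, 0], dtype=complex)  # qubit initialised in |0>

# Population of |1> follows sin^2(Omega * t / 2): a Rabi oscillation.
for t in np.linspace(0.0, np.pi / omega, 5):
    psi = expm(-1j * H * t) @ psi0      # unitary evolution U = exp(-iHt)
    print(f"t = {t:.3f}: P(|1>) = {abs(psi[1]) ** 2:.3f}")
# At t = pi / omega the qubit has flipped to |1>: a NOT (X) gate.
```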
Carbon nanotubes are the fabric of nanotechnology. Investigation into their properties has become one of the most active fields of modern research. This book presents the key computational modelling and numerical simulation tools used to investigate carbon nanotube characteristics. In particular, methods applied to geometry and bonding, and to mechanical, thermal, transport and storage properties are addressed. The first half describes classic statistical and quantum mechanical simulation techniques (including molecular dynamics, Monte Carlo simulations and ab initio molecular dynamics), atomistic theory and continuum-based methods. The second half discusses the application of these numerical simulation tools to emerging fields such as nanofluidics and nanomechanics. With selected experimental results to help clarify theoretical concepts, this is a self-contained book that will be of interest to researchers in a broad range of disciplines, including nanotechnology, engineering, materials science and physics.
Piccinini’s usability constraint states that physical processes must have “physically constructible manifestation[s]” to be included in epistemically useful models of physical computation. But to determine what physical processes can be implemented in physical systems (as parts of computations), we must already know what physical processes can be implemented in physical systems (as parts of processes for constructing computing systems). We need additional assumptions about what qualifies as a building process. Piccinini implicitly assumes a classical computational understanding of executable processes, but this is an assumption imposed on physical theories and may artificially limit our picture of epistemically useful physical computation.