Experiments and experimentation have central roles to play in the education of scientists. For many destined to participate in scientific enquiry through laboratory or field-based studies, the ability to apply ‘experimental methods’ is a key skill that they rely upon throughout their professional careers. For others whose interests and circumstances take them into other fields upon completion of their studies, the experience of ‘wrestling with nature’ so often encountered in experimental work offers enduring rewards: skills developed in the process of planning, executing and deliberating upon experiments are of lasting value in a world in which some talents become rapidly redundant.
Laboratory and field-based experimentation are core activities in the physical sciences. Good experimentation is a blend of insight, imagination, skill, perseverance and, occasionally, luck. Data analysis is vital to experimentation, and rightly so, as careful analysis of data can tease out features and relationships not apparent at a first glance at the ‘numbers’ emerging from an experiment. This, in turn, may suggest a new direction for the experiment that might offer further insight into the phenomenon or effect being studied. Equally importantly, after the details of an experiment are long forgotten, facility gained in applying data analysis methods remains a highly valued and transferable skill.
‘The principle of science, the definition almost, is the following: The test of all knowledge is experiment. Experiment is the sole judge of scientific “truth”’.
So wrote Richard Feynman, famous scientist and Nobel Prize winner, noted for his contributions to physics.
It is possible that when Feynman wrote these words he had in mind elaborate experiments devised to reveal the ‘secrets of the Universe’, such as those involving the creation of new particles during high energy collisions in particle accelerators or others to determine the structure of DNA. Experimentation encompasses an enormous range of more humble (but extremely important) activities such as testing the temperature of a baby's bathwater by immersing an elbow into the water, or pressing on a bicycle tyre to establish whether it needs inflating. The absence of numerical measures of quantities distinguishes these experiments from those normally performed by scientists.
Thorough analysis of experimental data frequently requires extensive numerical manipulation. Many tools exist to assist in the analysis of data, ranging from the pocket calculator to specialist computer-based statistics packages. Despite limited editing and display options, the pocket calculator remains a well-used tool for basic analysis due to its low cost, convenience and reliability. Intensive data analysis may require a statistics package such as Systat or Origin. As well as standard functions, such as those used to determine means and standard deviations, these packages possess advanced features routinely required by researchers and professionals. Between the extremes of the pocket calculator and the specialised statistics package is the spreadsheet. While originally designed for business users, spreadsheet packages are popular with other users due to their accessibility, versatility and ease of use. The inclusion of advanced features in spreadsheets means that, in many situations, a spreadsheet is a viable alternative to a statistics package. The most widely used spreadsheet for personal computers (PCs) is Excel by Microsoft. Excel appears within this book in the role of a convenient data analysis tool, with short sections within most chapters devoted to describing specific features. Its clear layout, extensive help facilities, range of in-built statistical functions and availability for both PCs and Mac computers make Excel a popular choice for data analysis. This chapter introduces Excel and describes some of its basic features using examples drawn from the physical sciences. Some familiarity with using a PC is assumed, to the extent that terms such as ‘mouse’, ‘pointer’, ‘Enter key’ and ‘save’ are understood in the context of using a program such as Excel.
What is a spreadsheet?
A computer-based spreadsheet is a sophisticated and versatile analysis and display tool for numeric and text-based data. As well as the usual arithmetic and mathematical functions found on pocket calculators, spreadsheets offer other features such as data sorting and the display of data in the form of an x–y graph. Some spreadsheet packages include more advanced analysis options such as linear regression and hypothesis testing. An attractive feature of many spreadsheets is the ability to accept data directly from other computer-based applications, simplifying and speeding up data entry as well as avoiding mistakes caused by faulty transcription.
What can reasonably be inferred from data gathered in an experiment? This simple question lies at the heart of experimentation, as an experiment can be judged by how much insight can be drawn from data. An experiment may have a broad or narrow focus, and may be designed to:
challenge a relationship that has an established theoretical basis;
critically examine a discovery that results from ‘chance’ observations;
check for drift in an instrument;
compare analysis of materials carried out in two or more laboratories.
Such general goals give way to specific questions that we hope can be answered by careful analysis of data gathered in well-designed experiments. Questions that might be asked include:
is there a linear relationship between quantities measured in an experiment;
could the apparent correlation between variables have occurred ‘by chance’;
does a new manufacturing process produce lenses with focal lengths that are less variable than the old manufacturing process;
is there agreement between two methods used to determine the concentration of iron in a specimen;
has the gain of an instrument changed since it was calibrated?
It is usually not possible to answer these questions with a definite ‘yes’ or definite ‘no’. Though we hope data gathered during an experiment will provide evidence as to which reply to favour, we must be satisfied with answers expressed in terms of probability.
Consider a situation in which a manufacturer supplies an instrument containing an amplifier with a gain specified as 1000. Would it be reasonable to conclude that the instrument is faulty or needs recalibrating if the gain determined by a single measurement is 995? It is possible that random errors inherent in the measurement process, as revealed by making repeat measurements of the gain, would be sufficient to explain the discrepancy between the ‘expected’ value of the gain of 1000 and the ‘experimental’ value of 995. What we would really like to know is whether, after taking into account the scatter in the values of the gain obtained through repeat measurements, the difference between the value we have reason to expect and the values actually obtained through experiment or observation is ‘significant’.
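One way to make the notion of ‘significance’ concrete is to apply a significance test to repeat measurements. The sketch below is a minimal illustration, not drawn from the original text: the repeat gain values are hypothetical, and a one-sample t test (one of several possible approaches) is used to ask whether the mean of the repeated measurements is consistent with the nominal gain of 1000.

```python
# Minimal sketch (not from the original text): a one-sample t test asking
# whether repeat measurements of the amplifier gain are consistent with the
# nominal value of 1000. The measured values below are hypothetical.
from scipy import stats

nominal_gain = 1000.0
repeat_gains = [995, 998, 993, 1001, 996, 994, 997]   # hypothetical repeats

result = stats.ttest_1samp(repeat_gains, nominal_gain)

mean_gain = sum(repeat_gains) / len(repeat_gains)
print(f"mean of repeats = {mean_gain:.1f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")

# A small p value would suggest the discrepancy from 1000 is unlikely to be
# explained by scatter alone; a large p value leaves the question open.
```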
Chemists, physicists and other physical scientists are proud of the quantitative nature of their disciplines. By subjecting nature to ever closer examination, new relationships between quantities are discovered, and established relationships are pushed to the limits of their applicability. When ‘numbers’ emerge from an experiment, they can be subjected to quantitative analysis, compared to the ‘numbers’ obtained by other experimenters and be expressed in a clear and concise manner using tables and graphs. If an unfamiliar experiment is planned, an experimenter will often carry out a pilot experiment. The purpose of such an experiment might be to assess the effectiveness of the experimental methods being used, or to offer a preliminary evaluation of a theoretical prediction. It is also possible that the experimenter is acting on instinct or intuition. If the results of the pilot experiment are promising, the experimenter typically moves to the next stage in which a more thorough investigation is undertaken and where there is increased emphasis on the quality of the data gathered. The analysis of these data often provides crucial and defensible evidence sought by the experimenter to support (or refute) a particular theory or idea.
The goal of an experiment might be to determine an accurate value for a particular quantity such as the electrical charge carried by an electron. Experimenters are aware that influences exist, some controllable and others less so, that conspire to adversely affect the values they obtain. Despite an experimenter’s best efforts, some uncertainty in an experimentally determined value remains. In the case of the charge on the electron, its value is recognised to be of such importance that considerable effort has gone into establishing an accurate value for it. Currently (2011) the best value for the charge on the electron is (1.602176487 ± 0.000000040) × 10−19 C. A very important part of the expression for the charge is the number following the ± sign. This is the uncertainty in the value for the electronic charge and, though the uncertainty is rather small compared to the size of the charge, it is not zero. In general, every value obtained through measurement has some uncertainty and though the uncertainty may be reduced by thorough planning, prudent choice of measuring instrument and careful execution of the experiment, it cannot be eliminated entirely.
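To put the quoted uncertainty in context, the fractional (relative) uncertainty can be computed directly from the two numbers above; the short sketch below, added here for illustration, simply carries out that arithmetic.

```python
# Sketch (not from the original text): relative uncertainty in the 2011
# value of the electronic charge quoted above.
charge = 1.602176487e-19        # coulombs
uncertainty = 0.000000040e-19   # coulombs

relative_uncertainty = uncertainty / charge
print(f"relative uncertainty = {relative_uncertainty:.1e}")   # about 2.5e-8
```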
The process of analysing experimental data frequently involves many steps, which begin with the tabulation and graphing of data. Numerical analysis of data may require simple but repetitive calculations such as the summing and averaging of values. Spreadsheet programs are designed to perform these tasks, and in previous chapters we considered how Excel’s built-in functions such as AVERAGE() and CORREL() can assist data analysis. While the functions in Excel are extremely useful (a short illustration of two of them follows the list below), there is still some effort required to:
enter data into the functions;
format numbers returned by the functions so that they are easy to assimilate;
plot suitable graphs;
combine functions to perform more advanced analysis.
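As a rough analogue of the functions named above, and not part of the original text, the sketch below uses Python with made-up x–y data to carry out the equivalent of AVERAGE() and CORREL() and to plot a basic x–y graph; the data and variable names are illustrative only.

```python
# Sketch (not from the original text): rough Python analogues of Excel's
# AVERAGE() and CORREL() functions, applied to made-up x-y data.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # hypothetical data
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

mean_y = np.mean(y)                  # equivalent of AVERAGE() on the y values
r = np.corrcoef(x, y)[0, 1]          # equivalent of CORREL() on x and y

print(f"mean of y = {mean_y:.2f}, correlation coefficient r = {r:.4f}")

plt.plot(x, y, "o")                  # a basic x-y graph of the data
plt.xlabel("x")
plt.ylabel("y")
plt.show()
```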
Excel contains numerous useful data analysis tools designed around the built-in functions which will, as examples, fit an equation to data using least squares or compare the means of many samples using analysis of variance. Once installed, these tools can be found in the Analysis group on the Data ribbon. The dialog box that appears when a tool is selected allows for the easy input of data. Once the tool is selected and applied to data, results are displayed in a Worksheet with explanatory labels and headings. As an added benefit, some tools offer automatic plotting of data as graphs or charts.
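As a rough indication of what two of these tasks involve, the sketch below (in Python, with invented data; it is an analogue, not a description of the ToolPak’s own dialog boxes or output) fits a straight line by least squares and compares the means of three samples using a one-way analysis of variance.

```python
# Sketch (not from the original text): Python analogues of two Analysis
# ToolPak tasks, with invented data.
import numpy as np
from scipy import stats

# Least squares fit of a straight line y = intercept + slope * x
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.2, 3.1, 5.0, 6.8, 9.1])
fit = stats.linregress(x, y)
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.3f}")

# One-way analysis of variance comparing the means of three samples
sample_a = [9.9, 10.1, 10.0, 9.8]
sample_b = [10.3, 10.4, 10.2, 10.5]
sample_c = [9.7, 9.9, 9.8, 10.0]
f_stat, p_value = stats.f_oneway(sample_a, sample_b, sample_c)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```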
In this chapter we consider several of Excel’s advanced data analysis tools which form part of the Analysis ToolPak add-in, paying particular attention to those tools which relate directly to principles and methods described in this book. The Histogram and Descriptive Statistics tools are described in sections 2.8.1 and 2.8.2 respectively and will not be discussed further in this chapter. Tools which relate less closely to the material in this book are described briefly with references given to where more information may be found.
We investigate the existence and uniqueness of the p-means ep and the median e1 of a probability measure μ on a Finsler manifold, in relation to the convexity of the support of μ. We prove that ep is the limit point of a continuous-time gradient flow. Under an additional condition, which is always satisfied for p ≥ 2, a discretization of this path converges to ep. This provides an algorithm for determining the Finsler center points.
Counting lies at the heart of much mathematics, and Niven's subtitle is How to count without counting. This is the whole art of combinatorics: permutations, combinations, binomial coefficients, the inclusion-exclusion principle, combinatorial probability, partitions of numbers, generating polynomials, the pigeonhole principle, and much more.
Our aim in this paper is to identify the limit behavior of the solutions of random degenerate equations of the form −div Aε(x′, ∇Uε) + ρεω(x′)Uε = F with mixed boundary conditions on Ωε as ε → 0, where Ωε is an N-dimensional thin domain with a small thickness h(ε), ρεω(x′) = ρω(x′/ε), where ρω is the realization of a random function ρ(ω), and Aε(x′, ξ) = a(Tx′/ε ω, ξ), the map a(ω, ξ) being measurable in ω and satisfying degenerate structure conditions with weight ρ in ξ. As usual in dimension reduction problems, we focus on the rescaled equations and prove that, under the condition h(ε)/ε → 0, the sequence of their solutions converges to a limit u0, where u0 is the solution of an (N−1)-dimensional limit problem with homogenized and auxiliary equations.
In this paper we determine the suborbits of Janko’s largest simple group in its conjugation action on each of its two conjugacy classes of involutions. We also provide matrix representatives of these suborbits in an accompanying computer file. These representatives are used to investigate a commuting involution graph for J4.
We develop the theory of Abelian functions associated with algebraic curves. The growth in computer power and the advancement of efficient symbolic computation techniques have allowed for recent progress in this area. In this paper we focus on the genus three cases, comparing the two canonical classes of hyperelliptic and trigonal curves. We present new addition formulae, derive bases for the spaces of Abelian functions and discuss the differential equations such functions satisfy.
Our focus in this work is to investigate an efficient state estimation scheme for a singularly perturbed stochastic hybrid system. As stochastic hybrid systems have been used recently in diverse areas, the importance of correct and efficient estimation of such systems cannot be overemphasized. The framework of nonlinear filtering provides a suitable ground for on-line estimation. With the help of intrinsic multiscale properties of a system, we obtain an efficient estimation scheme for a stochastic hybrid system.
We describe an algorithm to prove the Birch and Swinnerton-Dyer conjectural formula for any given elliptic curve defined over the rational numbers of analytic rank zero or one. With computer assistance we rigorously prove the formula for 16714 of the 16725 such curves of conductor less than 5000.
We calculate all decomposition matrices of the cyclotomic Hecke algebras of the rank two exceptional complex reflection groups in characteristic zero. We prove the existence of canonical basic sets in the sense of Geck–Rouquier and show that all modular irreducible representations can be lifted to the ordinary ones.
Accurately predicting the behaviour of multiphase flows is a problem of immense industrial and scientific interest. Modern computers can now study the dynamics in great detail, and these simulations yield unprecedented insight. This book provides a comprehensive introduction to direct numerical simulations of multiphase flows for researchers and graduate students. After a brief overview of the context and history, the authors review the governing equations. A particular emphasis is placed on the 'one-fluid' formulation, where a single set of equations is used to describe the entire flow field and interface terms are included as singularity distributions. Several applications are discussed, showing how direct numerical simulations have helped researchers advance both our understanding and our ability to make predictions. The final chapter gives an overview of recent studies of flows with relatively complex physics, such as mass transfer and chemical reactions, solidification and boiling, and includes extensive references to current work.