Braunschweig, Fr. Vieweg & Son, 1922. VIII, 224 pages and 10 illustrations. 8°. Price, M. 7,50 stapled and M. 9,25 bound.
A comprehensive work that sets out the philosophical problems of the theory of relativity in detail would be a true enrichment of the literature. Unfortunately, Müller's volume contains mistakes. It distinguishes itself by committing these errors in a very dull and matter-of-fact way, and not in the usual emotion-laden, warlike fashion of most of Einstein's opponents – yet it is built on an entirely flawed understanding of the special theory of relativity that results in faulty conclusions. A necessary prerequisite for any philosophical critique of relativity is an analysis of the theory's factual basis and conceptual foundation, and Müller's efforts are frustrated by his inability to offer such an analysis. His first mistake concerns the concept of simultaneity: he claims that simultaneity can be known and does not need to be defined. He fails to grasp that there are two types of definitions: the definition of concepts within a conceptual system, and “coordinative definitions”, which specify how given concepts acquire empirical reality. Within the conceptual system, a unit length is posited by definition; but in order actually to carry out measurements, a coordinative definition must fix that “this rod here” is one unit long.
In a recently published article, I have reported on an axiomatization of the relativistic space-time theory. In light of this, we may now test the possibility of absolute time by investigating which axioms are and are not compatible with it.
One possibility for defining the synchronization of clocks, so that the same simultaneity relation holds for all systems, arises from the transportation of clocks. Two clocks that are brought into synchronization at the same place are to be called synchronized as well when one or the other is transported to a different place. This is a definition of simultaneity and is neither true nor false, but an arbitrary rule. For that reason it can be used in any case; but in order to be univocal, it must satisfy the following axiom:
Axiom A
Two clocks that are synchronized at one place are always synchronized when compared at the same place regardless of the paths of transport.
By “clock” we understand here a closed periodic system. Whether axiom A is satisfied is a mere matter of fact and can be decided independently of the definition of simultaneity for distant places and independently of the theory of relativity. The theory maintains – though its confirmation of this is extremely indirect – that axiom A is false, and therefore it rejects absolute time; but to the present, no means has been found to test the axiom directly.
1. Mathematics and reality. 2. Time order. 3. Simultaneity. 4. Uniformity.
Recently, Hj. Mellin, in a lengthy examination, offered a critique of my Axiomatization of the Theory of Relativity. A discussion of the objections that Mellin raises to my axiomatization, and thereby to the theory of relativity, seems to me to serve the general interest, both because of their fundamental nature and because of his clear formulation of views that are most often operative only on a subconscious level; I would therefore like to answer them here.
The most significant difference in our positions lies in our understandings of the relationship between the mathematical discipline of geometry and reality. Here, I adopt the perspective (which is often incorrectly termed conventionalism) that the geometrical axioms as mathematical propositions are not at all descriptive of reality; this only occurs when physical things are shown to be coordinated to the elements of geometry (coordinative definitions). If we take very small bits of mass to be points, light rays to be straight lines, and the length of a segment to be determined by the repeated placement of a rigid rod, then the statement that straight lines are the shortest becomes an (empirically proven) statement about real things. Without such coordinative definitions, these propositions say nothing about reality. Mellin's objections to this view rest on his stressing the so-called intuitive necessity of the geometric axioms.
In number 5114 of volume 214 of this journal, Mr. Anderson has raised several objections to my reply to Mr. Wulf. I have only now become aware of them, so my response will seem belated. All the same, I do not want to forgo this response because it is important to clarify the basis of this incessant misunderstanding of the theory of relativity.
Anderson admits that the theory of relativity provides a contradiction-free explanation for the “relativistic” perspective of the carousel at rest and the stars rotating with large angular velocity. He believes, however, that the theory becomes flawed when the direction of motion reverses; he contends that it is a coincidence in the relativistic perspective that all stars reverse their direction of motion at the same time. Let me begin by saying that obviously the changes in the directions of motion are contained in the differential equations of the gμν-fields. Further, it is not a matter of chance that the motions of the stars define the celestial axis as a privileged straight line, but rather it is well grounded in the distribution of stellar motions; the same conditions that, from the non-relativistic viewpoint, place the Earth at rest, also determine the distribution of the stars from the relativistic perspective after an appropriate transformation. Hence, this can in no way be called a coincidence.
As the dispute over the theory of relativity has begun to die down over the last several years and the new theory has been ever more successfully worked through, the most recent attacks upon it have come from the flank from which they were least expected. They are not attacks on the philosophical motivation, and thereby not the well-known reproaches that the theory is “inconceivable” or “incompatible with common sense”; rather, we are now confronted with a physical experiment that stands in explicit contradiction to an assertion of the theory of relativity. This experiment was conducted by the American D. C. Miller at Mount Wilson and was published in the Proceedings of the National Academy, Washington (11, 382, 1925).
It concerns the so-called Michelson experiment, one of the most foundational pillars upon which the theory of relativity is constructed. This experiment traces back to the ideas of Maxwell, but it was Michelson, a scholar famous for his precision in optical measurement, who first carried it out. Michelson had already begun his investigation in the seventies in Berlin as an assistant to Helmholtz, and carried it out in the eighties in America. We can describe the experiment in schematic form in the following way (Fig. 13.1): two rigid arms are placed at right angles with mirrors S1 and S2 fixed perpendicularly to the arms at their end points.
Dynamics, or the theory of motion, is the science of the temporal passage of spatial events. At least this is how this study has come to be defined. But is it really true that we can arrive at a clear understanding of dynamics using this definition?
The naïve understanding is satisfied with this explanation. Indeed, what is so clear and simple as space and time? Space is what we see with our eyes, and time is what we feel as everything passes by, one thing always after another. But is this true? Who has ever seen space? I mean that one can only see objects in space, standing in the particular relations we call “in front of,” “behind,” “to the right of” and “to the left of.” We coordinate every object with a place in space; but to speak of space itself, we must mentally remove all of the objects. That is a very broad abstraction. How do I know that this space exists when all bodies are removed? Not through experience, since all observations refer to those real things, and their respective distances can only be defined with respect to the things around them. Space is therefore a peculiar construction in which we embed things: it is attached to them but can never be observed; it has no effect on them, unlike forces, or heat that will make them glow; and yet it dictates far-reaching laws.
Although there is still resistance to the theory of relativity, it should be pointed out that this resistance is founded upon conceptual objections. It is beyond dispute that the theory is physically useful, that its assertions are well verified by observable phenomena. What opponents of the theory find problematic are the ideas upon which the theory is founded. On the other hand, it is precisely these ideas that the defenders of the theory hold to be its greatest achievement and in which they claim to find the true significance of Einstein's work. It is therefore of interest to study the formation of these ideas, their content and their significance.
We begin by rejecting two ineffective objections. The theory has been criticized for contradicting ordinary common sense. It is necessary to concede that this is true, but we refuse to see it as a criticism, because a theory like this, which provides an analysis of the most profound abstract ideas, will necessarily contradict certain naïve intuitions of everyday life. We do not discount the value attached to this simplicity of understanding, but a mentality adapted to the practical needs of existence (and is ordinary common sense anything else?) cannot be expected to possess the critical faculty required for a theory of knowledge. “The chisels and hammers are fine for working a piece of wood, but to engrave you need an engraver's needle”: these words of Kant should not be far from view whenever one wants to contradict the theory of relativity with elementary objections.
An investigation of the extent to which astronomical measurements of the speed of light from the eclipsing of Jupiter's moons confirm the principle of the constancy of the speed of light. The result is a reduction of the question to the problem of absolute transport time.
Having placed the empirical foundation of the principles of the constancy of the speed of light in axiomatic form and recognizing that several of these axioms have not yet received conclusive experimental support, it will now be of interest to consider an experiment to confirm the light principle which I had not mentioned, but to which Born has referred.
The eclipses of the moons of Jupiter may be used for an astronomical measurement of the speed of light. It is well known that the delays in the eclipses of a moon will progressively increase over the course of a year; the resulting overall delay corresponds to the time that the light has taken to traverse the length of the axis of the Earth's orbit. Hence, we are measuring the speed of light in one direction only. Now Maxwell had already pointed out that the speed of light must be different depending upon whether it travels with or against the direction of the orbit of the Earth, because the inertial system S, in which the sun and the elliptical orbits of the planets are at rest, will itself have a velocity V (which Born calls v) with respect to a preferred inertial system J, which according to the older theory is that in which the aether is at rest; the speed of light must therefore really be c + V in one direction and c − V in the other.
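The scale of the effect can be sketched with rough figures (the values below are illustrative, not taken from the text): the cumulative eclipse delay corresponds to light crossing the diameter of the Earth's orbit, and a drift velocity V of the solar system would change the one-way speed at first order in V/c.

```python
# Back-of-the-envelope sketch with approximate, illustrative values.
c = 2.998e8                      # speed of light, m/s
orbit_diameter = 2 * 1.496e11    # ~2 AU in metres

# One-way light travel time across the Earth's orbit: the cumulative
# eclipse delay observed over half a year.
delay = orbit_diameter / c       # seconds
print(f"delay = {delay:.0f} s = {delay/60:.1f} min")

# Maxwell's point: a drift velocity V relative to a preferred frame J would
# make the one-way speed c + V or c - V, a first-order effect of size V/c.
V = 3.0e4                        # hypothetical drift velocity, m/s
print(f"first-order fraction V/c = {V/c:.1e}")
```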
If it is reasonable to assume that a population consists of values that have a Gaussian distribution, then what will be the distribution of a property (a ‘statistic’) of a sample drawn from this Gaussian ‘parent’? The property might be the mean, variance or standard deviation of the sample. Each of these properties has a sampling distribution, which can be described as follows.
We imagine a very large or infinite population that has a Gaussian distribution with mean μ and standard deviation σ. A sample consisting of n values is randomly drawn from this population. A property of the sample is calculated, in order to estimate the corresponding population parameter. We then draw another sample, also of size n, and calculate the same property for this second sample. The process is repeated many times. Next the distribution of that property is examined; the distribution becomes manifest as a result of taking a large number of repeated samples (all of size n). The distribution is the sampling distribution of the property in question. It is understood that, in any particular experimental situation, we do not actually need to draw a large number of samples; this process is a conceptual one that enables us to infer, from one actual sample, the variability (depicted by the shape of the sampling distribution) of our estimate of the population parameter. In section 9.1 we review the material already discussed in section 8.6.2.
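The repeated-sampling process described above can be sketched in a short simulation; the parent mean, standard deviation and sample size below are invented for illustration.

```python
# Simulating the sampling distribution of the mean, assuming a Gaussian
# parent with mu = 50 and sigma = 4 (illustrative values).
import random
import statistics

random.seed(1)
mu, sigma, n = 50.0, 4.0, 25
num_samples = 2000

# Draw many samples of size n and record the mean of each one.
sample_means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(num_samples)
]

# The means cluster around mu with spread close to sigma / sqrt(n) = 0.8.
print(round(statistics.fmean(sample_means), 2))
print(round(statistics.stdev(sample_means), 2))
```

The same loop, with `statistics.stdev` or `statistics.variance` in place of `fmean` inside the list comprehension, yields the sampling distribution of the standard deviation or the variance.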
The uncertainty that accompanies the best estimate of a measurand is usually based on fewer than 20 degrees of freedom, and sometimes fewer than 10. The reason is as follows.
For Type A evaluations of uncertainty, the number of degrees of freedom, ν, is related to the sample size, n. Thus, when calculating the mean of a sample, ν = n − 1. Where measurements are made ‘manually’ (not under computer control), n and therefore ν are likely to be small. Where measurements are computer-controlled and the environment is sufficiently stable, it is easy to amass samples consisting of hundreds or even thousands of values from the same population. We might therefore think that the number of degrees of freedom associated with the uncertainty in the measurand is also very high. However, this is unlikely to be so, since there will probably exist systematic errors that can be corrected for but that will nevertheless leave a Type B uncertainty. Such an uncertainty is generally associated with fewer degrees of freedom. Admittedly, the estimation of a systematic error may also be based on a large number of repeated measurements. The calibration of the 3½-digit DMM by means of simultaneous measurements with an 8½-digit DMM in section 6.1.2 is a case in point. A large number of such measurements could in principle allow us to determine an uncertainty in the systematic error of the 3½-digit DMM that is associated with a large number of degrees of freedom. However, the readings of the 8½-digit DMM themselves have an uncertainty, obtained from its calibration report, that is likely to be based on fewer degrees of freedom.
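Once a few-degrees-of-freedom Type B component enters, the effective number of degrees of freedom of the combined uncertainty can be estimated with the Welch–Satterthwaite formula used in the GUM. The sketch below uses invented uncertainty components: a Type A component from 1000 readings, and a Type B calibration component with 8 degrees of freedom.

```python
# Effective degrees of freedom via the Welch-Satterthwaite formula:
#   nu_eff = u_c**4 / sum(u_i**4 / nu_i)
def welch_satterthwaite(components):
    """components: list of (u_i, nu_i) standard-uncertainty / dof pairs."""
    u_c2 = sum(u**2 for u, _ in components)          # combined variance
    return u_c2**2 / sum(u**4 / nu for u, nu in components)

components = [
    (0.10, 999),   # Type A: sample of 1000 readings, nu = n - 1 (invented)
    (0.30, 8),     # Type B: calibration uncertainty with nu = 8 (invented)
]
nu_eff = welch_satterthwaite(components)
print(round(nu_eff, 1))   # dominated by the few-dof Type B component
```

Even though the Type A component carries 999 degrees of freedom, the effective number for the combination stays close to the 8 of the Type B term, illustrating why reported uncertainties rarely claim many degrees of freedom.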
In writing this book, we address several groups of readers who require an understanding of measurement, and of uncertainty in measurement, in science and technology.
Undergraduates in science, for example, should have texts that set out the concepts and terminology of measurement in a clear and consistent manner. At present, students often encounter texts that are mutually inconsistent in several aspects. For example, some texts use the terms error and uncertainty interchangeably, whilst others assign them distinctly different meanings. Such inconsistency is liable to confuse students, who are consequently unsure about how to interpret and communicate the results of their measurements.
Until recently, a similar lack of consistency affected those whose primary occupation includes measurement, the evaluation of uncertainty in measurement, instrument and artefact calibration and the maintenance of standards of measurement – that is, professional metrologists. International trade, for example, requires mutual agreement among nations on what uncertainty is, how it is calculated and how it should be communicated; for a global economy to work efficiently, lack of such agreement cannot be tolerated. In the mid-1990s, international bodies, charged with the definition, maintenance and development of technical standards and standards of measurement in a variety of fields, published and disseminated the Guide to the Expression of Uncertainty in Measurement – the ‘GUM’. These bodies included the Bureau International des Poids et Mesures (BIPM) or International Bureau of Weights and Measures, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).
For the newcomer, unfamiliarity with the specialist vocabulary of scientific disciplines like physics and chemistry can act as an obstacle to learning those disciplines. What can be even more challenging is that science employs many words such as force and energy that are used in various ways in everyday language. The science of measurement, in particular, has many terms, such as error, uncertainty and accuracy, that also occur in day-to-day use in contexts far removed from measurement. In this chapter we consider terms used in measurement, including those with an everyday or popular meaning such as error, and we clarify their meaning when used in the context of measurement.
Measurement and related terms
Measurement
Measurement is a process by which a value of a particular quantity such as the temperature of a water bath or the pH of a solution is obtained. In the case of length measurement, this might involve measuring the atomic-scale topography of a surface using an instrument such as an atomic-force microscope (AFM), or measuring the length of a pendulum using a metre rule. Values obtained through measurement form the foundation upon which we are able to
test both new and established scientific theories;
decide whether a component, such as a resistor, is within specification;
compare values obtained by workers around the world of a particular quantity, such as the thickness of the ozone layer of the atmosphere;
quantify the amount of a particular chemical species, such as the amount of steroid in a sample of urine taken from an athlete; and
establish the proficiency of laboratories involved with the testing and calibration of equipment.
Random errors arise from uncontrollable small changes in the measurand, instrumentation or environment. These changes are evident as variations in the values obtained when we carry out repeat measurements. In this chapter we shall consider methods of quantifying these variations: that is, describing them numerically using statistical methods. Some basic statistical concepts will therefore be introduced and discussed.
Sampling from a population
In statistics, the term population refers to the totality of possible, but not necessarily actual, measured values. In some situations a population consists of an infinite number of values. In practice, we can measure only a sample drawn from a population, since time and resources are always limited. We hope and expect that the sample is representative of the population. In almost every case of measurement we sample a population, and the quantities of interest obtained from the sample (sometimes called sample statistics) should reliably represent corresponding parameters in the population (the population parameters). An example of such a quantity of interest, which quantifies the amount of scatter in values, is the standard deviation of the values.
There are cases where a sample may, in fact, be the entire population. Thus the examination results of a class of 30 students can be analysed statistically in order to determine, for example, the mean mark and the range of marks, with no attempt at generalising. The teacher of the class may be interested simply in that particular class.
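As a minimal sketch of estimating population parameters from a sample (the repeat readings below are invented), the sample mean and the sample standard deviation, computed with the n − 1 divisor, serve as estimates of the population mean and σ:

```python
# Sample statistics as estimates of population parameters.
import statistics

readings = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78, 9.81, 9.80]  # invented repeats

mean = statistics.fmean(readings)   # estimate of the population mean
s = statistics.stdev(readings)      # sample standard deviation (n - 1 divisor),
                                    # estimate of the population sigma
print(round(mean, 3), round(s, 3))
```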
Random errors, evaluated using statistical methods, create a Type A uncertainty. A known systematic error in a measured value should be corrected for, and after the correction has been made, the uncertainty in the correction contributes to the uncertainty in that value. The uncertainty in the correction, and hence in the value, may be Type A or Type B, depending on how the uncertainty is evaluated. The finally reported uncertainty of a measurand, called the combined uncertainty, is likely to have both Type A and Type B components, but becomes wholly Type B when subsequent use is made of it.
In this chapter we consider how to evaluate the combined uncertainty of a measurand. The procedure to be described makes no distinction between Type A and Type B uncertainties. It may appear then as if we have gone to unnecessary trouble in assigning types to uncertainties, but this classification is desirable since it emphasises the different methods by which they are evaluated. It is also useful as a reminder that, whereas an ‘error’ can be random or systematic, ‘uncertainty’ is a separate concept whose two types are distinguished from each other by different names, ‘Type A’ and ‘Type B’. However, once uncertainties have been classified, Type A and Type B uncertainties are treated identically thereafter.
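As a minimal illustration of treating the two types identically once they are classified (the component values below are invented), uncorrelated Type A and Type B standard uncertainties with unit sensitivity coefficients combine in quadrature:

```python
# Combining Type A and Type B standard uncertainties: after classification
# both types enter the combination in exactly the same way. For uncorrelated
# components with unit sensitivity coefficients, the combined standard
# uncertainty is the root-sum-square of the components.
import math

u_type_a = 0.012   # e.g. standard deviation of the mean from repeat readings
u_type_b = 0.009   # e.g. from a calibration certificate or resolution limit

u_combined = math.sqrt(u_type_a**2 + u_type_b**2)
print(round(u_combined, 4))
```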
In this chapter we describe how consistency and clarity may be brought to the calculation and expression of uncertainty in measurement.
The goal of any measurement is to establish a numerical value for the measurand. Depending on the accuracy that we wish to claim for the numerical value, the procedure that gives us the value may be relatively simple and direct, involving no more than a tape-measure, for example. In other situations the process may be more complicated, with several intermediate stages requiring the resources of a well-equipped laboratory. Thus, if the measurand is the width of a table, the tape-measure is all that is needed. On the other hand, if the measurand is the accurate mass of an object, we need to know the value of the buoyancy correction (since the weight of the object is less by an amount equal to the weight of the volume of air that it displaces). This in turn requires knowledge of the volume of the object and of the density of air (which is a function of temperature, pressure and composition) at the time of measurement.
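The buoyancy correction mentioned above can be sketched numerically; the air density, object volume and balance reading below are illustrative assumptions, and the sketch ignores the buoyancy acting on the balance's reference masses.

```python
# Simplified buoyancy-correction sketch (illustrative values). The object's
# weight is reduced by the weight of the displaced air, so the balance
# reading understates the mass by approximately rho_air * V.
rho_air = 1.2        # density of air, kg/m^3 (depends on T, p, composition)
volume = 1.25e-4     # object volume, m^3 (e.g. ~1 kg of steel, rho ~ 8000)
reading = 0.99985    # balance reading, kg (invented)

correction = rho_air * volume        # kg
mass = reading + correction
print(f"correction = {correction*1e3:.2f} g, corrected mass = {mass:.5f} kg")
```

Here the correction is a little over a tenth of a gram, which matters at the accuracy levels claimed in calibration work but not when measuring a table with a tape-measure.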
There are three components of a measurement: the measurand itself; the measuring instrument (which can be a stand-alone instrument such as a thermometer, or a complex system that occupies a whole laboratory); and the environment (which includes the human operator). The environment will, in general, affect both the measurand and the measuring instrument.