If one wishes to express in precise and general terms any statement in physics in which positions, directions and motions in space are involved, the most appropriate language to use is the language of vectors. In the mechanics of particles and rigid bodies vectors are used extensively, and it is assumed in this monograph that the reader has some prior knowledge of vector algebra, which is the part of vector theory required in mechanics. Nevertheless, chapter 1 provides a summary of vector algebra. There the notation to be used is made explicit and a brief survey of the whole field is given, with stress laid on a number of particular results that become especially important later. The reader is also given the opportunity to test his understanding of vector algebra and his facility in applying it to detailed problems: a fairly extensive set of examples (exercises A) follows the chapter, with some comments and answers provided at the end of the book.
A considerably widened theory of vectors becomes necessary when one turns to such parts of physics as fluid dynamics and electromagnetic theory, where one deals not just with things at certain particular points in space but with physical quantities distributed continuously in space. Quantities that are continuous functions of the coordinates of a general point in space are called fields, and some of the fields of greatest interest in physics are vector fields.
In the discussion so far we have talked of fields being irrotational or solenoidal without paying much attention to the regions in space where these fields exist and have such properties. All actual fields in physics exist only in confined regions of space and very often one must think of fields as terminating on outer boundary surfaces. Or conditions may change discontinuously within regions of interest – an electric field may exist partly in air, partly in glass. In this way, boundary surfaces enter. Also it may happen that the curl or divergence of a vector field – or indeed the gradient of a scalar field – is only different from zero in some confined region. This may be a narrow boundary layer which for practical purposes can be treated as a surface or it may lie effectively on a curve or at a point. Very often the general nature of the field is decisively determined by discontinuities on surfaces or by singularities on lines and points. The study of such behaviour will be the theme of this chapter.
We begin by discussing discontinuities of vector fields across surfaces. It is rather natural to find that we need to distinguish between the behaviour of a field component normal to a surface of discontinuity and one tangential to it. We shall see shortly that the behaviour of the former is related to div f and that of the latter to curl f. We shall put these statements into quantitative form.
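The quantitative form referred to may be previewed as the standard jump conditions. Here n denotes the unit normal to the surface directed from side 1 to side 2, f₁ and f₂ are the limiting values of the field on the two sides, and σ and k stand for surface densities of divergence and curl respectively; these symbols are chosen here for illustration and are not necessarily the text's own notation:

\[
\mathbf{n}\cdot(\mathbf{f}_2 - \mathbf{f}_1) = \sigma, \qquad
\mathbf{n}\times(\mathbf{f}_2 - \mathbf{f}_1) = \mathbf{k}.
\]

In particular, if div f vanishes in a thin layer enclosing the surface the normal component of f is continuous across it, and if curl f vanishes there the tangential component is continuous.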
The preceding chapter was kept brief because it deals with matters of detailed technique rather than with general ideas. However, these techniques are quite essential to the use of vector analysis at the point where it merges into potential theory, and there is no limit to the illustrations that might have been chosen. The following examples serve on the one hand to show that the general idea of using curvilinear systems is not in practice only realisable in the two types of polar coordinates defined in the text and, on the other, to demonstrate the power of precisely these polar coordinates. In this latter function what is presented is a selection of applications to potential theory – a selection which reflects the author's individual preference. It is far from a substitute for a systematic text on potential theory.
The reader should note that at this point there has occurred an inevitable ‘clash of symbols’. In vector analysis ϕ normally stands for a scalar potential. In polar coordinates it denotes an azimuth angle. In this section only, the letter ψ rather than ϕ has been used to denote all scalar potential functions that occur.
Throughout this book so far we have talked about fields as functions of position only. In most physical applications of this theory the fields are also functions of time. To physicists fields are dynamical things that both experience and effect interactions with other systems, so the study of their development in time is of great importance.
As the time dependence of any physical quantity is usually governed by some basic dynamical law, and the laws are different in different branches of physics, it might be thought that the ideas of vector analysis would be too general and all-embracing to have anything to say about the time dependence of fields. To some extent this is true and we cannot here go far into dynamical problems. But there are two aspects of time dependence that can be very profitably discussed as aspects of vector analysis. It is fitting to conclude any study with a chapter on these topics, particularly because they provide an excellent illustration of how our general ideas can actually be used in a real physical context.
In this chapter we shall talk of scalar and vector fields ϕ(t, r) and f(t, r) and, in addition to the differentiation symbolised by ∇, we shall frequently form derivatives of the fields with respect to time: ∂ϕ/∂t and ∂f/∂t. We assume that differentiation with respect to time does not alter the vector character of a field: like ϕ itself, ∂ϕ/∂t is a scalar and, like f, ∂f/∂t is a vector.
The material of chapters 7–9 is so interrelated that it would not have been practical to provide separate exercises for each, or even to arrange the exercises in an order that relates strictly to the topics of the individual chapters. The student should, however, find little difficulty in identifying questions that require the understanding of earlier material only. An attempt at logical progression has been made, except that the first three questions clearly relate to chapters 7, 8 and 9 separately. Sections 2 and 3 of chapter 10 do not call for illustration by exercises, but the last few questions below relate in part to section 1 of chapter 10.
At this stage in the book the idea behind the quasi-square and quasi-cube technique can be taken for granted. The technique is still relevant to some of the exercises, but many are now independent of it – and particular fields that are of importance in physics are now introduced when possible. However, some of the exercises still deal with quite artificial constructs. As before, these serve to illustrate general ideas by non-trivial particular examples. In some cases the work is saved from being intolerably cumbersome only by the symmetries built in. Artificial though this may be, training in the exploitation of symmetries is of value in itself.
Many physical quantities may be suitably characterised by scalar functions of position in space. Given a system of cartesian axes, a scalar field ϕ can be represented as ϕ = ϕ(r), where r is the position vector defined in chapter 2. There is no real difference between this way of referring to a scalar field and the alternative statement ϕ = ϕ(x, y, z), except that in the latter form one is definitely committed to a particular set of cartesian coordinates for r, while the form ϕ(r) can be taken to refer to any coordinate axes – or indeed to any other equivalent way of defining the position vector r of a point.
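A simple illustration (the particular field chosen here is ours, not the text's) makes the distinction concrete. For a fixed vector a, consider

\[
\phi(\mathbf{r}) = \mathbf{a}\cdot\mathbf{r}, \qquad
\phi(x, y, z) = a_1 x + a_2 y + a_3 z.
\]

The first form defines the field without reference to any particular axes; the second is its representation in one chosen set of cartesian coordinates, and the components a₁, a₂, a₃ alter when the axes are rotated, though the field itself does not.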
In dealing with functions of a single variable, x say, the universal standby that helps one visualise the function is the simple graph, fig. 1. This notion is familiar enough to need no explanation and is so closely associated with the function itself that it is sometimes difficult to remember that the graph is not the function. But when we come to functions of more than one variable, things become rather different. Let us go one step at a time and first consider a function of two variables f(x, y). The values over which x and y vary cover all or part of the xy-plane – a plane where before we had a line – and the function f(x, y) may then be thought of as plotted in a third dimension which we may call z; z = f(x, y).
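A concrete example of such a surface (of our choosing, not the text's) is

\[
z = f(x, y) = x^2 + y^2,
\]

whose graph is a paraboloid of revolution about the z-axis; each constant value of f corresponds to a circle x² + y² = const in the xy-plane, foreshadowing the idea of level curves of a function of two variables.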