The years of the third decade of the present century were heady times. Old social orders were being swept away, monarchies becoming republics and the United States of America was emerging as a world power. In physics too, revolutions were taking place, none more potent than that which resulted in the emergence of quantum mechanics as a model for the behaviour of sub-atomic particles.
It was Newton who first suggested that light was particulate but during the nineteenth century this view had fallen into neglect. Indeed a number of experiments, for example the classic Young's slits experiment, demonstrated quite conclusively that light was a wave motion. How else could interference fringes occur? However, the experiments on the photoelectric effect demonstrated just as conclusively that light energy was carried in packets, or quanta, and that a continuous wave description was not applicable.
This ‘wave or particle’ dilemma was not unique to the behaviour of light. J. J. Thomson showed that cathode rays were charged, had a well defined mass and had all the properties expected of a beam of particles. Nevertheless, Davisson and Germer showed that diffraction of electrons could take place, an effect made visually much more dramatic by G. P. Thomson's transmission electron diffraction patterns. Nowadays, electron diffraction is used as a routine analytical tool in all advanced metallurgical and materials science laboratories (Fig. 2.1). Clearly new axioms were required.
Probably the most spectacular phenomenon associated with the breakdown of the independent electron approximation is that of superconductivity. In the superconducting state, the material loses all resistivity and becomes a perfect conductor. The discovery in 1986 of oxide materials which were superconducting at temperatures above that of the boiling point of nitrogen sparked an unprecedented surge of activity in the field which remains an area of high profile and popular interest.
The discovery of superconductivity
In 1908 Kamerlingh Onnes succeeded in liquefying helium and set about the task of studying the properties of metals at these extremely low temperatures. As we saw in Chapter 1, the resistivity of metals such as platinum fell to a small, non-zero value when extrapolated to T = 0. This residual resistivity fell with improvements in purity, and thus Onnes studied mercury, the purest metal available at that time. To the great surprise of Onnes and the whole scientific community, the resistivity fell monotonically until just above the boiling point of helium and then fell abruptly to zero. Fig. 10.1 shows an example of the superconducting phase transition in yttrium barium copper oxide, one of the high temperature superconducting oxides. Onnes was unable to measure precisely the transition width or determine whether the resistivity was genuinely zero. However, in 1963 File and Mills measured the decay of a persistent current set up in a superconducting ring, using nuclear magnetic resonance as the probe.
One of the most remarkable developments of the last decade has been the growth of the semiconductor industry. The exploitation of the properties of semiconducting materials to build large logic arrays on a single piece of crystal has led to a dramatic increase in the processing power and memory capacity of small computers, with a simultaneous fall in price. Development of single chip microprocessors has revolutionized our mode of working in a whole range of fields from the factory floor to the office. In this chapter we will examine the basic physics associated with a number of devices and, as many devices rely on the properties of junctions between n- and p-type semiconductors, a fairly detailed discussion of such junctions is given before individual devices are treated. However, because all devices require connection to metallic wires in order to join components together, we will first examine the metal–semiconductor junction. As it turns out, this proves to be an excellent introduction to the p–n junction, as well as providing a glimpse into some of the not-so-obvious pitfalls associated with device manufacture.
Metal–semiconductor junctions
The Schottky barrier
Let us suppose that a piece of metal is brought into contact with a piece of n-type semiconductor. (Of course, in practice, the metal would be evaporated on the semiconductor as a thin film, or attached with a soldered connection in which an alloy is formed, but such a naive picture is useful to fix our ideas.)
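For such an idealized contact, the textbook Schottky–Mott rule estimates the barrier height as the metal work function minus the semiconductor electron affinity. The sketch below is illustrative only: the work functions and electron affinity are typical reference values, not data taken from this chapter, and real barriers deviate from this ideal rule because of interface states.

```python
# Ideal Schottky-Mott estimate of the barrier height at a metal / n-type
# semiconductor contact: phi_B = Phi_m - chi_s (all energies in eV).
# The numerical values below are typical textbook figures, used purely
# for illustration.

def schottky_barrier_ev(work_function_ev, electron_affinity_ev):
    """Barrier seen by electrons passing from the metal into the semiconductor."""
    return work_function_ev - electron_affinity_ev

CHI_SI = 4.05  # electron affinity of silicon, eV (illustrative value)

for metal, phi_m in [("Al", 4.1), ("Au", 5.1), ("Pt", 5.65)]:
    phi_b = schottky_barrier_ev(phi_m, CHI_SI)
    print(f"{metal} on n-Si: phi_B ~ {phi_b:.2f} eV")
```

A higher work-function metal thus gives a larger ideal barrier; in practice Fermi-level pinning weakens this trend considerably.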
Most textbooks on solid state physics begin the exposition from what might be called a ‘structural position’. Space and point groups are discussed, followed by consideration of the Bravais lattice. The reader is thus led on to elementary ideas about crystallography and the use of diffraction techniques for the solution of crystal structures. Having laid the foundation of how atoms and molecules order to form crystalline structures, electron motion in such periodic structures is treated and band theory developed. The free electron model is seen as an approximation of the more general band theory. In many, rather formal, ways this approach is very satisfying. It would seem obvious that in the first instance one must understand the structure of the material on which one is working before attempting to understand its other physical properties. However, in practice, it proves rather hard to teach solid state physics this way and to retain student enthusiasm in the early stages of the teaching of crystallography where one is dealing with rather difficult geometrical concepts and very little physics. There is a very real danger of making the introduction to the subject so unexciting that the inspiration is lost and students come to regard solid state physics as the ‘dull and dirty’ branch of their physics course. However, elementary quantum mechanics, including the one-dimensional solution of the time independent Schrödinger equation, is included quite early in many undergraduate courses and there is much attraction in illustrating at this early stage the important technological context of the apparently abstruse quantum mechanics.
At the beginning of the previous chapter, we reviewed some experimental data which could not be explained using the free electron model. While the Hall effect unequivocally indicates the breakdown of the free electron model, it does not provide direct evidence for the existence of energy bands. There are, however, a number of techniques which do provide a direct measure of the energy gaps and the density of states.
Optical techniques for band structure measurements
Infra-red absorption in semiconductors
The first technique is both the easiest to understand and the easiest to perform experimentally. If one looks at a piece of polished silicon or germanium, it has the appearance of a metal. However, if thinned to below a few micrometres in thickness a piece of silicon is translucent, having a red appearance. A certain amount of light is transmitted in the red end of the spectrum. If one goes further into the infra-red, we find that these semiconductors are transparent.
Monochromatic radiation can be obtained from the continuous spectrum of infra-red radiation emitted by a hot filament by use of a diffraction grating spectrometer. The intensity transmitted through the semiconductor is measured as a function of wavelength, and, as illustrated in Fig. 5.1, a very abrupt drop in transmission is observed at a frequency characteristic of the semiconductor. For germanium and other common semiconductors this ‘absorption edge’, as it is called, occurs at around 1–2 μm wavelength. This is in the near infra-red.
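The position of the absorption edge follows directly from the band gap through E_g = hc/λ. The short sketch below evaluates this relation; the band-gap values are standard room-temperature figures quoted for illustration, not measurements from this chapter.

```python
# Absorption-edge wavelength from the band gap: lambda_edge = h*c / E_g.
H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # one electron volt, J

def edge_wavelength_um(gap_ev):
    """Wavelength (micrometres) at which interband absorption switches on."""
    return H * C / (gap_ev * EV) * 1e6

# Illustrative room-temperature band gaps (eV):
for name, gap in [("Ge", 0.66), ("Si", 1.12), ("GaAs", 1.42)]:
    print(f"{name}: E_g = {gap} eV -> edge near {edge_wavelength_um(gap):.2f} um")
```

The computed edges fall in the near infra-red, around 1–2 μm, consistent with the behaviour described above.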
We have until now made use of the independent electron approximation, in which it is assumed that we can treat each electron independently of all the others. In this chapter we will examine the consequences of the breakdown of this approximation.
It has been known for centuries, indeed it was known to the ancient Chinese, that magnetite or lodestone was attracted by the earth's field. Two pieces of lodestone attracted or repelled each other depending on which end of the lump of rock was pointed at the other. These chunks of material possess a spontaneous magnetic moment, i.e. they have a magnetization in zero external magnetic field.
We find that the elements iron, nickel and cobalt, bunched together in the middle of the periodic table, can also be induced to have a spontaneous moment at room temperature. The spontaneous magnetization M (defined as the magnetic moment per unit volume) is very large compared with that induced by a magnetic field in materials such as copper or zinc, which are very close in the periodic table. Alloys of iron, cobalt and nickel also have such properties which became known as ferromagnetism.
Basic phenomena
Hysteresis loops
Ferromagnetic materials show a characteristic M–H (or M–B0) loop. The susceptibility, defined by χ = M/B0 where B0 is the external field, is very large, and the magnetization displays hysteresis (Fig. 9.1). In sufficiently high fields the magnetization saturates, this saturation magnetization being a characteristic of the material.
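The loop shape can be caricatured with a simple toy model (not from the text): a tanh magnetization curve whose offset depends on the direction of the field sweep, so the two branches disagree at B0 = 0 and give a remanent magnetization. The parameters Ms, Bc and Bw below are invented for illustration.

```python
import math

# Toy hysteresis model (illustrative only): M(B0) = Ms * tanh((B0 -/+ Bc)/Bw),
# with the sign of the coercive offset Bc set by the sweep direction.
MS = 1.0   # saturation magnetization (arbitrary units)
BC = 0.2   # coercive field
BW = 0.3   # width of the transition region

def magnetization(b0, sweep_up=True):
    """Magnetization on the up- or down-sweep branch of the loop."""
    offset = -BC if sweep_up else BC
    return MS * math.tanh((b0 + offset) / BW)

# At zero external field the two branches differ: this is the remanence.
print(magnetization(0.0, sweep_up=False))  # positive remanence, down sweep
print(magnetization(0.0, sweep_up=True))   # negative remanence, up sweep
# In a sufficiently high field the magnetization saturates towards Ms.
print(magnetization(2.0))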
The “Problem of Time” in time-parametrized theories appears at the quantum level in various ways, depending on the quantization procedure. In (Dirac) canonical quantization, the usual Schrödinger evolution equation is replaced by a constraint on the physical states, HΨ[qα] = 0; such states depend on the generalized coordinates {qα}, but not on any additional time evolution parameter t. In the path-integral formulation, one integrates over all paths, extending over all possible proper time lapses, which connect the initial and final configurations. The resulting Green's function G[qf, q0], like the physical state wavefunctions Ψ[qα], has no dependence on an extra time parameter t. In certain theories, e.g. parametrized non-relativistic quantum mechanics, or the case of a relativistic particle moving in flat Minkowski space, it is possible to identify one of the generalized coordinates as the time variable, and to associate with that variable a conserved and positive-definite probability measure. In other theories, such as a relativistic particle moving in an arbitrary curved background spacetime, or in the case of quantum gravity, it has proven very difficult to identify an appropriate evolution parameter and a unique, positive, conserved probability measure. This is the “Problem of Time”, reviewed in ref. [1].
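For the flat-space relativistic particle mentioned above, the Hamiltonian constraint is the mass-shell condition, and imposing it on states gives the Klein–Gordon equation. The notation below (units ℏ = c = 1, metric η) is standard and assumed for illustration, not taken from this paper:

```latex
% Mass-shell constraint H = p^\mu p_\mu + m^2 \approx 0, imposed on physical
% states as an operator equation:
\left( -\eta^{\mu\nu}\partial_\mu\partial_\nu + m^2 \right)\Psi(x) = 0 .
% Here x^0 can be identified as the time variable, and the Klein--Gordon
% inner product, restricted to positive-frequency solutions, supplies a
% conserved, positive-definite probability measure.
```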
Our proposal for resolving this problem begins with the rather trivial observation that, at the classical level, there is no difference between the action S and the action S′ = const × S; these actions obviously have the same Euler–Lagrange equations.
In recent years there has been considerable activity in the development of a so-called twistor-like, doubly supersymmetric approach to describing superparticles and superstrings [1]–[8]. The aim of the approach is to provide a clear geometrical meaning for an obscure local fermionic symmetry (κ-symmetry) of superparticles and superstrings [9, 10], which plays an essential role in the quantum consistency of the theory. At the same time, this local fermionic symmetry causes problems with performing the covariant Hamiltonian analysis and quantization of the theories. This is due to the fact that the first-class constraints corresponding to the κ-symmetry form an infinitely reducible set, and in the conventional formulation of superparticles and superstrings (see [10] and references therein) it has proved impossible to single out an irreducible set of fermionic first-class constraints in a Lorentz-covariant way. The idea was therefore to replace the κ-symmetry by a local extended supersymmetry on the worldsheet, by constructing superparticle and superstring models which would be manifestly supersymmetric both in a target superspace and on the worldsheet, with the number of local supersymmetries equal to the number of independent κ-symmetry transformations, that is n = D − 2 in a space–time of dimension D = 3, 4, 6 or 10. Note that it is precisely in these space–time dimensions that the classical theory of Green–Schwarz superstrings may be formulated [10] and twistor relations [11] hold.
The doubly supersymmetric formulation provides a natural ground for incorporating twistors into the structure of supersymmetric theories.
The possible external couplings of an extended non-relativistic classical system are characterized by gauging its maximal symmetry group at the center-of-mass. The Galilean one-time and two-times harmonic oscillators are exploited as models.
Introduction
In this paper we exploit the gauge technique to characterize the possible couplings of extended systems. The non-relativistic harmonic oscillator with center-of-mass is used as a model. We make the ansatz that the essential structural elements and the extension of a dynamical system are represented and summarized by its maximal dynamical symmetry group viz. by the algebraic structure of the constants of the motion. Then, we apply the gauge procedure to this group by localizing it at the center-of-mass of the system. We show thereby that the gauge procedure is meaningful also for dynamical symmetries besides the usual kinematical ones. In spite of the evident paradigmatic and heuristic nature of our ansatz, the results obtained here seem notably expressive.
The technical steps of the work are the following: 1) the standard Utiyama procedure for fields is applied to the possible trajectories of the center-of-mass as described by a canonical realization of the extended Galilei group. This determines the gravitational-inertial fields which can couple to the center-of-mass itself. As shown elsewhere [4], the requirement of invariance (properly, quasi-invariance) of the Lagrangian leads to the introduction of eleven gauge compensating fields and their transformation properties. 2) The generalized Utiyama procedure is then applied to the internal dynamical U(3) symmetry, so that gauge compensating fields have to be introduced in connection with the internal angular momentum (spin) and the quadrupole moment.
The systematic construction of hierarchies of Abelian (in d = 2) and non-Abelian (in d > 2) Higgs models in d dimensions which support finite-action and topologically stable lump solutions was reviewed in Ref. [1]. Very briefly, the method involves the construction of a hierarchy of Yang–Mills (YM) models in all even dimensions supporting such lump solutions, and then subjecting these to dimensional reduction, where the residual systems are the hierarchies of Higgs models in question.
In this article we shall investigate in more detail the asymptotic properties of the lumps of the Higgs models. These are very different from the asymptotic properties of the even (higher) dimensional YM hierarchies: the YM connection fields have a pure-gauge type of behaviour at infinity, their lumps are localised with a power behaviour, and in those cases where these systems are scale invariant this localisation exhibits a further scale arbitrariness. By contrast, the lumps of the hierarchies of Higgs models are in general exponentially localised to an absolute scale. This property is potentially very important from the viewpoint of physical applications and hence is highlighted in the title of this article.
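The contrast just described can be summarized schematically. The generic forms below, with constants C, C′, a decay rate μ, an exponent n and a scale λ, are assumed for illustration rather than taken from the models themselves:

```latex
% Schematic large-r behaviour of the lump fields:
% YM hierarchies: power-law localisation, with the scale \lambda arbitrary
% whenever the system is scale invariant,
|F(r)| \sim \frac{C}{(r/\lambda)^{n}}, \qquad r \to \infty ;
% Higgs-model hierarchies: exponential localisation to an absolute scale
% \mu^{-1} fixed by the model,
|\phi(r) - \phi_{\infty}| \sim C'\, e^{-\mu r}, \qquad r \to \infty .
```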
The material is presented in two sections below. In the first, we present the hierarchy of Higgs models in d dimensions in a formal way and examine particular asymptotic properties of the lumps by examining the fields in the (Dirac) string-gauge.
Presymplectic manifolds underlie all relevant physical theories, since all of them are, or may be, described by singular Lagrangians [1] and therefore by Dirac–Bergmann constraints [2] in their Hamiltonian description. In Galilean physics both Newtonian mechanics [3] and gravity [4] have been reformulated in this framework. In particular one obtains a multitime formulation of non-relativistic particle systems, which generalizes the non-relativistic limit of predictive mechanics [5] and helps one to understand features that are unavoidable at the relativistic level, where each particle, due to manifest Lorentz covariance, has its own time variable. By contrast, both special and general relativistic theories are always described by singular Lagrangians. See the review in Ref. [6], and Ref. [7] for the so-called multitemporal method for studying systems with first class constraints (second class constraints are not considered here).
The basic idea relies on Shanmugadhasan canonical transformations [8], namely one tries to find a new canonical basis in which all first class constraints are replaced by a subset of the new momenta [when the Poisson algebra of the original first class constraints is not Abelian, one speaks of Abelianization of the constraints]; then the conjugate canonical variables are Abelianized gauge variables and the remaining canonical pairs are special Dirac observables in strong involution with both Abelian constraints and gauge variables. These Dirac observables, together with the Abelian gauge variables, form a local Darboux basis for the presymplectic manifold [9] defined by the first class constraints (maybe it has singularities) and coisotropically embedded in the ambient phase space when there is no mathematical pathology.