What happens if one sprinkles electrons onto the surface of liquid helium? Surprisingly the electrons are not absorbed into the bulk of the fluid, but form a quasi-two-dimensional sheet of electrons concentrated at some distance above the helium surface. In general, electrons hovering above the surface of a dielectric are called surface state electrons. An excellent review of surface state electrons is that by Cole (1974).
Surface state electrons are especially interesting in the context of chaos and quantum chaos, because when driven by strong microwave fields their classical dynamics shows a transition to chaos. The investigation of microwave-driven surface state electrons as a testing ground for quantum chaos was first proposed by Jensen in 1982. So far, and to the best of our knowledge, a successful surface-state-electron (SSE) microwave ionization experiment has never been carried out in the chaotic regime. This is mainly due to the formidable experimental difficulties in controlling the fragile SSE system. Electric stray fields, residual helium vapour pressure and interactions with the quantized surface modes (“ripplons”) of the liquid helium substrate make it very difficult to reach the high quantum numbers necessary for a quantum chaos experiment. It was, however, realized early on (see, e.g., Shepelyansky (1985)) that the dynamics of low angular momentum hydrogen Rydberg atoms is very similar to the dynamics of surface state electrons. Therefore, building on the knowledge accumulated in the field of surface state electrons, the focus of research shifted to the investigation of microwave-driven hydrogen and alkali Rydberg atoms.
Einstein (1917) appreciated early on that within the “old” pre-1925/26 quantum mechanics the absence of integrability is a serious obstacle for the quantization of classical systems. Therefore, in retrospect not surprisingly, the quantization problem was not adequately solved until the advent of the “new” quantum mechanics of Heisenberg, Born, Jordan and Schrödinger. The new quantum mechanics did not rely at all on the notion of classical paths, and in this way, unwittingly, sidestepped the chaos problem. Within the framework of the new theory, any classical system can be quantized, including classically chaotic systems. But while the quantization of integrable systems is straightforward, the quantization of classically chaotic systems, even today, presents a formidable technical challenge. This is especially true for quantization in the semiclassical regime, where the quantum numbers involved are large. In fact, efficient semiclassical quantization rules for chaotic systems were not known until Gutzwiller (1971, 1990) introduced periodic orbit expansions. Gutzwiller's method is discussed in Section 4.1.3 below. It is important to emphasize here that the existence of chaos in certain classical systems in no way introduces conceptual problems into the framework of modern quantum theory, although, let it be emphasized again, chaos came back with a vengeance from the “old” days of quantum mechanics. Even given all the modern-day computer power accessible to the “practitioner” of quantum mechanics, chaos is the ultimate reason for the slow progress in the numerical computation of even moderately excited states in such important, but chaotic, problems as the helium atom.
In this and the following chapters we will encounter various time-dependent and time-independent atomic physics systems whose classical counterparts are chaotic. All the systems discussed in the remaining chapters of this book are examples of type I systems, i.e. examples of quantized chaos. This is natural since quantized chaos is by far the most important type of quantum chaos relevant in atomic and molecular physics. In the category of “time-dependent systems”, we discuss the rotational dynamics of diatomic molecules (Section 5.4), the microwave excitation of surface state electrons (Chapter 6), and hydrogen Rydberg atoms in strong microwave fields (Chapters 7 and 8). All these systems are driven by externally applied microwave fields. For strong fields none of these three systems can be understood on the basis of quantum perturbation theory, since the multi-photon orders involved are very high: typically, processes of multi-photon orders 100 to 300 have to be considered. It is important to realize that in this day and age, with powerful supercomputers at hand, there is no problem in implementing a perturbation expansion of such high order. But the emphasis is on understanding the processes involved. Although multi-photon perturbation theory provides valuable insight into the physics of the low-order multi-photon processes important for weak applied fields (an example is discussed in Section 6.3), not much insight is gained from a perturbation expansion that has to be carried to the 100th order and beyond in order to converge.
In all of the atomic and molecular systems studied in the previous chapters the relevant dynamics was the bound-space dynamics with the continuum playing either no role at all (see, e.g., the kicked rotor and the driven CsI molecule), or only an auxiliary role for probing the bound-space dynamics with the help of the observed ionization signal (see, e.g., the driven surface state electrons and microwave-driven hydrogen atoms). In this chapter we focus on atomic and molecular scattering, i.e. on processes in which the continuum plays an essential role. This subject has recently attracted much attention as dynamical instabilities and chaos have been discovered in the simplest scattering systems. Complicated scattering in an atomic physics system was noticed as early as 1971 by Rankin and Miller in the theoretical description of a simple chemical reaction. In 1983 Gutzwiller observed complicated behaviour of the quantum phase shift in a schematic model of chaotic scattering. 1986 saw the publication of various important papers on chaotic scattering. Eckhardt and Jung (1986) reported on the occurrence of chaos in a model scattering system. Chaos was found by Davis and Gray (1986) in the classical dynamics of unimolecular reactions, and Noid et al. (1986) noticed fractal behaviour in the He–I₂ scattering system. These papers were an important catalyst for the creation of a whole new field: chaotic scattering.
Poincaré (1892, reprinted (1993)) was the first to appreciate that exponential sensitivity in mechanical systems can lead to exceedingly complicated dynamical behaviour. Surprisingly, complicated systems are not necessary for chaos to emerge. In fact, chaos can be found in the simplest dynamical systems. Well known examples are the driven pendulum (Chirikov (1979), Baker and Gollub (1990)), the double pendulum (Shinbrot et al. (1992)), and the classical versions of the hydrogen atom in a strong magnetic (Friedrich and Wintgen (1989)) or microwave (Casati et al. (1987)) field.
In general it is not possible to understand the spectra and wave functions of highly excited atoms and molecules without reference to their classical dynamics. The correspondence principle, e.g., assumes knowledge of the classical Hamiltonian as a starting point. Since the Lagrangian and Hamiltonian formulations of classical mechanics provide the most natural bridge to quantum mechanics, we start this chapter with a brief review of elementary concepts in Lagrangian and Hamiltonian mechanics (see Section 3.1). The double pendulum, an example of a classically chaotic system, is investigated in Section 3.2. This is also the natural context in which to introduce the idea of Poincaré sections. With the help of Poincaré sections we can reduce the continuous motion of a mechanical system to a discrete mapping. This is essential for visualization and analysis of a chaotic system. A discussion of integrability and chaos in Section 3.3 concludes Chapter 3.
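For a periodically driven system the Poincaré section can be taken stroboscopically: the phase-space coordinates are recorded once per drive period, turning the continuous flow into a discrete map. As a minimal numerical sketch (the damped, driven pendulum equation, the parameter values and the fourth-order Runge-Kutta integrator below are chosen purely for illustration and are not taken from the text):

```python
import numpy as np

def driven_pendulum_section(F=1.5, gamma=0.5, omega=2.0 / 3.0,
                            n_periods=500, steps_per_period=200):
    """Stroboscopic Poincare section of a damped, driven pendulum
    theta'' + gamma*theta' + sin(theta) = F*cos(omega*t),
    sampled once per drive period (simple RK4 integration)."""
    T = 2.0 * np.pi / omega
    dt = T / steps_per_period

    def rhs(t, y):
        theta, v = y
        return np.array([v, -gamma * v - np.sin(theta) + F * np.cos(omega * t)])

    y = np.array([0.2, 0.0])
    t = 0.0
    section = []
    for n in range(n_periods):
        for _ in range(steps_per_period):
            k1 = rhs(t, y)
            k2 = rhs(t + dt / 2, y + dt / 2 * k1)
            k3 = rhs(t + dt / 2, y + dt / 2 * k2)
            k4 = rhs(t + dt, y + dt * k3)
            y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += dt
        if n > 100:  # discard transients before recording section points
            theta_wrapped = (y[0] + np.pi) % (2 * np.pi) - np.pi
            section.append((theta_wrapped, y[1]))
    return np.array(section)

points = driven_pendulum_section()
print(points[:5])  # (angle, angular velocity) once per forcing period
```

Plotting the recorded (angle, angular velocity) pairs reveals at a glance whether the motion is regular (a few isolated points or a smooth curve) or chaotic (a scattered, structured cloud of points).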
In Chapters 5–7 we studied the onset of global chaos and its various manifestations in atomic and molecular systems. It was shown that in the kicked molecule (Section 5.4) the onset of chaos is responsible for population transfer to highly excited rotational states. A similar effect is active in microwave-driven surface state electrons and hydrogen Rydberg atoms, where the onset of chaos results in strong ionization. But so far the focus has been on the computation of critical field strengths and control parameters, whereas the ionization signal played only a secondary role as a probe, or indicator, of the onset of chaos. In this chapter we shift the focus to the investigation of the ionization signal itself, especially its time dependence.
The time dependence of weakly ionizing systems that are well described by a multi-photon process of order p has been studied extensively in the literature. In this case the time dependence of the ionization signal does not offer any surprises. We expect exponential decay with a decay rate ρ that is proportional to the pth power of the field intensity I according to ρ ∼ I^p. This prediction of multi-photon theory has been verified in numerous experiments. In fact, experimentalists often use the field dependence of the ionization rates to assign a multi-photon order to an experimentally observed ionization signal.
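The assignment of a multi-photon order from measured rates amounts to fitting the slope of log(rate) against log(intensity). A minimal sketch with synthetic data (all numbers below are invented for illustration only):

```python
import numpy as np

# Hypothetical illustration: recover the multi-photon order p from the
# intensity dependence of an ionization rate, rho ~ I**p (synthetic data).
p_true = 3
intensity = np.linspace(1.0, 5.0, 8)
noise = 1.0 + 0.05 * np.random.default_rng(0).standard_normal(8)
rate = 0.01 * intensity ** p_true * noise

# The slope of log(rate) versus log(intensity) estimates the order p.
p_fit, _ = np.polyfit(np.log(intensity), np.log(rate), 1)
print(f"fitted multi-photon order: {p_fit:.2f}")
```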
The purpose of this chapter is to discuss briefly, and as far as we are aware, the present status of research on chaos in atomic physics, including trends and promising research directions. Given the enormous and rapidly growing volume of literature published every year, we cannot provide within the scope of this chapter a complete overview of existing published results. The best we can do is to select – in our opinion – representative results that indicate the status and trends in the field of chaos in atomic physics.
In Section 11.1 we discuss recent advances in quantum chaology, i.e. the semiclassical basis for the analysis of atomic and molecular spectra in the classically chaotic regime. In Section 11.2 we discuss some recent results in type II quantum chaos within the framework of the dynamic Born-Oppenheimer approximation. Recent experimental and theoretical results on the hydrogen atom in strong microwave and magnetic fields are presented in Sections 11.3 and 11.4, respectively. We conclude this chapter with a brief review of the current status of research on chaos in the helium atom.
Quantum chaology
Quantized chaos, or quantum chaology (see Section 4.1), is about understanding the quantum spectra and wave functions of classically chaotic systems. The semiclassical method is one of the sharpest tools of quantum chaology. As discussed in Section 4.1.3 the central problem of computing the semiclassical spectrum of a classically chaotic system was solved by Gutzwiller more than 20 years ago.
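For reference, one standard way of writing Gutzwiller's result expresses the oscillating part of the semiclassical density of states as a sum over primitive periodic orbits p and their repetitions r (the notation below is the conventional textbook one and is not necessarily that of Section 4.1.3):

```latex
d(E) \;\simeq\; \bar d(E) \;+\; \frac{1}{\pi\hbar}
\sum_{p} T_p \sum_{r=1}^{\infty}
\frac{\cos\!\left[\,r\left(S_p(E)/\hbar - \mu_p\,\pi/2\right)\right]}
     {\left|\det\!\left(M_p^{\,r} - \mathbb{1}\right)\right|^{1/2}}
```

Here S_p is the classical action of the primitive periodic orbit p, T_p its period, M_p its monodromy (stability) matrix and μ_p its Maslov index, while d̄(E) denotes the smooth (Thomas-Fermi) part of the level density.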
By now the “chaos revolution” has reached nearly every branch of the natural sciences. In fact, chaos is everywhere. To name but a few examples, we talk about chaotic weather patterns, chaotic chemical reactions and the chaotic evolution of insect populations. Atomic and molecular physics are no exceptions. At first glance this is surprising since atoms and molecules are well described by the linear laws of quantum mechanics, while an essential ingredient of chaos is nonlinearity in the dynamic equations. Thus, chaos and atomic physics seem to have little to do with each other. But recently, atomic and molecular physicists have pushed the limits of their experiments to such high quantum numbers that it starts to make sense, in the spirit of Bohr's correspondence principle, to compare the results of atomic physics experiments with the predictions of classical mechanics, which, for the most part, show complexity and chaos. The most striking observation in recent years has been that quantum systems seem to “know” whether their classical counterparts display regular or chaotic motion. This fact can be understood intuitively on the basis of Feynman's version of quantum mechanics. In 1948 Feynman showed that quantum mechanics can be formulated on the basis of classical mechanics with the help of path integrals. Therefore it is expected that the quantum mechanics of an atom or molecule is profoundly influenced, but of course not completely determined, by the qualitative behaviour of its underlying classical mechanics.
In Chapter 1 we discussed some concepts of chaos, its manifestations and applications on an introductory level from a purely qualitative point of view. The concepts were introduced ad hoc and in a pictorial manner. We will now turn to a more detailed investigation of chaos in order to prepare the tools and concepts needed for the discussion of chaotic atomic and molecular systems.
For a long time researchers thought that every given nonlinear system required its own individual method of solution. If this were the case, there could not be any general theory of nonlinear systems. Rather, the science of nonlinear systems would resemble descriptive sciences such as 19th century biology or geology. The best one could offer would be a catalogue of nonlinear systems together with their individual properties and methods of solution (if any). Luckily, the situation is much more promising. Not so long ago it was shown by Feigenbaum (1978, 1979) that there is universality in chaos. Universality is the key property of chaos. Universality means that all nonlinear (chaotic) systems can be analysed using a common set of methods and tools. Thus, a given nonlinear system does not require special treatment. It is always amenable to a general analysis, whose elements are discussed in the following sections.
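The period-doubling route to chaos, in which Feigenbaum's universal constants appear, can be made concrete with the logistic map. The following Python sketch is purely illustrative (the map x → r x(1−x) and the parameter values are not taken from the text); it shows the attractor doubling from one branch to two, four and eight before becoming chaotic:

```python
import numpy as np

def logistic_attractor(r, n_transient=1000, n_keep=64, x0=0.5):
    """Iterate the logistic map x -> r*x*(1-x) and return points on the attractor."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    pts = []
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        pts.append(x)
    return np.array(pts)

# Period doubling en route to chaos: count distinct attractor values at a few r.
# Expect 1, 2, 4, 8 branches, then many (chaotic) values at r = 3.9.
for r in (2.8, 3.2, 3.5, 3.56, 3.9):
    pts = logistic_attractor(r)
    n_branches = len(np.unique(np.round(pts, 6)))
    print(f"r = {r:4.2f}: about {n_branches} distinct attractor value(s)")
```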
It was Poincaré who introduced a major revolution in the analysis of dynamical systems in classical mechanics.
The experimental investigation of chaos in atomic physics began with the historic experiment on microwave ionization of Rydberg atoms reported by Bayfield and Koch in 1974. The central result is the existence of an ionization threshold as a function of the microwave field. At the time (1974) this result was totally unexpected since ionization thresholds, in analogy to the photo-electric effect, were thought to appear only as a function of the frequency. Nowadays, especially in the light of the material presented in Chapters 5 and 6, the existence of an ionization threshold in the microwave field is less surprising and may be attributed to the existence of a critical microwave field that marks the onset of global chaos in the classical analogue of the Bayfield-Koch experiments. But at the time the Bayfield-Koch experiment was conducted, a connection with chaos was not suspected. Leopold and Percival (1978, 1979) were the first to investigate the Bayfield-Koch experiments using purely classical mechanics. This is allowed on the basis of the correspondence principle, Leopold and Percival argued, since the quantum numbers involved in the ionization experiments are large. This line of thought turned out to be very fruitful. With the help of Monte Carlo simulations of the time evolution of classical trajectories in phase space, Leopold and Percival were able to reproduce the existence and location of the microwave thresholds established by the Bayfield-Koch experiment.
The ultimate measure of acceptable image quality from a lens is customer or user satisfaction with the image that is produced. This measure can be the result of a quantitative evaluation, such as signal-to-noise ratio under certain conditions. In many cases, it may be a subjective evaluation as to the acceptable quality of the image as determined by the viewer. In any case, the goal for the lens designer needs to be expressed in some quantitative values that can be computed by the designer from the lens data. Only in this way can the designer know that the design task is completed.
The goal of lens design is to produce a system that will provide images of acceptable quality for a specified user. Image quality is frequently very subjective, based upon the opinion of a user as to whether the appearance of the image is pleasing or informative. In some applications the image quality can be determined in very objective ways, such as the level of contrast of certain fine details exceeding some specified threshold value. In either case, there are physical quantities describing the image structure that can be used to evaluate the probable degree of acceptability of an image produced by a lens design.
“Image quality” is a somewhat elusive quantity. The quality of an image may be defined by its technical content or pictorial content. Quantifying the technical content or image structure is easier than attempting to quantify the pictorial content of an image.
In this book, the process of lens design has been dissected. The combination of art and science necessary to successfully carry out a design has been demonstrated. Each design request calls upon a combination of skills resident in the mind of a skilled designer. The components for success in optical design are acquired through a combination of study and practice. The same is true of any acquired skill.
By now it is evident that optical design requires access to up-to-date computer programs in order to be competitive. It should also be evident that the computer program alone does not produce the design. The algorithms resident in the program are a product of the history and ingenuity of the field. Each new design task requires a new path to be generated under the guidance of the designer using the computational tools. The successful designer does not just react to a specification provided by the customer, but is an active participant in developing the solution to the problem.
This book has been directed toward supplying a view into the process of design. The introduction to geometrical and physical optics, aberrations, and image evaluation defines the basis for optical design methods. The examples carried out on a number of types of lenses illustrate how the process of design is carried out using different computer design tools.
It is not essential that a designer understand all of the details of the process by which a design program carries out the optimization. Successful designers will, however, understand the principles and options that are available. Much time and effort has been expended by program developers to make the process as bulletproof and transparent to the user as possible. The past few years have seen an incredible improvement in the ability to control the modification of lenses by a program, and to explore new regions for solutions.
A basic comprehension of the important issues and procedures used is needed by any successful designer. This chapter provides enough insight to permit the designer to make the decisions necessary, but does not provide enough information to write design optimization programs. For detailed information the reader is referred to papers by Levenberg (1944), Wynne (1959), Rosen and Eldert (1954), Spencer (1963), and the summary by Kidger (1993). Discussions of newer techniques for optimization are found in papers by Kuper, Harris, and Hilbert (1994), Forbes and Jones (1993), and, of course, the various program manuals.
Optimization consists of adjusting the parameters of a lens to meet as closely as possible the requirements placed on the design. Current design programs have achieved a high degree of sophistication, and can rapidly search the design space for the closest approach to the design goals. The process of optimization requires the selection of a starting point and a set of variables.
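Most design programs build their optimization around some variant of damped least squares. As a schematic sketch only (the function names, the toy merit function and the numbers below are invented; real programs add weighting, constraints, boundary handling and adaptive damping), the core step can be written as:

```python
import numpy as np

def damped_least_squares(defects, x0, damping=1e-2, n_iter=20, eps=1e-6):
    """Minimal damped-least-squares (Levenberg-style) iteration.

    defects : callable returning the vector of weighted defects f(x)
              (aberration targets, constraint violations, ...) for parameters x.
    x0      : starting point (e.g. curvatures, thicknesses, air spaces).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        f = defects(x)
        # Numerical Jacobian of the defect vector with respect to the variables.
        J = np.empty((f.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (defects(x + dx) - f) / eps
        # Damped normal equations: (J^T J + p I) step = -J^T f
        A = J.T @ J + damping * np.eye(x.size)
        step = np.linalg.solve(A, -J.T @ f)
        x = x + step
    return x

# Toy usage: drive a two-component "merit" vector to zero.
toy_defects = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] - x[1] + 1.0])
print(damped_least_squares(toy_defects, x0=[0.5, 0.5]))  # converges near (1, 2)
```

The damping term keeps the step well behaved when the Jacobian is nearly singular, which is the usual situation in lens design where many variables have strongly correlated effects on the merit function.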
No design task is complete until the tolerances have been evaluated. The lens design alone, while interesting to the designer, is of interest to the user only if the lens can be fabricated within some range of realistic tolerances. A full tolerancing of a lens may become a more difficult task than the original design of the lens. The specified tolerances must be sufficient to ensure that the image quality goals are likely to be met. The tolerances required for fabrication are the major drivers in determining the cost of actually building and assembling a lens.
Before proceeding to carry out tolerancing the designer must decide upon the allowable degradation in the image. Despite some claims to the contrary, no system will ever be built absolutely perfectly with no deviations from the specified parameters. Therefore the imagery produced by a real system will differ from that of a perfectly fabricated system. During the design stage the designer should have considered this problem and designed into the lens sufficient margin that some errors in the lens parameters can be allowed. The balance between design margin and allowable tolerance loss is frequently an important economic issue. It is also important to remember that some margin usually must be assigned to operational considerations, such as setting the focal position, or to environmental effects such as temperature changes.
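One common way to make this balance explicit is a simple error budget in which nominally independent contributions are combined by root-sum-square and compared against the allowable degradation. The sketch below is purely illustrative; the categories and numbers are invented and are not taken from the text:

```python
import math

# Hypothetical RMS wavefront-error budget (all numbers illustrative, in waves).
allowable_total = 0.075          # what the image-quality goal permits
design_residual = 0.030          # left over from the nominal design (margin already spent)
contributions = {                # independent fabrication/assembly/environment terms
    "radius/thickness errors": 0.040,
    "decenter and tilt":       0.035,
    "index/melt variation":    0.020,
    "thermal focus shift":     0.025,
}

# Independent errors are commonly combined by root-sum-square (RSS).
rss = math.sqrt(design_residual ** 2 + sum(v ** 2 for v in contributions.values()))
print(f"RSS total = {rss:.3f} waves versus allowable {allowable_total:.3f} waves")
```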
The process of optical design is both an art and a science. There is no closed algorithm that creates a lens, nor is there any computer program that will create useful lens designs without general guidance from an optical designer. The mechanics of computation are available within a computer program, but the inspiration and guidance for a useful solution to a customer's problems come from the lens designer. A successful lens must be based upon technically sound principles. The most successful designs include a blend of techniques and technologies that best meet the goals of the customer. This final blending is guided by the judgment of the designer.
Let us start by looking at a lens design. Figure 1.1 shows the layout of a photographic type of lens, with some of the ray paths through the lens. The object is located a long distance (100,000,000 mm) to the left; this is what the computer program considers equivalent to an infinite object distance. The bundles of rays from each object point enter the lens as parallel bundles of rays. Each ray bundle passes through the lens and is focused toward an image point. For the lens shown, the field covered is 21° half width, which defines the size of the object that will be imaged by the lens.
The diameter of the bundles of light rays entering the lens determines the brightness of the image, and is established by the aperture stop of the lens.
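For an object at infinity this connection between bundle diameter and image brightness can be stated compactly: if D is the diameter of the entrance pupil (the image of the aperture stop seen from object space) and f the focal length, the relative aperture is the f-number N = f/D, and, neglecting transmission losses, the image irradiance scales as 1/N². In symbols:

```latex
N = \frac{f}{D}, \qquad E_{\text{image}} \;\propto\; \frac{1}{N^{2}} = \left(\frac{D}{f}\right)^{2}
```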
The purpose of this book is to provide an introduction to the practice of lens design. As the title suggests, successful design will require the application of individual creativity as well as artful manipulation and thorough comprehension of the numeric tools available in lens design programs. The technology, user connection, and breadth of the commercial lens design programs have reached a very high level. The availability of inexpensive, high-speed personal computers and user-friendly operating systems has brought the computational tools within economic reach of any individual.
This book covers the basics of image formation, system layout, and image evaluation, and contains a number of examples of lens designs. There are several excellent books in existence that are principally compilations of the results of the design of several types of lenses. In this book, it is my intention to describe the process rather than the results. The explanations of the basics are provided here in a practical manner and at a level of detail sufficient to provide an understanding of the principles. The selected examples of designs include a narrative of the thinking and approach toward the decisions that need to be made by the designer when carrying out the work. The principles are, of course, independent of the software used. Each example shown provides the opportunity to exploit different avenues of approach to the design.
Several different lens design programs have been used to provide the majority of the illustrations in this book.