The purpose of this chapter is to show how the formalism of differential forms reduces a broad class of problems in computational electromagnetics to a common form. For this class of problems, the differential complexes and orthogonal decompositions associated with differential forms make questions of existence and uniqueness of solutions straightforward to answer in a complete way which exposes the role played by relative homology groups. When this class of problems is formulated variationally, the orthogonal decomposition theorem developed in Section MA-M generalizes certain well-known interrelationships between gauge transformations and conservation laws (see [Ton68]) to include global conditions between dual cohomology groups. The orthogonal decomposition theorem can then be used to construct an alternate variational principle whose unique extremal always exists and can be used to obtain a posteriori measures of problem solvability, that is, to verify whether any conservation law was violated in the statement of the problem. A diagrammatic representation of the problem along the lines of [Ton72b] will be given, and the role of homology groups will be reconsidered in this context. This will of course be of interest to people working in the area of complementary variational principles.
In addition to the usual literature cited in the bibliography, the work of Tonti [Ton68, Ton69, Ton72b, Ton72a, Ton77], Sibner and Sibner [SS70, SS79, SS81] and [Kot82] have been particularly useful in developing the ideas presented in this chapter. From the view of computational electromagnetics, the beauty of formulating a paradigm variational problem in terms of differential forms is that the finite element method and Whitney form interpolation yield a discretization which faithfully reproduces all the essential features of the continuum problem.
The authors are long-time fans of MSRI programs and monographs, and are thrilled to be able to contribute to this series. Our relationship with MSRI started when Paul Gross was an MSRI/Hewlett-Packard postdoctoral fellow and had the good fortune of being encouraged by Silvio Levy to coauthor a monograph. Silvio was there when we needed him, and it is in no way an understatement to say that the project would never have been completed without his support.
The material of this monograph is easily traced back to our Ph.D. theses, papers we wrote, and courses taught at Boston University over the years. Our apologies to anyone who feels slighted by a minimally updated bibliography. Reflecting on how the material of this monograph evolved, we would like to thank colleagues who have played a supporting role over the decades. Among them are Alain Bossavit, Peter Caines, Roscoe Giles, Robert Hermann, Lauri Kettunen, Isaak Mayergoyz, Peter Silvester, and Gilbert Strang. The authors are also indebted to numerous people who read through all or part of the manuscript, produced numerous comments, and provided all sorts of support. In particular, Andre Nicolet, Jonathan Polimeni, and Saku Suuriniemi made an unusually thorough effort to review the draft.
Paul Gross would like to acknowledge Nick Tufillaro at Hewlett-Packard and Agilent Technologies for mentoring him throughout his post-doc at MSRI. Tim Dere graciously provided his time and expertise for illustrations. This book could not have happened without help and encouragement from Tanya.
In this chapter we consider a general finite element-based algorithm to make cuts for magnetic scalar potentials and investigate how the topological complexity of the three-dimensional region, which constitutes the domain of computation, affects the computational complexity of the algorithm. The algorithm is based on standard finite element theory with an added computation required to deal with topological constraints brought on by a scalar potential in a multiply connected region. The process of assembling the finite element matrices is also modified in the sense described at length in the previous chapter.
Regardless of the topology of the region, an algorithm can be implemented with 𝒪(m₀³) time complexity and 𝒪(m₀²) storage, where m₀ denotes the number of vertices in the finite element discretization. However, in practice this is not useful, since for large meshes the cost of finding cuts would become the dominant factor in the magnetic field computation. In order to make cuts worthwhile for problems such as nonlinear or time-varying magnetostatics, or in cases of complicated topology such as braided, knotted, or linked conductor configurations, an implementation of 𝒪(m₀²) time complexity and 𝒪(m₀) storage is regarded as ideal. The obstruction to ideal complexity is related to the structure of the fundamental group. This chapter describes an algorithm that can be implemented with 𝒪(m₀²) time complexity and 𝒪(m₀^{4/3}) storage complexity given no more topological data than that contained in the finite element connection matrix.
The advent of high speed computers has opened up a whole new range of possibilities for radio. If the RF signal can be adequately represented by a series of samples (at a rate that a computer can handle), standard operations such as mixing, filtering, signal synthesis and demodulation can all be handled as mathematical operations within the computer. Constructing systems that can handle the complex signal processing required by spread spectrum communications, radar and other more exotic RF systems is merely an exercise in computer programming. Since most of the processing will be done inside a computer, we have what is commonly termed software radio. Such radios can be extremely flexible and be instantaneously reconfigured to handle new forms of modulation and/or tasking. All we need is a suitable analogue-to-digital converter (ADC) to interface to the incoming analogue signal and a suitable digital-to-analogue converter (DAC) to produce the outgoing analogue signal. Obviously, any realistic implementation of software radio will involve many constraints, and issues such as sampling rate and quantisation error will need to be addressed. The following chapter introduces the basic ideas of digital RF techniques and their limitations.
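The mixing and filtering steps described above can indeed be carried out entirely as arithmetic on samples. The following sketch (all parameter values are illustrative, not taken from any particular system) digitally mixes a sampled 10 kHz tone down to baseband with a complex local oscillator and then low-pass filters it with a crude moving average:

```python
import numpy as np

# Illustrative parameters: a 10 kHz "RF" tone sampled at 100 kHz.
fs = 100e3                            # sample rate (Hz)
f_rf = 10e3                           # incoming signal frequency (Hz)
t = np.arange(0, 1e-2, 1/fs)          # 10 ms of samples
rf = np.cos(2*np.pi*f_rf*t)           # idealized ADC output

# Digital mixing: multiply by a complex local oscillator at f_rf,
# shifting the tone to 0 Hz (plus an unwanted image at -20 kHz).
lo = np.exp(-2j*np.pi*f_rf*t)
baseband = rf * lo

# Digital filtering: a 25-tap moving average is a crude low-pass
# that nulls the -20 kHz image, leaving the amplitude-0.5 DC term.
h = np.ones(25) / 25
filtered = np.convolve(baseband, h, mode='same')

print(round(abs(filtered[len(filtered)//2]), 2))   # → 0.5
```

In a practical software radio the moving average would be replaced by a properly designed FIR or CIC filter, but the principle is the same: once the signal is a stream of numbers, frequency conversion and filtering are just multiplications and additions.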
The processing of digitised signals
We have already noted the utility of studying RF systems in terms of the complex signal exp(j2π ft). Since cos(θ) = ½ [exp(jθ) + exp(−jθ)], it is clear that the real signal s(t) = S cos(2π ft) will contain, in equal parts, contributions from frequencies f and − f.
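This equal split between +f and −f is easy to verify numerically: taking the DFT of a sampled real cosine of amplitude S produces two spectral lines of magnitude S/2, one at +f and one at −f. The parameters below are chosen only so that the tone falls exactly on a DFT bin:

```python
import numpy as np

# A real tone s(t) = S*cos(2*pi*f*t), sampled so that f lies
# exactly on a DFT bin (illustrative values).
fs, f, S, N = 64.0, 8.0, 2.0, 64
t = np.arange(N) / fs
s = S * np.cos(2*np.pi*f*t)

# The DFT splits S*cos into equal lines at +f and -f, each S/2.
spectrum = np.fft.fft(s) / N
freqs = np.fft.fftfreq(N, 1/fs)
pos = abs(spectrum[freqs == +f][0])
neg = abs(spectrum[freqs == -f][0])
print(round(pos, 6), round(neg, 6))   # → 1.0 1.0, i.e. S/2 each
```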
The generation of a stable sinusoidal signal is a crucial function in most RF systems. A transmitter will amplify and suitably modulate such a signal in order to produce its required output. In the case of a receiver system, such a signal is fed into the mixer circuits for the purposes of frequency conversion and demodulation. A circuit that generates a repetitive waveform is known as an oscillator. Such circuits usually consist of an amplifier with positive feedback that causes any input, however small, to grow until limited by the non-linearities of the circuit. The feedback will need to be frequency selective in order to control the rate of waveform repetition. This frequency selection is often achieved using combinations of capacitors and inductors, but can also be achieved with resistor and capacitor combinations. In the present chapter, however, we will concentrate on feedback circuits based on capacitor/inductor combinations. We consider a variety of oscillator circuits that are suitable for RF purposes and investigate the conditions under which oscillation occurs. In addition, we consider the issue of oscillator noise since this can often pose a severe limitation upon system performance.
A particularly important class of oscillator is that for which the frequency can be controlled by a d.c. voltage. Such an oscillator is an important element in what is known as a phase locked loop. In such a system, there is a feedback loop that compares the oscillator output with a reference signal and generates a control voltage based upon their phase difference.
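The behaviour of such a loop can be sketched in a toy discrete-time simulation. The fragment below models a first-order loop; the detector gain kd, VCO gain kv, and all frequencies are invented for illustration. It shows the feedback pulling a free-running oscillator onto the reference frequency, leaving the small static phase error characteristic of a first-order loop:

```python
import math

# Toy first-order phase-locked loop (illustrative values throughout).
fs = 1e5                  # simulation step rate (Hz)
f_ref = 1000.0            # reference frequency (Hz)
f_free = 1050.0           # VCO free-running frequency (Hz)
kv = 500.0                # VCO gain (Hz per volt)
kd = 1.0                  # phase-detector gain (volts per radian)

phase_ref = phase_vco = v_ctrl = 0.0
for _ in range(50000):
    # Phase detector: control voltage from the wrapped phase difference.
    d = phase_ref - phase_vco
    err = math.atan2(math.sin(d), math.cos(d))
    v_ctrl = kd * err
    # Advance both phases; VCO frequency = free-running + kv * v_ctrl.
    phase_ref += 2*math.pi*f_ref/fs
    phase_vco += 2*math.pi*(f_free + kv*v_ctrl)/fs

f_locked = f_free + kv*v_ctrl
print(round(f_locked))    # → 1000: locked to the reference
```

At lock the VCO must be offset from its free-running frequency by kv·kd·err, so a first-order loop always retains a static phase error (here about −0.1 rad); adding an integrator to the loop filter, as in practical second-order loops, drives that residual error to zero.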
The following text evolved out of a series of courses on radio frequency (RF) engineering given to undergraduates, postgraduates, government and industry. It was designed to meet the needs of such groups and, in particular, the needs of working engineers attempting to upgrade their skills. Thirty years ago, it appeared as if the fibre optics revolution would relegate wireless to a niche discipline, and universities accordingly downgraded their offerings in RF. In the past 10 years, however, there has been a renaissance in wireless, to the point where it is now a key technology. This has been made possible by developments in very large-scale integration (VLSI) and CMOS technology in particular. In order to meet the manpower requirements of the wireless industry, there has been a need to upgrade the status of RF training in universities and to provide courses suitable for in-service training. The applications of wireless systems have changed greatly over the past 30 years, as has the available technology. In particular, there is a greater use of digital technologies, and antenna systems can often be of the array variety. The current text has been written with these changes in mind and there has been a culling of some traditional material that is of limited utility in the current age (graphical design methods, for example). Material in the book has been carefully chosen to provide a basic training in RF and a springboard for more advanced study.
In propagating through free space, radio waves will suffer a reduction in amplitude as they spread outwards from the source. When a transmission must reach a large number of geographically dispersed receivers, such as in broadcast radio, there is little that can be done about this loss. If it is only required to reach one receiver, however, it is desirable to transmit all the energy to this one device. A structure for achieving this is known as a transmission line. Such structures will normally have a small uniform cross-section and can be constructed so that the loss along the line is extremely small. Transmission lines allow efficient and unobtrusive transmission over long distances. Two of the most common varieties of transmission line are the coaxial cable and the twin parallel wire. This chapter considers a simple lumped circuit model of such transmission lines and describes some important applications of these structures. The study of transmission lines leads very naturally to the concept of the reflection coefficient, a concept that provides an alternative description of impedances. Reflection coefficients generalise to the concept of scattering matrices which themselves provide an alternative means of describing multiport networks. The present chapter introduces the basic idea of the scattering matrix and shows how it can be applied to the design of small signal amplifiers at high frequencies.
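The link between impedance and reflection coefficient mentioned above is the standard relation Γ = (Z_L − Z₀)/(Z_L + Z₀) for a line of characteristic impedance Z₀ terminated in a load Z_L. A minimal numerical sketch (the load values are illustrative):

```python
def reflection_coefficient(z_load, z0=50.0):
    """Gamma = (Z_L - Z0)/(Z_L + Z0) for a line of characteristic
    impedance z0 terminated in z_load (both may be complex)."""
    return (z_load - z0) / (z_load + z0)

print(reflection_coefficient(50.0))      # matched load: 0.0, no reflection
print(reflection_coefficient(0.0))       # short circuit: -1.0, full reflection
print(reflection_coefficient(100+50j))   # complex load: complex Gamma
```

Because Γ carries the same information as Z_L (given Z₀), either quantity can serve as the description of a one-port, and Γ is the form that generalises to the scattering matrix of a multiport network.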
The transmission line model
Figure 6.1 shows the construction of two important transmission lines, the coaxial cable and twin parallel wire.
Broadly speaking, radio frequency (RF) technology, or wireless as it is sometimes known, is the exploitation of electromagnetic wave phenomena in that part of the spectrum between 3 Hz and 300 GHz. It is arguably one of the most important technologies in modern society. The possibility of electromagnetic waves was first postulated by James Clerk Maxwell in 1864 and their existence was verified by Heinrich Hertz in 1887. By 1895, Guglielmo Marconi had demonstrated radio as an effective communications technology. With the development of the thermionic valve at the start of the twentieth century, radio technology developed into a mass communication and entertainment medium. The first half of the twentieth century saw developments such as radar and television, which further extended the scope of this technology. In the second half of the twentieth century, major breakthroughs came with the development of semiconductor devices and integrated circuits. These advances made possible the extremely compact and portable communications devices that resulted in the mobile communications revolution. The size of the electronics continues to fall and, as a consequence, whole new areas have opened up. In particular, spread spectrum communications at gigahertz frequencies are increasingly used to replace cabling and other systems that provide local connectivity.
The purpose of this text is to introduce the important ideas and techniques of radio technology. It is assumed that the reader has a basic grounding in electromagnetic theory and electronics.
Active devices are important elements in RF systems where they perform functions such as amplification, mixing and rectification. For mixing and rectification, the non-linear properties of the device are of paramount importance. In the case of amplification, however, the non-linear properties can have a damaging effect upon performance. This chapter concentrates on small signal amplifiers for which it is possible to select conditions such that there is, effectively, linear amplification. Amplifiers based on both bipolar junction and field effect transistors are considered and the chapter includes some revision concerning their characteristics and biasing. The high frequency performance of transistor amplifiers is limited by what is known as the Miller effect and a large part of the chapter is devoted to techniques for overcoming this phenomenon.
The semiconductor diode
The semiconductor diode is a device that allows a current flow, but for which the magnitude of the flow depends in a non-linear fashion upon the applied voltage. Many varieties of diode are manufactured by forming junctions out of p- and n-type semiconductors. There are, however, several important diode varieties that have other types of junction (the semiconductor to metal junction of a Schottky diode for example). A p-type semiconductor can be formed by introducing a small amount of indium into silicon. This creates a structure that allows electrons to flow at energy levels slightly above those of the bound electrons. An n-type semiconductor can be formed by introducing a small amount of arsenic into the silicon.
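The non-linear current-voltage dependence referred to above is commonly modelled by the Shockley diode equation, I = I_s(exp(V/(nV_T)) − 1); the saturation current and ideality factor below are illustrative values, with V_T ≈ 25.85 mV at room temperature:

```python
import math

# Shockley diode model: I = I_s * (exp(V / (n*V_T)) - 1).
# i_s and n are illustrative; v_t is kT/q at about 300 K.
def diode_current(v, i_s=1e-12, n=1.0, v_t=0.02585):
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

# The non-linearity is strong: in this model a 60 mV increase in
# forward bias multiplies the current by roughly a factor of ten.
i1 = diode_current(0.60)
i2 = diode_current(0.66)
print(round(i2 / i1, 1))   # → 10.2
```

This exponential characteristic is exactly what makes the diode useful for rectification and mixing, and harmful when linear amplification is required.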
Semiconductor devices will always have some form of non-linearity in their characteristics and this can be both an advantage and a disadvantage. For amplifiers, non-linearity is clearly a disadvantage in that the amplified signal will not be a faithful reproduction of the input signal. Mixers, however, provide an example of the advantages of non-linear behaviour in semiconductor devices. An ideal mixer is a device for which the output is the product of two input signals. For purely sinusoidal inputs, this will imply outputs at the sum and difference frequencies. Mixers are essential for operations such as frequency translation, modulation and detection. As a consequence, they are an important building block for both transmitters and receivers. The current chapter considers the operation of a variety of mixers and their application to modulation and demodulation. In addition, the chapter considers some modulation and demodulation processes that do not involve mixing.
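The sum and difference frequencies follow from the product-to-sum identity cos(a)cos(b) = ½cos(a−b) + ½cos(a+b), and the behaviour of an ideal mixer is easy to verify numerically. The frequencies below are illustrative and chosen to fall exactly on DFT bins:

```python
import numpy as np

# Ideal mixer: the output is the product of the two sinusoidal inputs,
# so its spectrum contains only the difference and sum frequencies.
fs, N = 1024.0, 1024
t = np.arange(N) / fs
f_rf, f_lo = 100.0, 80.0
out = np.cos(2*np.pi*f_rf*t) * np.cos(2*np.pi*f_lo*t)

# Locate the spectral lines in the output (each has amplitude 0.5,
# i.e. a normalized rfft magnitude of 0.25).
spectrum = np.abs(np.fft.rfft(out)) / N
freqs = np.fft.rfftfreq(N, 1/fs)
peaks = freqs[spectrum > 0.1]
print(peaks)   # lines at 20 Hz (difference) and 180 Hz (sum)
```

In a receiver the difference line would be selected by an IF filter; real mixers also produce harmonics and leakage terms that this idealized product model omits.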
Diode mixers
Diode mixers are important because of their low noise characteristics. Whilst single diode mixers are not normally used at frequencies below 1 GHz, their analysis provides some useful insight into the operation of mixers in general. It should be noted that diode capacitance can have a detrimental effect upon mixer performance and low capacitance devices, such as the Schottky barrier diode, will normally be required for operation at higher frequencies.
The circuit in Figure 4.1 shows a single diode mixer that converts an RF input signal at frequency ωRF into an intermediate frequency (IF) signal at frequency ωIF = |ωRF − ωLO| by mixing it with a local oscillator (LO) signal at frequency ωLO.
Antennas are the means by which electromagnetic wave energy is fed into, and extracted from, the propagation medium. They are a key element in RF systems and their design and analysis constitutes a very important area of RF engineering. The problems of antenna design are many and varied. Modern spread spectrum systems will require antennas that are capable of operating over a wide range of frequencies. Mobile communications have a requirement for small efficient antennas that radiate over a wide arc. Radar, on the other hand, requires antennas that illuminate only a narrow arc, but can be steered over a wide region. In many systems, the requirements turn out to be conflicting and it is important for the designer to understand the practical constraints in order to achieve the best compromise solution. The present chapter seeks to introduce the most fundamental concepts of antenna engineering and to describe some important varieties of antenna.
Dipole antennas
Broadly speaking, an antenna is a device that transforms wave propagation down a transmission line (a physically narrow channel) into wave propagation through free space (a physically wide channel) and vice versa. If we consider a parallel wire transmission line, we could conceive of opening it out at its end to better couple the waves into free space. The opened out section would then correspond to a dipole antenna. In its unopened state, the transmission line will reflect back to the source almost all of the energy that reaches its end.
RF signals will often need to be transmitted with considerable power if they are to survive propagation with adequate signal level. As a consequence, we will need to consider amplifiers that can operate at large signal levels. Up to this point, we have concentrated on small signal amplifiers for which efficiency and linearity have not been a major problem. These aspects, however, require careful consideration in the case of RF amplifiers operating at large signal levels. Small signal amplifiers are typically of the class A variety and highly linear. Whilst class A amplifiers are sometimes used at high power levels, they do not represent an efficient use of the d.c. energy that is supplied to the amplifier. Class B, AB, C and E amplifiers are far more efficient, but have the disadvantage that they are highly non-linear and hence create considerable harmonics. These harmonics can be troublesome and require specialised techniques, or filtering, for them to be brought down to an acceptable level. The following chapter considers power amplifiers in the class range from A to E. It concentrates on BJT amplifiers, but the same principles can be applied to FET amplifiers.
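The efficiency penalty of class A is easy to quantify in the idealized case of a directly coupled resistive load (the numbers below are illustrative, not a design procedure): the stage draws a constant quiescent current, while the undistorted output swing is limited to half the supply voltage.

```python
# Idealized, series-fed class A stage with a resistive load.
# Supply and load values are illustrative.
v_cc = 12.0                      # supply voltage (V)
r_load = 50.0                    # load resistance (ohms)
i_q = v_cc / (2 * r_load)        # bias for maximum symmetric swing
p_dc = v_cc * i_q                # constant supply power drawn
v_pk = v_cc / 2                  # maximum undistorted peak swing
p_rf = v_pk**2 / (2 * r_load)    # sinusoidal output power at full drive
print(round(p_rf / p_dc, 2))     # → 0.25, the familiar 25% ceiling
```

Since the DC power is drawn even with no drive, efficiency falls further at reduced output, which is why the conduction-angle classes (B, AB, C) and switching classes such as E are preferred at high power despite their non-linearity.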
Class A
Class A amplifiers attempt to operate over that part of the transistor characteristic for which there is linear translation of the input signal to the output. For a BJT, a typical configuration is shown in Figure 7.1.