In analog electronics, voltage is a continuous variable. This is useful because most physical quantities we encounter are continuous: sound levels, light intensity, temperature, pressure, etc. Digital electronics, in contrast, is characterized by only two distinguishable voltages. These two states are called by various names: on/off, true/false, high/low, and 1/0. In practice, these two states are defined by the circuit voltage being above or below a certain value. For example, in TTL logic circuits, a high state corresponds to a voltage above 2.0 V, while a low state is defined as a voltage below 0.8 V.
The virtue of this system is illustrated in Fig. 8.1. We plot the voltage level versus time for some electronic signal. If this were part of an analog circuit, we would say that the voltage averages about 3 V but has roughly a 20% noise level, which would be unacceptably large for most applications. For a TTL digital circuit, however, this signal is always above 2.0 V and is thus always in the high state. There is no uncertainty about the digital state of this voltage, so the digital signal has zero noise. This is the primary advantage of digital electronics: it is relatively immune to the noise that is ubiquitous in electronic circuits. Of course, if the fluctuations in Fig. 8.1 became so large that the voltage dipped below 2.0 V, then even a digital circuit would have problems.
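The TTL thresholds above can be turned into a small decision rule. The sketch below classifies voltage samples using the 2.0 V and 0.8 V limits from the text; the sample values are illustrative, not taken from Fig. 8.1.

```python
# Classify voltage samples using the TTL logic thresholds from the text.
TTL_HIGH_MIN = 2.0  # volts: a voltage above this is a logic high
TTL_LOW_MAX = 0.8   # volts: a voltage below this is a logic low

def ttl_state(voltage):
    """Return the digital state of a voltage under TTL conventions."""
    if voltage > TTL_HIGH_MIN:
        return "high"
    if voltage < TTL_LOW_MAX:
        return "low"
    return "undefined"  # forbidden region between 0.8 V and 2.0 V

# A "noisy" signal averaging about 3 V with roughly 20% fluctuations:
noisy_signal = [3.0, 2.4, 3.6, 2.7, 3.3, 2.5]
states = [ttl_state(v) for v in noisy_signal]
print(states)  # every sample still decodes as a clean logic high
```

Even though the analog values wander by over a volt, every sample falls in the same digital state, which is the noise immunity the text describes.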
The silicon controlled rectifier introduced in the last chapter was the first device we have seen that offered some measure of electronic control: the gate current determined the details of the I–V characteristic. This control, however, was fairly limited. In the examples we considered, the gate current determined the time at which the SCR switched to its on-state. Once the SCR was turned on, however, its behavior was no longer related to the magnitude of the gate current, and removing the gate current altogether would not return the SCR to its off-state.
We now turn to a device with a greater measure of electronic control: the transistor. Like the SCR, the transistor allows the user to control a large current through the device with a smaller control signal. But with the transistor, one can have proportional control; that is, the amount of current through the device is proportional to the control signal. This allows one to amplify signals, which is fundamental to all types of electronic communication.
Transistors come in two basic types: bipolar junction transistors (BJTs) and field-effect transistors (FETs). This chapter will cover the fundamentals of BJTs and also introduce some common terminology for transistor amplifiers. FETs are addressed in Chapter 5.
Bipolar transistor fundamentals
A bipolar transistor can be thought of as a sandwich of n-type and p-type semiconductors.
A fundamental result from basic modern physics is that atoms are characterized by discrete energy levels. Each of these energy levels can accept up to two electrons. When “building” an atom, we start from the lowest level, fill in two electrons, and then move up to the next energy level and fill it with electrons. This continues until we have placed all the atom's electrons in energy levels. We also know that if an atom absorbs energy from the outside (for example, by absorbing a photon), an electron can be promoted to a higher energy level. Conversely, an electron that falls from a higher to a lower energy level emits a photon.
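The photon's energy is fixed by the spacing of the two levels involved. In standard notation (h is Planck's constant, ν the photon frequency, c the speed of light, λ the wavelength; this relation is assumed here, not derived in the text):

```latex
E_{\text{photon}} = E_2 - E_1 = h\nu = \frac{hc}{\lambda}
```

A larger energy gap therefore produces a higher-frequency (shorter-wavelength) photon.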
What happens to this energy level model when we assemble many atoms together into a solid? As the atoms get closer together, we must start to talk about the energy levels of the solid as a whole rather than those of the individual atoms. Rather than doing quantum mechanics for an isolated potential (the atom), we do it for a periodic array of atoms that exhibits a periodic potential. The net result of this is that, during the assembly of N atoms, the individual atomic levels split into N levels. This is shown schematically in Fig. 3.1. Thus when the solid is assembled and the atoms are at their final equilibrium spacing, the solid is characterized by a series of energy bands consisting of a large number of closely spaced allowed energy levels.
The op-amp astable oscillator covered in Section 6.5 was our first example of an oscillator – a circuit that produces a periodic output signal without an input signal. These types of circuits have some kind of feedback mechanism that allows them to oscillate spontaneously. We can categorize oscillators into two broad groups: relaxation oscillators and sinusoidal oscillators. Each of these groups has common features. The relaxation oscillators are characterized by non-sinusoidal output waveforms, timing that is set by capacitor charging and discharging, and the non-linear operation of their active components. The analysis of relaxation oscillator circuits is done in the time domain (i.e., by determining the circuit response as a function of time). For example, our op-amp astable oscillator has a square wave output with a period set by the charging/discharging of capacitor C through resistor Rf, and the op-amp is operating non-linearly, switching back and forth between its saturation voltages. On the other hand, sinusoidal oscillators, as the name implies, have sinusoidal output waveforms and linear operation of the active components, and the analysis is done in the frequency domain (i.e., by considering how the circuit responds to different frequencies). We will now examine examples of each type of oscillator.
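The time-domain character of a relaxation oscillator can be made concrete with a small calculation. The sketch below uses the standard period formula for a comparator-style op-amp astable in which C charges through Rf and a positive-feedback divider R1/R2 sets the switching fraction β; this formula and the component values are illustrative assumptions, not taken from Section 6.5.

```python
import math

# Period of an op-amp astable (relaxation) oscillator, assuming the
# standard comparator circuit: capacitor C charges through Rf toward the
# saturated output, and the divider sets beta = R1 / (R1 + R2).
# T = 2 * Rf * C * ln((1 + beta) / (1 - beta))
def astable_period(Rf, C, R1, R2):
    beta = R1 / (R1 + R2)
    return 2.0 * Rf * C * math.log((1.0 + beta) / (1.0 - beta))

Rf = 10e3        # ohms (illustrative value)
C = 0.1e-6       # farads
R1 = R2 = 10e3   # equal divider resistors, so beta = 0.5

T = astable_period(Rf, C, R1, R2)
print(f"period = {T * 1e3:.3f} ms, frequency = {1 / T:.0f} Hz")
```

Note that the period depends only on the RC product and the divider ratio, not on the supply voltage: the capacitor always charges between the same two fractions of the saturation voltage.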
Relaxation oscillators
SCR sawtooth oscillator
Our first relaxation oscillator is shown in Fig. 7.1. It uses two components we have studied previously: the SCR and the bipolar transistor.
In this chapter we introduce the second major type of transistor: the field-effect transistor. Like the bipolar junction transistors (BJTs) we studied in Chapter 4, field-effect transistors (FETs) allow the user to control a current with another signal. The key difference is that the FET control signal is a voltage while the BJT control signal is a current. Also, the FET control input (called the gate) has a much higher input impedance than the base of a BJT. Indeed, the DC gate impedance for FETs varies from a few megaohms to astounding values in excess of 10¹⁴ Ω. High input impedance is a highly desirable feature that greatly simplifies circuit analysis.
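To get a feel for what a gate impedance of 10¹⁴ Ω means, a rough Ohm's-law estimate suffices; the impedance figure is from the text, while the 5 V gate voltage is an illustrative choice.

```python
# How small is the gate current for a DC gate impedance of 1e14 ohms?
V_gate = 5.0   # volts applied at the gate (illustrative)
R_gate = 1e14  # ohms, upper end of the quoted FET gate impedance

I_gate = V_gate / R_gate  # Ohm's law: I = V / R
print(f"gate current ~ {I_gate:.0e} A")  # ~5e-14 A: effectively zero
```

A current of tens of femtoamperes is negligible in virtually any circuit, which is why the gate can usually be treated as drawing no current at all.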
The BJT has three connections: the collector, base, and emitter. The corresponding connections on an FET are called the drain, gate, and source. Some versions of the FET have a fourth connection called the bulk connection. Bipolar transistors come in just two types with opposite polarities: the npn and the pnp. Field-effect transistors have greater variety. In addition to the polarity pairs (termed n-channel and p-channel), there are differences in gate construction (junction and metal oxide), and doping (depletion and enhancement). In terms of analysis, however, they are all very similar, so we will not have to consider each variety separately. Also, as we did with the bipolar transistor, we will focus on one of the polarities (n-channel) since the other polarity simply involves swapping the labels for n and p and changing the sign of the voltages and the direction of the currents.
We start our study of electronics with definitions and the basic laws that apply to all circuits. This is followed by an introduction to our first circuit element – the resistor.
In electronics, we are interested in keeping track of two basic quantities: the currents and voltages in a circuit. If you can make these quantities behave as you want, you have succeeded.
Current measures the flow of charge past a point in the circuit. The units of current are thus coulombs per second or amperes, abbreviated as A. In this text we will use the symbol I or i for current.
As charges move in circuits, they undergo collisions with atoms and lose some of their energy. It thus takes some work to move charges around a circuit. The work per unit charge required to move some charge between two points is called the voltage between those points. (In physics, this work per unit charge is equivalent to the difference in electrostatic potential between the two points, so the term potential difference is sometimes used for voltage.) The units of voltage are thus joules per coulomb or volts, abbreviated V. In this text we will use the symbol V or v for voltage.
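These two definitions are just ratios, and a quick numerical check makes the units concrete. The values below are illustrative.

```python
# Current is charge per unit time; voltage is work per unit charge.
charge = 3.0   # coulombs moved past a point in the circuit
elapsed = 2.0  # seconds taken

current = charge / elapsed  # amperes (C/s)
print(current)  # 1.5 A

work = 9.0                  # joules to move the charge between two points
voltage = work / charge     # volts (J/C)
print(voltage)  # 3.0 V
```

So moving 3 C in 2 s is a current of 1.5 A, and if that transfer costs 9 J of work, the two points differ by 3 V.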
In a circuit, there are sources and sinks of energy. Some sources of energy (or voltage) include batteries (which convert chemical energy to electrical energy), generators (mechanical to electrical energy), solar cells (radiant to electrical energy), and power supplies and signal generators (electrical to electrical energy).
We now turn to an examination of the properties and uses of the operational amplifier or op-amp. A detailed analysis of this multi-stage amplifier circuit is beyond the scope of this text, so we will treat it as a black box device as we did earlier with the voltage regulator. Thus, to use the device, we need only learn and apply some simple rules and, later, the real-world limitations of the device.
In current usage, the operational amplifier is usually packaged as an integrated circuit with multiple leads or pins. While there are hundreds of different op-amps with different specifications, they all follow the same usage rules. To be specific, we will focus on a “classic” version: the 741 op-amp.
The circuit symbol for the op-amp is shown in Fig. 6.1. There are inputs for two power supply voltages (one positive and one negative relative to ground, labeled +Vcc and −Vcc, respectively). There are also two signal inputs: the inverting input, labeled with a minus sign, and the non-inverting input, labeled with a plus sign. Happily, there is only one output.
As we know, voltages are always between two points, but our description of the op-amp inputs seems to refer to voltages at one point, the various input pins. It is thus important to note that all of the voltages for the op-amp are referenced to ground (i.e., the second point is ground).
It is normal practice when starting the mathematical investigation of a physical problem to assign algebraic symbols to the quantity or quantities whose values are sought, either numerically or as explicit algebraic expressions. For the sake of definiteness, in this chapter, our discussion will be in terms of a single quantity, which we will denote by x most of the time. The extension to two or more quantities is straightforward in principle, but usually entails much longer calculations, or a significant increase in complexity when graphical methods are used.
Once the sought-for quantity x has been identified and named, subsequent steps in the analysis involve applying a combination of known laws, consistency conditions and (possibly) given constraints to derive one or more equations satisfied by x. These equations may take many forms, ranging from a simple polynomial equation to, say, a partial differential equation with several boundary conditions. Some of the more complicated possibilities are treated in the later chapters of this book, but for the present we will be concerned with techniques for the solution of relatively straightforward algebraic equations.
When algebraic equations are to be solved, it is nearly always useful to be able to make plots showing how the functions, fi(x), involved in the problem change as their argument x is varied; here i is simply a label that identifies which particular function is being considered.
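The graphical idea can also be mimicked numerically: tabulate f(x) on a grid and look for sign changes, each of which brackets a root, just as a crossing would be read off a plot. The cubic below is an illustrative choice (it has one real root near x ≈ 2.09), not an equation from the text.

```python
# Locate roots of f(x) by scanning for sign changes: the numerical
# analogue of reading axis crossings off a plot of f(x) versus x.
def f(x):
    return x**3 - 2.0 * x - 5.0  # illustrative cubic, one real root

def sign_change_brackets(func, lo, hi, steps=400):
    """Return (a, b) subintervals of [lo, hi] where func changes sign."""
    brackets = []
    width = (hi - lo) / steps
    for i in range(steps):
        a = lo + i * width
        b = a + width
        if func(a) * func(b) < 0.0:
            brackets.append((a, b))
    return brackets

brackets = sign_change_brackets(f, -4.0, 4.0)
print(brackets)  # one short interval containing the real root
```

Each bracket can then be refined by bisection or any standard root-finding method; the plot (or scan) has done the job of isolating where the solutions lie.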
In Chapter 6 we discussed how complicated functions f(x) may be expressed as power series. Although they were not presented as such, the power series could all be viewed as linear superpositions of the monomial basic set of functions, namely 1, x, x², x³, …, xⁿ, … Natural though this set may seem, it is in many ways far from ideal: for example, its members possess no mutual orthogonality properties, a characteristic that is generally of great value when it comes to determining, for any particular function, the multiplying constant for each basic function in the sum. Moreover, this particular set of basic functions can only be used to represent continuous functions.
In the case of original functions f(t) that are periodic, some improvement on this situation can be made by using, as the basic set, sine and cosine functions. For a function with period T, say, the set of sine and cosine functions with arguments 2πnt/T, for all n ≥ 0, form a suitable basic set for expressing f as a series; such a representation is called a Fourier series. One great advantage they possess over the monomial functions is that they are mutually orthogonal when integrated over any continuous period of length T, i.e. the integral from t0 to t0 + T of the product of any sine and any cosine, or of two sines or cosines with different values of n, is equal to zero.
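The orthogonality claim is easy to verify numerically. The sketch below integrates products of the basis functions over one period using a simple midpoint rule (which is very accurate for smooth periodic integrands); the period T = 2 and starting point t₀ = 0.3 are arbitrary illustrative choices, the latter emphasizing that any interval of length T works.

```python
import math

# Check the Fourier orthogonality relations: over any interval of
# length T, the integral of sin(2*pi*n*t/T) * cos(2*pi*m*t/T) vanishes,
# as does that of two sines (or two cosines) with different n.
def integrate_product(f, g, T, t0=0.3, steps=2000):
    """Midpoint-rule integral of f(t) * g(t) from t0 to t0 + T."""
    h = T / steps
    total = 0.0
    for i in range(steps):
        t = t0 + (i + 0.5) * h
        total += f(t) * g(t)
    return total * h

T = 2.0
s = lambda n: (lambda t: math.sin(2 * math.pi * n * t / T))
c = lambda n: (lambda t: math.cos(2 * math.pi * n * t / T))

print(integrate_product(s(1), c(1), T))  # any sine with any cosine: ~0
print(integrate_product(s(2), s(3), T))  # two sines, different n: ~0
print(integrate_product(s(2), s(2), T))  # same function: T/2, not zero
```

The last line shows why the normalization T/2 appears in the usual Fourier coefficient formulas: a basis function is orthogonal to every other member of the set but not to itself.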