It [Moore's law] can't continue forever. The nature of exponentials is that you push them out and eventually disaster happens.
Gordon Moore, 2005
In this chapter, we discuss how to model effects that arise in transistors with short channel lengths. Such short-channel transistors have correspondingly thin gate oxides, shallow source/drain junctions, high doping, and small operational supply and threshold voltages. The models are accurate for transistor lengths that range from 1 μm to 0.01 μm (10 nm), where semi-classical approximations of transistor function are still valid. We shall allude to some quantum phenomena as well. We shall begin by describing the EKV model in a long-channel transistor. The EKV model is an insightful charge-based model that captures subthreshold and above-threshold operation in a simple analytic fashion. It is named after its originators, Enz, Krummenacher, and Vittoz. Then, based on a short-channel model in [2], we shall modify the long-channel EKV model to describe a short-channel effect that is important in above-threshold operation, namely velocity saturation. As the lateral electric field in a transistor increases with increasing drain-to-source voltage, the drift velocity of electrons in short-channel transistors begins to saturate at a maximum velocity vsat; the drift velocity is then no longer related to the lateral electric field via the mobility proportionality constant as it is in long-channel transistors.
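As a rough, hedged illustration of this behavior, the short Python sketch below uses a commonly quoted empirical velocity-field relation, v = μE/(1 + E/Ec) with Ec = vsat/μ, rather than the exact expressions of the short-channel model in [2]; the mobility and saturation-velocity values are merely illustrative.

# A minimal sketch (not the exact model of [2]): an empirical velocity-field
# relation in which drift velocity is mu*E at low lateral fields but
# saturates at v_sat for fields well above the critical field E_c = v_sat/mu.

mu = 0.04         # effective electron mobility, m^2/(V*s) (illustrative value)
v_sat = 1.0e5     # saturation velocity, m/s (typical order of magnitude for silicon)
E_c = v_sat / mu  # critical lateral field, V/m

def drift_velocity(E_lat):
    """Drift velocity versus lateral field: ~mu*E at low fields, ~v_sat at high fields."""
    return mu * E_lat / (1.0 + E_lat / E_c)

for E in [0.1 * E_c, E_c, 10 * E_c, 100 * E_c]:
    print(f"E = {E:.2e} V/m  ->  v = {drift_velocity(E):.2e} m/s")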
Nature uses only the longest threads to weave her patterns, so that each small piece of her fabric reveals the organization of the entire tapestry.
Richard P. Feynman
Energy surrounds us, is within us, and is created by us. In this chapter, we shall discuss how systems can harvest energy in their environments and thus function without needing to constantly carry their own energy source. The potential benefits of an energy-harvesting strategy are that the lifetime of the low-power system is then not limited by the finite lifetime of its energy source, and that the weight and volume of the system can be reduced if the size of the energy-harvester is itself small. The challenges of an energy-harvesting strategy are that many energy sources are intermittent, can be hard to efficiently harvest, and provide relatively low power per unit area. Thus, energy-harvesting systems are usually practical only if the system that they power operates with relatively low power consumption.
We shall begin by discussing energy-harvesting strategies that have been explored for low-power biomedical and portable applications. First, we discuss strategies that convert mechanical body motions into electricity. A circuit model developed for describing energy transfer in inductive links in Chapter 16 is extremely similar to a circuit model that accurately characterizes how such mechanical energy harvesters function. Thus, tradeoffs in maximizing energy efficiency or energy transfer are also similar.
All difficult things have their origin in that which is easy, and great things in that which is small.
Lao Tzu
In many systems, certain important transistors in an architecture that determine most of its performance are operated such that the current through them has a large-signal dc bias component around which there are small-signal ac deviations. The voltages of these transistors correspondingly also have a dc large-signal operating point and small-signal ac deviations. If the ac deviations are sufficiently small, the transistor may be characterized as a linear system in its ac small-signal variables with the parameters of the linear system being determined by the dc large-signal variables. In this chapter, we will focus on the small-signal properties of the transistor.
Given that the transistor is a highly nonlinear device, it may be surprising that we are interested in its linear small-signal behavior. However, the transistor's linear small-signal behavior largely determines its performance in analog feedback loops that are intentionally designed to have a linear input-output relationship in spite of nonlinear devices in the architecture. For example, most operational amplifier (opamp) circuits are architected such that negative feedback inherent in the topology ensures that the input terminal voltages of the opamp, v+ and v−, will be very near each other and, therefore, that the small-signal properties of the transistors in the opamp determine its stability and convergence in most situations.
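To make the linearization concrete, the sketch below numerically compares the exact change in drain current with the small-signal prediction i_d = g_m·v_gs for a hypothetical subthreshold-like exponential I-V characteristic; the parameter values are illustrative assumptions rather than values from the text.

import numpy as np

# Minimal sketch of large-signal bias plus small-signal linearization, using a
# hypothetical subthreshold-like exponential I-V: I_D = I_0 * exp(kappa*V_GS/U_T).
# The values of I_0, kappa, and U_T below are illustrative assumptions.

I_0 = 1e-15      # pre-exponential current, A (illustrative)
kappa = 0.7      # gate coupling coefficient (illustrative)
U_T = 0.0258     # thermal voltage at room temperature, V

def I_D(V_GS):
    return I_0 * np.exp(kappa * V_GS / U_T)

V_GS_dc = 0.40                   # dc large-signal operating point, V
I_dc = I_D(V_GS_dc)              # dc bias current
g_m = kappa * I_dc / U_T         # small-signal transconductance dI_D/dV_GS at the bias

for v_gs in [1e-3, 5e-3, 20e-3]:  # ac deviations of increasing size
    exact = I_D(V_GS_dc + v_gs) - I_dc
    linear = g_m * v_gs           # linear small-signal prediction i_d = g_m * v_gs
    print(f"v_gs = {v_gs*1e3:4.1f} mV: exact i_d = {exact:.3e} A, g_m*v_gs = {linear:.3e} A")

For the smallest deviation the two answers agree closely, while for the largest one the exponential nonlinearity makes the linear prediction visibly inaccurate, which is why the small-signal treatment holds only for sufficiently small ac deviations.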
When one door of happiness closes another opens; but we often look so long at the closed one that we do not see the one which has opened for us.
Helen Keller
Implantable electronics refers to electronics that may be partially or fully implanted inside the body. Several implanted electronic systems today have revolutionized patients' lives. For example, more than 130,000 profoundly deaf people in the world today have a cochlear implant in their inner ear or cochlea that allows them to hear almost normally. Some cochlear-implant subjects have word-recognition rates in clean speech that are better than those of normal-hearing subjects, and they understand telephone speech easily. Cochlear implants electrically stimulate the auditory nerve, the nerve that conducts electrical impulses from the ear to the brain, with an ac current. Cochlear-implant subjects are so profoundly deaf that even the feedback-limited ~1000× gain of a hearing aid is not large enough to help them hear. Therefore, an implant that directly stimulates their auditory nerve is necessary. Cochlear implants today are partially implanted systems: the electrodes and a wireless receiver are implanted inside the body while a microphone, processor, and wireless transmitter are placed outside the body.
Patients with Parkinson's disease have had their quality of life significantly improved by a deep-brain stimulator (DBS) that is fully implanted inside their bodies. This stimulator provides ac electrical current stimulation to a highly specific region of the brain, most commonly the subthalamic nucleus (STN).
If you shut the door to all errors, truth will be shut out.
Rabindranath Tagore
In this chapter, we present ten general principles for ultra-low-power analog and mixed-signal design. We shall begin by comparing the paradigms of analog computation and digital computation intuitively and then quantitatively. The quantitative analysis will be based on fundamental relationships that dictate the reduction of noise and offset with the use of power, area, and time resources in any computation. It shall reveal important tradeoffs in how the power and area resources needed for a computation scale with the precision of computation in analog versus digital systems. From these results, we shall discuss why, from power considerations, there is an optimum amount of analog preprocessing that must be performed before a signal is digitized. If digitization is performed early and at high speed and high precision, as is often the case, the power costs of analog-to-digital conversion and digital processing become large; if digitization is performed too late, the costs of maintaining precision in the analog preprocessing become large; at the optimum, there is a balance between the two forms of processing that minimizes power.
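The toy Python sketch below is purely illustrative of this qualitative argument: it uses hypothetical cost functions (not the book's quantitative scaling laws) in which the analog cost of maintaining precision rises, and the ADC/digital cost falls, as more of the processing is moved before the digitizer, so that the total power exhibits a minimum at an intermediate partition.

import numpy as np

# Purely illustrative toy model of the analog-versus-digital partition tradeoff.
# The cost functions and constants below are hypothetical and chosen only to
# exhibit the qualitative minimum described in the text.

x = np.linspace(0.0, 1.0, 101)           # fraction of the processing done in analog

P_adc_digital = 10.0 * (1.0 - x) + 1.0   # falls as less work is left after digitization
P_analog = 0.5 * np.exp(4.0 * x)         # rises as analog stages must preserve precision

P_total = P_adc_digital + P_analog
x_opt = x[np.argmin(P_total)]

print(f"Toy optimum: do ~{100*x_opt:.0f}% of the processing in analog "
      f"before digitizing (total cost {P_total.min():.2f}, arbitrary units)")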
There are detailed similarities between power-saving principles in analog and digital paradigms because they are both concerned with how to represent, process, and transform information with low levels of energy. We shall itemize and discuss several of these similarities.
It has long been an axiom of mine that the little things are infinitely the most important.
Sir Arthur Conan Doyle
Noninvasive medical electronics refers to electronics for medical instruments that do not invade or penetrate the body. The sensors in these instruments can and often do contact the body. Examples of such sensing include:
Electrocardiogram (EKG or ECG) measurements of heart function.
Photoplethysmographic (PPG) measurements of blood-oxygen saturation, or pulse oximetry.
Phonocardiogram (PCG) measurements of heart sounds.
Electroencephalogram (EEG) measurements of brain function.
Magnetoencephalogram (MEG) measurements of brain function.
Electromyogram (EMG) measurements of muscle function.
Electrooculogram (EOG) measurements of eye motion.
Electrical impedance tomography (EIT) measurements to infer the composition of the body's tissues; impedance cardiography (ICG) is a further specialization within the field.
Temperature measurements.
Blood-pressure (BP) measurements.
Pulmonary auscultation (lung-sound) measurements.
Biomolecular detection of small molecules, DNA, proteins, cells, viruses, or microorganisms for point-of-care or lab-on-a-chip applications, which often exploit BioMEMS (Bio Micro Electro Mechanical Systems) and microfluidic technologies, mostly in a noninvasive fashion thus far.
When such sensing is done chronically, for example as heart tags on patients with a high risk for myocardial infarction (MI), i.e., a heart attack, it is often called wearable electronics. The various sensors on the body may form a body sensor network (BSN) or body area network (BAN) in which the sensors communicate with each other and/or with the patient's cell phone, or with an RFID, Bluetooth, Zigbee, MICS, UWB, or other wireless receiver in the home, hospital, or battlefield.
In this chapter, we shall review important principles for ultra-low-power digital circuit and system design. We shall focus on operation with extremely low power-supply voltages and on subthreshold operation, although we shall provide some analysis of moderate-inversion and strong-inversion operation with the EKV model as well. As Chapter 6 on deep submicron effects in transistors discussed, because threshold voltages scale significantly less strongly than power-supply voltages, subthreshold operation occupies an increasingly dominant fraction of the voltage operating range. Subthreshold operation has become, and will continue to become, increasingly fast, such that ultra-low-power operation in this regime does not sacrifice bandwidth in many applications. In biomedical and bioelectronic applications, subthreshold operation is ideal since bandwidth requirements are typically modest while energy efficiency is of paramount importance. An insightful paper by Meindl that was well ahead of its time pioneered subthreshold digital design. Burr and Peterson analyzed the optimal energy efficiency of ultra-low-power subthreshold circuits. A more recent publication by Vittoz, the pioneer of subthreshold analog design, has analyzed issues in subthreshold digital design using his EKV model. Through such pioneering and other work, subthreshold digital design has been revived and is an active field of research in several academic and industrial institutions.
We shall begin by discussing the operation of a subthreshold CMOS inverter. Operation in the subthreshold regime is highly subject to transistor mismatch.
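The following sketch quantifies this sensitivity under assumed parameter values: because subthreshold current depends exponentially on gate voltage, a few tens of millivolts of threshold-voltage mismatch between nominally identical transistors produces a large fractional mismatch in their currents.

import numpy as np

# Minimal sketch (with assumed parameter values) of why subthreshold operation
# is highly subject to transistor mismatch: subthreshold current depends
# exponentially on the gate overdrive, so a threshold-voltage mismatch delta_VT
# between two nominally identical transistors produces a fractional current
# mismatch of roughly exp(kappa*delta_VT/U_T) - 1.

kappa = 0.7      # gate coupling coefficient (illustrative)
U_T = 0.0258     # thermal voltage at room temperature, V

for delta_VT_mV in [5, 10, 20, 30]:
    ratio = np.exp(kappa * delta_VT_mV * 1e-3 / U_T)
    print(f"delta_VT = {delta_VT_mV:2d} mV  ->  current mismatch ~ {100*(ratio-1):.0f}%")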
It is presumed that there exists a great unity in nature, in respect of the adequacy of a single cause to account for many different kinds of consequences.
Immanuel Kant
Devices that are hooked to each other at various terminals create a circuit. Almost all nontrivial circuits comprise topologies where output terminal(s) are directly or indirectly coupled back to input terminal(s), thus forming a feedback circuit. When the output feeds back to reduce the effects of the input, the feedback is termed negative feedback, and when the output feeds back to increase the effects of the input, the feedback is termed positive feedback. Purely feed-forward circuits are usually simple to build and easy to analyze, even when nonlinear, such that most of the complexity and richness in circuits arises from the feedback embedded within them.
Negative-feedback circuits function by creating forces within the circuit that attempt to restore its signals to a desired equilibrium point if these signals deviate from this point. Negative-feedback circuits often serve regulatory functions, improving the precision of the output to that of a precise equilibrium-setting reference input and/or that of a precise sensor or feedback network. Such precision is achieved in spite of imprecision in an actuator or feed-forward network in the circuit and/or disturbances present at the output of the circuit.
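A minimal numerical sketch of this regulatory property, using the standard single-loop model with closed-loop gain a/(1 + af) rather than any particular circuit from the text, shows how a precise feedback factor f sets the output precision even when the feed-forward gain a is imprecise:

# Minimal sketch of negative-feedback desensitization for a standard
# single-loop model: closed-loop gain A_cl = a / (1 + a*f), where a is an
# imprecise feed-forward (actuator) gain and f is a precise feedback gain.

def closed_loop_gain(a, f):
    return a / (1.0 + a * f)

f = 0.1                      # precise feedback network sets A_cl ~ 1/f = 10
for a in [1e3, 2e3, 0.5e3]:  # feed-forward gain varies by a factor of 2 either way
    print(f"a = {a:7.1f}  ->  A_cl = {closed_loop_gain(a, f):.4f}")

A factor-of-four spread in a changes the closed-loop gain by only about one percent when the loop gain af is large.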
Positive-feedback circuits function by creating forces within the circuit that attempt to move its signals further away from a point if these signals deviate from that point.
I do not think there is any thrill that can go through the human heart like that felt by the inventor as he sees some creation of the brain unfolding to success … Such emotions make a man forget food, sleep, friends, love, everything.
Nikola Tesla
Implanted medical devices are rapidly becoming ubiquitous. They are used in a wide variety of medical conditions such as pacemakers for cardiac arrhythmia, cochlear implants for deafness, deep-brain stimulators for Parkinson's disease, spinal-cord stimulators for the control of pain, and preliminary retinal implants for blindness. They are being actively researched in brain-machine interfaces for paralysis, epilepsy, stroke, and blindness. In the future, there will undoubtedly be electronically controlled drug-releasing implants for a wide variety of hormonal, autoimmune, and cancer-related disorders. All such devices need to be small and operate with low power to make chronic and portable medical implants possible. They are most often powered by inductive radio-frequency (RF) links to avoid the need for implanted batteries, which can potentially lose all their charge or necessitate repeat surgery if they need to be replaced. Even when such devices have implanted batteries or ultra-capacitors, an increasing trend in upcoming fully implanted systems, wireless recharging of the battery or ultra-capacitor through RF links is periodically necessary.
Figure 16.1 shows the basic structure of an inductive power link system for an example implant. An RF power amplifier drives a primary RF coil which sends power inductively across the skin of the patient to a secondary RF coil.
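As a rough, hedged model of such a link, the sketch below treats the two coils as a simple pair of magnetically coupled inductors with mutual inductance M = k·sqrt(L1·L2) and a current-driven primary; all component values are illustrative, and practical links typically add tuning capacitors to resonate the coils.

import numpy as np

# Minimal, non-resonant sketch of a two-coil inductive link. With the primary
# driven by a sinusoidal current I1 at angular frequency w, the induced
# secondary EMF is j*w*M*I1, which drives the secondary loop (L2 in series with
# a load R_L). All values below are illustrative assumptions.

f = 10e6                      # drive frequency, Hz (illustrative)
w = 2 * np.pi * f
L1, L2 = 2e-6, 1e-6           # primary and secondary inductances, H (illustrative)
k = 0.1                       # coupling coefficient across the skin (illustrative)
M = k * np.sqrt(L1 * L2)      # mutual inductance, H
R_L = 50.0                    # load resistance on the secondary, ohms (illustrative)
I1 = 0.1                      # primary current amplitude, A (illustrative)

emf2 = 1j * w * M * I1                 # induced secondary EMF (phasor)
I2 = emf2 / (R_L + 1j * w * L2)        # secondary loop current (phasor)
P_load = 0.5 * R_L * abs(I2) ** 2      # average power delivered to the load

print(f"M = {M*1e9:.1f} nH, |EMF2| = {abs(emf2)*1e3:.1f} mV, P_load = {P_load*1e3:.3f} mW")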
But the real glory of science is that we can find a way of thinking such that the law is evident.
Richard P. Feynman
Noise ultimately limits the performance of all systems. For example, the maximum gain of an amplifier is limited to VDD/vn where VDD is the power-supply voltage and vn is the noise floor at the input of the amplifier. Gains higher than this limiting value will simply amplify noise to saturating power-supply values and leave no output dynamic range available for discerning input signals.
Since power is the product of voltage and current, low-power systems have low voltages and/or low current signal levels. Hence, they are more prone to the effects of small signals such as noise. A deep understanding of noise is essential in order to design architectures that are immune to it, in order to efficiently allocate power, area, and averaging-time resources to reduce it, and in order to exploit it. We will begin our study of noise in physical devices from a first-principles view of some of the fundamental concepts and mathematics behind it.
The mathematics of noise
We pretend that macroscopic current is the flow of a smooth continuous fluid. However, the current is actually made up of tiny microscopic discrete charged particles that flow in a semi-orderly fashion. The random disorderly portion of the charged-particle motion manifests itself in the macroscopic current as noise. Figure 7.1 reveals how macroscopic current is actually made up of tiny fluctuations around its mean value, which constitute current noise.
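A minimal simulation sketch, assuming statistically independent (Poisson) carrier arrivals, makes this picture concrete: the mean arrival rate sets the dc current, and the randomness of the arrivals appears as fluctuations whose variance matches the shot-noise expression 2qI·Δf.

import numpy as np

# Minimal sketch, assuming statistically independent (Poisson) arrivals of
# discrete electrons: the mean arrival rate sets the macroscopic dc current,
# while the randomness of the arrivals appears as fluctuations about that mean.
# For Poisson arrivals the current noise measured in a bandwidth delta_f has
# variance 2*q*I_dc*delta_f (shot noise); the simulation checks this.

rng = np.random.default_rng(0)

q = 1.602e-19            # electron charge, C
I_dc = 1e-9              # desired mean current, A (illustrative)
rate = I_dc / q          # mean electron arrival rate, 1/s
dt = 1e-9                # observation bin width, s  ->  bandwidth ~ 1/(2*dt)
n_bins = 200_000

counts = rng.poisson(rate * dt, size=n_bins)   # electrons arriving in each bin
current = q * counts / dt                      # binned macroscopic current

measured_var = current.var()
predicted_var = 2 * q * I_dc * (1.0 / (2 * dt))   # S_I = 2*q*I integrated over 1/(2*dt)

print(f"mean current  : {current.mean():.3e} A (target {I_dc:.1e} A)")
print(f"current noise : measured var {measured_var:.3e} A^2, "
      f"shot-noise prediction {predicted_var:.3e} A^2")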
Intuition will tell the thinking mind where to look next.
Jonas Salk
To deeply understand any electronic circuit, whether it is low power or not, it is essential to have a good mastery of the devices from which that circuit is made. In this chapter, we will begin our study of device physics with the metal oxide semiconductor (MOS) transistor, the most important active device in electronics today. The MOS transistor is a field effect transistor (FET) and MOSFETs are abbreviated as nFETs if their current is due to electron flow and as pFETs if their current is due to hole flow. In this chapter, we shall focus on fundamental principles and on exact mathematical descriptions that are applicable to transistors built in technologies with relatively long dimensions. In later chapters, we shall study practical approximations needed to simplify these exact mathematical descriptions (Chapter 4), study small-signal dynamic models of the MOS transistor (Chapter 5), and discuss effects observed in deep submicron transistors with relatively short dimensions (Chapter 6).
Figure 3.1 shows a zoomed-in view of an n-channel FET or nFET built in a standard bulk complementary metal oxide semiconductor (CMOS) process. There are four terminals, referred to as the gate (G), source (S), drain (D), and bulk (B). The control terminal, the metal-like polysilicon gate, is insulated from the silicon bulk by a silicon dioxide insulator; the source and drain terminals are created with n+ regions in the p-type silicon bulk.
You don't understand anything until you learn it more than one way.
Marvin Minsky
In this chapter, we shall discuss a feedback technique for analyzing linear circuits, invented by Hendrik W. Bode in his landmark book, Network Analysis and Feedback Amplifier Design, published in 1945. The technique, known as return-ratio analysis, allows one to compute the return ratio of an active dependent generator or passive impedance in a linear circuit as a function of its dependent gain or of its passive impedance, respectively. The return ratio is a quantity analogous to the loop transmission in a feedback loop: just as we can use the loop transmission in a feedback loop to analyze how the dynamics of the loop change as we vary its dc gain, we can use the return ratio of an element to analyze how transfer functions in the circuit change as we vary the dependent gain or passive impedance of the element. The return ratio also gives us a measure of the robustness of the circuit to changes in the gain or impedance of the element in the same manner that the loop transmission gives us a measure of the robustness of a feedback loop to changes in its feedforward gain. Return-ratio analysis explicitly recognizes that circuits are composed of bidirectional elements with loading, such that the creation of unidirectional block diagrams with feedforward gain a(s) and feedback gain f(s) to analyze them is not unique and sometimes cumbersome.
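As a hedged illustration of how the return ratio enters such calculations, consider the standard asymptotic-gain form of return-ratio analysis, stated here in generic notation rather than necessarily that of the text: if T(s) is the return ratio of a chosen dependent generator, then any transfer function of the linear circuit can be written as H(s) = H∞(s)·T(s)/(1 + T(s)) + H0(s)/(1 + T(s)), where H∞(s) is the transfer function computed with the generator's gain taken to infinity and H0(s) is the transfer function computed with that gain set to zero. Letting T(s) grow large recovers the ideal result H(s) → H∞(s), just as a large loop transmission does in a block-diagram analysis with gains a(s) and f(s).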
Information is represented by the states of physical devices. It costs energy to transform or maintain the states of these physical devices. Thus, energy and information are deeply linked. It is this deep link that allows us to articulate information-based principles for ultra-low-power design that apply to biology or to electronics, to analog or to digital systems, to electrical or to non-electrical systems, at small scales or at large scales. The graphical languages of circuits and feedback serve as powerful unifying tools to understand or to design low-power systems that range from molecular networks in cells to biomedical implants in the brain to energy-efficient cars.
A vision that this book has attempted to paint in the context of the fields of ultra-low-power electronics and bioelectronics is shown in the figure below. Engineering can aid biology through analysis, instrumentation, design, and repair (medicine). Biology can aid engineering through bio-inspired design. The positive-feedback loop created by this two-way interaction can amplify and speed progress in both disciplines and shed insight into both. It is my hope that this book will bring appreciation to the beauty, art, and practicality of such synergy and that it will inspire the building of more connections in one or both directions in the future.
When we tug on a single thing in nature, we find it attached to everything else.
John Muir
Devices that dissipate energy, such as resistors and transistors, always generate noise. This noise can be modeled by the inclusion of current-noise generators in the small-signal models of these devices. When several such devices interact together in a circuit, the noise from each of these generators contributes to the total current or total voltage noise of a particular signal in a circuit. In this chapter, we will understand how to compute the total noise in a circuit signal due to noise contributions from several devices in it. We shall begin by discussing simple examples of an RC circuit and of a subthreshold photoreceptor circuit. We shall see that the noise of both of these circuits behaves in a similar way. We shall discuss the equipartition theorem, an important theorem from statistical mechanics, which sheds insight into the similar noise behavior of circuits in all physical systems. We shall then outline a general procedure for computing noise in circuits and apply it to the example of a simple transconductance amplifier and its use in a lowpass filter circuit. We will then be armed with the tools needed to understand and predict the noise of complicated circuits, and to design ultra-low-noise circuits.
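As a small preview of the similarity alluded to above, the sketch below numerically integrates the output noise spectrum of a simple RC circuit, 4kTR/(1 + (2πfRC)^2), and recovers a total mean-square voltage of kT/C on the capacitor, independent of R, which is exactly what the equipartition theorem, (1/2)C⟨vn²⟩ = (1/2)kT, predicts directly; the component values are illustrative.

import numpy as np

# Minimal numerical check: integrate the RC lowpass output noise spectrum
# S_v(f) = 4*k*T*R / (1 + (2*pi*f*R*C)^2) over frequency and compare the result
# with the equipartition prediction kT/C. Component values are illustrative.

k_B = 1.381e-23     # Boltzmann constant, J/K
T = 300.0           # temperature, K
R = 1e6             # resistance, ohms (illustrative)
C = 1e-12           # capacitance, F (illustrative)

f_c = 1.0 / (2 * np.pi * R * C)                 # RC corner frequency, Hz
f = np.linspace(0.0, 3000.0 * f_c, 3_000_001)   # frequency grid extending well past the corner
S_v = 4 * k_B * T * R / (1 + (2 * np.pi * f * R * C) ** 2)

df = f[1] - f[0]
v2_integrated = np.sum(0.5 * (S_v[:-1] + S_v[1:])) * df   # trapezoidal integration, V^2
v2_equipartition = k_B * T / C

print(f"integrated RC noise : {v2_integrated:.3e} V^2")
print(f"kT/C                : {v2_equipartition:.3e} V^2")
print(f"rms noise voltage   : {np.sqrt(v2_equipartition) * 1e6:.1f} uV")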
We shall conclude by presenting an example of an ultra-low-noise micro-electro-mechanical system (MEMS), a capacitance-sensing system capable of sensing a 0.125 parts-per-million (ppm) change in a small MEMS capacitance (23-bit precision in sensing).