This chapter introduces two major notions: sets and functions. We are all familiar with real functions, for example, f(x) = 2x + 1 and g(x) = sin(x). Here the approach is somewhat different. The first difference is that we do not limit the discussion to the set of real numbers; instead, we consider arbitrary sets and are mostly interested in sets that contain only a finite number of elements. The second difference is that we do not define a “rule” for assigning a value for each x; instead, a function is simply a list of pairs (x, y), where y denotes the value of the function when the argument equals x. The definition of functions relies on the definitions of sets and relations over sets. That is why we need to define various operations over sets such as union, intersection, complement, and Cartesian product.
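These ideas translate directly into executable form. The following sketch (set names and values are illustrative, not from the text) shows the set operations listed above and a function given as a list of pairs rather than by a rule:

```python
# A small universal set and two subsets (illustrative values).
U = {0, 1, 2, 3, 4, 5}
A = {0, 2, 4}
B = {1, 2, 3}

union = A | B                              # {0, 1, 2, 3, 4}
intersection = A & B                       # {2}
complement = U - A                         # {1, 3, 5}
product = {(a, b) for a in A for b in B}   # Cartesian product A x B

# A function f : A -> U represented as a set of pairs (x, f(x)),
# with no "rule" behind it -- just the list of pairs.
f = {(0, 1), (2, 5), (4, 1)}
f_as_dict = dict(f)        # look up the value assigned to an argument
assert f_as_dict[2] == 5
```

The dictionary view makes the point concrete: to evaluate the function at x, we simply look up the pair whose first component is x.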
The focus of this book is Boolean functions. Boolean functions are a special family of functions. Their arguments and values are finite sequences of 0 and 1 (also called bits). In this chapter, we show how to represent a Boolean function by a truth table and multiplication tables. Other representations presented later in the book are Boolean formulas and combinational circuits.
SETS
A set is a collection of objects. When we deal with sets, we usually have a universal set that contains all the possible objects. In this section, we denote the universal set by U.
A long-standing source of confusion is the order of bits in binary strings. This issue is very important when strings of bits are serially communicated or stored in memories. Consider the following two scenarios.
In the first setting, Alice wishes to send to Bob a binary string a[n − 1 : 0]. The channel that Alice and Bob use for communication is a serial channel. This means that Alice can only send one bit at a time. Now Alice has two natural choices:
She can send a[n − 1] first and a[0] last; namely, she can send the bits in descending index order. This order is often referred to as most significant bit first or just MSB first.
She can send a[0] first and a[n − 1] last; namely, she can send the bits in ascending index order. This order is often referred to as least significant bit first or just LSB first.
In the second setting, computer words are stored in memory. A memory is a vector of storage places. We denote this vector by M[0], M[1], … Suppose that each storage place is capable of storing a byte (i.e., 8 bits). The typical word size in modern computers is 32 bits (and even 64 bits). This means that a word is stored in four memory slots. The question is, how do we store a word a[31:0] in four memory slots?
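Both settings can be sketched in a few lines. In the sketch below (function names are ours), the serial channel is modeled as a list of bits sent one at a time, and the two byte orders in memory correspond to what are conventionally called little-endian and big-endian layouts:

```python
def msb_first(bits):
    """Serial order 1: send a[n-1] first, a[0] last (descending index).
    bits is given in index order [a[0], ..., a[n-1]]."""
    return list(reversed(bits))

def lsb_first(bits):
    """Serial order 2: send a[0] first, a[n-1] last (ascending index)."""
    return list(bits)

def store_word(word, order):
    """Store a 32-bit word a[31:0] in four byte-wide slots M[0..3].
    order='little' puts a[7:0] in M[0]; order='big' puts a[31:24] there."""
    return list(word.to_bytes(4, order))

little = store_word(0x12345678, 'little')
big = store_word(0x12345678, 'big')
assert little == [0x78, 0x56, 0x34, 0x12]   # a[7:0] lands in M[0]
assert big == [0x12, 0x34, 0x56, 0x78]      # a[31:24] lands in M[0]
```

The two memory layouts are exact analogues of the two serial orders: LSB first corresponds to the little-endian layout, MSB first to the big-endian one.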
In this chapter, we study the rate of growth of positive sequences. We introduce a formal definition that enables us to say that one sequence does not grow faster than another sequence. Suppose we have two sequences xi and yi. We could say that xi does not grow faster than yi if xi ≤ yi for every i. However, such a definition is rather limited, as suggested by the following examples:
The sequence xi is constant: xi = 1000 for every i, while the sequence yi is defined by yi = i. Clearly we would like to say that yi grows faster than xi even though y100 < x100.
The sequences satisfy xi = yi + 5 or xi = 2 · yi for every i. In this case, we would like to say that the two sequences grow at the same rate even though xi > yi.
Thus we are looking for a definition that is insensitive to the values of finite prefixes of the sequence. In addition, we are looking for a definition that is insensitive to addition or multiplication by constants. This definition is called the asymptotic behavior of a sequence.
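A minimal sketch of the kind of definition the chapter builds toward (the function name and the finite checking range are our assumptions; the real definition quantifies over all i): xi grows no faster than yi if there exist constants c and N such that xi ≤ c · yi for every i ≥ N.

```python
# Empirical check of "x grows no faster than y": x(i) <= c * y(i)
# for all i >= N, tested over a finite range (the real definition
# is a statement about all i >= N).
def no_faster(x, y, c, N, limit=10_000):
    return all(x(i) <= c * y(i) for i in range(N, limit))

x = lambda i: 1000          # the constant sequence from the first example
y = lambda i: i             # the linear sequence

# Holds with N = 1000, even though y(100) < x(100):
assert no_faster(x, y, c=1, N=1000)

x2 = lambda i: 2 * y(i)     # multiplication by a constant
assert no_faster(x2, y, c=2, N=1)   # same growth rate, absorbed by c
```

The constants c and N are exactly what make the definition insensitive to finite prefixes and to multiplication by constants.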
ORDER OF GROWTH RATES
Consider the Fibonacci sequence g(n). The exact value of g(n), or an analytic formula for g(n), is interesting, but sometimes all we need to know is how fast g(n) grows.
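A quick numerical sketch of that growth (the function name is ours): the ratio of consecutive Fibonacci numbers approaches the golden ratio φ ≈ 1.618, so g(n) grows exponentially, roughly like φ^n.

```python
def fib(n):
    """Iterative Fibonacci: g(0)=0, g(1)=1, g(n)=g(n-1)+g(n-2)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

phi = (1 + 5 ** 0.5) / 2   # the golden ratio, ~1.618
ratios = [fib(n + 1) / fib(n) for n in range(10, 15)]
# Consecutive ratios converge to phi, i.e., exponential growth.
assert all(abs(r - phi) < 1e-3 for r in ratios)
```

This is the sense in which the growth rate, not the exact value, is often the useful information.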
In Chapter 6, we used Boolean formulas to represent Boolean functions. The idea was to write a Boolean formula over a set of n variables and then assign 0-1 values to each variable. This assignment induces a truth value for the formula, and thus we have a Boolean function over n bits. In fact, any Boolean function can be represented by a Boolean formula if the set of connectives is complete. In Section 6.6, we proved that the set {¬, ∧, ∨} is a complete set of connectives.
In this chapter, we consider special representations of functions that are often called normal forms. Boolean formulas in a normal form are restricted forms of formulas.
Given a Boolean function, one may want to find a shortest representation of the function by a Boolean formula. This question is not well defined because one needs to specify how a Boolean function is represented. Suppose the function is described by its truth table. In this case, the truth table has 2^n entries, where n denotes the number of bits in the domain of the function. Obviously, we can only read or write truth tables for rather small values of n. If n ≥ 100, then all the atoms in the universe would not suffice!
Nevertheless, we present a method by Quine and McCluskey to find a shortest representation of a function by a Boolean formula in a normal form called a sum of products.
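As a warm-up, here is a sketch of the (unminimized) sum-of-products form itself, not the Quine–McCluskey minimization procedure: one product term per satisfying row of the truth table, ORed together. Function names are illustrative.

```python
from itertools import product

def sum_of_products(f, n):
    """Build the sum-of-products formula for an n-bit function f:
    one AND term (product) per row of the truth table where f is 1."""
    terms = []
    for bits in product((0, 1), repeat=n):
        if f(bits):
            lits = [f"x{i}" if b else f"~x{i}" for i, b in enumerate(bits)]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms) if terms else "0"

xor = lambda bits: bits[0] ^ bits[1]
assert sum_of_products(xor, 2) == "(~x0 & x1) | (x0 & ~x1)"
```

Quine–McCluskey starts from exactly this kind of term list and repeatedly merges terms that differ in one literal, which is what makes the result a *shortest* sum of products.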
In this chapter, we present circuits that serve as memory modules. The first type of circuit, called a parallel load register, is simply built of identical copies of clock-enabled flip-flops. The second type, called a shift register, remembers the input from k clock cycles ago. Such a shift register is built from k clock-enabled flip-flops connected in a chain.
We also consider memory circuits. The first circuit, called a Random Access Memory (RAM), is capable of storing and reading values. It is like a blackboard divided into many cells. We can write a value in each cell, and we can read the value written in a cell. When we write in a cell, it erases the previous value so that only the new value is written in the cell. The second memory circuit is called a Read-Only Memory (ROM). It is like a blackboard on which a value has been written in each cell with permanent ink. The contents of each cell cannot be erased or written over, and we can only read values stored in the cells.
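The behavior of these circuits can be sketched in the zero-delay style (class and method names are ours, not the book's): the shift register delays its input by k cycles, and writing a RAM cell overwrites the previous value.

```python
class ShiftRegister:
    """Remembers the input from k clock cycles ago
    (a chain of k clock-enabled flip-flops, zero-delay model)."""
    def __init__(self, k):
        self.cells = [0] * k
    def clock(self, bit):
        out = self.cells[-1]                 # bit that entered k cycles ago
        self.cells = [bit] + self.cells[:-1] # shift the chain by one
        return out

class RAM:
    """Each write overwrites the cell's previous value; reads are free."""
    def __init__(self, size):
        self.cells = [0] * size
    def write(self, addr, value):
        self.cells[addr] = value
    def read(self, addr):
        return self.cells[addr]

sr = ShiftRegister(3)
outputs = [sr.clock(b) for b in [1, 0, 1, 1, 0, 0]]
assert outputs == [0, 0, 0, 1, 0, 1]   # the input, delayed by k = 3 cycles
```

A ROM would look like the RAM with the `write` method removed and the cell contents fixed at construction time, matching the permanent-ink analogy.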
The functionality of a flip-flop is complicated, and following the timing is tedious. Instead, we propose an abstract model called the zero delay model. In this simplified model, all transitions are instantaneous. The zero delay model enables us to separate functionality from timing so that we can focus on functionality.
This book is an introductory textbook on the design and analysis of digital logic circuits. It has been written after 15 years of teaching hardware design courses in the School of Electrical Engineering at Tel Aviv University. The main motivation for writing a new textbook was the desire to teach hardware design rigorously. By rigorously, we mean that mathematical language and exposition are used to define the model, to specify functionality, to describe designs, to prove correctness, and to analyze cost and delay. We believe that students who study formal courses such as algebra and calculus can cope well with a rigorous approach. Moreover, they are likely to benefit from this approach in many ways.
The book covers the material of an introductory course in digital logic design, including an introduction to discrete mathematics. It is self-contained; it begins with basic gates and ends with the specification and implementation of a simple microprocessor. The goal is to turn our students into logic designers within one semester.
The rest of this preface deals with the rationale, structure, and audience of the book. We conclude with a list of the book's highlights, some of which are new to a hardware design text.
HOW TO ACQUIRE INTUITION
It is not fashionable these days to emphasize mathematical rigor. Mathematical rigor is perceived as an alienating form that dries out the passion for learning and understanding. Common teaching tactics avoid rigor (i.e., the holy definition–theorem–proof) and resort to examples.
This chapter presents two very powerful techniques for defining infinite sequences (recursion) and proving properties of infinite sequences (induction). The sequences in which we are interested are not only sequences of numbers (e.g., even positive integers) but also sequences of more elaborate objects (e.g., digital circuits).
Suppose we wish to define the even numbers. Typically, one could write 0, 2, 4, … This informal description assumes that the reader can guess how the sequence continues and how to generate the next number in the sequence. (The next number is 6!) A more systematic way to describe a sequence x0, x1, x2, … is to build a “device” that, given an element xn of the sequence as input, outputs the next element xn+1. In the case of the sequence of even numbers, this device simply adds 2, that is, xn+1 = xn + 2. Of course, we need to define the first element x0 in the sequence to be zero. Once we have defined x0 and the device for determining xn+1 based on xn, the sequence is well defined. This, in a nutshell, is recursion. In this book, we will use recursion to define sequences of circuits. In the meantime, we establish the topic of recursion on sequences of numbers. Suppose we wish to prove that each number in the sequence defined recursively by x0 = 0 and xn+1 = xn + 2 is divisible by 2.
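The recursive definition above is short enough to write down directly (the function name is ours). The asserted check is only an empirical stand-in for the induction proof, which covers all n:

```python
def x(n):
    """The sequence defined recursively by x_0 = 0, x_{n+1} = x_n + 2."""
    if n == 0:
        return 0          # base case: the first element
    return x(n - 1) + 2   # the "device": produce x_n from x_{n-1}

assert [x(n) for n in range(4)] == [0, 2, 4, 6]

# Induction claim, checked on a finite prefix: every x_n is divisible by 2.
assert all(x(n) % 2 == 0 for n in range(200))
```

The induction proof mirrors the code exactly: the base case handles x0 = 0, and the inductive step observes that if xn is divisible by 2, so is xn + 2.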
In this chapter, we define and study combinational circuits. The underlying graph of a combinational circuit is more general than the underlying graph of a Boolean formula. In a formula, the underlying graph is a rooted tree. However, in a combinational circuit, the underlying graph is a directed acyclic graph.
We focus on the representation of Boolean functions by combinational circuits, a representation that is different from tables and formulas. Our goal is to prove two theorems: (i) every Boolean function can be implemented by a combinational circuit and (ii) every combinational circuit implements a Boolean function.
We introduce an efficient algorithm for simulating a combinational circuit. Simulation means that we can determine the value of the outputs if we are given the values of the inputs. In addition, we analyze the time that elapses until the outputs of a combinational circuit stabilize.
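A minimal sketch of such a simulation (the circuit, gate set, and names are illustrative, not the book's algorithm verbatim): because the underlying graph is acyclic, evaluating gates in topological order determines every output in a single pass.

```python
from graphlib import TopologicalSorter

# A tiny combinational circuit as a DAG: name -> (operation, input names).
# Circuit inputs have operation None.
circuit = {
    "a": (None, []), "b": (None, []),
    "n1": ("not", ["a"]),
    "g1": ("and", ["n1", "b"]),
    "g2": ("or",  ["a", "g1"]),   # g2 = a OR ((NOT a) AND b)
}

def simulate(circuit, inputs):
    ops = {"not": lambda v: 1 - v[0],
           "and": lambda v: v[0] & v[1],
           "or":  lambda v: v[0] | v[1]}
    # Topological order guarantees a gate's inputs are ready before it.
    deps = {name: set(args) for name, (_, args) in circuit.items()}
    values = dict(inputs)
    for name in TopologicalSorter(deps).static_order():
        op, args = circuit[name]
        if op is not None:
            values[name] = ops[op]([values[x] for x in args])
    return values

assert simulate(circuit, {"a": 0, "b": 1})["g2"] == 1
assert simulate(circuit, {"a": 0, "b": 0})["g2"] == 0
```

Each gate is evaluated exactly once, so the simulation runs in time linear in the size of the circuit, which is the efficiency the chapter is after.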
We measure the quality of a combinational circuit using two criteria: cost and delay. Cost refers to the number of gates in a circuit. Delay refers to the speed of the circuit. Obviously, we prefer cheap and fast circuits over costly and slow circuits.
COMBINATIONAL GATES: AN ANALOG APPROACH
By Definition 10.1, a gate is a device whose static functionality is specified by a static transfer function. This means that the output is a function of the inputs, provided that the input values do not change for a sufficiently long amount of time.
So far we have dealt with the representation of nonnegative integers by binary strings. We also designed combinational circuits that perform addition for nonnegative numbers represented by binary strings. How are negative integers represented? Can we add and subtract negative integers?
We refer to integers that are either positive, zero, or negative as signed integers. In this chapter, we deal with the representation of signed integers by binary strings. We focus on a representation that is called two's complement. We present combinational circuits for adding and subtracting signed numbers that are represented in two's complement representation. Although the designs are obtained by very minor changes to binary adder designs, the theory behind these changes requires some effort.
REPRESENTATION OF NEGATIVE INTEGERS
We use binary representation to represent nonnegative integers. We now address the issue of representing positive and negative integers. Following programming languages, we refer to nonnegative integers as unsigned numbers and to negative and positive numbers as signed numbers.
There are three common methods for representing signed numbers: sign-magnitude, one's complement, and two's complement.
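To make two's complement concrete, here is a sketch (function names are ours): on n bits, a[n-1] carries weight -2^(n-1) while the remaining bits keep their usual binary weights.

```python
def to_twos_complement(value, n):
    """Encode an integer in [-2^(n-1), 2^(n-1)) as n bits, a[0] first.
    Python's arithmetic right shift makes this work for negatives too."""
    assert -(2 ** (n - 1)) <= value < 2 ** (n - 1)
    return [(value >> i) & 1 for i in range(n)]

def from_twos_complement(bits):
    """Decode: a[n-1] has weight -2^(n-1); other bits have weight 2^i."""
    n = len(bits)
    return sum(b << i for i, b in enumerate(bits[:-1])) - bits[-1] * 2 ** (n - 1)

assert to_twos_complement(-1, 4) == [1, 1, 1, 1]   # -1 is all ones
assert from_twos_complement(to_twos_complement(-5, 8)) == -5
```

The negative weight on the most significant bit is exactly what lets an ordinary binary adder handle signed numbers with only minor changes.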
In this chapter, we deal with synchronous circuits. We begin with a formal definition that builds on the definition of combinational circuits. This definition is syntactic, and we must prove that a circuit that satisfies this definition does what we expect it to do. But how do we define what it should do? Namely, how do we specify functionality, and how do we specify timing?
We begin with a simple form of synchronous circuits that we call the canonic form. In the canonic form, it is clear what the flip-flops do, where the output is computed, and where we compute the inputs of the flip-flops. We begin by analyzing the timing of a synchronous circuit in canonic form. We show that stability during the critical segments of the flip-flops can be achieved if the clock period is sufficiently long. We also address the painful issue of initialization. The functionality of a synchronous circuit in canonic form is specified using an abstract model called a finite state machine.
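A minimal sketch of the finite state machine abstraction mentioned above (the parity example and all names are ours): a transition function computes the next state, an output function computes the output, and the canonic form realizes exactly this with flip-flops holding the state and combinational logic around them.

```python
def fsm_run(transition, output, state, inputs):
    """Drive an FSM for one input per clock cycle; collect the outputs."""
    outputs = []
    for x in inputs:
        outputs.append(output(state, x))   # combinational output logic
        state = transition(state, x)       # next state into the flip-flops
    return outputs, state

# Example machine: track the parity of the number of 1s seen so far.
delta = lambda s, x: s ^ x   # next-state logic
lam = lambda s, x: s ^ x     # output logic (parity including current bit)

outs, final = fsm_run(delta, lam, 0, [1, 1, 0, 1])
assert outs == [1, 0, 0, 1] and final == 1
```

In the canonic form, `delta` and `lam` are the two combinational blocks, and the single state bit is the flip-flop between them.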
We then proceed with the timing analysis of synchronous circuits in general. We present two algorithms for timing analysis. The first algorithm, FEAS, tells us if the timing constraints of the circuit are feasible. The second algorithm, Min-Φ, finds the minimum clock period. We also present an algorithm for simulating a synchronous circuit.
Two tasks are often associated with synchronous circuits. The first task, called analysis, is to find the finite state machine that specifies the functionality of a given synchronous circuit.