IDDQ testing refers to the detection of defects in integrated circuits through supply current monitoring. It is especially suited to CMOS circuits, in which the quiescent supply current is normally very low; an abnormally high current therefore indicates the presence of a defect. It is now well established that, to achieve high quality, integrated circuits need to be tested with logic, delay, and IDDQ tests.
In this chapter, we first introduce the types of fault models to which IDDQ testing is applicable, and the advantages and disadvantages of this type of testing. We then present test generation and fault simulation methods for detecting such faults in combinational as well as sequential circuits. We also show how IDDQ test sets can be compacted.
We look at techniques for IDDQ-measurement-based fault diagnosis. We derive diagnostic test sets, give methods for diagnosis, and evaluate the diagnostic capability of given test sets.
In order to speed up and facilitate IDDQ testing, various built-in current sensor designs have been presented. We look at one of these designs.
We next present some interesting variants of current sensing techniques that hold promise.
Finally, we discuss the economics of IDDQ testing.
Introduction
In the quiescent state, CMOS circuits draw only leakage current. Therefore, if a fault results in a drastic increase in the current drawn by the circuit, it can be detected by monitoring the quiescent power supply current, IDDQ.
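The pass/fail principle behind IDDQ screening can be sketched in a few lines of Python. The current values and the 10 µA threshold below are hypothetical, chosen only to illustrate the contrast between leakage-level current and the elevated current drawn by a defective die.

```python
# Illustrative IDDQ screening: a die passes only if its quiescent supply
# current stays below a chosen threshold for every applied test vector.
# All numeric values here are hypothetical.

def iddq_pass(measurements_ua, threshold_ua=10.0):
    """Return True if every quiescent-current measurement (microamps)
    stays below the threshold, i.e., no vector exposed a defect."""
    return all(m < threshold_ua for m in measurements_ua)

# A defect-free CMOS die draws only leakage-level current on each vector...
good_die = [0.4, 0.6, 0.5, 0.7]
# ...while a defect activated by the third vector draws far more.
bad_die = [0.5, 0.6, 850.0, 0.7]

print(iddq_pass(good_die))  # True
print(iddq_pass(bad_die))   # False
```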
Delay fault testing exposes temporal defects in an integrated circuit. Even when a circuit performs its logic operations correctly, it may be too slow to propagate signals through certain paths or gates. In such cases, incorrect logic values may get latched at the circuit outputs.
In this chapter, we first describe the basic concepts in delay fault testing, such as clocking schemes, testability classification, and delay fault coverage.
Next, we present test generation and fault simulation methods for path, gate and segment delay faults in combinational as well as sequential circuits. We also cover test compaction and fault diagnosis methods for combinational circuits. Under sequential test generation, we look at non-scan designs. Scan designs are addressed in Chapter 11.
We then discuss some pitfalls that have been pointed out in delay fault testing, and some initial attempts to correct these problems.
Finally, we discuss some unconventional delay fault testing methods, which include waveform analysis and digital oscillation testing.
Introduction
Delay fault (DF) testing determines the operational correctness of a circuit at its specified speed. Even if the steady-state behavior of a circuit is correct, it may not be reached in the allotted time. DF testing exposes such circuit malfunctions. In Chapter 2 (Section 2.2.6), we presented various DF models, testing for which can ensure that a circuit is free of DFs. These fault models include the gate delay fault (GDF) model and the path delay fault (PDF) model.
In this chapter, we discuss logic and fault simulation methods for combinational circuits.
We begin by defining what constitutes a test for a fault and defining the main objectives of fault simulation algorithms. We then define some basic concepts and describe the notation used to represent the behavior of fault-free as well as faulty versions of a circuit.
We then describe logic simulation algorithms, including event-driven and parallel algorithms.
Next, we present a simple fault simulation algorithm and some basic procedures used by most fault simulation algorithms to decrease their average run-time complexity. This is followed by a description of the five fault simulation paradigms: parallel fault, parallel-pattern single-fault, deductive, concurrent, and critical path tracing.
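The parallel-pattern paradigm mentioned above exploits machine-word bitwise operations to evaluate many test patterns in one pass. The following is a toy sketch for the two-level circuit y = (a AND b) OR c with a single stuck-at fault on input a; the circuit, patterns, and fault are illustrative, not taken from this chapter.

```python
# Parallel-pattern single-fault simulation sketch: each machine word packs
# one bit per test pattern, so one sequence of bitwise operations
# simulates all packed patterns at once.

MASK = 0b1111                        # four patterns, one bit each

def simulate(a, b, c, a_stuck_at=None):
    """y = (a AND b) OR c, evaluated bitwise over all packed patterns."""
    if a_stuck_at == 0:
        a = 0                        # stuck-at-0: a reads 0 in every pattern
    elif a_stuck_at == 1:
        a = MASK                     # stuck-at-1: a reads 1 in every pattern
    return ((a & b) | c) & MASK

# Bit i holds pattern i; (a, b, c) per pattern, i = 0..3:
# (1,1,0), (0,1,0), (1,0,1), (1,1,1)
a, b, c = 0b1101, 0b1011, 0b1100

good = simulate(a, b, c)
faulty = simulate(a, b, c, a_stuck_at=0)
detected = good ^ faulty             # bit i set => pattern i detects the fault
print(f"{detected:04b}")             # only pattern 0 = (1,1,0) detects a s-a-0
```

Note that pattern 3 = (1,1,1) fails to detect the fault even though it sets a = 1, because c = 1 masks the faulty AND output.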
Finally, we present some low complexity approaches for obtaining an approximate value of fault coverage for a given set of vectors.
Introduction
The objectives of fault simulation include (i) determination of the quality of given tests, and (ii) generation of information required for fault diagnosis (i.e., location of faults in a chip). In this chapter, we describe fault simulation techniques for combinational circuits. While most practical circuits are sequential, they often incorporate the full-scan design-for-testability (DFT) feature (see Chapter 11). The use of full-scan enables test development and evaluation using only the combinational parts of a sequential circuit, obtained by removing all flip-flops and considering all inputs and outputs of each combinational logic block as primary inputs and outputs, respectively.
In this chapter, we describe functional testing methods which start with a functional description of the circuit and make sure that the circuit's operation corresponds to its description. Since functional testing is not always based on a detailed structural description of the circuit, the test generation complexity can, in general, be substantially reduced. Functional tests can also detect design errors, which testing methods based on the structural fault model cannot.
We first describe methods for deriving universal test sets from the functional description. These test sets are applicable to any implementation of the function from a restricted class of networks.
We then discuss pseudoexhaustive testing of circuits where cones or segments of logic are tested by the set of all possible input vectors for that cone or segment.
Finally, we see how iterative logic arrays can be tested, and how simple design for testability schemes can make such testing easy. We introduce a graph labeling method for this purpose and apply it to adders, multipliers and dividers.
Universal test sets
Suppose the description of a function is given in some form, say a truth table. Consider the case where a fault in the circuit can change the truth table in an arbitrary way. How do we detect all such faults? One obvious way is to apply all 2^n vectors to it, where n is the number of inputs.
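The exhaustive approach can be sketched directly: apply all 2^n vectors and compare the circuit under test against its truth-table specification. The three-input majority function and the faulty copy below are made up for illustration.

```python
from itertools import product

# Exhaustive test: apply all 2^n input vectors and report every vector
# on which the circuit under test disagrees with its specification.
# Any fault that changes the truth table in an arbitrary way is caught.

def exhaustive_test(spec, cut, n):
    """spec, cut: functions of n bits; return the list of failing vectors."""
    return [v for v in product((0, 1), repeat=n) if spec(*v) != cut(*v)]

# Specification: 3-input majority.  Hypothetical faulty implementation
# that ignores input a entirely.
spec = lambda a, b, c: int(a + b + c >= 2)
faulty = lambda a, b, c: int(b and c)

failing = exhaustive_test(spec, faulty, 3)
print(failing)   # the vectors that expose the fault
```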
In this chapter, we discuss built-in self-test (BIST) of digital circuits. We begin with a description of the commonly used test pattern generators, namely linear feedback shift-registers and cellular automata, and the properties of sequences they generate. This is followed by an analysis of test length vs. fault coverage for testing using random and pseudo-random sequences. Two alternative approaches are then presented to achieve the desired fault coverage for circuits under test (CUTs) for which the above test pattern generators fail to provide adequate coverage under given constraints on test length. We then discuss various test response compression techniques, followed by an analysis of the effectiveness of commonly used linear compression techniques.
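A linear feedback shift-register of the kind used as a BIST pattern generator can be sketched in a few lines. The 4-bit register and the primitive feedback polynomial x^4 + x^3 + 1 below are a standard textbook choice, not taken from this chapter; with a primitive polynomial the register cycles through all 2^4 − 1 non-zero states before repeating.

```python
# Minimal Fibonacci (external-XOR) LFSR sketch.  With the primitive
# polynomial x^4 + x^3 + 1 (taps at bits 3 and 2, shifting left),
# a 4-bit register produces a maximum-length sequence of 15 states.

def lfsr_sequence(seed=0b0001, taps=(3, 2), width=4):
    state, seen = seed, []
    for _ in range(2 ** width - 1):
        seen.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # XOR the tapped bits
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return seen

states = lfsr_sequence()
print(len(set(states)))   # 15 distinct non-zero patterns, then it repeats
```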
The second part of the chapter focuses on issues involved in making a large digital circuit self-testable. We begin with a discussion of some of the key issues and descriptions of reconfigurable circuitry used to make circuits self-testable in an economical fashion. We then discuss two main types of self-test methodologies, in-situ BIST and scan-based BIST, followed by more detailed descriptions of the two methodologies in the last two sections.
The third part of the chapter contains description of BIST techniques for delay fault testing as well as for testing with reduced switching activity.
Introduction
Built-in self-test refers to techniques and circuit configurations that enable a chip to test itself. In this methodology, test patterns are generated and test responses are analyzed on-chip.
In this chapter, first we describe the full-scan methodology, including example designs of scan flip-flops and latches, organization of scan chains, generation of test vectors for full-scan circuits, application of vectors via scan, and the costs and benefits of scan. This is followed by a description of partial scan techniques that can provide many of the benefits of full scan at lower costs. Techniques to design scan chains and generate and apply vectors so as to reduce the high cost of test application are then presented.
We then present the boundary scan architecture for testing and diagnosis of inter-chip interconnects on printed circuit boards and multi-chip modules.
Finally, we present design for testability techniques that facilitate delay fault testing as well as techniques to generate and apply tests via scan that minimize switching activity in the circuit during test application.
Introduction
The difficulty of testing a digital circuit can be quantified in terms of cost of test development, cost of test application, and costs associated with test escapes. Test development spans circuit modeling, test generation (automatic and/or manual), and fault simulation. Upon completion, test development provides test vectors to be applied to the circuit and the corresponding fault coverage. Test application includes the process of accessing appropriate circuit lines, pads, or pins, followed by application of test vectors and comparison of the captured responses with those expected.
A modern workstation may have as much as 256 Mbytes of DRAM memory. In terms of equivalent transistors (assuming one transistor per bit) this amounts to 2 × 10^9 transistors, which is about two orders of magnitude more than the number of transistors used in the rest of the system. Given the importance of system test, memory testing is, therefore, very important.
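The transistor count quoted above follows from a back-of-the-envelope calculation, reproduced here as a quick check:

```python
# 256 Mbytes of DRAM at one transistor per bit:
MBYTE = 2 ** 20                 # bytes per Mbyte
bits = 256 * MBYTE * 8          # bits = transistors (one-transistor cell)
print(bits)                     # 2147483648, i.e., about 2 x 10^9
```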
We start this chapter with a motivation for efficient memory tests, based on the allowable test cost as a function of the number of bits per chip. Thereafter, we give a model of a memory chip, consisting of a functional model and an electrical model.
Because the behavior of memories differs greatly from that of combinational logic, we define a new set of functional faults (of which the stuck-at faults are a subset) for the different blocks of the functional model.
We describe a set of four traditional tests, which have been used extensively in the past, together with their fault coverage. We next describe march tests, which are more efficient than the traditional tests, together with proofs for completeness and irredundancy.
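MATS+ is one of the simplest march tests of the kind discussed here; it consists of the three march elements {⇕(w0); ⇑(r0,w1); ⇓(r1,w0)}. The sketch below runs it against a toy memory model; the memory class and the injected stuck-at-1 cell are illustrative.

```python
# Sketch of the MATS+ march test run against a toy bit-oriented memory
# with an optional injected stuck-at-1 cell.

class Memory:
    def __init__(self, size, stuck_at_1=None):
        self.cells = [0] * size
        self.stuck = stuck_at_1            # address of a stuck-at-1 cell, if any
    def write(self, addr, val):
        self.cells[addr] = 1 if addr == self.stuck else val
    def read(self, addr):
        return self.cells[addr]

def mats_plus(mem, n):
    for a in range(n):                     # M0: up (w0)
        mem.write(a, 0)
    for a in range(n):                     # M1: up (r0, w1)
        if mem.read(a) != 0:
            return False
        mem.write(a, 1)
    for a in reversed(range(n)):           # M2: down (r1, w0)
        if mem.read(a) != 1:
            return False
        mem.write(a, 0)
    return True

print(mats_plus(Memory(8), 8))                  # True  (fault-free memory)
print(mats_plus(Memory(8, stuck_at_1=3), 8))    # False (fault detected in M1)
```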
Finally, we introduce the concept of pseudo-random memory tests, which are well suited for built-in self-test (BIST), together with a computation of the test length as a function of the escape probability.
Motivation for testing memories
Ongoing developments in semiconductor memories result in a continually increasing density of memory chips (Inoue et al., 1993).
In this chapter, we discuss how CMOS circuits can be tested under various fault models, such as stuck-at, stuck-open and stuck-on. We consider both dynamic and static CMOS circuits. We present test generation techniques based on the gate-level model of CMOS circuits, as well as the switch-level implementation.
Under dynamic CMOS circuits, we look at two popular techniques: domino CMOS and differential cascode voltage switch (DCVS) logic. We consider both single and multiple fault testability of domino CMOS circuits. For DCVS circuits, we also present an error-checker based scheme which facilitates testing.
Under static CMOS circuits, we consider both robust and non-robust test generation. A robust test is one which is not invalidated by arbitrary delays and timing skews. We first show how test invalidation can occur. We then discuss fault collapsing techniques and test generation techniques at the gate level and switch level.
Finally, we show how robustly testable static CMOS designs can be obtained.
Testing of dynamic CMOS circuits
Dynamic CMOS circuits form an important class of CMOS circuits. A dynamic CMOS circuit is distinguished from a static CMOS circuit by the fact that each dynamic CMOS gate is fed by a clock which determines whether it operates in the precharge phase or the evaluation phase. There are two basic types of dynamic CMOS circuits: domino CMOS (Krambeck et al., 1982) and DCVS logic (Heller et al., 1984).
In this chapter, we discuss methods for diagnosing digital circuits. We begin by identifying the main objectives of diagnosis and defining the notions of response, error response, and failing vectors for a circuit under test (CUT), the fault-free version of the circuit, and each circuit version with a distinct target fault.
We then describe the purpose of fault models for diagnosis and describe the fault models considered in this chapter. The cause–effect diagnosis methodologies follow. In these methodologies, each faulty version of the circuit is simulated, implicitly or explicitly, and its response determined and compared with that of the CUT being diagnosed. We first describe post-test diagnostic fault simulation approaches where fault simulation is performed after the CUT response to the given vectors is captured. Subsequently, we describe fault-dictionary approaches where fault simulation is performed and the response of each faulty version stored in the form of a fault dictionary, before diagnosis is performed for any CUT.
Next, we present effect–cause approaches for diagnosis which start with the CUT response and deduce the presence or absence of a fault at each circuit line.
Finally, we present methods for generating test vectors for diagnosis.
Introduction
Diagnosis is the process of locating the faults present within a given fabricated copy of a circuit. For some digital systems, each fabricated copy is diagnosed to identify the faults so as to make decisions about repair.
In order to alleviate the test generation complexity, one needs to model the actual defects that may occur in a chip with fault models at higher levels of abstraction. This process of fault modeling considerably reduces the burden of testing because it obviates the need for deriving tests for each possible defect. This is made possible by the fact that many physical defects map to a single fault at the higher level. This, in general, also makes the fault model more independent of the technology.
We begin this chapter with a description of the various levels of abstraction at which fault modeling is traditionally done. These levels are: behavioral, functional, structural, switch-level and geometric.
We present various fault models at the different levels of the design hierarchy and discuss their advantages and disadvantages. We illustrate the working of these fault models with many examples.
There is currently a lot of interest in verifying not only the logical behavior of a circuit but also its temporal behavior. Problems in the temporal behavior of a circuit are modeled through delay faults. We discuss the main delay fault models.
We discuss a popular fault modeling method called inductive fault analysis next. It uses statistical data from the fabrication process to generate physical defects and extract circuit-level faults from them. It then classifies the circuit-level faults based on how likely they are to occur.
In this chapter, we concentrate on the register-transfer level (RTL) and behavior level of the design hierarchy.
We first discuss different RTL test generation methods: hierarchical, symbolic, functional, and those dealing with functional fault models. We then discuss a symbolic RTL fault simulation method.
Next, we discuss RTL design for testability (DFT) methods. The first such method is based on extracting and analyzing the control/data flow of the RTL circuit. The second method uses regular expressions for symbolic testability analysis and test insertion. These are followed by high-level and orthogonal scan methods.
Under RTL built-in self-test (BIST), we show that some of the symbolic testability analysis methods used for RTL DFT can also be extended to BIST. Then we discuss a method called arithmetic BIST, and a method to derive native-mode self-test programs for processors.
At the behavior level, we first show how behavioral modifications can be made to improve testability. We also present three types of behavioral synthesis for testability techniques. The first type targets ease of subsequent gate-level sequential test generation. The second type deals with ease of symbolic testability using precomputed test sets of different RTL modules in the circuit. The third type is geared towards BIST.
Introduction
High-level test synthesis refers to an area in which test generation, fault simulation, DFT, synthesis for testability, and BIST are automatically performed at the higher levels, i.e., register-transfer and behavior levels, of the design hierarchy.
Synthesis for testability refers to an area in which testability considerations are incorporated during the synthesis process itself. There are two major sub-areas: synthesis for full testability and synthesis for easy testability. In the former, one tries to remove all redundancies from the circuit so that it becomes completely testable. In the latter, one tries to synthesize the circuit in order to achieve one or more of the following: less test generation time, less test application time, and high fault coverage. Of course, one would ideally like to achieve both full and easy testability. Synthesis for easy testability also has the potential for realizing circuits with less hardware and delay overhead than design for testability techniques. However, in practice, this potential is not always easy to achieve.
In this chapter, we look at synthesis for testability techniques applied at the logic level. We discuss synthesis for easy testability as well as synthesis for full testability.
We consider both the stuck-at and delay fault models, and consider both combinational and sequential circuits. Under the stuck-at fault (SAF) model, we look at single as well as multiple faults. Under the delay fault model, we consider both gate delay faults (GDFs) and path delay faults (PDFs).
The fraction of the industrial semiconductor budget that manufacturing-time testing consumes continues to rise steadily. It has been known for quite some time that tackling the problems associated with testing semiconductor circuits at earlier design levels significantly reduces testing costs. Thus, it is important for hardware designers to be exposed to the concepts in testing which can help them design a better product. In this era of system-on-a-chip, it is not only important to address the testing issues at the gate level, as was traditionally done, but also at all other levels of the integrated circuit design hierarchy.
This textbook is intended for senior undergraduate or beginning graduate levels. Because of its comprehensive treatment of digital circuit testing techniques, it can also be gainfully used by practicing engineers in the semiconductor industry. Its comprehensive nature stems from its coverage of the transistor, gate, register-transfer, behavior and system levels of the design hierarchy. In addition to test generation techniques, it also covers design for testability, synthesis for testability and built-in self-test techniques in detail. The emphasis of the text is on providing a thorough understanding of the basic concepts; access to more advanced concepts is provided through a list of additional reading material at the end of each chapter.
In this chapter, we discuss automatic test pattern generation (ATPG) for combinational circuits. We begin by introducing preliminary concepts including circuit elements, ways of representing behaviors of their fault-free as well as faulty versions, and various value systems.
Next, we give an informal description of test generation algorithms to introduce some of the test generation terminology. We then describe direct as well as indirect implication techniques.
We discuss a generic structural test generation algorithm and some of its key components. We then describe specific structural test generation paradigms, followed by their comparison and techniques for improvement.
We proceed to some non-structural test generation algorithms. We describe test generation systems that use test generation algorithms in conjunction with other tools to efficiently generate tests.
Finally, we present ATPG techniques that reduce heat dissipated and noise during test application.
Introduction
While most practical circuits are sequential, they often incorporate the full-scan design for testability (DFT) feature (see Chapter 11). The use of full-scan enables tests to be generated using a combinational test generator. The input to the test generator is only the combinational part of the circuit under test (CUT), obtained by removing all the flip-flops and considering all the inputs and outputs of the combinational circuit as primary inputs and outputs, respectively. If the generated tests are applied using the full-scan DFT features and the test application scheme described in Chapter 11, the fault coverage reported by the combinational test generator is achieved.
We introduce some basic concepts in testing in this chapter. We first discuss the terms fault, error and failure, and classify faults, according to their behavior over time, into permanent and non-permanent faults.
We give a statistical analysis of faults, introducing the terms failure rate and mean time to failure. We show how the failure rate varies over the lifetime of a product and how the failure rates of series and parallel systems can be computed. We also describe the physical and electrical causes for faults, called failure mechanisms.
We classify tests according to the technology they are designed for, the parameters they measure, the purpose for which the test results are used, and the test application method.
We next describe the relationship between the yield of the chip manufacturing process, the fault coverage of a test (the fraction of the total number of faults detected by a given test), and the defect level (the fraction of bad parts that pass the test). This relationship can be used to compute the amount of testing required to reach a certain product quality level.
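One commonly used form of this relationship is the Williams–Brown model, DL = 1 − Y^(1−FC), where Y is the process yield, FC the fault coverage, and DL the resulting defect level. The sketch below evaluates it for a hypothetical 50% yield to show how sharply defect level falls as coverage approaches 100%.

```python
# Williams-Brown defect-level model: DL = 1 - Y**(1 - FC),
# with Y = process yield and FC = fault coverage (both as fractions).

def defect_level(process_yield, fault_coverage):
    return 1 - process_yield ** (1 - fault_coverage)

# Hypothetical example: 50% yield at increasing fault coverage.
for fc in (0.90, 0.99, 0.999):
    print(f"FC = {fc:.3f}  DL = {defect_level(0.5, fc):.5f}")
```

At 100% fault coverage the model gives DL = 0 regardless of yield, which matches the intuition that a perfect test ships no bad parts.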
Finally, we cover the economics of testing in terms of time-to-market, revenue, costs of test development and maintenance cost.
Faults and their manifestation
This section starts by defining the terms failure, error and fault, followed by an overview of how faults can manifest themselves over time.
In this chapter, we discuss test generation and design for testability methods for a system-on-a-chip. There are three main issues that need to be discussed: generation of precomputed test sets for the cores, providing access to cores embedded in a system-on-a-chip, and providing an interface between the cores and the chip through a test wrapper.
We first briefly discuss how cores can be tested. This is just a summary of the many techniques discussed in the previous chapters which are applicable in this context.
We then present various core test access methods: macro test, core transparency, direct parallel access, test bus, boundary scan, partial isolation ring, modification of user-defined logic, low power parallel scan, testshell and testrail, and the advanced microcontroller bus architecture.
We finally wrap this chapter up with a brief discussion of core test wrappers.
Introduction
Spurred by an ever-increasing density of chips, and demand for reduced time-to-market and system costs, system-level integration is emerging as a new paradigm in system design. This allows an entire system to be implemented on a single chip, leading to a system-on-a-chip (SOC). The key constituents of SOCs are functional blocks called cores (also called intellectual property). Cores can be soft, firm or hard. A soft core is a synthesizable high-level or behavioral description that lacks full implementation details. A firm core is also synthesizable, but is structurally and topologically optimized for performance and size through floorplanning (it does not include routing).