11 - Formal Verification
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 210-244
-
- Chapter
-
Summary
…program testing can be a very effective way to show the presence of bugs, but is hopelessly inadequate for showing their absence. The only effective way to raise the confidence level of a program significantly is to give a convincing proof of its correctness…
—Edsger W. Dijkstra, “The humble programmer,” ACM Turing Lecture, 1972
A design undergoes several changes during a design flow. These changes are expected to meet or preserve the specified functionality. Nevertheless, the design functionality can sometimes deviate from the given specification for various reasons. We need to detect and fix these problems as early as possible.
In general, verification takes considerable manual and computational effort [1–3]. Consequently, various types of verification techniques have evolved over the last few decades, differing in resource usage, required manual intervention, and rigor. Among them, formal verification is the most rigorous, typically requires more computational resources, and is now routinely employed in a design flow. In this chapter, we will explain the basics of formal verification techniques.
LIMITATIONS OF SIMULATION-BASED VERIFICATION
The most commonly employed functional verification technique is simulation. In this approach, we simulate a design for a set of test vectors and compare the output response with the expected response. If the two responses agree, then the design is considered to be functionally correct.
Simulation-based verification is fast and straightforward. It can efficiently find functional problems in a design and is especially useful for quickly detecting and fixing bugs in the early phases of design implementation. However, the biggest problem of simulation-based verification is its non-exhaustiveness. A huge number of test vectors are possible for a given design, and we cannot simulate all of them.
Consider, for example, a design with two 32-bit inputs A and B. Each of A and B can independently take one of 2³² possible values. Consequently, the number of possible test vectors is 2³² × 2³² = 2⁶⁴. At one microsecond per test vector, the simulation time required is 2⁶⁴ × 1 × 10⁻⁶ seconds ≈ 0.5 million years.
Thus, simulation-based exhaustive verification is not feasible for real-world designs.
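The arithmetic above can be sanity-checked with a few lines of Python (assuming, as above, two 32-bit inputs and one microsecond of simulation per test vector):

```python
# Exhaustive simulation of a design with two 32-bit inputs:
# every (A, B) pair is a distinct test vector.
vectors = 2**32 * 2**32                  # 2^64 possible test vectors
seconds = vectors * 1e-6                 # at 1 microsecond per vector
years = seconds / (60 * 60 * 24 * 365)   # convert seconds to years
print(f"{years / 1e6:.2f} million years")  # → 0.58 million years
```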
In practice, we simulate a design for a subset of all possible test vectors. Typically, we provide those test vectors that can discover some anticipated bugs.
Acknowledgments
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp xxiii-xxiv
-
- Chapter
Chapter 8 - Transistor Power and Multistage Amplifiers
- Barun Raychaudhuri, Presidency University, Kolkata
-
- Book:
- Electronics
- Print publication:
- 15 June 2023, pp 239-275
-
- Chapter
-
Summary
The essential conditions for a bipolar junction transistor to work as a power amplifier are illustrated in this chapter. Sometimes the output of an amplifier is used as the input of another amplifier in order to produce larger amplification. Such cascading of two or more amplifier stages is called coupling, which may be done with a resistor, capacitor, or transformer, or by direct connection. Different types of coupling and the resultant multistage amplifiers are discussed. Several classes of amplifier operation, such as A, B, AB, and C, are introduced. Some specific circuit arrangements suitable for transistor power amplification, such as the push–pull and tuned amplifiers, are explained.
Need for Power Amplification
In Chapters 5 through 7, we have come across different types of transistor configurations and biasing circuits acting as voltage or current amplifiers. An amplifier converts a portion of the electrical energy obtained from the dc power supply into energy delivered at the output in proportion to the input; the input signal merely controls the mode of conversion. There is a wide variety of amplifiers, depending on the requirement of
• ac or dc amplification,
• voltage, current, or power amplification,
• amplification over a wide range of frequency of the input signal (wideband amplifier), and
• amplification around only a fixed frequency of the input signal (narrowband or tuned amplifier) and many others.
The loudspeaker in a public address system is a very common example where high-level amplification is required for a weak electrical signal. Servomechanisms, such as the movement of the motor in a printer connected to a computer, and signal transmissions in radio/television broadcasting are other popular examples where high-level amplification is compulsory. Such a large extent of amplification cannot be achieved with the BJT amplifiers discussed in Chapters 6 and 7 because of the following constraints.
(i) Achieving voltage gain is not possible for large signals because the positive and negative swings of a large (≥ 0.7 V) ac input would drive the Q-point to saturation and cutoff, respectively.
(ii) The current gain can still be achieved because it is a fundamental property of the transistor.
(iii) The nonlinearity in the transistor transfer characteristic becomes predominant for large input signals.
Chapter 4 - Diode Applications
- Barun Raychaudhuri, Presidency University, Kolkata
-
- Book:
- Electronics
- Print publication:
- 15 June 2023, pp 101-140
-
- Chapter
-
Summary
The most significant property of the p–n junction diode, as introduced in Chapter 3, is rectification, or converting alternating current (ac) to direct current (dc), because the diode conducts under forward bias only. This property is utilized for fabricating rectifier circuits. The three major categories of rectifier, namely half-wave, full-wave, and bridge, are illustrated in this chapter. The rectifying property of a diode is also applied to transmitting a selected portion of an alternating voltage waveform. Such circuits, known as clippers, are elaborated. Two other important circuits, namely the clamper and the voltage multiplier, which use diodes with capacitors, are brought together.
Piecewise Linear Model
The diode is, of course, a nonlinear device because the current through it undergoes nonlinear change with voltage. Yet the diode can be approximated as a linear element part-by-part under certain conditions. The concept of such ‘linearizing the diode’, as explained with Figures 4.1(a) and 4.1(b), is quite useful for the analysis of circuits containing diodes. Both diagrams symbolize the forward current–voltage characteristic curve of a typical diode but at different scales. Figure 4.1(a) represents the enlarged view for the condition just after cut-in; the current is now determined by the junction property, and the nonlinearity of the current–voltage curve is quite prominent. The same diode at a forward voltage much higher than the cut-in voltage behaves like Figure 4.1(b), where the current is dominated by the bulk resistance of the semiconductor: the nonlinearity below 0.7 V gets squeezed into a small region of the characteristic curve, and the major portion of the curve becomes linear.
The above demonstration implies that when the bias voltage becomes much larger than the forward cut-in voltage of the diode, the equivalent circuit can be represented by the combination of the following three pieces of linear circuit elements in series.
•An on/off switch representing the forward/reverse biased condition of the diode.
•A voltage source in lieu of the voltage drop across the diode.
•A resistor denoting the semiconductor bulk resistance.
Therefore, such an equivalent circuit of the diode is termed the piecewise linear model of the diode. Figure 4.2 illustrates the action of this model. The switch (S) is the proxy of the forward- and reverse-biased conditions of the diode.
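The piecewise linear model described above can be sketched in a few lines of Python. The cut-in voltage of 0.7 V matches the text; the bulk resistance value below is an illustrative assumption:

```python
def diode_current_pwl(v, v_cutin=0.7, r_bulk=10.0):
    """Piecewise linear diode model: an open switch below the cut-in
    voltage, a voltage source (v_cutin) in series with the bulk
    resistance (r_bulk, ohms) above it."""
    if v < v_cutin:
        return 0.0                     # switch open: no conduction
    return (v - v_cutin) / r_bulk      # linear region above cut-in

print(diode_current_pwl(0.5))   # below cut-in → 0.0 A
print(diode_current_pwl(1.7))   # (1.7 − 0.7) / 10 → 0.1 A
```

The three model elements appear directly in the code: the `if` branch is the on/off switch, `v_cutin` the series voltage source, and `r_bulk` the bulk resistance.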
Part Three - Design for Testability (DFT)
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 447-448
-
- Chapter
-
Summary
Defects can creep into an IC during fabrication, despite tight process control and sophisticated fabrication technology. The primary purpose of testing is to detect such defects and prevent a defective IC from reaching the end-user.
Though testing is carried out primarily after fabrication, we must consider several aspects of testing during the design phase. We carry out some tasks during designing that simplify the testing process and make it economical. We collectively refer to test-driven design practices as Design For Testability (DFT). This part of the book covers essential aspects of DFT.
In Chapter 20, we will describe the basic concepts of DFT and structural testing. In Chapter 21, the scan design technique, the most popular implementation of structural testing, will be explained. In Chapter 22, a methodology to generate test patterns for the most common fault model, i.e., the stuck-at fault model, will be presented. Chapter 23 will describe the basic concepts of Built-in Self-Test (BIST), highlighting the advantages and disadvantages of self-testing. Before going through this part of the book, we suggest that readers become familiar with the basic testing concepts discussed in Chapter 6 (“Testing Techniques”).
It is worth pointing out that, in this book, we explain only the essential concepts and principles of testing. To gain further insight into IC testing, we advise readers to go through dedicated books on testing such as [1–4]. Further, note that the focus of this book is the testing of digital circuits. To understand design practices explicitly employed for testing memories and analog/mixed-signal circuits, readers should refer to books such as [2–4].
Chapter 1 - Origin of Electronics
- Barun Raychaudhuri, Presidency University, Kolkata
-
- Book:
- Electronics
- Print publication:
- 15 June 2023, pp 1-19
-
- Chapter
-
Summary
Electronics is a subject cultivated at different academic levels of undergraduate and postgraduate science and engineering curricula. Beyond the classroom, it has diversified applications in modern science, technology, economy, society, and daily life. The development of electronics throughout the last century may be treated as a distinct step in the progress of human civilization. This chapter sketches a brief outline of the background, evolution, and widespread applications of electronics. It also introduces the arrangement and relevance of topics in this book.
What Is Electronics
It is understood from our everyday experience that electronics is somehow related to the use of electricity. However, electricity is also found in nature, whereas electronic devices, and the use of electricity in those devices, are entirely man-made. Electronic devices established several novel ways of using electricity that had never been experienced earlier. Some salient features of electronics are mentioned below.
•Electrical Power Amplification: An electronic device, such as a transistor, can be made to amplify voltage and current simultaneously, which cannot be achieved with other electrical gadgets, such as a transformer.
•Nonlinear Current–Voltage Relationship: According to Ohm's law, the steady current through a resistor varies linearly with voltage at a constant temperature, and passive elements such as capacitors and inductors are likewise linear. However, the current through electronic devices, such as a diode or a transistor, varies nonlinearly with voltage.
•Impedance Transformation: The same electronic device may exhibit different resistances across the input and the output terminals.
All the above characteristics were first realized with the triode, a vacuum tube device invented by Lee de Forest in 1906. Therefore, the invention of the triode may be regarded as the foundation of electronics, and we may feel that electronics has been a reliable companion of mankind for more than a century. The continuous research and development throughout this long period has enriched human civilization with innumerable equipment and gadgets, such as television, mobile phones, satellite communication, the Internet, and the metamorphosis of the computer from a mechanical to an electrically operated instrument. Research on electronics and allied subjects has contributed to other branches of science and technology and has given rise to new interdisciplinary fields.
9 - Simulation-based Verification
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 147-174
-
- Chapter
-
Summary
…I am not bound to please thee with my answers…
—William Shakespeare, The Merchant of Venice, Act 4, Scene 1, 1596
We model the behavior of a digital circuit in hardware description languages (HDLs), such as Verilog and VHDL, as described in Chapter 8 (“Modeling hardware using Verilog”). Subsequently, we need to ensure that the logical functionality of the HDL model matches the given specification. This task is accomplished by functional verification. The goal of functional verification is to ensure that the logic implementation of the digital circuit is correct, i.e., it produces the right output bit sequence for a given input bit sequence. We can initially ignore the delay of combinational circuit elements during functional verification or make some simplistic assumptions about them. As more details are added to a design, we perform verification tasks related to delay, power dissipation, and correctness of layout later in the design flow. By segregating functional verification from other types of verification tasks, we are able to simplify the overall design verification process.
We can broadly categorize functional verification techniques into two classes: simulation-based techniques and formal methods. Simulating a hardware model is analogous to running a program written in a traditional programming language and ensuring the correctness of the program by observing its output. Therefore, simulation-based techniques are easy to use and can quickly discover bugs in a hardware model, especially in the early stages of design implementation. In practice, simulation-based techniques provide a foundation for functional verification. Subsequently, we augment and fill gaps in the simulation-based verification with more rigorous formal methods.
In this chapter, we will explain the simulation-based techniques for functional verification. In Chapter 11 (“Formal verification”), we will discuss techniques based on formal methods.
BASICS OF SIMULATION
A typical simulation framework is shown in Figure 9.1. It involves applying stimulus to the design under verification (DUV) and observing its response for correctness. We commonly refer to the verification environment created for applying stimuli and observing responses as a testbench [1]. A testbench interacts with a tool, known as a simulator, to produce verification results and debugging information.
Introduction to VLSI Design Flow
- Sneh Saurabh
-
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023
-
- Textbook
-
Chip designing is a complex task that requires an in-depth understanding of VLSI design flow, skills to employ sophisticated design tools, and keeping pace with bleeding-edge semiconductor technologies. This lucid textbook is focused on fulfilling these requirements for students and serves as a refresher for professionals in the industry. It helps the reader develop a holistic view of the design flow through a well-sequenced set of chapters on logic synthesis, verification, physical design, and testing. Illustrations and pictorial representations are used liberally to simplify the explanations. Additionally, each chapter has a set of activities that can be performed using freely available tools, providing hands-on experience with design tools. Review questions and problems are given at the end of each chapter to revise the concepts. Recent trends and references are listed at the end of each chapter for further reading.
Preface
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp xix-xxii
-
- Chapter
-
Summary
Modern integrated circuits are very complex and can have billions of transistors crammed onto a few square centimeters of silicon. To ensure that the manufactured chips produce the desired results, a designer needs to perform tasks related to design implementation, verification, and testing. These tasks consist of several individual design steps that are accomplished with the help of sophisticated electronic design automation (EDA) tools. Besides knowing the details of these design steps, a designer should be familiar with the capabilities, assumptions, limitations, and framework of various EDA tools employed in a design flow. Moreover, these design steps are interrelated. Therefore, a designer must develop a holistic view of a design flow to utilize EDA tools efficiently and obtain a high-quality design.
Most books on very large scale integration (VLSI) focus on specific aspects of design flow, such as register transfer level (RTL) modeling, logic synthesis, formal verification, physical design, or testing. These books provide excellent study material and references for individual design tasks. However, they can be overwhelming for undergraduate and postgraduate students, who need to develop concepts on VLSI design flow in a limited time frame, typically one or two semesters. The existing books on these topics allow students to cover only a few design tasks in great detail while leaving out many others. We believe a student should devote limited academic time to acquiring the broadest base of knowledge rather than prematurely specializing in one domain. This textbook is written to fulfill this objective. It explains the fundamental aspects of VLSI design flow for senior undergraduate and postgraduate students. It covers the design flow extensively, taking a holistic view of VLSI design and technology. Further, it emphasizes the basic principles of design flow that are expected to be helpful in the long term for a VLSI design engineer.
OVERVIEW OF THE BOOK
We have divided this book into four parts:
1 Overview of VLSI design flow: It introduces integrated circuit concepts and gives an overview of the design, verification, and test methods employed in a typical design flow.
2 Logic design: It explains RTL modeling using Verilog. Further, it describes the logic synthesis, technology libraries, and timing constraints along with logic, power, and timing optimization techniques. It also explains verification steps such as simulation, static timing analysis, and formal methods.
Preface
- Barun Raychaudhuri, Presidency University, Kolkata
-
- Book:
- Electronics
- Print publication:
- 15 June 2023, pp xvii-xx
-
- Chapter
-
Summary
‘Electronics’ is quite a familiar word in our daily life. We see around us innumerable electronic gadgets for domestic use, and many scientific, engineering, technical, and vocational teaching courses at different academic levels are related to electronics. Electronics evolved as a part of modern physics almost a century ago and has made significant contributions to modern science, technology, economy, and society.
There is no dearth of excellent reference books on different aspects of electronics. Nevertheless, the University Grants Commission (UGC) guidelines of the Choice-Based Credit System (CBCS) curriculum and the Learning Outcomes based Curriculum Framework (LOCF), prescribing two full papers of electronics in the core course of undergraduate physics, motivated this author to write something on electronics exclusively for young students who have just passed high school and entered higher studies. The outcome is this book, Electronics: Analog and Digital, prepared mainly for physics honours/equivalent courses.
The topics of electronic devices, circuits, and systems are broadly classified into two categories, analog and digital, as is also specified in the CBCS curriculum. Maintaining the recommended topics, the subject matter is reorganized so as to facilitate the students.
The first introductory chapter outlines briefly the evolution, significance, and widespread applications of electronics. Since semiconductors play a major part in the fabrication of electronic devices, the subject learning starts with the basic properties and types of semiconductors, electrons and holes, concepts of energy band and effective mass, and current transport phenomena (Chapter 2).
A fundamental structure in electronics is the p–n junction. Chapter 3 explains its rectification property, forward and reverse biasing, and the corresponding energy band diagrams. It also elucidates how the same p–n junction, with constructional changes, can give rise to different devices, such as the rectifier diode, Zener diode, and light-emitting diode. The important applications of the diode as half-wave, full-wave, and bridge rectifier, clipper, and clamper are presented in Chapter 4.
The bipolar junction transistor (BJT) is a very remarkable device in electronics. It is the basic building block for many analog and digital circuits, so the book dedicates four chapters to different aspects of the transistor. Chapter 5 contains the construction and working principle of n–p–n and p–n–p transistors, the amplifying action of the transistor, and detailed explanations of the common-emitter, common-base, and common-collector configurations.
19 - Power-driven Optimizations
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 422-446
-
- Chapter
-
Summary
It is impossible to enjoy idling thoroughly unless one has plenty of work to do … Idleness … to be sweet must be stolen.
—Jerome K. Jerome, “On being idle,” Idle Thoughts of an Idle Fellow, 1886
Power-driven optimization is an integral part of a design flow. Various design steps, such as system-level design, logic synthesis, and physical design, consider reducing power dissipation as one of their objectives. We carry out power-driven optimization and include power reduction techniques throughout the flow. Nevertheless, for easy readability, we present a consolidated view of these techniques in this chapter.
MOTIVATION
We need to reduce the power dissipated by an integrated circuit (IC) due to the following reasons:
1. The energy that a circuit draws from the power source gets stored internally or gets dissipated to the environment through packaging and heat sinks [1]. We can relax the cooling requirement of an IC by reducing its power dissipation. Thus, it allows us to use simpler and less costly packaging and heat sinks.
2. An IC draws power from a battery, especially in portable devices such as mobiles and laptops. For a given battery, we can reduce the frequency of recharges by reducing its energy consumption. To an approximation, we can view a fully charged battery as delivering a fixed amount of energy. Therefore, we need to reduce the average power or total energy dissipated by the circuit to reduce the frequency of recharges [2]. Alternatively, we can reduce the battery weight by reducing the average power dissipation for a given recharging frequency. From the environmental perspective too, consuming less power is desirable.
3. The power dissipated in an IC manifests as an increase in its temperature. When the temperature of an IC increases, some device failure mechanisms are exacerbated. By reducing power dissipation in an IC, we can avoid significant temperature increases and the associated reliability issues.
When we reduce the power dissipation in an IC, we often sacrifice other figures of merit, such as performance and area. Therefore, as designers, power reduction is never our sole objective. We intelligently trade off other metrics, such as performance, while achieving power reduction.
Frontmatter
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp i-iv
-
- Chapter
Answers
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 669-677
-
- Chapter
14 - Static Timing Analysis
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 301-334
-
- Chapter
-
Summary
…any real body must have extension in four directions: it must have Length, Breadth, Thickness, and—Duration … It is only another way of looking at Time … For instance, here is a portrait of a man at eight years old, another at fifteen, another at seventeen … All these are evidently sections, as it were, Three-Dimensional representations of his Four-Dimensioned being…
—H. G. Wells, The Time Machine, Introduction, 1895
In a synchronous circuit, the clock signal synchronizes the operation of various circuit elements. For deterministic circuit operation, certain timing constraints must be satisfied on the data signal relative to the clock signal. If these constraints are violated, then the circuit can go into a metastable state or an invalid state. Therefore, we need to verify that these constraints are indeed satisfied in a synchronous circuit [1–4]. We use static timing analysis (STA) for this purpose. The simplicity and computational efficiency of STA have made synchronous design methodology the de facto standard for complicated digital circuits.
An STA tool verifies that a given synchronous circuit operates deterministically and remains in a valid state for a given frequency even in the worst-case scenario. Note that the purpose of STA is not to find a frequency at which a circuit can operate. Its purpose is just to ascertain whether a given circuit can safely operate at a given frequency and operating conditions. Therefore, STA needs to examine only the worst-case behavior for the circuit. It greatly simplifies the problem for an STA tool. It need not evaluate the circuit response to all possible input stimuli because most of them do not contribute to the worst-case behavior. Hence, STA employs methods that do not require applying stimuli and observing their dynamic responses. In this sense, STA is a static verification technique. It employs efficient stimuli-independent techniques to examine the worst-case behavior of a circuit.
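The worst-case check that an STA tool performs on a single register-to-register path can be sketched as follows. The delay numbers are illustrative assumptions, not values from the text:

```python
def setup_slack(t_clk, t_clk_to_q, t_comb_max, t_setup):
    """Setup check on a register-to-register path: data launched at one
    clock edge must arrive at least t_setup before the next edge.
    All times are in the same unit (here, nanoseconds)."""
    arrival = t_clk_to_q + t_comb_max   # latest (worst-case) data arrival
    required = t_clk - t_setup          # latest allowed arrival time
    return required - arrival           # negative slack means a violation

# Illustrative numbers (ns): 10 ns clock period, 0.5 ns clock-to-q delay,
# 8.2 ns worst-case combinational delay, 0.3 ns setup time.
print(setup_slack(10.0, 0.5, 8.2, 0.3))  # → 1.0 (path meets timing)
```

Note that only the worst-case combinational delay enters the check; no input stimuli are applied, which is exactly what makes the analysis static.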
In this chapter, we first describe the expected behavior of a synchronous circuit. Then, we derive the timing constraints that must be met to achieve this synchronous behavior. Next, we explain the techniques employed by the STA tools to ensure that these constraints are satisfied. We also highlight the assumptions used by the STA tools and point out the merits and demerits of these assumptions. Finally, we explain some of the popular techniques to account for variations in STA.
18 - Power Analysis
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 401-421
-
- Chapter
-
Summary
There are few facts in science more interesting than those which establish a connexion between heat and electricity.
—James Prescott Joule, “On the heat evolved by metallic conductors of electricity, and in the cells of a battery during electrolysis,” The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 1841The power–performance trade-off has now become a key ingredient in VLSI design flows. The increased power dissipation in integrated circuits (ICs) and the ubiquitous use of battery-powered devices have made incorporating power-saving techniques essential to the design flows. Since powersaving strategies are more effective early in the flow, we adopt low-power design methodologies right from the pre-RTL stages. Subsequently, the power-related tasks permeate throughout the design flow.
We can broadly classify the power-related tasks as power analysis and power-driven optimizations. In this chapter, we will explain power analysis methods. In the next chapter (“Power-driven Optimizations”), we will discuss power optimization techniques.
COMPONENTS OF POWER DISSIPATION
There are two components of power dissipation in a CMOS circuit: dynamic power dissipation and static power dissipation. The dynamic power dissipation occurs when a circuit performs computation actively, i.e., a signal or the output of a logic gate changes its value. The static power dissipation occurs when the circuit is powered on (supply voltages are applied), but it does not perform active computation. Let us understand these components of power dissipation in more detail.
Dynamic Power Dissipation
Consider a CMOS inverter I₁, shown in Figure 18.1(a). The output pin drives the inputs of other logic gates through wires. The total load due to these M input pins is C_I = C_1 + C_2 + ⋯ + C_M, where C_i is the capacitance of the ith input pin. The wires offer load capacitance C_w. Additionally, the driving pin Z has parasitic capacitance C_d due to the drain diffusion regions of the transistors. Thus, an output pin of a CMOS logic gate has total load capacitance C_L = C_I + C_w + C_d.
We can use the circuit shown in Figure 18.1(b) for understanding dynamic power dissipation. Assume that the inverter undergoes a transition from 0→1 or 1→0. These transitions lead to power dissipation due to switching of capacitors and due to drawing short-circuit current from the power supply. We discuss these components of power dissipation in the following paragraphs.
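The switching component of dynamic power is conventionally estimated with the α·C_L·V_DD²·f expression. The activity factor and numeric values below are illustrative assumptions, not figures from the text:

```python
def switching_power(alpha, c_load, v_dd, freq):
    """Average switching power of a CMOS node: alpha is the switching
    activity (fraction of clock cycles with a transition), c_load the
    total load capacitance in farads, v_dd the supply voltage in volts,
    and freq the clock frequency in hertz."""
    return alpha * c_load * v_dd**2 * freq

# Illustrative node: 10 fF load, 1.0 V supply, 1 GHz clock, 10% activity.
p = switching_power(0.1, 10e-15, 1.0, 1e9)
print(f"{p * 1e6:.1f} microwatts")  # → 1.0 microwatts
```

The quadratic dependence on V_DD in this expression is why supply-voltage reduction is such a powerful lever in the power-driven optimizations of the next chapter.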
Part Two - Logic Design
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 115-116
-
- Chapter
-
Summary
Earlier, we discussed dividing the RTL to GDS implementation flow into two parts: logic synthesis and physical design. In this part of the book, we will discuss logic synthesis. In Part IV, we will discuss physical design.
Logic design involves transforming a high-level functional description to a netlist of standard cells and macros. It takes a design from the functional domain to the structural domain. The primary task of logic design is to decide the logic elements that will deliver the required functionality. Additionally, we need to ensure that the design metrics such as area, performance, power, and testability meet the given requirements.
To ensure that the logic design meets the above requirements, we interleave implementation and verification tasks in a design flow. We have arranged implementation and verification-related chapters similarly. However, note that we carry out some of the verification tasks, such as combinational equivalence checking, timing analysis, and power analysis, multiple times in a design flow.
We will explain implementation of the logical design in Chapter 8 (“Modeling Hardware using Verilog”), Chapter 10 (“RTL Synthesis”), Chapter 12 (“Logic Optimization”), Chapter 16 (“Technology Mapping”), Chapter 17 (“Timing-driven Optimization”), and Chapter 19 (“Power-driven Optimization”). We will discuss verification aspects for a design in Chapter 9 (“Simulation-based Verification”), Chapter 11 (“Formal Verification”), Chapter 14 (“Static Timing Analysis”), and Chapter 18 (“Power Analysis”). We will present the information that is used both in implementation and verification in Chapter 13 (“Library”) and Chapter 15 (“Constraints”).
It is worth pointing out that the primary objective of these chapters is to build a foundation for logic design; therefore, we explain the essential concepts and principles governing it. We have attempted to provide explanations that are not based on any specific logic synthesis tool or proprietary data format, so a reader can apply these concepts to any tool s/he chooses for logic design. To gain more depth on these topics, we encourage readers to refer to standard textbooks, to which we have provided references at appropriate places in each chapter.
23 - Built-in Self-test
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 487-504
-
- Chapter
-
Summary
I went in search of evil outside. I could not find any. Once I started searching my own heart, I found that none is as big evil as me.
—Kabir Das, 15th century (translated from vernacular Hindi)
In the previous chapters, we have discussed wafer-level testing that uses automatic test equipment (ATE). It allows a high degree of automation, achieves good fault coverage, and is cost-effective. Therefore, ATE-based testing is popular and widely adopted for industrial designs. However, ATE-based testing has the following drawbacks and limitations:
1. It requires expensive ATE housed in sophisticated test facilities, so we cannot perform it outside the production testing environment. However, testing, diagnosis, and repair sometimes become necessary for a chip that is already integrated into a system.
2. It employs voluminous test patterns that increase the test time and the cost of testing. Moreover, the number of test patterns increases with the advancement in technology due to increased circuit complexity. Consequently, the cost of testing increases with the advancement in technology.
3. We cannot carry out at-speed testing for high-performance integrated circuits (ICs) due to the impedances associated with the probes of an ATE [1]. However, some types of faults, such as delay faults, can only be detected by at-speed tests [2]. It makes ATE-based techniques inadequate in some situations.
Built-in self-test (BIST) is another testing methodology that addresses the above drawbacks and has become quite popular [3–6]. In this chapter, we will discuss BIST in detail.
We can implement BIST in a design in many ways, depending on the design complexity, target test metrics, and allowed overheads. In this chapter, we describe a typical BIST system to elucidate its basic principles.
BASICS
BIST is a testing technique in which we incorporate additional hardware and software in a chip so that it can test itself. During self-testing, an IC can test its own operation, both functionally and parametrically, without requiring external test patterns. Thus, we eliminate the dependence on an ATE, which allows us to carry out BIST-based testing in the field and even after system integration. Additionally, we can perform at-speed testing because no external signal interfacing is required.
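On-chip pattern generation is typically done with a linear feedback shift register (LFSR), with responses compacted into a signature. As a hedged illustration (a software model, not this chapter's hardware description), here is a 4-bit maximal-length Fibonacci LFSR whose taps correspond to the polynomial x⁴ + x³ + 1:

```python
def lfsr_patterns(seed, taps, width, count):
    """Model a Fibonacci LFSR: XOR the tap bits, shift left, insert feedback."""
    state, patterns = seed, []
    for _ in range(count):
        patterns.append(state)
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return patterns

# taps (3, 2) give a maximal-length sequence for width 4:
# all 15 non-zero states appear before the sequence repeats
patterns = lfsr_patterns(0b1000, (3, 2), 4, 15)
```

In a hardware BIST controller, the same structure is realized with flip-flops and XOR gates; a multiple-input signature register (MISR) on the outputs plays the complementary role of response compaction.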
Part One - Overview of VLSI Design Flow
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 1-2
-
- Chapter
-
Summary
In this part of the book, we will introduce integrated circuit (IC) concepts and give a top-level view of the entire VLSI design flow. In the remaining portion of this book, we will discuss each task of the VLSI design flow in detail. We have taken this approach to present the topics because an early introduction to the entire design flow eases building context for individual tasks. It allows us to relate one task with another in the remaining portion of this book. We can appreciate the interaction between them and understand how decisions of one task impact other tasks. As a result, we can easily comprehend challenges and opportunities in optimizations and develop a solid understanding of the entire flow.
In the first chapter (“Foundation”), we will briefly describe the concepts of digital circuits, devices, CMOS inverters, data structures, and algorithms. Since readers would have been exposed to these topics previously, this chapter will serve as a refresher. Additionally, this chapter will be a handy reference if a reader needs to recall some of these concepts. In the second chapter (“Introduction to Integrated Circuits”), we will introduce the concepts related to ICs, their history, fabrication, economics, and figures of merit. In the third (“Pre-RTL Methodologies”) and fourth (“RTL to GDS Implementation Flow”) chapters, we will describe the implementation of a design from concept to the layout. In the fifth (“Verification Techniques”) and sixth (“Testing Techniques”) chapters, we will discuss basic concepts related to the verification and testing of ICs. In the seventh chapter (“Post-GDS Processes”), we will explain processes involved in fabricating chips from a given layout.
7 - Post-GDS Processes
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 104-114
-
- Chapter
-
Summary
Good sentences, and well pronounced.
They would be better if well followed.
—William Shakespeare, The Merchant of Venice, Act 1, Scene 2, 1596
In the previous chapters, we discussed the tasks required for designing an IC. Once we have obtained the final layout, the design process is complete. Subsequently, the GDS file corresponding to the final layout is employed for making a chip using tasks such as mask preparation, wafer fabrication, testing, and packaging. In this book, we have grouped all chip-making tasks carried out after obtaining the GDS file as post-GDS processes.
Though post-GDS processes are not directly related to the design flow, we need to understand them to appreciate the challenges of fabrication, and possibly address some of these challenges during the design phase. Therefore, we briefly explain post-GDS processes in this chapter. For a detailed understanding of post-GDS processes, readers can refer to dedicated books on these topics such as [1–5].
MASK FABRICATION
We have discussed in Chapter 2 (“Introduction to Integrated Circuits”) that an essential step in IC fabrication technology is photolithography. It involves transferring the patterns in a layout for a given layer onto the silicon wafer. We carry out photolithography separately for each layer.
To start with, we create a replica of the pattern of a given layer on a substrate such as glass. This replica of the pattern is known as a mask or reticle. After creating a mask, we use it many times to carry out photolithography during high-volume manufacturing [4].
We can fabricate a mask using several techniques. Nevertheless, a typical mask fabrication flow consists of the following steps:
1. Data preparation
2. Mask writing and chemical processing
3. Quality checks and adding protections
We explain these steps briefly in the following paragraphs.
Data Preparation
First, we prepare the given layout data for mask writing by translating the GDS-specified mask information into a format that the mask-writing tool can process. This involves converting complicated polygons into simpler rectangles and trapeziums, a process popularly known as fracturing, which simplifies the task of the mask-writing hardware. Additionally, data preparation involves augmenting the mask data to enhance the resolution. We will describe some of these techniques later in this chapter.
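Fracturing itself is easy to picture. The sketch below is an illustrative scanline decomposition, not the algorithm or data format of any particular mask data preparation tool; it slices an axis-aligned polygon into horizontal rectangles:

```python
def fracture_rectilinear(vertices):
    """Slice an axis-aligned polygon into rectangles (x_left, y_bot, x_right, y_top).

    vertices: polygon corners listed in order (clockwise or counter-clockwise).
    """
    ys = sorted({y for _, y in vertices})
    edges = list(zip(vertices, vertices[1:] + vertices[:1]))
    rects = []
    for y0, y1 in zip(ys, ys[1:]):
        ym = (y0 + y1) / 2  # midline of the horizontal slab
        # x-positions of vertical edges crossing this slab
        xs = sorted(xa for (xa, ya), (xb, yb) in edges
                    if xa == xb and min(ya, yb) < ym < max(ya, yb))
        # the polygon interior lies between alternating pairs of crossings
        rects += [(xl, y0, xr, y1) for xl, xr in zip(xs[::2], xs[1::2])]
    return rects

# L-shape: a 4x4 square with its top-right 2x2 corner removed
L = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(fracture_rectilinear(L))  # [(0, 0, 4, 2), (0, 2, 2, 4)]
```

Production fracturing also emits trapeziums for non-rectilinear edges and optimizes shot count for the mask writer, which this sketch ignores.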
20 - Basics of DFT
- Sneh Saurabh, Indraprastha Institute of Information Technology, Delhi
-
- Book:
- Introduction to VLSI Design Flow
- Published online:
- 04 April 2024
- Print publication:
- 15 June 2023, pp 449-460
-
- Chapter
-
Summary
One should not abandon duties born of one's nature, even if one sees defects in them. Indeed, all endeavors are veiled by some evil, as fire is by smoke.
—Bhagavad Gita, Chapter 18, verse 48
In this chapter, we will discuss some of the basic concepts of design for testability (DFT). First, we will introduce the idea of structural testing and how it differs from functional testing. Then, we will explain the notion of a fault model and its significance in testing. We will discuss the single stuck-at fault model in detail and highlight the role of structural testing and the single stuck-at fault model in simplifying testing and making it cost-effective. We will also elucidate the problems of controlling and observing signals in a sequential circuit that are encountered during structural testing.
FUNCTIONAL TESTING VERSUS STRUCTURAL TESTING
Let us assume that we need to test a fabricated circuit that implements a Boolean function with N inputs. We can apply all 2^N possible input combinations and check whether the outputs match the corresponding entries in the truth table. This is known as functional testing. However, when N is large (say 50 or 100), the number of possible input combinations becomes astronomically large, and exhaustive functional testing of a circuit becomes infeasible. Therefore, we employ an alternative strategy known as structural testing for testing a fabricated circuit [1].
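A back-of-the-envelope calculation makes the infeasibility concrete. Assuming, hypothetically, that we could apply one test vector per nanosecond:

```python
# time to apply all 2^N vectors at one vector per nanosecond (assumed rate)
for n in (16, 50, 100):
    seconds = 2**n * 1e-9
    print(f"N={n:>3}: {seconds:.3g} s")
# N=16 finishes in about 65 microseconds, N=50 already takes about 13 days,
# and N=100 would need on the order of 4e13 years.
```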
In structural testing, we test the components or hardware that implement a logic function rather than the input–output functionality of the implementation [1]. We can choose components at various abstraction levels, such as transistors, switches, logic gates, standard cells, and macros (adders, multipliers, and arithmetic logic units (ALUs)). Nevertheless, structural testing is most often performed at the level of logic gates. It is called structural testing because it considers the topology and interconnections of logic gates in the implementation. The paradigm of structural testing is widely employed because it reduces the number of test patterns required for good test quality [2].
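To make this concrete, the sketch below uses a hypothetical two-gate netlist (not an example from the chapter) together with the single stuck-at fault model: a fault is injected on one internal net, and a vector detects it exactly when the good and faulty circuits disagree at the output:

```python
from itertools import product

def good_circuit(a, b, c):
    return (a & b) | c          # y = (a AND b) OR c

def faulty_circuit(a, b, c):
    and_out = 0                 # the AND gate output is stuck-at-0
    return and_out | c

# a vector detects the fault iff the two responses differ
tests = [v for v in product((0, 1), repeat=3)
         if good_circuit(*v) != faulty_circuit(*v)]
print(tests)  # [(1, 1, 0)] -- the only vector exposing this fault
```

This is why structural testing needs far fewer patterns than exhaustive functional testing: a small set of vectors can cover every modeled stuck-at fault in the netlist.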
If we perform functional testing, we need to apply 2^16 = 65,536 input combinations and compare the obtained circuit response with the correct response.