Functional programming – the use and evaluation of functions as a programming paradigm – has a long and productive history in the world of programming languages. Lisp came about in the 1950s in the search for a convenient language to represent mathematical concepts in programs, borrowing from the lambda calculus of the logician Alonzo Church. More recent languages have in turn embraced many aspects of Lisp – in addition to Lisp's offspring such as Scheme, and functional languages such as Haskell, you will find elements of functional constructs in Java, Python, Ruby, and Perl. Mathematica itself has clear bloodlines to Lisp, including the ability to operate on data structures such as lists as single objects and in its representation of mathematical properties through rules.
Mathematica functions, unlike those in many other languages, are considered “first-class” objects, meaning that they can be used as arguments to other functions, they can be returned as values, and they can be part of many other kinds of data objects such as arrays. In addition, you can create and use functions at runtime, that is, when you evaluate an expression. This functional style of programming distinguishes Mathematica from traditional procedural languages like C and Fortran. Facility with functional programming is therefore essential for taking full advantage of the Mathematica language to solve your computational tasks.
We start with some of the most useful functional programming constructs – higher-order functions such as Map, Apply, Thread, Select, and Outer. We then introduce iteration, a mechanism by which the output of one computation is fed as input into the next.
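Since these constructs have close analogues in most functional languages, a rough Python sketch may help fix the ideas (Map ≈ `map`, Apply ≈ applying a different function to a list's elements, Select ≈ `filter`, Outer ≈ a generalized outer product, and iteration ≈ repeatedly feeding output back as input, as Mathematica's NestList does). The names here are illustrative, not Mathematica's own:

```python
# Map: apply a function to each element of a list
squares = list(map(lambda x: x * x, [1, 2, 3, 4]))    # [1, 4, 9, 16]

# Apply: hand the whole argument list to one function, e.g. Apply[Plus, {1, 2, 3}]
total = sum([1, 2, 3])                                 # 6

# Select: keep only the elements passing a predicate
evens = list(filter(lambda x: x % 2 == 0, range(10)))  # [0, 2, 4, 6, 8]

# Outer: every pairing of elements from two lists
table = [[a * b for b in (1, 2, 3)] for a in (1, 2, 3)]

# Iteration: feed each output back as the next input (cf. NestList)
def nest_list(f, x, n):
    results = [x]
    for _ in range(n):
        x = f(x)
        results.append(x)
    return results

print(nest_list(lambda x: x * x, 2, 3))  # [2, 4, 16, 256]
```

In Mathematica itself these would of course be written with Map, Apply, Select, Outer, and NestList directly, operating on lists as single objects.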
The tremendous increase in bandwidth-craving mobile applications (e.g., video streaming, video chatting, and online gaming) has posed enormous challenges to the design of future wireless networks. Deploying small cells (e.g., pico, micro, and femto) has been shown to be an efficient and cost-effective way to support this constantly rising demand, since the smaller cell size provides higher link quality and more efficient spatial reuse [1]. Small cells can also deliver other benefits, such as offloading macro network traffic and providing service to coverage holes and regions with poor signal reception (e.g., macro cell edges). Following this trend, the evolving 5G networks [2] are expected to be composed of hundreds of interconnected heterogeneous small cells.
Figure 11.1 gives an illustration of a heterogeneous network (HetNet) in which a macro cell is underlaid with different types of small cells. Unlike the carefully planned traditional network, the architecture of a HetNet is more random and unpredictable because of the increased density of small cells and their impromptu manner of deployment. In this setting, the manual intervention and centralized control used in traditional network management would be highly inefficient, time consuming, and expensive, and are therefore not applicable to dense heterogeneous small cell networks. Instead, self-organization has been proposed as an essential feature of future small cell networks [3, 4].
The motivations for enabling self-organization in small cell networks are explained below.
• Numerous network devices with different characteristics are expected to be interconnected in future wireless networks, and these devices are expected to have “plug and play” capability. The initial pre-operational configuration must therefore be performed with minimal expert involvement.
• With the emergence of small cells, the spatio-temporal dynamics of the network have become more unpredictable than in legacy systems because of the unplanned nature of small cell deployment. Intelligent adaptation of the network nodes is therefore necessary: self-organizing small cells need to learn from the environment and adapt to the network dynamics to achieve the desired performance.
After reading to this point in the book, you now have the skills to design complex combinational and sequential logic modules. However, if someone were to ask you to design a DVD player, a computer system, or an Internet router, you would realize that each of these is not a single finite-state machine (or even a single datapath with associated finite-state controller). Rather, a typical system is a collection of modules, each of which may include several datapaths and finite-state controllers. These systems must first be decomposed into simple modules before the design and analysis skills you have learned in the previous chapters can be applied. The problem remains, however, of how to partition the system to the level at which the design becomes manageable. This system-level design is one of the most interesting and challenging aspects of digital systems.
SYSTEM DESIGN PROCESS
The design of a system involves the following steps.
Specification The most important step in designing any system is deciding – and clearly specifying in writing – what you are going to build. We discuss specifications in more detail in Section 21.2.
Partitioning Once the system has been specified, the main task in system design is dividing the system into manageable subsystems or modules. This is a process of divide and conquer. The overall system is divided into subsystems that can then be designed (conquered) separately. At each stage, the subsystems should be specified to the same level of detail as the overall system was during our first step. As described in Section 21.3, we can partition a system by state, task, or interface.
Interface specification It is particularly important that the interfaces between subsystems be described in detail. With good interface specifications, individual modules can be developed and verified independently. When possible, interfaces should be independent of module internals – allowing modules to be modified without affecting the interface, or the design of neighboring modules.
Timing design Early in the design of a system, it is important to describe the timing and sequencing of operations. In particular, as work flows between modules, the sequencing of which module does a particular task on a particular cycle must be worked out to ensure that the right data come together at the correct place and time. This timing design also drives the performance tuning step described below.
In its brief history, the world of programming has undergone a remarkable evolution. Those of us old enough to remember boxes of punch cards and batch jobs couldn't be happier about some of these changes. One could argue that the limitations, physical and conceptual, of the early programming environments helped to focus that world in a very singular manner. Eventually, efforts to overcome those limitations led to a very visible and broad transformation of the world of computer programming. We now have a plethora of languages, paradigms, and environments to choose from. At times this embarrassment of riches can be a bit overwhelming, but I think most would agree that we are fortunate to have such variety in programming languages with which to do our work.
I learned about Mathematica as I suspect many people have – after using several languages over the years, a colleague introduced me to a new and very different tool, Mathematica. I soon realized that it was going to help me in my work in ways that previous languages could not. Perhaps the most notable feature was how quickly I could translate the statement of a problem into a working program. This was no doubt due to having a functional style of programming at my fingertips, but being able to think in terms of rules and patterns also seemed to fit well with my background in mathematics.
Well, Mathematica is no longer a young upstart in the programming world. It has been around now for over 25 years, making it, if not an elder statesman, certainly a mature and familiar player – and one that is used by people in fields as varied as linguistics, bioinformatics, engineering, and information theory. Like myself, many people are first introduced to it in an academic setting. Many more are introduced through a colleague at work. Still others have seen it mentioned in various media and are curious as to what it is all about. After using it to do basic or more advanced computation, most users soon find the need to extend the default set of tools that come with Mathematica. Programming is the ticket.
So what makes Mathematica such a useful programming tool? First, it is a well-designed language, one whose internal logic will be quite apparent as you get to know it.
This book is intended to teach an undergraduate student to understand and design digital systems. It teaches the skills needed for current industrial digital system design using a hardware description language (VHDL) and modern CAD tools. Particular attention is paid to system-level issues, including factoring and partitioning digital systems, interface design, and interface timing. Topics needed for a deep understanding of digital circuits, such as timing analysis, metastability, and synchronization, are also covered. Of course, we cover the manual design of combinational and sequential logic circuits. However, we do not dwell on these topics because there is far more to digital system design than designing such simple modules.
Upon completion of a course using this book, students should be prepared to practice digital design in industry. They will lack experience, but they will have all of the tools they need for contemporary practice of this noble art. The experience will come with time.
This book has grown out of more than 25 years of teaching digital design to undergraduates (CS181 at Caltech, 6.004 at MIT, EE121 and EE108A at Stanford). It is also motivated by 35 years of experience designing digital systems in industry (Bell Labs, Digital Equipment, Cray, Avici, Velio Communications, Stream Processors, and NVIDIA). It combines these two experiences to teach what students need to know to function in industry in a manner that has been proven to work on generations of students. The VHDL guide in Appendix B is informed by nearly a decade of teaching VHDL to undergraduates at UBC (EECE 353 and EECE 259).
We wrote this book because we were unable to find a book that covered the system-level aspects of digital design. The vast majority of textbooks on this topic teach the manual design of combinational and sequential logic circuits and stop. While most texts today use a hardware description language, the vast majority teach a TTL-esque design style that, while appropriate in the era of 7400 quad NAND gate parts (the 1970s), does not prepare a student to work on the design of a three-billion-transistor GPU. Today's students need to understand how to factor a state machine, partition a design, and construct an interface with correct timing. We cover these topics in a simple way that conveys insight without getting bogged down in details.
Memory is widely used in digital systems for many different purposes. In a processor, DDR SDRAM chips are used for main memory and SRAM arrays are used to implement caches, translation lookaside buffers, branch prediction tables, and other internal storage. In an Internet router (Figure 23.3(b)), memory is used for packet buffers, for routing tables, to hold per-flow data, and to collect statistics. In a cellphone SoC, memory is used to buffer video and audio streams.
A memory is characterized by three key parameters: its capacity, its latency, and its throughput. Capacity is the amount of data stored, latency is the amount of time taken to access data, and throughput is the number of accesses that can be done in a fixed amount of time.
A memory in a system, e.g., the packet buffer in a router, is often composed of multiple memory primitives: on-chip SRAM arrays or external DRAM chips. The number of primitives needed to realize a memory is governed by its capacity and its throughput. If one primitive does not have sufficient capacity to realize the memory, multiple primitives must be used – with just one primitive accessed at a time. Similarly, if one primitive does not have sufficient bandwidth to provide the required throughput, multiple primitives must be used in parallel – via duplication or interleaving.
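The sizing argument above reduces to a short calculation. The sketch below (with hypothetical primitive capacities and bandwidths; real numbers vary by technology and organization) takes the larger of the capacity-driven and throughput-driven counts, which is the simplest case; some organizations combine banking and duplication and need more:

```python
import math

def primitives_needed(capacity_bits, accesses_per_cycle,
                      prim_capacity_bits, prim_accesses_per_cycle):
    """Number of memory primitives needed to meet capacity and throughput.

    Capacity drives depth (one primitive accessed at a time); throughput
    drives parallelism (duplication or interleaving). The memory must
    satisfy both, so we take the larger requirement.
    """
    for_capacity = math.ceil(capacity_bits / prim_capacity_bits)
    for_throughput = math.ceil(accesses_per_cycle / prim_accesses_per_cycle)
    return max(for_capacity, for_throughput)

# e.g. a 16 Gb packet buffer needing 8 accesses per cycle, built from
# 4 Gb primitives that each support 1 access per cycle
print(primitives_needed(16e9, 8, 4e9, 1))  # 8
```

Here throughput, not capacity, sets the primitive count: four chips would hold the data, but eight are needed to sustain the access rate.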
MEMORY PRIMITIVES
The vast majority of all memories in digital systems are implemented from two basic primitives: on-chip SRAM arrays and external DDR SDRAM chips. Here we will consider these memory primitives as black boxes, discussing their properties and how to interface to them. It is beyond the scope of this book to look inside the box and study their implementation.
SRAM arrays
On-chip SRAM arrays are useful for building small, fast, dedicated memories integrated near the logic that produces and consumes the data they store. While the total capacity of the SRAM that can be realized on one chip (about 400 Mb) is small compared with a single 4 Gb DRAM chip, these arrays can be accessed in a single clock cycle, compared with 25 cycles or more for a DRAM access.
In this appendix we present a few guidelines for VHDL coding style. These guidelines grew out of experience with both VHDL and Verilog description languages. These guidelines developed over many years of teaching students digital design and managing design projects in industry, where the guidelines have been proven to reduce effort, to produce better final designs, and to produce designs that are more readable and maintainable. The many examples of VHDL throughout this book serve as examples of this style. The style presented here is intended for synthesizable designs – VHDL design entities that ultimately map to real hardware. A very different style is used in testbenches. This section is not intended to be a reference manual for VHDL. The following appendix provides a brief summary of the VHDL syntax used in this book and many other references are available online. Rather, this section gives a set of principles and style rules that help designers write correct, maintainable code. A reference manual explains what legal VHDL code is. This appendix explains what good VHDL code is. We give examples of good code and bad code, all of which are legal.
BASIC PRINCIPLES
We start with a few basic principles on which our VHDL style is based. The style presented is essentially a VHDL-2008 equivalent to a set of Verilog style guidelines based upon our experience over many years of teaching students digital design and managing design projects in industry combined with nearly a decade of experience teaching earlier versions of VHDL.
Know where your state is Every bit of state in your design should be explicitly declared. In our style, all state is in explicit flip-flop or register components, and all other portions are purely combinational. This approach avoids a host of problems that arise when writing sequential statements directly within an “if rising_edge(clk) then” statement inside a process. It also makes it much easier to detect inferred latches that occur when not all signals are assigned in all branches of a conditional statement.
Understand what your design entities will synthesize to When you write a design entity, you should have a good idea what logic will be generated. If your design entity is described structurally, wiring together other components, the outcome is very predictable. Small behavioral design entities and arithmetic blocks are also very predictable.
Verification and test are engineering processes that complement design. Verification is the task of ensuring that a design meets its specification. On a typical digital systems project, more effort is expended on verification than on the design itself. Because of the high cost and long delays involved in fabricating a chip, thorough verification is essential to ensure that the chip works the first time. A design error that is not caught during verification would result in costly delays and retooling.
Testing is performed to ensure that a particular instantiation of a design functions properly. When a chip is fabricated, some transistors, wires, or contacts may be faulty. A manufacturing test is performed to detect these faults so the device can be repaired or discarded.
DESIGN VERIFICATION
Simulation is the primary tool used to verify that a design meets its specification. The design is simulated using a number of tests that provide stimulus to the unit being tested and check that the design produces correct outputs. The VHDL testbenches we have seen throughout this book are examples of such tests.
Verification coverage
The verification challenge amounts to ensuring that the set of test patterns (the test suite) written to verify a design is complete. We measure the degree of completion of a test suite by its coverage of the specification and of the implementation. We typically insist on 100% coverage of both specification features and implementation lines or edges to consider the design verified.
The specification coverage of a set of tests is measured by determining the fraction of features in the specification that are exercised and checked by the tests. For example, suppose you have developed a digital clock chip that includes a day/date and an alarm function. Table 20.1 gives a partial list of features to be tested. Even for something as simple as a digital clock, the list of features can easily run into the hundreds. For a complex chip it is not unusual to have 10^5 or more features. Each test verifies one or more features. As tests are written, the features covered by each test are checked off.
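The bookkeeping described here, checking off which features each test exercises and computing the covered fraction, can be sketched as follows (the feature and test names are invented for illustration; they are not from Table 20.1):

```python
# Map each test to the specification features it exercises and checks
tests = {
    "test_set_time":   {"set time", "display time"},
    "test_alarm_ring": {"set alarm", "alarm rings at set time"},
    "test_rollover":   {"display time", "day/date rollover at midnight"},
}

# The (partial) feature list from the specification
features = {"set time", "display time", "set alarm",
            "alarm rings at set time", "day/date rollover at midnight",
            "leap-year handling"}

covered = set().union(*tests.values())
coverage = len(covered & features) / len(features)

print(sorted(features - covered))  # ['leap-year handling']  -- still untested
print(f"{coverage:.0%}")           # 83%
```

A real verification flow automates exactly this tally, flagging any specification feature no test has checked off.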
Teaching functional programming as a second programming paradigm is often difficult as students can have strong preconceptions about programming. When most of these preconceived ideas fail to be confirmed, functional programming may be seen as an unnecessarily difficult topic. A typical topic that causes such difficulties is the language of types employed by many modern functional languages. In this paper, we focus on addressing this difficulty through the use of step-by-step calculations of type expressions. The outcome of the study is an elaboration of a worked example format and a methodical approach for teaching types to beginner functional programmers.
This chapter introduces novel approaches in heterogeneous networks (HetNets) where both large and small cells are deployed in a mixed manner to satisfy the increasing traffic demand and, at the same time, to improve the energy efficiency (EE) of future cellular networks.
In recent years, there has been a tremendous increase in the number of mobile handsets, in particular smart phones, supporting a wide range of applications such as image and video transfer, cloud services, and cloud storage. The average smart phone usage rate nearly tripled and the overall amount of mobile data traffic grew 2.3 times in 2011 [1]. Furthermore, the amount of mobile data traffic is expected to increase dramatically in the coming years; recent forecasts expect data traffic to increase more than 500 times in the next ten years [2, 3]. Current cellular systems will not be able to cope with this expected increase in traffic demand, which leads to the need for further densification of the networks, for example in hotspot areas where traffic demand is concentrated, as seen in Figure 18.1.
However, traffic load varies over time, following a typical day–night pattern as users spend the day in offices and return to residential areas at night [4]. In current cellular networks, the power consumption of the radio access network (RAN) does not effectively scale with these traffic variations, as shown in Figure 18.2. The traffic variations create an opportunity to design an adaptive network paradigm that can dynamically scale its power consumption according to the traffic load.
Generally speaking, the power consumption of the RAN scales with the number of deployed base stations (BSs), each of which has a fixed offset power consumption. In cellular networks, only 10% of the overall power consumption stems from the user equipments (UEs), whereas nearly 90% is incurred by the operator networks [5]. Figure 18.3 gives an idea of how the power consumption is distributed across the different parts of a typical cellular network. It is evident that the RAN and the operation of data centers that provide computation, storage, applications, and data transfer are the most energy-intensive parts of the entire network.
The output of sequential logic depends not only on its input, but also on its state, which may reflect the history of the input. We form a sequential logic circuit via feedback – feeding state variables computed by a block of combinational logic back to its input. General sequential logic, with asynchronous feedback, can become complex to design and analyze due to multiple state bits changing at different times. We simplify our design and analysis tasks in this chapter by restricting ourselves to synchronous sequential logic, in which the state variables are held in a register and updated on each rising edge of a clock signal (clk).
The behavior of a synchronous sequential logic circuit, or finite-state machine (FSM), is completely described by two logic functions: one that computes its next state as a function of its input and present state, and one that computes its output – also as a function of its input and present state. We describe these two functions by means of a state table, or graphically with a state diagram. If states are specified symbolically, a state assignment maps the symbolic states onto a set of bit vectors – both binary and one-hot state assignments are commonly used.
Given a state table (or state diagram) and a state assignment, the task of implementing a finite-state machine is a simple one of synthesizing the next-state and output logic functions. For a one-hot state encoding, the synthesis is particularly simple because each state maps to a separate flip-flop and all edges in the state diagram leading to a state map into a logic function on the input of that flip-flop. For binary encodings, Karnaugh maps for each bit of the state vector are written and reduced to logic equations.
Finite-state machines can be implemented in VHDL by creating a state register to hold the current state, and describing the next-state and output functions with combinational logic descriptions, such as case statements as described in Chapter 7. State assignments should be specified using constants to allow them to be changed without altering the machine description itself. Special attention should be given to resetting the FSM to a known state at startup.
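The next-state/output formulation above is language-neutral, so before turning to VHDL it can help to see it modeled abstractly. The sketch below (in Python, purely as an executable model, with an invented two-state detector that outputs 1 when it sees two 1s in a row) encodes the state table directly:

```python
# A Mealy FSM is fully described by its next-state and output functions,
# both over (present state, input). Here both are encoded in one table
# for a hypothetical "two 1s in a row" detector.
#            (state, input) -> (next_state, output)
table = {
    ("IDLE",  0): ("IDLE",  0),
    ("IDLE",  1): ("SEEN1", 0),
    ("SEEN1", 0): ("IDLE",  0),
    ("SEEN1", 1): ("SEEN1", 1),
}

def run(inputs, reset_state="IDLE"):
    """Simulate the FSM: one table lookup per rising clock edge."""
    state, outputs = reset_state, []
    for x in inputs:
        state, out = table[(state, x)]
        outputs.append(out)
    return outputs

print(run([0, 1, 1, 1, 0, 1]))  # [0, 0, 1, 1, 0, 0]
```

In hardware, the `state` variable becomes the state register and the table lookup becomes the combinational next-state and output logic; a state assignment then maps "IDLE" and "SEEN1" onto bit vectors.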
Multi-tier, heterogeneous networks (HetNets) using small cells (e.g., pico and femto cells) are an important part of operators’ strategy to add low-cost network capacity through aggressive reuse of the cellular spectrum. In the near term, a number of operators have also relied on un-licensed WiFi networks as a readily available means to offload traffic demand. However, the use of WiFi is expected to remain an integral part of operators’ long-term strategy to address future capacity needs, as licensed spectrum continues to be scarce and expensive. Efficient integration of cellular HetNets with alternate radio access technologies (RATs), such as WiFi, is therefore essential for next-generation networks.
This chapter describes several WiFi-based multi-RAT HetNet deployments and architectures, and evaluates the associated performance benefits. In particular, we consider deployments featuring integrated multi-RAT small cells with co-located WiFi and LTE interfaces, where tighter coordination across the two radio links becomes feasible. Integrated multi-RAT small cells are an emerging industry trend toward leveraging common infrastructure and lowering deployment costs when the footprints of WiFi and cellular networks overlap. Several techniques for cross-RAT coordination and radio resource management are reviewed, and system performance results showing significant capacity and quality-of-service gains are presented.
Introduction
Multi-tier HetNets based on small cells (e.g. pico cells, femto cells, relay cells, WiFi APs, etc.) are considered to be a fundamental technology for cellular operators to address capacity and coverage demands of future 5G networks. Typical HetNet deployment architectures comprise an overlay of a macro cell network with additional tiers of densely deployed cells with smaller footprints, such as picos, femtos, relay nodes, WiFi access points, etc. Figure 2.1 illustrates the various deployment options in a multi-radio HetNet.
HetNets allow for greater flexibility in adapting the network infrastructure according to the capacity, coverage, and cost needs of a given deployment. As shown, the macro base station tier may be used to provide wide area coverage and seamless mobility across large geographic areas, while smaller, inexpensive, low-powered small cells may be deployed as needed to improve coverage by moving infrastructure closer to the clients (such as for indoor deployments), as well as to add capacity in areas with higher traffic demand. Conceptually, mobile clients with direct client-to-client communication may also be considered as one of the tiers within this hierarchical deployment, wherein the clients can cooperate with other clients to locally improve access in an inexpensive manner.
In a synchronous system, we can avoid putting our flip-flops in illegal or metastable states by always obeying the setup- and hold-time constraints. When sampling asynchronous signals or crossing between different clock domains, however, we cannot guarantee that these constraints will be met. In these cases, we design a synchronizer that, through a combination of waiting for metastable states to decay and isolation, reduces the probability of synchronization failure.
A brute-force synchronizer consisting of two back-to-back flip-flops is commonly used to synchronize single-bit signals. The first flip-flop samples the asynchronous signal and the second flip-flop isolates the possibly bad output of the first flip-flop until any illegal states are likely to have decayed. Such a brute-force synchronizer cannot be used on multi-bit signals unless they are encoded with a Gray code. If multiple bits are in transition when sampled by the synchronizer, they are independently resolved, possibly resulting in incorrect codes, with some bits sampled before the transition and some after the transition. We can safely synchronize multi-bit signals with a FIFO (first-in first-out) synchronizer. A FIFO serves both to synchronize the signals and to provide flow control, ensuring that each datum produced by a transmitter in one clock domain is sampled exactly once by a receiver in another clock domain – even when the clocks have different frequencies.
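The reason Gray coding makes multi-bit synchronization safe is that successive count values differ in exactly one bit, so at most one bit can be in transition at the sampling instant. A sketch of the standard reflected binary/Gray conversion, modeled here in Python:

```python
def to_gray(b):
    """Convert a binary count to its reflected Gray code."""
    return b ^ (b >> 1)

def from_gray(g):
    """Invert the Gray coding by cascading XORs from the top bit down."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b

# Successive Gray codes differ in exactly one bit, so a sampled value
# is always either the old count or the new count -- never garbage.
codes = [to_gray(i) for i in range(8)]
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
assert all(from_gray(to_gray(i)) == i for i in range(64))
print([format(c, "03b") for c in codes])
# ['000', '001', '011', '010', '110', '111', '101', '100']
```

This is why FIFO synchronizers pass their read and write pointers across clock domains in Gray code: even if the pointer is sampled mid-transition, the result is a valid, at-most-one-off value.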
WHERE ARE SYNCHRONIZERS USED?
Synchronizers are used in two distinct applications, as shown in Figure 29.1. First, when signals are coming from a truly asynchronous source, they must be synchronized before being input to a synchronous digital system. For example, a push-button switch pressed by a human produces an asynchronous signal. This signal can transition at any time, and so must be synchronized before it can be input to a synchronous circuit. Numerous physical detectors also generate truly asynchronous inputs. Photodetectors, temperature sensors, pressure sensors, etc. all produce outputs with transitions that are gated by physical processes, not a clock.
In Chapter 10 we introduced the basics of computer arithmetic: adding, subtracting, multiplying, and dividing binary integers. In this chapter we continue our exploration of computer arithmetic by looking at number representation in more detail. Often integers do not suffice for our needs. For example, suppose we wish to represent a pressure that varies between 0 (vacuum) and 0.9 atmospheres with an error of at most 0.001 atmospheres. Integers don't help us much when we need to distinguish 0.899 from 0.9. For this task we will introduce the notion of a binary point (similar to a decimal point) and use fixed-point binary numbers.
In some cases, we need to represent data with a very large dynamic range. For example, suppose we need to represent time intervals ranging from 1 ps (10^−12 s) to one century (about 3 × 10^9 s) with an accuracy of 1%. To span this range with a fixed-point number would require 72 bits. However, if we use a floating-point number – in which we allow the position of the binary point to vary – we can get by with 13 bits: six bits to represent the number and seven bits to encode the position of the binary point.
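The bit counts in this example follow from a short logarithm calculation, sketched below (the split of the 13 floating-point bits is reasoned here from the 1% accuracy requirement, under the assumption of an implied leading 1 in the significand):

```python
import math

# Fixed point: to count every 1 ps step up to one century, we need
# enough bits to span the ratio of the largest to the smallest value.
span = 3e9 / 1e-12                       # about 3e21 steps
fixed_bits = math.ceil(math.log2(span))
print(fixed_bits)                        # 72

# Floating point: only ~1% relative accuracy is needed, so a 6-bit
# significand (7 effective bits with an implied leading 1) giving a
# relative resolution of 2**-7 suffices, plus an exponent wide enough
# to place the binary point anywhere in the 72-bit range.
exponent_bits = math.ceil(math.log2(fixed_bits))
print(exponent_bits)                     # 7
print(2 ** -7 < 0.01)                    # True: meets the 1% accuracy target
```

The trade-off is the usual one: fixed point gives uniform absolute resolution at high cost in bits, while floating point gives roughly uniform relative resolution cheaply.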
REPRESENTATION ERROR: ACCURACY, PRECISION, AND RESOLUTION
With digital electronics, we represent a number, x, as a string of bits, b. Many different number systems are used in digital systems. A number system can be thought of as two functions, R and V. The representation function R maps a number x from some set of numbers (e.g., real numbers, integers, etc.) into a bit string b: b = R(x). The value function V returns the number (from the same set) represented by a particular bit string: y = V(b).
Consider mapping to and from the set of real numbers in some range. Because there are more possible real numbers than there are bit strings of a given length, many real numbers necessarily map to the same bit string. Thus, if we map a real number to a bit string with R and then back with V we will almost always get a slightly different real number than we started with. That is, if we compute y = V(R(x)) then y and x will differ.
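A minimal fixed-point instance of R and V makes the round-trip error concrete. The format below (unsigned, with an assumed resolution of 0.001, matching the pressure example earlier in the chapter) is illustrative only:

```python
RESOLUTION = 0.001   # value of one least-significant bit

def R(x):
    """Representation function: map a real number to the nearest code word."""
    return round(x / RESOLUTION)

def V(b):
    """Value function: map a code word back to the real number it represents."""
    return b * RESOLUTION

x = 0.8994
y = V(R(x))
print(y)  # approximately 0.899: close to x, but not equal to it
assert y != x                          # V(R(x)) differs from x ...
assert abs(y - x) <= RESOLUTION / 2    # ... by at most half an LSB
```

Rounding in R bounds the round-trip error at half the resolution, which is exactly the representation error the following sections quantify.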
How many small cell (SC) access points (APs) are required to guarantee a chosen quality of service in a heterogeneous network? In this chapter, we answer this question considering two different network models. The first is the downlink of a finite-area SC network where the locations of APs within the chosen area are uniformly distributed. A key step in obtaining the closed-form expressions is to generalize the well-accepted moment matching approximation for the linear combination of lognormal random variables. For the second model, we focus on a two-layer downlink heterogeneous network with frequency reuse-1 hexagonal macro cells (MCs), and SC APs that are placed at locations that do not meet a chosen quality of service from macro base stations (BSs). An important property of this model is that the SC AP locations are coupled with the MC coverage. Here, simple bounds for the average total interference within an MC make it possible to formulate the percentage of MC area in outage, as well as the required average number of SCs (per MC) to overcome outage, assuming isolated SCs.
Introduction
Heterogeneous cellular networks (HCNs) are being considered as an efficient way to improve system capacity as well as effectively enhance network coverage [1, 2]. Comprising multiple layers of access points (APs), HCNs encompass a conventional macro cellular network (first layer) overlaid with a diverse set of small cells (SCs) (higher layers). Cell deployment is an important problem in heterogeneous networks, both in terms of the number and positioning of the SCs.
Traditional network models are either impractically simple (such as the Wyner model [3]) or excessively complex (e.g., the general case of random user locations in a hexagonal lattice network [4]) to accurately model SC networks. A useful mathematical model that accounts for the randomness in SC locations and the irregularity of the cells uses spatial point processes, such as the Poisson point process (PPP), to model the locations of SCs in the network [5–10]. Placing SCs independently of the MC layer has the advantage of analytical tractability and leads to many useful SINR and/or rate expressions. However, even assuming that wireless providers would deploy SCs to support mobile broadband services, the dominant assumption remains that SCs are deployed randomly and independently of the MC layer [11].
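A homogeneous PPP over a finite region is straightforward to simulate, which is one reason it is so widely used in SC modeling: the number of points is Poisson with mean λ·A, and, given the count, the points are independently and uniformly placed. A sketch (the intensity value is illustrative):

```python
import math
import random

def sample_ppp(intensity, width, height, rng=random):
    """Sample a homogeneous Poisson point process on a rectangle.

    The point count is Poisson(intensity * area); conditioned on the
    count, points are independent and uniformly distributed.
    """
    area = width * height
    # Poisson sampling via the classic product-of-uniforms method
    # (adequate for the small means typical of a single cell)
    n, p, threshold = 0, 1.0, math.exp(-intensity * area)
    while True:
        p *= rng.random()
        if p <= threshold:
            break
        n += 1
    return [(rng.uniform(0, width), rng.uniform(0, height)) for _ in range(n)]

rng = random.Random(42)
aps = sample_ppp(intensity=0.01, width=50, height=50, rng=rng)
print(len(aps))  # a Poisson draw with mean 0.01 * 2500 = 25
```

From such sampled AP locations one can empirically estimate SINR and rate distributions and compare them against the closed-form PPP expressions of [5–10].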
In Chapter 6 we saw how to synthesize combinational logic circuits manually from a specification. In this chapter we show how to describe combinational circuits in the VHDL hardware description language, building on our discussion of Boolean expressions in VHDL (Section 3.6) and the initial discussion of VHDL (Section 1.5). Once the function has been described in VHDL, it can be automatically synthesized, eliminating the need for manual synthesis.
Because all optimization is done by the synthesizer, the main goal in writing synthesizable VHDL is to make it easily readable and maintainable. For this reason, descriptions that are close to the function of a design (e.g., a truth table specified with a case statement) are preferable to those that are close to the implementation (e.g., equations using a concurrent assignment statement, or a structural description using gates). Descriptions that specify just the function tend to be easier to read and maintain than those that reflect a manual implementation of the function.
To verify that a VHDL design entity is correct, we write a testbench. A testbench is a piece of VHDL code that is used during simulation to instantiate the design entity to be tested, generate input stimulus, and check the design entity's outputs. While design entities must be coded in a strict synthesizable subset of VHDL, testbenches, which are not synthesized, can use the full VHDL language, including looping constructs. In a typical modern digital design project, at least as much effort goes into design verification (writing testbenches) as goes into doing the design itself.
THE PRIME NUMBER CIRCUIT IN VHDL
In describing combinational logic using VHDL we restrict our use of the language to constructs that can easily be synthesized into logic circuits.
Specifically, we restrict combinational circuits to be described using only concurrent signal assignment statements, case statements, if statements, or by the structural composition of other combinational design entities.
In this section we look at four ways of implementing the prime number (plus 1) circuit we introduced in Chapter 6 as combinational VHDL.
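Whatever VHDL style is used, all four descriptions must realize the same truth function. As a language-neutral reference model (assuming, for illustration, a 4-bit input whose output asserts when the input is 1 or a prime less than 16; check against the chapter's actual definition), the function and its truth table are simply:

```python
def is_prime_plus_1(n):
    """Reference model of the prime number (plus 1) circuit: output is
    true when the 4-bit input is 1 or a prime (assumed range 0-15)."""
    return n in {1, 2, 3, 5, 7, 11, 13}

# The full truth table that, e.g., a VHDL case statement would enumerate
truth_table = {n: is_prime_plus_1(n) for n in range(16)}
print([n for n, out in truth_table.items() if out])  # [1, 2, 3, 5, 7, 11, 13]
```

A testbench can exhaustively compare each VHDL implementation against this table, since a 4-bit combinational function has only 16 input cases.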