The earlier chapters of this book discussed the design and use of a simulator for the development of digital electronic systems. The discussion has been widened to include some aspects of testing and design for testability, since application of good practice in these areas leads to better use of costly resources in what is probably the largest part of the design procedure. It is now of value to review the extent to which the aims of simulation can be achieved; to discuss several topics related to the use of simulators; to introduce some enhancements to simulators; and to attempt to look into the future.
Desirable features of a simulator
Some years ago the author wrote down a list of the features he would like to find in a simulator.
A simulator is required to give an accurate prediction of the behaviour of a good network.
A simulator is required to recognise and give warning of a faulty network.
The basic simulator should be independent of technology but recognise the distinctive features of known technologies. Thus devices of any technology might be simulated.
The simulator should be capable of handling models at several levels of abstraction in the same run (Harding 1989).
There is no point in simulating a design in 1 s if it takes a day to diagnose a fault, modify and recompile the network. Hence, associated with the simulator, there must be means to assist the user to find the source of ‘wrong’ results, correct them and recompile quickly. That is, the simulation cycle must be given serious attention (the detail is not within the scope of this book).
A few years ago a well known company stated that the size of silicon chip that could be designed and built would be limited by ‘engineer blow-out’ – what a single engineer could hold in his mind without going crazy. To overcome that limitation, techniques for ‘managing complexity’ have been developed. These have included methods for manipulating data in different ways. The computer can handle large quantities of data without becoming crazed and without error, leaving only the interesting and intelligent work to the engineer.
Computer aids are not limited to chip design. It is not difficult today to produce a chip which works first time according to its specification. But was the specification correct? There is no point in designing a 10 million gate chip which works perfectly to specification if the specification is wrong. In the late 1980s, estimates of the probability that such a chip would work within its intended system varied in the region of 10% to 50% (Harding 1989, Hodge 1990). This was clearly unsatisfactory, so there has been increasing emphasis on the need for system design rather than purely chip design.
One of the problems with building hardware is that, once built, it is not easily changed. In the case of designing on silicon, change is impossible. It is estimated that the relative cost of finding faults at design time, chip-test time, printed circuit board construction time, or in the finished machine in the field is 1:10:100:1000 (Section 3.1.1).
When a complete system is being simulated there are relatively few external inputs – a start key, a break key and possibly some inputs from various peripherals such as tape drives, sensors etc. Similarly, there are few outputs. In these cases the arrangements to pass ‘test data’ to the simulation can be very crude.
The main purpose of simulation is to find errors in a design. To simulate a complete system and expect to find detailed errors is very difficult, especially as different parts of the design may be at different stages of development. It is important to be able to simulate sub-units independently of the total design in order to get the majority of problems solved before trying to integrate the complete system (Fig. 1.4). Some means of supplying and controlling test vectors is required.
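The detail of how test vectors are supplied varies from one simulator to another. Purely as an illustration, the Python sketch below shows one way a set of vectors might be applied to a simulated sub-unit and its responses collected; the simulator interface used (apply_inputs, run, read_outputs) is an assumption made for the example, not any particular product's API.

# Hypothetical sketch of driving a simulated sub-unit with test vectors.
# The simulator interface used here is assumed, not a real API.
test_vectors = [
    {"a": 0, "b": 0},
    {"a": 0, "b": 1},
    {"a": 1, "b": 0},
    {"a": 1, "b": 1},
]

def run_tests(simulator, vectors):
    """Apply each vector in turn and record the sub-unit's outputs."""
    responses = []
    for vector in vectors:
        simulator.apply_inputs(vector)     # drive the primary inputs
        simulator.run(cycles=1)            # advance the simulation one step
        responses.append(simulator.read_outputs())
    return responses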
In this form of testing there will be copious output, which comes in two forms. The first is the values of primary output signals – the product of a multiplier, for example. These values can be compared with a set of ‘expected’ values to check the overall operation. If the results of simulation are different from the expected values then it is necessary to
check the expected values first – note that they may themselves be in error;
if the expected values seem correct, trace back through the logic to find where the error occurred and hence find the design error (a sketch of the comparison step follows this list).
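Purely as an illustration of the comparison step, the sketch below reports every primary output that differs from its expected value; the list-of-dictionaries format for the results is an assumption made for the example.

def compare_results(simulated, expected):
    """Report primary outputs that differ from their expected values."""
    mismatches = []
    for cycle, (got, want) in enumerate(zip(simulated, expected)):
        for signal, value in want.items():
            if got.get(signal) != value:
                mismatches.append((cycle, signal, got.get(signal), value))
    for cycle, signal, got, want in mismatches:
        print(f"cycle {cycle}: {signal} = {got}, expected {want}")
    return mismatches      # an empty list means the run matched expectations

Used together with the earlier sketch, a run might be checked with compare_results(run_tests(simulator, test_vectors), expected_values).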
Designing any piece of equipment (not necessarily electronic) is not a straight path from start through to product, even when no mistakes are made. At each stage problems occur which may require previous decisions to be reviewed. For example, it may be found on simulation that a particular part of the design will not operate fast enough to meet the specification. That part of the design, and possibly others, will have to be reviewed until the criteria are met or proved to be impossible. In the latter case it will be necessary to reconsider the specification. Fig. 2.1 is a summary of the process, and the reader should refer to it from time to time whilst reading the rest of this chapter.
Specification
Every design begins with a customer specification, which describes the function the design is intended to perform. This may include not only the logical properties but the speed of operation, the output power driving capability, the capability of circuits that drive the design, the power supply available, perhaps limits on power dissipation and electromagnetic radiation, operation in the presence of external radiation etc. Sometimes it is difficult to satisfy all the criteria. Higher speeds will require higher power dissipation and faster clocks. The latter increases the electromagnetic radiation. Some criteria may not be important, or only marginally so. With certain pieces of logic the power dissipation may be minimal so that power limits are not important.
Simulate for functional correctness, ignoring all timing considerations, and then use a timing verifier to check that time constraints are met.
Simulate in an environment in which the models include timing. As one can never guarantee that all paths through the logic have been exercised, it may still be advisable to use a timing verifier.
The simplest approach to simulation is to have a separate procedure for every logical element in the network being simulated, and the connections between the elements are then mirrored in the structure of the machine code of the program. The entire structure of the network is thus mirrored in the store of the machine doing the simulation. This takes up a great deal of storage space, but is very fast in running, since there are no lengthy lists to be searched and manipulated.
The amount of storage can be reduced by having only one procedure for each element type, and a small amount of storage for every element holding the element-specific data. In the previous scheme there is a copy of the procedure for every element which uses it, and hence no procedure entry and exit as such. With only one copy, each use of the procedure requires a call. Procedure calls need the machine state to be saved temporarily and restored on exit, which is expensive in CPU resources.
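The difference between the two schemes can be sketched in outline. In the sketch below, which uses an assumed netlist representation, a single evaluation procedure serves every element of a given type, while each element instance carries only its own connection data; the saving in storage is paid for by a procedure call per evaluation.

# Illustrative sketch: one shared procedure per element type, plus a
# small record of element-specific data for every element instance.
def eval_nand(inputs):
    """Shared evaluation procedure for every NAND gate in the network."""
    return 0 if all(inputs) else 1

class Element:
    def __init__(self, eval_fn, input_nets, output_net):
        self.eval_fn = eval_fn        # shared: one copy per element type
        self.input_nets = input_nets  # element-specific connection data
        self.output_net = output_net

def simulate_pass(elements, net_values):
    """Evaluate every element once; each evaluation costs a procedure call."""
    for e in elements:
        ins = [net_values[n] for n in e.input_nets]
        net_values[e.output_net] = e.eval_fn(ins)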
The first objective of any set of tests must be to show that the system does what it was designed to do. This means
does it perform the necessary functions correctly?
does it perform its function within the required time specification?
are there any circumstances, normal or unusual, under which it can get into a forbidden state from which it can only recover by drastic action (e.g. system reset)?
The above presumes that any fault or group of faults which could possibly occur would affect the operation of the system. If a fault or faults do not affect the operation then there must be some logic which is redundant. It will be assumed that none of the logic is redundant.
On the further assumption that the design is good, a second objective is to be capable of detecting any fault or group of faults within that system. However, there is some debate as to the level of detail into which it is necessary to go and this will be the subject of further discussion. The distinction between simulation testing and testing the manufactured hardware was made in Section 3.1.2.
Modelling faults
Faults in the design, as opposed to those which occur in production, cannot be ‘modelled’ in the usual sense of the word. A network description is entered into the simulator and this is a model of the network. If, as a result of simulation, outputs are obtained which are different from those expected from considering the specification, then the network as described to the simulator is faulty.
Although the event driven simulator allows timing to be included in a simulation, it is extremely difficult to devise a set of tests that would show up all possible timing problems. Such a set of tests would have to analyse the network structure to find where two paths from the same signal converge later in the network. One of these would have to be assigned maximum delay and the other minimum. Such a situation was shown in Fig. 1.1, given that the two inputs were related, and is known as reconvergent fan-out. The four-gate not-equivalence example has five cases of reconvergent fan-out. A procedure is needed to find unwanted short pulses. It requires that all the associated signals be driven to the relevant states, which can be difficult to arrange. Having found a potential short pulse, it must be decided whether it matters. At the input of another gate, it does not. At the asynchronous input to a flip-flop, it most certainly does.
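The structural search for reconvergent fan-out can be sketched as follows. The netlist representation (each gate mapped to the signals that drive it) is an assumption made for the example; a production tool would also need the maximum/minimum delay assignment and pulse-width checks described above.

# Illustrative sketch of locating reconvergent fan-out in a gate netlist.
# The netlist format (gate name -> list of driving gates or primary
# inputs) is an assumption made for the example.
netlist = {
    "g1": ["a", "b"],
    "g2": ["a", "c"],
    "g3": ["g1", "g2"],   # paths from 'a' reconverge here
}

def reconvergent_gates(netlist):
    """Return gates reached from the same source along more than one input."""
    def sources(node, acc):
        for driver in netlist.get(node, []):
            acc.add(driver)
            sources(driver, acc)
        return acc

    found = []
    for gate, inputs in netlist.items():
        seen = {}
        for inp in inputs:
            for src in sources(inp, {inp}):
                seen.setdefault(src, set()).add(inp)
        if any(len(via) > 1 for via in seen.values()):
            found.append(gate)
    return found

print(reconvergent_gates(netlist))   # ['g3'] in this example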
The second problem with timing is to be sure that the longest path, often known as the critical path, through a combinational network has been activated in order to ensure that the logic can operate within the design time specified. In particular, with synchronous logic, it is necessary to check that the logic works within the specified clock periods. The naive analysis of the four-gate not-equivalence circuit designed earlier indicated the dangers of pattern sensitivity. That analysis was by no means complete (Section 6.4, last paragraph).
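Finding the critical path is essentially a longest-path computation over the combinational network. A minimal sketch, with assumed gate delays and netlist, is given below; the resulting figure can then be compared with the clock period specified for the synchronous parts of the design.

# Minimal sketch: longest combinational path (critical path) found by
# depth-first traversal with memoisation.  The gate delays and the
# netlist representation are assumptions made for the example.
from functools import lru_cache

delays = {"g1": 2, "g2": 3, "g3": 2}               # gate delays in ns
drives = {"g1": [], "g2": [], "g3": ["g1", "g2"]}  # gate -> driving gates

def critical_path_delay(outputs):
    @lru_cache(maxsize=None)
    def arrival(gate):
        preds = drives.get(gate, [])
        latest = max((arrival(p) for p in preds), default=0)
        return latest + delays.get(gate, 0)
    return max(arrival(g) for g in outputs)

print(critical_path_delay(("g3",)))   # 3 + 2 = 5 ns on the longest path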
In attempting to come to grips with the problem of designing a simulator the author found very little in the way of overall descriptions of what a simulator is, what it does or how it works. The required information can be winkled out from many different sources, but not all are easily available. This book is an attempt to bring together in one place a comprehensive introduction to all aspects of simulation in the design of digital electronic systems.
The text begins with an introduction to the purpose of simulation, types of simulation and some of the problems that are encountered in the use and design of simulators. It continues with a brief review of computer aided design suites in order to set simulation within its overall context.
In order to use a simulator it is necessary to prepare test information. To get the best out of the simulator it is necessary to adopt good design techniques. Hence the next two chapters give an introduction to design for testability and to test program generation. These are followed by a brief description of the preparation of test programs using the VHSIC hardware description language (VHDL). These three chapters are just an introduction, included for completeness in the book as a whole, and the reader is referred to much more comprehensive texts for a proper treatment.
Chapters 6 to 9 are the meat of this work. Chapter 6 describes the two main types of straightforward simulator and gives some examples of their use.
Most electronic system designers will never need to design a simulator. They will merely need to use one. An understanding of how the simulator works will enable it to be used more effectively, and avoid investing it with powers that it does not possess. However, most designers will have to write test sequences which the simulator will use to exercise the logic. They will also need to write programs for the equipment test rigs for exercising the real logic. These two activities overlap to some extent. However, checking that the system performs its specified functions is a design phase procedure and is used primarily in the simulator. Once the design is accepted as adequate it is necessary to check that any possible manufacturing fault can be detected during testing. The latter set of tests does not need to be ‘understandable’ in terms of the normal operation of the system since testing during manufacture is mainly on a go/no go basis. Developing and assessing the value of these tests is a major task and requires much further simulation. It is for this reason that the main chapters of this book begin with a look at the problems of writing test sequences.
The importance of careful testing of a design is illustrated by the costs involved. For the sake of example, let the cost of simulation be ‘one’ in whatever unit is appropriate.
An electrical circuit comprises an arrangement of elements for the conversion, transmission and storage of energy. Energy enters a circuit via one or more sources and leaves via one or more sinks. In the sources energy is converted from mechanical, thermal, chemical or electromagnetic form into electrical form; in the sinks the reverse process takes place. Sources and sinks are linked by elements capable of transmitting and storing electrical energy. The familiar battery-operated flashlamp serves as a reminder of the energy flow processes in a circuit. In this device, energy is converted from chemical to electrical form in the battery and transmitted along wires to the lamp where most of the energy is converted into heat. A small but useful portion is emitted in the form of electromagnetic radiation in the visible part of the spectrum.
In an electrical circuit energy is conveyed through the agency of electrical charge and through the medium of electric and magnetic fields. An essential feature of any circuit, therefore, is the provision of conducting paths for the conveyance of charge. As indicated in fig. 1.1, sources and sinks are operative only when charge flows through them. The rate at which charge flows is referred to as the current; the greater the current the greater the energy transmitted between sources and sinks.
Charge is set in motion by the action of the electric field established throughout the circuit by the sources.