The definition of particle electromechanics offered in Chapter 1, Section 1.2, is very broad, precluding any possibility of definitive treatment in a single volume. Accordingly the scope of this book is restricted primarily to field–particle interactions involving (i) uncharged, lossy, dielectric and electrically conductive particles with AC and DC electric fields and (ii) magnetizable, electrically conductive particles with AC and DC magnetic fields. The particle electromechanics of interest here are a consequence of either the field-induced polarization of dielectric particles or the field-induced magnetization of magnetic particles. The forces and torques governing particle behavior result from the interaction of the dipole and higher-order moments with the field.
Electromechanics of particles
Two distinct types of electromechanical interactions may be identified: imposed field and mutual particle interactions. Imposed field interactions predominate when a single particle, or an ensemble of noninteracting particles, is influenced by an externally imposed field. Examples include the dielectrophoretic force and the alignment torque exerted on an isolated particle. Here it is customary to assume that the particle does not influence the field, though such an assumption is not always justified. Mutual particle interactions occur when particles are so closely spaced that the local field of one particle influences its neighbors. For particles in close mechanical contact, mutual interactions can be very strong, leading to significant changes in the equilibrium structure of particle ensembles (e.g., chain formation and cooperative electrorotation), as well as to strong cohesive forces.
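As a concrete illustration of an imposed-field interaction, the time-averaged dielectrophoretic force on a small spherical particle is conventionally written, to lowest (effective-dipole) order, as

\[
\langle \vec{F}_{\mathrm{DEP}} \rangle \;=\; 2\pi \varepsilon_m R^3\, \mathrm{Re}\!\left[K(\omega)\right] \nabla \lvert \vec{E}_{\mathrm{rms}} \rvert^2,
\qquad
K(\omega) \;=\; \frac{\varepsilon_p^{*} - \varepsilon_m^{*}}{\varepsilon_p^{*} + 2\varepsilon_m^{*}},
\]

where \(R\) is the particle radius, \(\varepsilon_m\) the permittivity of the suspending medium, \(\varepsilon_p^{*}\) and \(\varepsilon_m^{*}\) the complex permittivities of particle and medium, and \(K(\omega)\) the Clausius–Mossotti factor. The sign of \(\mathrm{Re}[K]\) determines whether the particle is attracted toward or repelled from regions of high field intensity.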
For convenience of presentation, this monograph is organized into sections on imposed field interactions (Chapters 2 through 5) and mutual interactions (Chapters 6 and 7).
The earlier chapters of this book discussed the design and use of a simulator for use in the development of digital electronic systems. The discussion has been widened to include some aspects of testing and design for testability, since application of good practice in these areas leads to better use of costly resources in what is probably the largest part of the design procedure. It is now of value to review the extent to which the aims of simulation can be achieved; to discuss several topics related to the use of a simulator; to introduce some enhancements to simulators; and to attempt to look into the future.
Desirable features of a simulator
Some years ago the author wrote down a list of the features he would like to find in a simulator.
A simulator is required to give an accurate prediction of the behaviour of a good network.
A simulator is required to recognise and give warning of a faulty network.
The basic simulator should be independent of technology but recognise the distinctive features of known technologies. Thus devices of any technology might be simulated.
The simulator should be capable of handling models at several levels of abstraction in the same run (Harding 1989).
There is no point in simulating a design in 1 s if it takes a day to diagnose a fault, modify and recompile the network. Hence, associated with the simulator, there must be means to assist the user to find the source of ‘wrong’ results, correct them and recompile quickly. That is, the simulation cycle must be given serious attention (the detail is not within the scope of this book).
A few years ago a well-known company stated that the size of silicon chip that could be designed and built would be limited by ‘engineer blow-out’ – what a single engineer could hold in his mind without going crazy. To overcome that limitation, techniques for ‘managing complexity’ have been developed, including methods for manipulating data in different ways. The computer can handle large quantities of data without becoming crazed and without error, leaving only the interesting and intelligent work to the engineer.
Computer aids are not limited to chip design. It is not difficult today to produce a chip which works first time according to its specification. But was the specification correct? There is no point in designing a 10-million-gate chip which works perfectly to specification if the specification is wrong. In the late 1980s, estimates of the probability that a chip would work within its intended system ranged from 10% to 50% (Harding 1989, Hodge 1990). This was clearly unsatisfactory, so there has been increasing emphasis on the need for system design rather than purely chip design.
One of the problems with building hardware is that, once built, it is not easily changed. In the case of designing on silicon, change is impossible. It is estimated that the relative cost of finding faults at design time, chip-test time, printed circuit board construction time, or in the finished machine in the field is 1:10:100:1000 (Section 3.1.1).
When a complete system is being simulated there are relatively few external inputs – a start key, a break key and possibly some inputs from various peripherals such as tape drives, sensors etc. Similarly, there are few outputs. In these cases the arrangements to pass ‘test data’ to the simulation can be very crude.
The main purpose of simulation is to find errors in a design. To simulate a complete system and expect to find detailed errors is very difficult, especially as different parts of the design may be at different stages of development. It is important to be able to simulate sub-units independently of the total design in order to get the majority of problems solved before trying to integrate the complete system (Fig. 1.4). Some means of supplying and controlling test vectors is required.
In this form of testing there will be copious output, which comes in two forms. The first is the values of primary output signals – the product of a multiplier, for example. These values can be compared with a set of ‘expected’ values to check the overall operation. If the results of simulation differ from the expected values then it is necessary to
check the expected values first – they may themselves be wrong;
if the expected values seem correct, trace back through the logic to find where the error occurred and hence find the design error.
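The comparison step described above can be sketched in a few lines. The code below is an invented illustration, not taken from any particular simulator: it checks a stream of simulated output values against the expected values and reports each mismatch with enough context (cycle and signal name) to begin tracing back through the logic.

```python
# Hypothetical sketch: compare simulator outputs against 'expected' values
# and report every cycle and signal at which they diverge.  The signal
# names and vectors here are invented for illustration only.

def check_outputs(simulated, expected):
    """Return a list of (cycle, signal, got, want) mismatches."""
    mismatches = []
    for cycle, (got, want) in enumerate(zip(simulated, expected)):
        for signal in want:
            if got.get(signal) != want[signal]:
                mismatches.append((cycle, signal, got.get(signal), want[signal]))
    return mismatches

# A small multiplier checked over three test vectors.  Note that the
# 'expected' value at cycle 2 is itself wrong (2 x 3 is 6, not 9) -- the
# kind of error that makes checking the expected values the first step.
simulated = [{"p": 0}, {"p": 2}, {"p": 6}]
expected  = [{"p": 0}, {"p": 2}, {"p": 9}]
print(check_outputs(simulated, expected))   # -> [(2, 'p', 6, 9)]
```

In practice the expected vectors would come from a reference model or hand calculation; the point of the sketch is only that a mismatch report must identify where in time and in the network the divergence first appears.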
Designing any piece of equipment (not necessarily electronic) is not a straight path from start through to product, even when no mistakes are made. At each stage problems occur which may require previous decisions to be reviewed. For example, it may be found on simulation that a particular part of the design will not operate fast enough to meet the specification. That part of the design, and possibly others, will have to be reviewed until the criteria are met or proved to be impossible. In the latter case it will be necessary to reconsider the specification. Fig. 2.1 is a summary of the process, and the reader should refer to it from time to time whilst reading the rest of this chapter.
Specification
Every design begins with a customer specification, which describes the function the design is intended to perform. This may include not only the logical properties but the speed of operation, the output power driving capability, the capability of circuits that drive the design, the power supply available, perhaps limits on power dissipation and electromagnetic radiation, operation in the presence of external radiation etc. Sometimes it is difficult to satisfy all the criteria. Higher speeds will require higher power dissipation and faster clocks. The latter increases the electromagnetic radiation. Some criteria may not be important, or only marginally so. With certain pieces of logic the power dissipation may be minimal so that power limits are not important.
Simulate for functional correctness, ignoring all timing considerations, and then use a timing verifier to check that time constraints are met.
Simulate in an environment in which the models include timing. As one can never guarantee that all paths through the logic have been exercised, it may still be advisable to use a timing verifier.
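What a timing verifier does can be reduced to a toy calculation: sum the worst-case propagation delays along each path through the logic and check that the total fits within the clock period. The gate delays and netlist below are invented for this sketch; a real verifier must also enumerate the paths and handle setup/hold margins.

```python
# Toy illustration of a timing-verifier check: worst-case path delay must
# not exceed the clock period.  Delay values and paths are invented.

GATE_DELAY_NS = {"and": 2.0, "or": 2.0, "xor": 3.0, "not": 1.0}

def path_delay(path):
    """Sum the propagation delays of the gates along one path."""
    return sum(GATE_DELAY_NS[g] for g in path)

def meets_timing(paths, clock_period_ns):
    """True if every listed path fits within the clock period."""
    return all(path_delay(p) <= clock_period_ns for p in paths)

paths = [["xor", "and", "or"], ["not", "xor", "xor"]]   # 7 ns each
print(meets_timing(paths, 10.0))   # -> True
print(meets_timing(paths, 5.0))    # -> False
```

The advantage over timing-annotated simulation is precisely that this check does not depend on which paths the test vectors happen to exercise.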
The simplest approach to simulation is to have a separate procedure for every logical element in the network being simulated; the connections between the elements are then mirrored in the structure of the machine code of the program, so that the entire structure of the network is reproduced in the store of the machine doing the simulation. This takes a great deal of storage space but runs very fast, since there are no lengthy lists to be searched and manipulated.
The amount of storage can be reduced by having only one procedure for each element type, together with a small amount of storage for every element holding the element-specific data. In the scheme just described there is a copy of the procedure for every element that uses it, and hence no procedure entry and exit as such. With only one copy, each use of the procedure requires a call. Procedure calls need machine state to be saved temporarily and restored on exit, which is expensive in CPU resources.
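The "one procedure per element type" organisation can be sketched as follows. This is an invented, much-simplified illustration: one shared evaluation routine per gate type, plus a small record of element-specific data (type, inputs, output) for every instance. The compiled-code scheme described first would instead generate a separate piece of code for each instance, avoiding the call overhead at the cost of storage.

```python
# Sketch of table-driven simulation: one shared procedure per element
# TYPE, and a per-element record of element-specific data.  The gate
# types and netlist are invented for illustration.

EVAL = {                       # one shared procedure per element type
    "and": lambda a, b: a & b,
    "or":  lambda a, b: a | b,
    "not": lambda a, _: 1 - a,
}

# Element-specific data: (type, input signal names, output signal name).
# Here: q = NOT(x AND y), i.e. a NAND built from two elements.
NETLIST = [
    ("and", ("x", "y"), "n1"),
    ("not", ("n1", "n1"), "q"),
]

def simulate(values):
    """Evaluate every element once, in netlist order."""
    for gtype, (a, b), out in NETLIST:
        values[out] = EVAL[gtype](values[a], values[b])   # the costly call
    return values

print(simulate({"x": 1, "y": 1}))   # -> {'x': 1, 'y': 1, 'n1': 1, 'q': 0}
```

The per-call dispatch through the table is exactly the entry/exit cost the text describes; the compiled-code alternative pays for its speed with a copy of the evaluation code for every element.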