The purpose of this chapter is to give an overview of some currently active topics in automata and language theory. The overview is by no means intended to be exhaustive: Some topics have been entirely omitted, and the material within the topics presented has been chosen to give only a general idea of the most representative notions and results. As the title of this chapter indicates, the attention is restricted to topics in automata and language theory.
The style of presentation in this chapter is somewhat different from that used in the previous chapters. Most of the proofs are either omitted or only outlined. Sometimes notions are introduced in a not entirely rigorous manner, and results are presented in a descriptive way rather than in the form of precise mathematical statements. We begin with a discussion of Petri nets.
In a customary model for computing, the notion of a state is quite essential. This is certainly true of most of the models discussed earlier. The notion of a state introduces, at least implicitly, a specific discrete linear time scale for all considerations involved: Time instants can be identified with the current states. Such a linear time scale is not desirable in all considerations. For instance, we might want to model systems where many processors operate independently and in parallel and where some partial computations depend (perhaps in a complicated way) on the outcome of some other computations.
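To make the contrast concrete, the following is a minimal sketch of a Petri net, the model taken up in this chapter: places hold tokens, and a transition may fire whenever each of its input places holds a token, consuming one token from each input and producing one in each output. No global clock orders the firings; any enabled transition may fire at any moment. The class design and the producer/consumer example are illustrative choices, not taken from the text.

```python
from collections import Counter

class PetriNet:
    def __init__(self, marking):
        self.marking = Counter(marking)   # place name -> token count
        self.transitions = {}             # transition name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (list(inputs), list(outputs))

    def enabled(self, name):
        # A transition is enabled when every input place holds a token.
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, name):
        # Firing consumes one token per input place, adds one per output place.
        inputs, outputs = self.transitions[name]
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Two independent producers feed a shared buffer; a consumer drains it.
net = PetriNet({"p1_ready": 1, "p2_ready": 1, "buffer": 0})
net.add_transition("produce1", ["p1_ready"], ["buffer"])
net.add_transition("produce2", ["p2_ready"], ["buffer"])
net.add_transition("consume", ["buffer"], [])

net.fire("produce1")   # one possible interleaving; produce2 could fire instead
net.fire("consume")
```

Note that after these two firings `produce2` is still enabled: the net imposes only the causal dependency of consuming on producing, not a linear time scale over all events.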
The basic question in the theory of computing can be formulated in any of the following ways: What is computable? For which problems can we construct effective mechanical procedures that solve every instance of the problem? Which problems possess algorithms for their solutions?
Fundamental developments in mathematical logic during the 1930s showed the existence of unsolvable problems: No algorithm can possibly exist for the solution of the problem. Thus, the existence of such an algorithm is a logical impossibility—its nonexistence has nothing to do with our ignorance. This state of affairs led to the present formulation of the basic question in the theory of computing. Previously, people always tried to construct an algorithm for every precisely formulated problem until (if ever) the correct algorithm was found. The basic question is of definite practical significance: One should not try to construct algorithms for an unsolvable problem. (There are some notorious examples of such attempts in the past.)
A model of computation is necessary for establishing unsolvability. If one wants to show that no algorithm for a specific problem exists, one must have a precise definition of an algorithm. The situation is different in establishing solvability: It suffices to exhibit some particular procedure that is effective in the intuitive sense. (We use the terms algorithm and effective procedure synonymously.)
The last twenty years have witnessed a most vigorous growth in areas of mathematical study connected with computers and computer science. The enormous development of computers and the resulting profound changes in scientific methodology have opened new horizons for the science of mathematics at a speed without parallel during the long history of mathematics.
The following two observations should be kept in mind when reading the present monograph. First, various developments in mathematics have directly initiated the “beginning” of computers and computer science. Second, advances in computer science have induced very vigorous developments in certain branches of mathematics. More specifically, the second of these observations refers to the growing importance of discrete mathematics—and we are now witnessing only the very beginning of the influence of discrete mathematics.
For the reasons outlined above, mathematics plays a central role in the foundations of computer science. A number of significant research areas can be listed in this connection. It is interesting to notice that these areas also reflect the historical development of computer science.
1. The classical computability theory initiated by the work of Gödel, Tarski, Church, Post, Turing, and Kleene occupies a central role. This area is rooted in mathematical logic.
2. In the classical formal language and automata theory the central notions are those of an automaton, a grammar, and a language.
The classification of mathematical problems into decidable and undecidable ones has been discussed quite extensively in previous chapters. Indeed, from the mathematical point of view, this classification is the most fundamental one. It is also of definite practical significance in discouraging attempts to design too-general systems, such as systems for deciding the equivalence of programs or the halting of a program. There are, in fact, instances of such attempts in the past!
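The impossibility of a general halting decider mentioned above rests on the classical diagonal argument, which can be sketched in code: given any candidate decider, one constructs a program that does the opposite of whatever the decider predicts about it, so the decider must be wrong somewhere. The function names below are illustrative; no correct decider exists to plug in.

```python
def make_contrary(halts):
    """Build a program on which the candidate decider `halts` must err."""
    def contrary():
        if halts(contrary):
            while True:        # predicted to halt -> loop forever
                pass
        return "halted"        # predicted to loop -> halt immediately
    return contrary

# A (necessarily wrong) candidate decider claiming nothing ever halts:
def never_halts(program):
    return False

contrary = make_contrary(never_halts)
# never_halts predicts that `contrary` runs forever, yet it halts at once,
# refuting the candidate. The same construction defeats every candidate.
result = contrary()
```

Running this shows `result == "halted"` even though `never_halts(contrary)` is `False`; any other candidate decider is defeated by the same construction, which is why the halting problem is undecidable.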
However, this classification is too coarse in many respects. In Chapter 4 we considered a finer classification of undecidable problems in terms of reducibilities and degrees. We shall now discuss a finer classification of decidable problems in terms of their complexity. Two problems might both be decidable and yet one might be enormously more difficult to compute, which in practice might make this problem resemble an undecidable one. For example, if instances of reasonable size of one problem take only a few seconds to compute, whereas instances of comparable size of the other problem take millions of years (even if the best computers are available), it is clear that the latter problem should be considered intractable compared with the former one. Hence, having established the decidability of a particular problem, we should definitely study its complexity—that is, how difficult it is to settle specific instances of the problem.
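The contrast between a few seconds and millions of years can be made concrete with a back-of-the-envelope calculation. Assume, hypothetically, a machine executing 10^9 elementary steps per second, and compare an algorithm taking n^2 steps with one taking 2^n steps on an input of size n = 100:

```python
def seconds(steps, rate=10**9):
    # Wall-clock time for a step count on a hypothetical 10^9 steps/s machine.
    return steps / rate

SECONDS_PER_YEAR = 3.15e7

n = 100
quadratic = seconds(n**2)     # 10^4 steps: a tiny fraction of a second
exponential = seconds(2**n)   # 2^100 ~ 1.3e30 steps

print(f"n^2 at n={n}: {quadratic:.1e} seconds")
print(f"2^n at n={n}: {exponential / SECONDS_PER_YEAR:.1e} years")
```

The quadratic algorithm finishes in about 10 microseconds, while the exponential one needs on the order of 10^13 years, far exceeding the age of the universe; both problems are decidable, yet only the former is tractable in any practical sense.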
Having established the framework for the phased development of a system we now go on to look at the techniques that are available to you during those phases.
The purpose of this chapter is not to present all known techniques in an encyclopaedic way. Rather, it is to present sufficient information on a representative selection to allow you to make a first selection of a subset for your own project. As you read each précis, you can decide which of the techniques presented are not of interest and which might be.
In the subsequent phase-orientated chapters each technique is presented with further detail in the context of the phase or phases where it is of use. Thus techniques applicable to, say, the System Definition Phase can be read and compared. Should you still feel that a particular technique is or may be of use, you can refer for complete information to the source material listed in the bibliography.
Similarly, all of the descriptions of the techniques in later chapters are necessarily simplifications. Remember that they are given not so that you can learn how to use the technique – for that you must turn to the authoritative references and suppliers. The purpose is solely to give you sufficient flavour of each technique for you to judge whether further evaluation could be worthwhile.
For each candidate technique you should aim to adopt, adapt or reject it.
The system resulting from the Production Phase must now be tested against the Functional Specification to your client's satisfaction. There are two principal reasons for giving the System Acceptance process a phase of its own:
firstly, to emphasise the importance of careful preparation for it and proper orientation of the preceding phases towards it;
secondly, because possibly for the first time since System Definition and the production of the Functional Specification, your client will once more play a major role.
Formal acceptance of the system by your client – in other words, a signature – is as important as his formal acceptance of the Functional Specification. It records his agreement that you have carried out your commitment to build his system.
It is likely that, if you have a commercial contract with your client, payment of some portion of the contract price will be dependent on the system's successfully passing an agreed Acceptance Test.
Throughout the preceding phases we have emphasised the need to prepare for Acceptance well ahead of the date of handover. The earlier the Acceptance Test is defined, for instance, the earlier the system can be checked out against it by its implementers. Above all, however, acceptance will not always be simply a matter of running some tests on the system and walking away. You may well be involved in a major handover operation requiring the conversion of the user's organisation and operational procedures, cutover from an existing system, conversion of existing databases to new formats, periods of parallel running, training of staff and so on.
The cost of software development is now the dominating cost in computer systems. As hardware costs decline, improved efficiency of software production becomes the key target if further reductions in computing costs are to be achieved. Accordingly the Alvey Committee identified software engineering as one of the four underlying technologies for the UK national programme of cooperative research in information technology.
Coming as it does at the start of the Alvey programme, I welcome this book by Martyn Ould and Nick Birrell, for it presents a picture of software engineering techniques that are already in existence and can therefore be exploited now. If the techniques set out in the book are widely applied we can look forward to a very significant improvement in the efficiency of software production.