Having matured over the years, formal design verification methods, such as theorem proving, property and model checking, and equivalence checking, have found increasing application in industry. Canonical graph-based representations, such as binary decision diagrams (BDDs) [1], binary moment diagrams (BMDs) [2], and their variants, play an important role in the development of software tools for verification. While these techniques are quite mature at the structural level, the high-level verification models are only now being developed. The main difficulty is that such verification must span several levels of design abstraction. Verification of arithmetic designs is particularly difficult because of the disparity in the representations on the different design levels and the complexity of the logic involved.
This chapter addresses verification based on canonical data structures. It presents several canonical, graph-based representations that are used in formal verification, and, in particular, in equivalence checking of combinational designs specified at different levels of abstraction. These representations are commonly known as decision diagrams, even though not all of them are actually decision-based forms. They are graph-based structures whose nodes represent the variables and whose directed edges represent the result of the decomposition of the function with respect to the individual variables. Particular attention is given to arithmetic and word-level representations.
An important common feature of all these representations is canonicity, which is essential in combinational equivalence checking. A form is canonical if the representation of a function in that form is unique. Canonical graph-based representations make it possible to check whether two combinational functions are equivalent by checking whether their graph-based representations are isomorphic.
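To make the canonicity argument concrete, the following is a minimal Python sketch (not the API of any particular BDD package) of a reduced ordered BDD built with a unique table. Because equivalent functions reduce to the same node under a fixed variable order, the equivalence check collapses to comparing two node identifiers. The functions f1 and f2 and the three-variable order are invented for illustration.

# Minimal reduced ordered BDD (ROBDD) sketch: equivalent functions end up
# sharing the same node identifier, so equivalence checking is a single
# identity test. This is an illustrative toy, not any BDD package's API.

ORDER = ["a", "b", "c"]           # fixed variable order (required for canonicity)
unique = {}                       # unique table: (var, low, high) -> node id

def mk(var, low, high):
    """Return a node id, reusing nodes and skipping redundant tests."""
    if low == high:               # reduction rule 1: eliminate redundant tests
        return low
    key = (var, low, high)
    if key not in unique:         # reduction rule 2: share isomorphic subgraphs
        unique[key] = len(unique) + 2    # ids 0 and 1 are the terminals False/True
    return unique[key]

def build(f, i=0, env=None):
    """Shannon-expand a Python predicate f over ORDER into an ROBDD node id."""
    env = env or {}
    if i == len(ORDER):
        return 1 if f(env) else 0
    var = ORDER[i]
    low = build(f, i + 1, {**env, var: False})
    high = build(f, i + 1, {**env, var: True})
    return mk(var, low, high)

# Two syntactically different but logically equal functions:
f1 = lambda e: (e["a"] and e["b"]) or e["c"]
f2 = lambda e: not ((not e["a"] or not e["b"]) and not e["c"])

print(build(f1) == build(f2))     # True: equivalence reduces to an identity test

The two reduction rules in mk, eliminating redundant tests and sharing isomorphic subgraphs, are exactly what makes the form canonical for a fixed variable order.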
The aim of this chapter is to show how systems with a “persistent state” can be modelled. On completion of the chapter, the reader should be able to develop models of systems which contain persistent state components. The difference between this style and the functional modelling style used so far in this book will be highlighted by revisiting the explosives controller and the trusted gateway examples.
Introduction
Using formal modelling techniques, computing systems may be described at many different levels of abstraction. The models presented so far in the book have been set at a relatively high level of abstraction. This is reflected both in the data types used and in the way functionality is described through mathematical functions that take some data representing the system as input parameters and return a result which describes the system after the computation has been performed. In some cases, the function can be described without explicitly constructing its result (e.g. see Subsection 6.4.3).
This functional modelling style has its limitations. Few computing systems are actually implemented via pure functions. More often, they have variables that hold data which are modified by operations invoked by some outside user. These variables are persistent in the sense that they continue to hold data between operation invocations. If the purpose of a model is to document design decisions about the split between ordinary parameters to functions and persistent variables, it is necessary to use operations in VDM-SL. These operations take inputs and return results, but they also have some effect (often called a side-effect) on the persistent variables.
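The distinction between the functional style and operations over persistent state can be sketched outside VDM-SL as well. The following Python fragment is only an analogy, with invented names (ExpertRoster, grant_permission); the VDM-SL operation syntax itself is introduced later in the chapter.

# Illustrative contrast (plain Python, not VDM-SL): a pure, functional-style
# definition takes the system state as an input and returns a new state, while
# an "operation" has a side effect on persistent state held between invocations.

# Functional style: the state is an ordinary parameter and an ordinary result.
def grant_permission_fn(experts_on_duty, expert):
    return experts_on_duty | {expert}        # returns a new state, no side effect

# Operation style: the state persists inside the object across calls.
class ExpertRoster:
    def __init__(self):
        self.on_duty = set()                 # persistent state component

    def grant_permission(self, expert):      # operation with a side effect
        self.on_duty.add(expert)             # modifies the persistent variable
        return len(self.on_duty)             # ...and still returns a result

roster = ExpertRoster()
roster.grant_permission("Alice")
roster.grant_permission("Bob")
print(roster.on_duty)                        # {'Alice', 'Bob'} persists between calls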
The aim of this chapter is to show how relationships between data can be modelled as mappings. The mapping type constructor and operators in VDM-SL are introduced through an example from the nuclear industry. On completing this chapter, the reader should be confident in modelling and analysing systems involving mappings.
Introduction
Computing systems frequently centre on relationships between sets of values. For example, a database might link a set of customer identifiers to detailed information. Such relationships can often be modelled as mappings from elements of one set, known as the domain, to elements of the other set, known as the range. Mappings can be thought of as tables in which one can look up the domain element and read across to see the range element to which it is related. We will say that each domain element maps to the corresponding range element. Each line of the table, being a small part of the mapping, is called a maplet. Each domain element can have only one maplet in a mapping, so there is no ambiguity about which range element it points to. For example, the following table represents a mapping from names (strings of characters) to bank balances (integers).
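In Python terms, such a mapping is close to a dictionary. The sketch below uses invented names and balances (the book's own table is not reproduced here) purely to illustrate domain, range and maplets; the corresponding VDM-SL type would be written with the map type constructor introduced in this chapter.

# A mapping from names (strings) to bank balances (integers), written as a
# Python dict purely for illustration; roughly the VDM-SL type
# map seq of char to int.

balances = {
    "Smith": 2750,      # each entry is one maplet: domain element |-> range element
    "Jones": -123,
    "Patel": 0,
}

print(balances["Jones"])          # look up a domain element, read off the range element
print(set(balances))              # the domain: {'Smith', 'Jones', 'Patel'}
print(set(balances.values()))     # the range: {2750, -123, 0}

# A domain element can have only one maplet, so re-binding it replaces the old one:
balances["Jones"] = 50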
The goal of this chapter is to illustrate the practical applicability of the simulation-based validation concepts in the book by applying them to a design example. We will use both SystemVerilog [1] and Vera [2] as hardware-verification languages (HVLs) in which we will implement the entire validation framework for the design example. Simulation is the most widely used technique for verification of design models. The design to be verified is described in a hardware-description language (HDL) and is referred to as the design under verification (DUV). This provides an executable model or models of the DUV. These models could be developed at different levels of abstraction.
A high-level design specification is then analyzed to produce stimulus or input test vectors. The input test vectors are applied to the models. The inputs are propagated through the model by a simulator and finally the outputs are generated. A monitor is used to check the output of the DUV against expected outputs for each input test vector. It is constructed based on an interpretation of the expected design behavior from the specification. If there is any observed deviation from the expected output, a design error is considered to have been found, and debugging tools are used to trace back and diagnose the source of the problem. The problem usually arises from either an incorrectly modeled design or incorrectly modeled timing. Once the problem source is identified, it is fixed and the new model is simulated. In an ideal world, the model should be tested for all possible scenarios.
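The flow described above can be pictured with a deliberately trivial stand-in for the DUV. The sketch below is plain Python rather than SystemVerilog or Vera, and the saturating adder, the random stimulus and the function names are all invented for illustration; only the structure (stimulus generation, a DUV model, and a monitor comparing against a reference) mirrors the text.

# Schematic of the simulation-based validation flow: stimulus -> DUV -> monitor.

import random

def duv_model(a, b):
    """Design under verification: the executable model being checked."""
    return min(a + b, 255)                 # stand-in for the HDL model's behavior

def reference_model(a, b):
    """Monitor's interpretation of the specification (expected behavior)."""
    return a + b if a + b <= 255 else 255

def stimulus(n):
    """Input test vectors derived from the specification (here: random 8-bit pairs)."""
    for _ in range(n):
        yield random.randrange(256), random.randrange(256)

errors = 0
for a, b in stimulus(1000):
    got, expected = duv_model(a, b), reference_model(a, b)
    if got != expected:                    # observed deviation => design error found
        errors += 1
        print(f"mismatch: inputs=({a},{b}) got={got} expected={expected}")
print("simulation finished,", errors, "mismatches")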
This final chapter concerns the use of VDM in industrial practice. We aim to equip the reader to apply VDM technology cost-effectively in industrial software development processes, and to stay abreast of the state of the art in VDM and formal modelling. We aim to introduce the contribution that formal modelling can make to the tasks that are at the core of commercial development processes. We will illustrate this with several real industrial applications of VDM. Finally, we aim to provide information on the recent extensions to VDM and VDMTools, and how to gain the most from the VDM and formal methods communities.
Introduction
Modelling in a formal language is not a panacea for every problem in system and software development but, if used thoughtfully, it can yield significant benefits. The deciding factor in using VDM technology (the combination of VDM-SL and VDMTools) has to be cost-effectiveness. The cost of developing a system model during the early stages of design should be recouped when the improved understanding of system functionality reduces the reworking required to deal with defects that are uncovered during later activities such as testing and maintenance. In this chapter we discuss a range of software development activities, some of the problems that can arise during their execution and the ways in which the use of VDM can address some of these (Sections 13.2 to 13.4). Some hints on how to start using VDM are presented (Section 13.5). We illustrate the approach by describing recent industrial applications of VDM and its extended forms (Sections 13.6 and 13.7).
In this chapter, we aim to provide an awareness of the issues involved in constructing and analysing large-scale models. We will introduce modular structuring facilities in VDM-SL as an illustration of the features required in a structuring mechanism. On completion of this chapter, the reader should be able to exploit modular structuring and the potential for re-use in large models.
Introduction
In any course on modelling, one is naturally limited in the size of model which can be developed and presented. However, in any realistic application of modelling technology, questions of scale must be addressed. How can one manage the complexity of developing and analysing a model which contains many related parts?
Before answering this question, it is worth remembering that models should be kept as simple as possible while still capturing the aspects of the system which are felt to be relevant to the analysis. Careful use of abstraction means that many systems can be usefully modelled without encountering problems of scale. However, for some applications, particularly where the product is safety-related, a formally defined language such as VDM-SL must be applied to the production of a substantial model, and so the management of the model's size and complexity becomes a significant issue.
In programming languages, the management of complexity has led to the adoption of modular structuring mechanisms for programs, and this approach has also been applied to VDM-SL. All the models presented so far have been flat in the sense that they have consisted of a series of definitions in a single document.
Digital circuits are usually produced following a multi-step development process composed of several intermediate design phases. Each phase concludes with the delivery of a model that describes the digital circuit at a particular abstraction level and in increasing detail. The first design step usually produces the model at the highest abstraction level, which describes the general behavior of the circuit while leaving internal details out, whereas the last steps provide lower-level descriptions, with more detail and closer to the actual implementation of the circuit. Clearly, the lower the abstraction level, the higher the complexity of the resulting model.
In the following, we sketch some of the main characteristics of the most commonly adopted design abstraction levels, as well as the main features of the models delivered at each level. It is important to note that levels of abstraction higher or lower than those described here could also exist in a design cycle, but we focus only on the most commonly adopted ones.
• Architectural level
This is often the highest abstraction level: the circuit model delivered here is used as a reference since it contains few implementation details. The main goal at the architectural level is to provide a block architecture of the circuit implementing the basic functional specifications. The delivered model is usually exploited to evaluate the basic operations of the design and the interactions among the components within the system. At this design level, a complete simulatable model may be built in some high-level language; typically, these models do not contain timing information.
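As an illustration of what such an untimed, architectural-level model can look like, the following Python sketch connects two invented blocks through ordinary method calls; the behaviour of the design is captured, but there are no clocks, signals or timing.

# An untimed, architectural-level sketch: a few blocks exchanging data through
# plain method calls. Block names and behavior are invented for illustration.

class Dsp:
    """Processing block: its internal implementation is deliberately abstract."""
    def process(self, sample: int) -> int:
        return sample * 2          # placeholder for the real algorithm

class Memory:
    def __init__(self):
        self.data = []
    def store(self, value: int) -> None:
        self.data.append(value)

class System:
    """Block architecture: only the interactions among components are modeled."""
    def __init__(self):
        self.dsp, self.mem = Dsp(), Memory()
    def run(self, samples):
        for s in samples:
            self.mem.store(self.dsp.process(s))

sys_model = System()
sys_model.run([1, 2, 3])
print(sys_model.mem.data)          # [2, 4, 6] -- behavior only, no timing information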
The aim of this chapter is to introduce the use of logic for stating the properties of data and functions in system models. The logic used in VDM-SL models is introduced via a temperature monitor example. On reaching the end of the chapter, the reader should be able to state and analyse logical expressions in VDMTools Lite.
Introduction
An important advantage of building a model of a computing system is that it allows for analysis, uncovering misunderstandings and inconsistencies at an early stage in the development process. The discovery of a possible failure of the Expert To Page function in the previous chapter was the result of just such an analysis. The ability to reason about the types and functions in a model depends on having a logic (a language of logical expressions) in which to describe the properties of the system being modelled and in which to conduct arguments about whether those properties hold or not.
This chapter introduces the language of logical expressions used in VDM-SL, based on predicate logic. It begins by introducing the idea of a predicate, then examines the basic operators which allow logical expressions to be built up from simpler expressions. Finally, we examine the mechanisms for dealing with mis-application of operators and functions in the logic of VDM-SL.
The temperature monitor
The example running through this chapter continues the chemical plant theme. Suppose we are asked to develop the software for a temperature monitor for a reactor vessel in the plant. The monitor is connected to a temperature sensor inside the vessel from which it receives a reading (in degrees Celsius) every minute.
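To preview the kind of properties such a monitor invites, the following Python sketch states two predicates over a window of readings. The conditions themselves (a monotonic rise, a reading exceeding a limit) are invented for illustration; the chapter develops its own conditions in VDM-SL's logic.

# Predicates over monitor readings, sketched in Python rather than VDM-SL logic.

Readings = list[int]            # one temperature reading (deg C) per minute

def rising(r: Readings) -> bool:
    """Every reading is strictly higher than the one before it (a universal property)."""
    return all(r[i] < r[i + 1] for i in range(len(r) - 1))

def over_limit(r: Readings, limit: int) -> bool:
    """Some reading exceeds the given limit (an existential property)."""
    return any(x > limit for x in r)

window = [398, 402, 405, 407, 409]
print(rising(window))                                    # True
print(over_limit(window, 400))                           # True
print(rising(window) and not over_limit(window, 450))    # combining predicates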
Recursive structures are common in many applications, notably in computer language processing. They present particular modelling challenges and so we devote a chapter to them. The aim here is to show how recursive data structures such as trees and graphs are defined and used through recursive traversal. We do not introduce new VDM language concepts at this stage, but consolidate the reader's knowledge and experience by showing how VDM copes with this important class of system. We add further abstraction lessons by considering the executability of functions.
Recursive data structures: trees
Recursion arises in many significant computing applications. In Chapter 7 we introduced recursive functions as a means of traversing collections of values, and illustrated their use on sequences. However, recursion is also central to an understanding of other data structures, including trees and graphs. This chapter explores the modelling of such recursive structures further. We begin by examining tree structures and illustrate their use through abstract syntax trees – an area that underpins many applications in design and programming support environments, including VDMTools. We go on to examine more general graph structures, using an application from machine code optimisation to illustrate recursive traversal. The abstraction lesson in this chapter concerns the costs and benefits of executable models.
We begin by considering a common data structure: a tree. A tree is a collection of points (termed nodes) connected to each other by links (termed arcs).
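A minimal recursive definition of a tree, together with a traversal that follows the same recursive shape, can be sketched in Python as follows; the node type and the summing traversal are invented for illustration, whereas the chapter itself works with VDM-SL definitions.

# A minimal recursive tree: a node carries a value and a list of subtrees
# (the arcs are the parent-to-child links). The traversal mirrors the shape
# of the data type definition.

from dataclasses import dataclass, field

@dataclass
class Node:
    value: int
    children: list["Node"] = field(default_factory=list)

def total(t: Node) -> int:
    """Recursive traversal: combine the node's value with its subtrees' totals."""
    return t.value + sum(total(c) for c in t.children)

tree = Node(1, [Node(2), Node(3, [Node(4)])])
print(total(tree))        # 10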
Model-based verification has been the bedrock of electronic design automation. Over the past several years, system modeling has evolved to keep up with improvements in process technology fueled by Moore's law. Modeling has also evolved to keep up with the complexity of applications, resulting in various levels of abstraction. The design automation industry has evolved from transistor-level modeling to gate-level modeling and eventually to the register-transfer level (RTL). These models have been used for simulation-based verification, formal verification, and semi-formal verification.
With the advent of embedded systems, the software content in most modern designs is growing rapidly. The increasing software content, along with the size, complexity, and heterogeneity of modern systems, makes RTL simulation extremely slow for any reasonably sized system. This has made system verification the most serious obstacle to time to market.
The root of the problem is the signal-based communication modeling in RTL. In any large design there are hundreds of signals that change their values frequently during the execution of the RTL model. Every signal toggle causes the simulator to stop and re-evaluate the state of the system. Therefore, RTL simulation becomes painfully slow. To overcome this problem, designers are increasingly resorting to modeling such complex systems at higher levels of abstraction than RTL.
In this chapter, we present transaction-level models (TLMs) of embedded systems that replace the traditional signal toggling model of system communication with function calls, thereby increasing simulation speed. We discuss essential issues in TLM definition and explore different classifications as well as cases for TLMs. We will also provide an understanding of the basic building blocks of TLMs.
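The contrast between signal toggling and function-call communication can be sketched as follows. This is plain Python, not the SystemC/TLM API, and the bus, memory and event bookkeeping are invented for illustration; the point is only that one transaction-level call replaces many signal-level events.

# Signal-level flavor: every bit toggle is an event the simulator must evaluate.
def send_byte_signal_level(events, byte):
    for bit in range(8):
        events.append(("data_wire", (byte >> bit) & 1))   # one event per bit
    events.append(("valid", 1))
    events.append(("valid", 0))

# Transaction level: the whole transfer collapses into a single function call.
class Bus:
    def __init__(self, target):
        self.target = target
    def write(self, addr, payload):          # one call models the whole transaction
        self.target.receive(addr, payload)

class MemoryModel:
    def __init__(self):
        self.mem = {}
    def receive(self, addr, payload):
        self.mem[addr] = payload

events = []
send_byte_signal_level(events, 0xA5)
print(len(events), "simulator events for one byte at signal level")   # 10

bus = Bus(MemoryModel())
bus.write(0x1000, bytes([0xA5]))             # 1 function call at transaction level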
This chapter aims to introduce the reader to the most basic kinds of data value available to the modeller and to show how values can be manipulated through operators and functions. These are introduced using a traffic light kernel control example. On completing this chapter the reader should be able to recognise and use all the basic data types of VDM-SL.
Introduction
A functional model of a system is composed of definitions of types which represent the kinds of data values under consideration and definitions of functions which describe the computations performed on the data. In order to develop a formal model, we therefore require a means of defining types and values, and ways to construct logical expressions which state the properties of values. This chapter illustrates these features and introduces the basic types available in VDM-SL, using an example based on traffic light control. A data type (or simply type) in VDM-SL is a collection of values called the elements or members of the type. For example, the type of natural numbers consists of infinitely many elements, from zero upwards. To make use of a type, we will need the following (a short sketch after this list illustrates these ingredients):
• a symbol to represent the type, e.g. nat;
• a way of writing down the type's elements, e.g. 3, “John”;
• value operators to permit the construction of more sophisticated expressions that represent elements of the type, e.g. + to represent addition; and
• comparison operators, e.g. <, to allow expressions of elements of the type to be compared.
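The four ingredients above can be sketched for natural numbers in plain Python, in place of VDM-SL's own nat type and notation (and ignoring that Python's int also admits negative values); the names are invented for illustration.

Nat = int                          # 1. a symbol to represent the type

x: Nat = 3                         # 2. a way of writing down elements
y: Nat = 7

total: Nat = x + y                 # 3. value operators building new elements
product: Nat = x * y

print(total < product)             # 4. comparison operators over elements: True
print(x + y == 10)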
Boolean satisfiability (SAT) is a widely used modeling framework for solving combinatorial problems. It is also a well-known decision problem in theoretical computer science, being the first problem to be shown to be NP-complete [11]. Since SAT is NP-complete, all SAT algorithms require worst-case exponential time unless P=NP. However, modern SAT algorithms are extremely effective at coping with large search spaces by exploiting the problem's structure when it exists [2–4]. The performance improvements made to SAT solvers since the mid-1990s have motivated their application to a wide range of practical problems, from cross-talk noise prediction in integrated circuits [5] to termination analysis in term-rewrite systems [6]. In some applications, the use of SAT provides remarkable performance improvements. Examples include model checking of finite-state systems [7–9], design debugging [10], AI planning [11,12], and haplotype inference in bioinformatics [13]. Additional successful examples of practical applications of SAT include termination analysis in term-rewrite systems [6], knowledge compilation [4], software model checking [15,16], software testing [17], package management in software distributions [18], checking of pedigree consistency [19], verification of pipelined processors [20,21], symbolic trajectory evaluation [22], test-pattern generation in digital systems [23], design debugging and diagnosis [10], identification of functional dependencies in Boolean functions [24], technology mapping in logic synthesis [25], circuit-delay computation [26], and cross-talk noise prediction [5]. Even this list is incomplete, as the number of applications of SAT has continued to rise in recent years [18,19,24].
Besides practical applications, SAT has also influenced a number of related decision and optimization problems, which will be referred to as extensions of SAT. Most extensions of SAT either use the same algorithmic techniques as used in SAT, or use SAT as a core engine.
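As a reminder of what the underlying decision problem looks like, the following toy Python check tests a small CNF formula by enumeration. Real SAT solvers rely on conflict-driven clause learning, watched literals and restarts rather than enumeration; the formula and the brute-force search here are purely illustrative.

# A toy satisfiability check on a CNF formula. The formula is
# (x1 or not x2) and (x2 or x3) and (not x1 or not x3); a positive literal is
# written +v and a negative literal -v, with variables numbered 1..3.

from itertools import product

cnf = [[1, -2], [2, 3], [-1, -3]]
num_vars = 3

def satisfies(assignment, clauses):
    """assignment[v] is True/False for variable v (1-indexed)."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in clauses)

def brute_force_sat(clauses, n):
    """Enumerate all 2**n assignments; return a satisfying one, or None."""
    for values in product([False, True], repeat=n):
        assignment = dict(zip(range(1, n + 1), values))
        if satisfies(assignment, clauses):
            return assignment
    return None            # unsatisfiable

print(brute_force_sat(cnf, num_vars))   # {1: False, 2: False, 3: True}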