The aim of this chapter is to introduce VDMTools Lite, the development environment for VDM-SL models. This is done by providing a “hands-on” tour of the tool's functionality using the alarm example introduced in Chapter 2. This chapter should enable the reader to use VDMTools Lite for the exercises in the remainder of this book.
Introduction
Models in VDM are formal in the sense that their semantics are very precisely described. This formality makes it possible to analyse models in order to confirm or refute claims about them. Such an analysis often reveals gaps in the understanding of the system, allowing these to be resolved before an expensive commitment is made to program code. The process of analysing claims about systems modelled in this way is termed validation and is discussed in greater depth in Chapter 10.
Software tools play an important role in supporting validation. This book is accompanied by an educational version of the VDM-SL version of VDMTools, called VDMTools Lite, that provides most of the functionality of the commercial tool.
This chapter introduces VDMTools Lite as a preparation for the examples and exercises of later chapters. It takes the form of a tour through the facilities for performing syntax checking, type checking, integrity checking, testing and debugging of models, using the alarm example presented in Chapter 2. The reader is encouraged to use either VDMTools Lite or the full VDM-SL version of VDMTools for all the exercises in this and subsequent chapters.
VDMTools Lite exists for several operating system platforms including Windows, Linux and MacOS.
The aim of this chapter is to provide a motivation for studying the modelling of computing systems by discussing the challenges of developing correct software. On completion of this chapter, the reader should be aware of the main concepts to be presented in the book and know where to find the relevant material in the text.
Software
Software is pervasive, error-prone, expensive to develop and, as an engineering medium, extraordinarily seductive. Its seemingly infinite flexibility, increasing power and the absence of physical characteristics, such as weight, make it an ideal medium in which to express complex models which might not exist at all were it not for software. As a result, software is often developed for applications which are critical either to an enterprise's mission or to the quality of life of those with whom the system interacts.
Challenged by the variety and scale of software applications, the participants in the 1968 NATO Conference on Software Engineering foresaw a discipline of software development with a sound scientific basis [Naur&69]. There is little doubt that, over the last 40 years, enormous advances have been made in our ability to control software development. However, software projects continue to suffer from serious difficulties which can lead to the delivery of faulty goods that are over budget and behind schedule.
The rapid increase in processor power has naturally led to increasing demands being made on software and its developers. Software is almost always developed as part of a larger system involving computing hardware, special systems such as sensors and actuators, human-computer interfaces and human beings.
For developers of computer-based systems, capturing and understanding the complex functional requirements and behaviour of software components has come to represent a considerable challenge. This book aims to equip readers with skills and techniques which will help them to address this challenge. It does so by stressing the value of abstract system models which can be analysed and tested before an expensive commitment is made to a particular design strategy. The book enables the reader to understand the role and nature of abstract models as well as gain practical experience in their creation.
In order to permit machine-supported analysis, system models must be formulated in a well-defined notation. In this text, we use a formally defined language called VDM-SL (the Vienna Development Method Specification Language). The Vienna Development Method is a collection of techniques for developing computing systems from models expressed in the language. Since its origin in an industrial environment, VDM has become one of the most widely used of a class of techniques known as model-oriented formal methods. The language VDM-SL was recently standardised by the International Organization for Standardization (ISO). Although VDM-SL is used as a teaching medium in this text, the principles taught apply equally well to other model-based formal methods such as B, RAISE and Z.
In this book we take a pragmatic approach to the use of formal methods. We aim to illustrate the concepts and techniques used in VDM without overwhelming the reader with mathematics. Unlike most teaching texts on formal methods, this book does not treat formal refinement or formal proof. Instead it focuses on the construction of abstract and formal models for a range of computer systems.
Software engineers produce many descriptions: descriptions of the environment or domain in which the software of a desired computing system is to exist; descriptions of the requirements put on the software; and descriptions of the software design that implements the requirements. Thus the descriptions span the spectrum from application domain, via requirements and software architecture, program organisation and lower-level designs, to executable code. While its concerns may be general, software engineering is unique among engineering disciplines in that its primary products are descriptions that must eventually satisfy the laws of mathematical logic and metamathematics.
Other engineering disciplines have to handle a quantum leap into physical reality – the stuff of natural science. In software engineering there is a different quantum leap: that from description to execution. Software engineering is thus about structuring and relating descriptions.
Abstraction and modelling taken together are the keys to mastering the complexity of environments and systems. Formal specification is employed to express abstractions and to ensure affinity to real domains. Such specifications open up ways to establish the proper relation between domain and requirements models, as well as potentially verifying the links between software architecture, requirements models and the stages of design. This increases the chance of achieving a proper fit to the environment and to user expectations, and of a correct implementation.
Owing to advances in semiconductor technology, a large and complex system with a wide variety of functionality can now be integrated on a single chip. Such a chip is called a system-on-a-chip (SoC) or system LSI, since all of the components of an electronic system are built on a single chip. SoC designs are highly complicated and require many labour-intensive processes. As a result, it has become increasingly difficult to identify all of the design bugs in such a large and complex system before the chips are fabricated. In current designs, the verification needed to check whether or not a design is correct can take 80 percent or more of the overall design time. The development of verification techniques at each level of abstraction is therefore indispensable.
Logic simulation is a widely used technique for the verification of a design. It simulates the output values for given input patterns. However, because the quality of simulation results depends heavily on the given input patterns, design bugs may remain that cannot be identified during logic simulation. Since the number of required input patterns grows exponentially with the size of a design, it is clearly impossible to verify the overall design completely by logic simulation. To solve this problem, the development of formal verification techniques is essential. In formal verification, the specification and the design are translated into mathematical models. Formal verification techniques verify a design by proving its correctness with mathematical reasoning and can therefore examine the overall design exhaustively.
A model written in VDM-SL is a collection of the following:
modular structuring mechanisms;
data type definitions;
function definitions;
operation definitions;
value (constant) definitions; and
a state definition.
Each collection of definitions is prefixed by a keyword (e.g. types). Individual definitions are separated by semicolons. Such definitions may be structured inside modules.
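To give a feel for how these sections fit together, the following is a minimal sketch (not one of the book's examples) of a small model of bank accounts; all names here, such as Bank, Account, Amount, overdraft_limit, CanWithdraw and Withdraw, are invented purely for illustration.

  types
    Amount = nat;

    Account :: owner   : seq of char
               balance : Amount

  values
    -- a constant shared by functions and operations
    overdraft_limit : Amount = 100

  state Bank of
    accounts : map nat to Account
  init b == b = mk_Bank({|->})      -- initially no accounts are registered
  end

  functions
    -- a side-effect-free function over account values
    CanWithdraw : Account * Amount -> bool
    CanWithdraw(acc, amt) ==
      amt <= acc.balance + overdraft_limit

  operations
    -- an operation that modifies the state
    Withdraw : nat * Amount ==> ()
    Withdraw(id, amt) ==
      accounts(id) := mk_Account(accounts(id).owner,
                                 accounts(id).balance - amt)
    pre id in set dom accounts and CanWithdraw(accounts(id), amt)

Here CanWithdraw is a pure function while Withdraw is an operation that changes the state; this reflects the division between the functions and operations sections listed above.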
This appendix provides an overview of the constructs in the subset of VDM-SL treated in this book and concludes with a BNF grammar for that subset.
Identifiers
The different kinds of definitions which can be made in VDM-SL are named using identifiers. The naming conventions used in this book are as follows:
Functions, operations and types begin with an upper-case letter for each word or abbreviation of which the name is composed.
Constant values begin with lower-case characters and use underscores between the words or abbreviations of which they are composed.
Local identifiers are always sequences of lower-case characters, and usually rather short names are used.
However, this is only a convention and is not mandatory.
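As an illustration of these conventions (using invented names, not examples from the book), a fragment might read:

  types
    Message = seq of char                            -- type: upper-case initial letter

  values
    default_message : Message = "system started"     -- constant: lower case with underscores

  functions
    MessageLength : Message -> nat                   -- function: upper-case initial per word
    MessageLength(msg) ==                            -- local identifier: short and lower case
      len msg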
Type definitions
As in traditional programming languages, it is possible to define data types in VDM-SL and give them appropriate names. For example,
Amount = nat
This defines a data type named “Amount” and states that the values belonging to this type are natural numbers (nat is one of the basic types described below). One general point about the type system of VDM-SL worth mentioning at this point is that equality and inequality can be used between any values.
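Further type definitions can be built up in the same style; for instance (an illustrative sketch, not an example taken from the book), a union of quote types and a record (composite) type:

  Currency = <GBP> | <EUR> | <DKK>;     -- a union of quote types

  Price :: amount   : Amount
           currency : Currency          -- a record (composite) type

Since equality is defined between any values, an expression such as mk_Price(5, <GBP>) = mk_Price(5, <EUR>) is well formed and evaluates to false.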
Functional verification is the process of confirming that an implementation has preserved the intent of the design. The intent of the design might be initially captured in an architectural or micro-architectural specification using a natural language, while the implementation might be captured as an RTL model using a hardware description language. During the verification planning process, there are three fundamental issues that must be addressed: what functionality of the design must be checked (observability), how the design is to be checked (input scenarios and stimulus), and when the verification process is complete (which is often defined in terms of a functional or structural coverage model). Although input stimulus generation, coverage measurement, and output checking are tightly coupled conceptually, contemporary simulation test bench infrastructures generally separate these functions into loosely coupled verification components. This chapter discusses response checking, monitors, and assertions as techniques of specifying design intent in a form amenable to verification.
Identifying what to check
Prior to creating response checkers, monitors, or assertions, it is necessary to identify what must be checked, which is generally part of a project's verification planning process. Figure 3.1 illustrates an abstract view of a typical design flow. The flow begins with developing a natural-language requirements document, which we refer to as an architectural specification. Next, we create an architectural model to validate the algorithmic concepts. Once validated, the architectural specification is refined; this shifts the focus from an algorithmic view of the design to a performance and feature view required for implementation. We refer to this as the micro-architectural specification, which partitions the architecture into a number of functional blocks.
An important aspect of the analysis of a model is gaining confidence that the model is an accurate reflection of the informally stated requirements. This chapter aims to provide an awareness of the techniques through which such confidence can be gained. The idea of model validation is introduced: checks for internal consistency are discussed; techniques of visualisation, integration with legacy code, testing and proof are illustrated. On completion of this chapter, the reader should be equipped to choose appropriate validation techniques for a given modelling task.
Introduction
In previous chapters we have introduced a number of different models in VDM-SL. In several cases (e.g. the alarm example in Chapter 2) we started from an informal collection of requirements and built up a model in stages. But just how confident can one be that the formal model really describes the system the customer wanted? This problem arises continually in industry. When the imprecision of the customer requirement is replaced by the precision of a model, how can the modeller be sure that those areas of incompleteness and ambiguity in the original requirement have been resolved in a way that satisfies the customer? There is a further complication: requirement documents often state the client's intentions incorrectly. Such errors can only be resolved by somehow presenting a model of the system to the client in order to obtain feedback which may lead to modifying the model.
Vision-based tracking of an object using perspective projection inherently results in non-linear measurement equations in Cartesian coordinates. The underlying object kinematics can be modelled by a linear system. In this paper we introduce a measurement conversion technique that analytically transforms the non-linear measurement equations obtained from a stereo-vision system into a system of linear measurement equations. We then design a robust linear filter around the converted measurement system. The state estimation error of the proposed filter is bounded, and we provide a rigorous theoretical analysis of this result. The performance of the robust filter developed in this paper is demonstrated via computer simulation and via practical experimentation using a robotic manipulator as a target. The proposed filter is shown to outperform the extended Kalman filter (EKF).
In this paper, to diminish the discontinuity points arising in the infinity-norm velocity minimization scheme, a bi-criteria velocity minimization scheme is presented based on a new neural network solver, i.e., an LVI-based primal-dual neural network. Such a kinematic planning scheme for redundant manipulators can incorporate joint physical limits, such as joint limits and joint velocity limits, simultaneously. Moreover, the presented kinematic planning scheme can be reformulated as a quadratic programming (QP) problem. As a real-time QP solver, the LVI-based primal-dual neural network is developed with a simple piecewise-linear structure and high computational efficiency. Computer simulations based on a PUMA560 manipulator model are presented to illustrate the validity and advantages of such a bi-criteria velocity minimization neural planning scheme for redundant robot arms.
We have developed an underwater robotic fish using a unique three-layer electrostatic film motor. In the robotic fish, this motor actuates a flexible caudal fin to propel the robot via an elaborate power transmission system. In the present study, we describe the major disadvantages of the previous prototype of the robotic fish and the improvements made to it. In addition, we present experimental evaluations of the control parameters and locomotion performance of the robotic fish. These control parameters include the frequency and initial phase of the AC voltage, and the amplitude and period of frequency sweeping. A simple theoretical model of the power transmission system of the robotic fish is also analyzed to provide a possible explanation for the unique swimming control. By appropriately adjusting these control parameters, we achieve cruising, emerging, submerging, and turning of the robotic fish even though only the caudal fin is active. Finally, we show smooth human-operated turn-around motion similar to that seen in real fish. Based on these experimental results, we further clarify the relationships between the open-loop motor pattern and the motion parameters.
Towards Non-Being (Priest, 2005) gives a noneist account of the semantics of intentional operators and predicates. The semantics for intentional operators are modelled on those for the □ in normal modal logics. In this paper an alternative semantics, modelled on neighborhood semantics for □, is given and assessed.
Recently, the number of ontology matching techniques and systems has increased significantly, making the issue of their evaluation and comparison more pressing. One of the challenges of ontology matching evaluation lies in building large-scale evaluation datasets. In fact, the number of possible correspondences between two ontologies grows quadratically with the numbers of entities in these ontologies. This often makes the manual construction of evaluation datasets demanding to the point of being infeasible for large-scale matching tasks. In this paper, we present an ontology matching evaluation dataset composed of thousands of matching tasks, called TaxME2. It was built semi-automatically out of the Google, Yahoo, and Looksmart web directories. We evaluated TaxME2 by exploiting the results of almost two dozen state-of-the-art ontology matching systems. The experiments indicate that the dataset possesses the desired key properties, namely it is error-free, incremental, discriminative, monotonic, and hard for the state-of-the-art ontology matching systems.