In software development, we have to make choices and decisions, and these depend on obtaining answers to critical questions, such as the following:
How should an important decision be made when conflicting strategic goals and stakeholders’ desires or quality attributes must be considered?
How can stakeholders be assured that the decision has been made in a sound, rational and fair process that withstands the rigour of an aspect-oriented analysis and design, or a software product line, for example?
In software product line (SPL) development, the answers to these questions are critical, because they require dealing with the modelling and implementation of common and variable requirements that can be composed and interact in different ways. Furthermore, they also require decisions that can impact several products at the same time. For example, we may simply want to know which requirements are in conflict and which features are negatively affected – considering different configurations of the software product line – in order to choose the best architecture for designing and implementing the product line, and to be able to decide which mandatory or optional features should have implementation priority. Therefore, support is required to help software engineers make better-informed decisions, by offering them a systematic process for ranking a set of alternatives based on a set of criteria. In requirements engineering, for instance, it is useful to identify conflicting requirements, with respect to which negotiations must be carried out and trade-offs established (Rashid et al., 2003). A typical concrete use is to offer a ranking of non-functional requirements (NFRs) based on stakeholders' wishes. This helps to establish early trade-offs between requirements, hence providing support for negotiation and subsequent decision-making among stakeholders. As discussed in Moreira et al. (2005a), having performed a trade-off analysis on the requirements, we are better informed with respect to each important quality attribute the system should satisfy, before making any architectural choices.
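To make the idea of 'ranking a set of alternatives based on a set of criteria' concrete, here is a minimal sketch in Python of a weighted-scoring ranking. The candidate architectures, NFR names, stakeholder weights and scores are all hypothetical, and real multi-criteria approaches (e.g. AHP) are considerably more elaborate:

```python
# Illustrative sketch only: weighted-scoring ranking of alternatives
# against NFR criteria. All names and numbers below are hypothetical.

def rank_alternatives(scores, weights):
    """Rank alternatives by the weighted sum of their per-criterion scores.

    scores  : dict mapping alternative -> {criterion: score}
    weights : dict mapping criterion -> relative importance (sums to 1)
    """
    totals = {
        alt: sum(weights[c] * s for c, s in crit.items())
        for alt, crit in scores.items()
    }
    # Highest weighted total first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical stakeholder weights and NFR scores for two candidate
# product-line architectures.
weights = {"security": 0.5, "performance": 0.3, "usability": 0.2}
scores = {
    "layered": {"security": 8, "performance": 5, "usability": 7},
    "broker":  {"security": 6, "performance": 8, "usability": 6},
}
ranking = rank_alternatives(scores, weights)
print(ranking)  # highest-ranked architecture first
```

Such a ranking does not make the decision by itself, but it exposes the trade-offs so that stakeholders can negotiate over explicit numbers rather than intuitions.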
Requirements engineering in software product line engineering
Software product line engineering (SPLE) (Clements & Northrop, 2001) has been recognised as one of the foremost techniques for developing reusable and maintainable software within system families (Parnas, 2001a, 2001b). We focus on a feature-oriented form of SPLE, in which the key concern is to break the problem domain down into features: system properties or functionalities that are relevant to some stakeholders.
Domain and application engineering
Feature-oriented SPLE can be usefully broken down into two core activities: domain engineering and application engineering. The key task of domain engineering is to model the domain itself in order to lay the foundation for deriving individual products, which is the remit of application engineering. The work presented in this chapter belongs to the realm of domain engineering; we seek to aid the requirements engineer in analysing, understanding and modelling the domain by providing a framework for the automated construction of feature models from natural language requirements documents.
In the previous chapters of this book, it has been established that software product lines (SPLs) have become one of the most popular means of providing a flexible product portfolio while achieving a short time-to-market. By reusing overlapping functionality, the production time and development cost of families of products can be significantly reduced (Pohl et al., 2005). But this increased flexibility comes at a price, as software developers face a considerable increase in complexity when designing the software product line.
Where the development of traditional software systems already requires substantial amounts of information, the development of SPLs involves even larger quantities. As an SPL supports a range of products, detailed information on all these products is required for SPL engineering. In addition, information is required on how the variability among these products is to be supported, what the design of the SPL infrastructure will look like and how the SPL will be aligned with the market.
This chapter describes our approach for mapping the requirements processed by AMPLE techniques and tools, such as ArborCraft (Chapter 3), VML4RE (Chapter 5) and HAM (Chapter 5), to a product line architecture. In contrast to the implementation-related Chapter 6, which focuses on CaesarJ for implementing configurable software components, this chapter concentrates on a model-driven approach based on variability modelling, domain-specific languages (DSLs), architecture blueprints and templates, and libraries of artefacts (arbitrary software components, configuration and deployment data, etc.).
Model-driven engineering (MDE) is an approach that captures the key features of a system in models, and develops and refines these models during development until code is finally generated. Models are defined at different conceptual levels, and are combined and transformed from a higher level of abstraction to a more concrete one. By integrating MDE into software product line engineering (SPLE), solution space artefacts can be systematically derived from problem space concepts, leading to a higher degree of automation in application engineering and saving cost and time. Models abstract the problem and facilitate rigorous descriptions using terms and concepts that are familiar to people who work in the problem domain, rather than terms only familiar to IT experts. In particular, essential improvements can be achieved by using DSLs to represent the system design with the terminology and abstractions of the problem domain, which are easier for domain experts to understand.
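As a toy illustration of a model-to-text transformation in the MDE spirit, the following Python sketch turns a problem-space 'model' (a feature selection, in domain terms) into a concrete configuration artefact. The feature names and output format are invented for illustration and do not reflect any AMPLE tool or DSL:

```python
# Illustrative sketch only: a minimal model-to-text transformation.
# The 'model' and the generated artefact are hypothetical.

def generate_config(product_name, selected_features):
    """Transform an abstract product model (a feature selection)
    into a concrete, deployable configuration artefact."""
    lines = [f"# Generated configuration for {product_name}"]
    for feature, enabled in sorted(selected_features.items()):
        lines.append(f"{feature} = {'on' if enabled else 'off'}")
    return "\n".join(lines)

# Problem-space model: domain terms a product manager understands.
model = {"encryption": True, "offline_mode": False, "reporting": True}
print(generate_config("SalesApp", model))
```

Real MDE chains involve several such transformation steps, each lowering the level of abstraction, but the principle is the same: the solution-space artefact is derived systematically from the problem-space model rather than written by hand.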
Traceability practices should help stakeholders with the understanding, capturing, tracking and verification of software artefacts and their relationships. A proper realisation of traceability is a necessary system characteristic, as it supports software management, software evolution, verification and validation. It is fundamental for the definition of the results of many kinds of analysis of software models, such as change impact analysis, variability analysis and separation of concerns analysis.
In software product line engineering (SPLE), traceability is a key practice. It is necessary to support variability management and to keep the goals and the structure of the product line definition consistent, updated and valuable. Traceability information is rarely considered in an isolated way. It is captured, updated and analysed from multiple perspectives, such as domain engineering and application engineering.
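As a minimal sketch of how trace links support change impact analysis, the following Python fragment treats trace links as a directed graph and computes the artefacts transitively reachable from a changed one. The artefact names and link structure are hypothetical:

```python
# Illustrative sketch only: change impact analysis as graph reachability
# over trace links. All artefact names below are hypothetical.

from collections import deque

def impacted(trace_links, changed):
    """Return all artefacts transitively reachable from a changed artefact
    via trace links (i.e., potentially affected by the change)."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for target in trace_links.get(node, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

# Hypothetical links: requirement -> feature -> component -> test.
links = {
    "REQ-1": ["FEAT-secure-login"],
    "FEAT-secure-login": ["COMP-auth"],
    "COMP-auth": ["TEST-auth-suite"],
}
print(impacted(links, "REQ-1"))
```

In an SPL setting the same traversal can be run per product configuration, which is what makes trace information valuable across both domain and application engineering.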
The book currently in your hands touches on a wide range of topics in the area of software product line engineering and offers unique solutions to particular problems appearing throughout the development cycle. We show how to semi-automatically derive feature models from requirements documents, dive deeper into modelling variability with a domain-specific language tailored for this purpose, and propose methodologies for developing items in a product-driven as well as a solution-driven style. We also introduce aspects into core asset development, track changes and decisions in the development process and deal with potential conflicts and uncertainties. However, there is one thread that runs as a common theme through all chapters of this book: all the techniques and methodologies are centred on what we will call conventional software product line engineering. That means that a certain domain is analysed, and a number of components are produced, tested and later assembled to form actual products, much as in a design–develop–compile–assemble style. It is easy to imagine how software running on modern smartphones, for instance, is developed this way; other examples following this style are easy to find. However, the software landscape in which we live has changed a great deal in recent years. Software is no longer produced only by compiling source code, burning the final application onto a CD-ROM and delivering it to a customer. The Internet has opened the door to different styles of product delivery and consumption. Whole applications can be invoked by clicking a single link, and a plethora of web services stands ready to deliver a wide range of functionalities never seen before.
The practice of creating an application by consuming and composing services offered by different providers changes the style of application development and therefore also affects what we earlier called ‘traditional’ software product line engineering. For this reason, new challenges arise for SPLE that cannot be tackled by traditional solutions.
Many systems of quantified modal logic cannot be characterised by Kripke's well-known possible worlds semantic analysis. This book shows how they can be characterised by a more general 'admissible semantics', using models in which there is a restriction on which sets of worlds count as propositions. This requires a new interpretation of quantifiers that takes into account the admissibility of propositions. The author sheds new light on the celebrated Barcan Formula, whose role becomes that of legitimising the Kripkean interpretation of quantification. The theory is worked out for systems with quantifiers ranging over actual objects, and over all possibilia, and for logics with existence and identity predicates and definite descriptions. The final chapter develops a new admissible 'cover semantics' for propositional and quantified relevant logic, adapting ideas from the Kripke–Joyal semantics for intuitionistic logic in topos theory. This book is for mathematical or philosophical logicians, computer scientists and linguists.
Software product lines provide a systematic means of managing variability in a suite of products. They have many benefits but there are three major barriers that can prevent them from reaching their full potential. First, there is the challenge of scale: a large number of variants may exist in a product line context and the number of interrelationships and dependencies can rise exponentially. Second, variations tend to be systemic by nature in that they affect the whole architecture of the software product line. Third, software product lines often serve different business contexts, each with its own intricacies and complexities. The AMPLE (http://www.ample-project.net/) approach tackles these three challenges by combining advances in aspect-oriented software development and model-driven engineering. The full suite of methods and tools that constitute this approach are discussed in detail in this edited volume and illustrated using three real-world industrial case studies.
Plato was not present on the day that Socrates drank hemlock in the jail at Athens and died. Phædo, who was, later related that day's conversation to Echecrates in the presence of a gathering of Pythagorean philosophers at Phlius. Once again, Plato was not around to hear what was said. Yet he wrote a dialog, “Phædo,” dramatizing Phædo's retelling of the occasion of Socrates' final words and death. In it, Plato presents to us Phædo and Echecrates' conversation, though what these two actually said he didn't hear. In Plato's account of that conversation, Phædo describes to Echecrates Socrates' conversation with the Thebian Pythagoreans, Simmias and Cebes, though by his own account he only witnessed that conversation and refrained from contributing to it. Plato even has Phædo explain his absence: “Plato,” he tells Echecrates, “I believe, was ill.”
We look to Socrates' death from a distance. Not only by time, but by this doubly embedded narrative, we feel removed from the event. But this same distance draws us close to Socrates' thought. Neither Simmias nor Cebes understood Socrates' words as well as Phædo did by the time he was asked to repeat them. Even Phædo failed to notice crucial details that Plato points out. Had we overheard Socrates' conversation, we would not have understood it. We look to Socrates' death from a distance, but to understand Socrates, we don't need to access him—we need Plato.
Abstract. This paper discusses Tennenbaum's Theorem, which states that there are no recursive nonstandard models of Peano Arithmetic, in its original context of models of arithmetic. We focus on three separate areas: the historical background to the theorem; an understanding of the theorem and its relationship with the Gödel–Rosser Theorem; and extensions of Tennenbaum's theorem to diophantine problems in models of arithmetic, especially problems concerning which diophantine equations have roots in some model of a given theory of arithmetic.
§1. Some historical background. The theorem known as “Tennenbaum's Theorem” was given by Stanley Tennenbaum in a paper at the April 1959 meeting in Monterey, California, and published as a one-page abstract in the Notices of the American Mathematical Society [28]. It is easily stated as saying that there is no nonstandard recursive model of Peano Arithmetic, and is an attractive and rightly often-quoted result.
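In modern notation, the result can be stated as follows (a standard textbook formulation, not a quotation from Tennenbaum's abstract):

```latex
\textbf{Theorem (Tennenbaum, 1959).}
Let $\mathcal{M} = (M, \oplus, \otimes)$ be a countable nonstandard model of
Peano Arithmetic. Then $\mathcal{M}$ is not recursive: there is no bijection
between $M$ and $\mathbb{N}$ under which both $\oplus$ and $\otimes$ become
recursive functions on $\mathbb{N}$. In fact, neither operation alone can be
made recursive.
```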
This paper celebrates Tennenbaum's Theorem; we state the result fully and give a proof of it and other related results later. This introduction is in the main historical. The goals of the latter parts of this paper are: to set out the connections between Tennenbaum's Theorem for models of arithmetic and the Gödel–Rosser Theorem and recursively inseparable sets; and to investigate stronger versions of Tennenbaum's Theorem and their relationship to some diophantine problems in systems of arithmetic.
Tennenbaum's theorem was discovered in a period of foundational studies, associated particularly with Mostowski, when it still seemed conceivable that useful independence results for arithmetic could be achieved by a “hands-on” approach to building nonstandard models of arithmetic.
Conversation March 3, 1972. Husserl's philosophy is very different before 1909 from what it is after 1909. At this point he made a fundamental philosophical discovery, which changed his whole philosophical outlook and is even reflected in his style of writing. He describes this as a time of crisis in his life, both intellectual and personal. Both were resolved by his discovery. At this time he was working on phenomenological investigation of time.
There is a certain moment in the life of any real philosopher where he for the first time grasps directly the system of primitive terms and their relationships. This is what had happened to Husserl. Descartes, Schelling, Plato discuss it. Leibniz described it (the understanding or the system?) as being like the big dipper — it leads the ships. It was called understanding the absolute.
The analytic philosophers try to make concepts clear by defining them in terms of primitive terms. But they don't attempt to make the primitive terms clear. Moreover, they take the wrong primitive terms, such as “red”, etc., while the correct primitive terms would be “object”, “relation”, “well”, “good”, etc.
The understanding of the system of primitive terms and their relationships cannot be transferred from one person to another. The purpose of reading Husserl should be to use his experience to get to this understanding more quickly. (“Philosophy As Rigorous Science” is the first paper Husserl wrote after his discovery.)
Perhaps the best way would be to repeat his investigation of time. At one point there existed a 500-page manuscript on the investigation (mentioned in letters to Ingarden, with whom he wished to publish the manuscript).
Abstract. Finite set theory, here denoted ZFfin, is the theory obtained by replacing the axiom of infinity by its negation in the usual axiomatization of ZF (Zermelo-Fraenkel set theory). An ω-model of ZFfin is a model in which every set has at most finitely many elements (as viewed externally). Mancini and Zambella (2001) employed the Bernays-Rieger method of permutations to construct a recursive ω-model of ZFfin that is nonstandard (i.e., not isomorphic to the hereditarily finite sets Vω). In this paper we initiate the metamathematical investigation of ω-models of ZFfin. In particular, we present a new method for building ω-models of ZFfin that leads to a perspicuous construction of recursive nonstandard ω-models of ZFfin without the use of permutations. Furthermore, we show that every recursive model of ZFfin is an ω-model. The central theorem of the paper is the following:
Theorem A. For every graph (A, F), where F is a set of unordered pairs of A, there is an ω-model m of ZFfin whose universe contains A and which satisfies the following conditions:
(1) (A, F) is definable in m;
(2) Every element of m is definable in (m, a)_{a ∈ A};
(3) If (A, F) is pointwise definable, then so is m;
(4) Aut(m) ≅ Aut(A, F).
Theorem A enables us to build a variety of ω-models with special features, in particular:
Corollary 1. Every group can be realized as the automorphism group of an ω-model of ZFfin.
Corollary 2. For each infinite cardinal κ there are 2^κ rigid nonisomorphic ω-models of ZFfin of cardinality κ. […]
§1. Introduction. In this survey of the history of constructivism, more space has been devoted to early developments (up till ca. 1965) than to the work of the later decades. This is not only because most of the concepts and general insights emerged before 1965, but also for practical reasons: much of the work since 1965 is of too technical and complicated a nature to be described adequately within the limits of this article.
Constructivism is a point of view (or an attitude) concerning the methods and objects of mathematics which is normative: not only does it interpret existing mathematics according to certain principles, but it also rejects methods and results not conforming to such principles as unfounded or speculative (the rejection is not always absolute, but sometimes only a matter of degree: a decided preference for constructive concepts and methods). In this sense the various forms of constructivism are all ‘ideological’ in character.
Constructivism as a specific viewpoint emerges in the final quarter of the 19th century, and may be regarded as a reaction to the rapidly increasing use of highly abstract concepts and methods of proof in mathematics, a trend exemplified by the works of R. Dedekind and G. Cantor.
The mathematics before the last quarter of the 19th century is, from the viewpoint of today, in the main constructive, with the notable exception of geometry, where proof by contradiction was commonly accepted and widely employed.
The proof of the irrationality of √2 involves proving that there cannot be positive integers n and m such that n² = 2m². This can be proved with a simple number-theoretic argument: First we note that n must be even, whence m must also be even, and hence both are divisible by 2. Then we observe that this is a contradiction if we assume that n is chosen minimally. There is also a geometric proof known already to Euclid, but the proof given by Tennenbaum seems to be entirely new. It is as follows: In Picture 1 we have on the left-hand side two squares superimposed, one solid and one dashed. Let us assume that the area of the solid square is twice the area of the dashed square. Let us also assume that the side of each square is an integer and, moreover, that the side of the solid square is as small an integer as possible. On the right-hand side of Picture 1 we have added another copy of the dashed square to the lower left corner of the solid square, thereby giving rise to a new square in the middle and two small squares in the corners. The combined area of the two copies of the original dashed square is the same as the area of the original big solid square. In the superimposed picture the middle square gets covered by a dashed square twice, while the small corner squares are not covered by the dashed squares at all. Hence the area of the middle square must equal the combined area of the two corner squares.
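The contradiction delivered by the picture can be written out explicitly; this is a routine completion of the argument above, with n the side of the solid square and m the side of the dashed square:

```latex
% Middle square has side 2m - n; each corner square has side n - m.
(2m - n)^2 = 2(n - m)^2.
% Since n^2 = 2m^2 forces m < n < 2m, both 2m - n and n - m are positive
% integers with 2m - n < n, so (2m - n, n - m) is a smaller integer
% solution, contradicting the minimality of n.
```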
The work of Stanley Tennenbaum in set theory was centered on the investigation of Suslin's Hypothesis (SH), to which he made crucial contributions. In 1963 Tennenbaum established the relative consistency of ¬SH, and in 1965, together with Robert Solovay, the relative consistency of SH. In the formative period after Cohen's 1963 discovery of forcing when set theory was transmuting into a modern, sophisticated field of mathematics, this work on SH exhibited the power of forcing for elucidating a classical problem of mathematics and stimulated the development of new methods and areas of investigation.
§1 discusses the historical underpinnings of SH. §2 then describes Tennenbaum's consistency result for ¬SH and related subsequent work. §3 then turns to Tennenbaum's work with Solovay on SH and the succeeding work on iterated forcing and Martin's Axiom. To cast an amusing sidelight on the life and the times, I relate the following reminiscence of Gerald Sacks from this period, no doubt apocryphal: Tennenbaum let it be known that he had come into a great deal of money, $30,000,000 it was said, and started to borrow money against it. Gerald convinced himself that Tennenbaum seriously believed this, but nonetheless asked Simon Kochen about it. Kochen replied “Well, with Stan he might be one per cent right. But then, that's still $300,000.”
§1. Suslin's problem. At the end of the first volume of Fundamenta Mathematicae there appeared a list of problems with one attributed to Mikhail Suslin [1920], a problem that would come to be known as Suslin's Problem.
To the memory of our unforgettable friend Stanley Tennenbaum (1927-2005), Mathematician, Educator, Free Spirit.
In this first of a series of papers on ultrafinitistic themes, we offer a short history and a conceptual pre-history of ultrafinitism. While the ancient Greeks did not have a theory of the ultrafinite, they did have two words, murios and apeiron, that express an awareness of crucial and often underemphasized features of the ultrafinite, viz. feasibility, and transcendence of limits within a context. We trace the flowering of these insights in the work of Van Dantzig, Parikh, Nelson and others, concluding with a summary of requirements which we think a satisfactory general theory of the ultrafinite should satisfy.
First papers often tend to take on the character of manifestos, road maps, or both, and this one is no exception. It is the revised version of an invited conference talk, and was aimed at a general audience of philosophers, logicians, computer scientists, and mathematicians. It is therefore not meant to be a detailed investigation. Rather, some proposals are advanced, and questions raised, which will be explored in subsequent works of the series.
Our chief hope is that readers will find the overall flavor somewhat “Tennenbaumian”.
§1. Introduction: The radical wing of constructivism. In their Constructivism in Mathematics, A. Troelstra and D. van Dalen dedicate only a small section to Ultrafinitism (UF in the following). This is no accident: as they themselves explain therein, there is no consistent model theory for ultrafinitistic mathematics.