Mass customization has been identified as a competitive strategy by an increasing number of companies. Family-based product design is an efficient and effective means of realizing sufficient product variety while satisfying a range of customer demands, in support of mass customization. This paper presents a knowledge decision support approach to product family design evaluation and selection for the mass customization process. Here, product family design is viewed as a selection problem with the following stages: product family (design alternatives) generation, product family design evaluation, and selection for customization. The fundamental issues underlying product family design for mass customization are discussed. A knowledge support framework and its relevant technologies are then developed for module-based product family design for mass customization. A systematic fuzzy clustering and ranking model is proposed and discussed in detail. This model accommodates the imprecision inherent in decision making through fuzzy customer preference relations and uses fuzzy analysis techniques for evaluation and selection. A neural network technique is also adopted to adjust the membership functions and so refine the model. The focus of this paper is the development of a knowledge-intensive support scheme and a comprehensive, systematic fuzzy clustering and ranking methodology for product family design evaluation and selection. A case study and a scenario of knowledge support for power supply family evaluation, selection, and customization are provided for illustration.
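As an illustration of the kind of ranking such a model produces, the sketch below scores three hypothetical design alternatives from a fuzzy preference relation using an Orlovsky-style non-dominance degree. The matrix values and alternative names are invented, and the paper's full model (fuzzy clustering plus neural-network tuning of membership functions) goes well beyond this.

```python
# Illustrative sketch only: ranking product family design alternatives
# from a fuzzy preference relation. Values are hypothetical.
import numpy as np

# preference[i][j] in [0, 1]: degree to which alternative i is preferred to j.
preference = np.array([
    [0.0, 0.7, 0.6],   # family design A
    [0.3, 0.0, 0.4],   # family design B
    [0.4, 0.6, 0.0],   # family design C
])

# Orlovsky-style non-dominance: 1 minus the strongest strict preference
# that any rival holds against the alternative.
strict = np.maximum(preference - preference.T, 0.0)
non_dominance = 1.0 - strict.max(axis=0)

for name, score in zip("ABC", non_dominance):
    print(f"design {name}: non-dominance degree {score:.2f}")
# Ranks A (1.00) above C (0.80) above B (0.60) for these invented values.
```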
We define a constructive model for ${\Delta^0_2}$-maps, that is, maps recursively definable from a map deciding the halting problem. Our model refines an existing constructive interpretation for classical reasoning over one-quantifier formulas: it is compositional (Modus Ponens is interpreted as an application) and semantical (rather than translating classical proofs into intuitionistic ones, we define a mathematical structure intuitionistically validating excluded middle for one-quantifier formulas).
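For readers unfamiliar with the class, Shoenfield's limit lemma gives the standard characterization of such maps (a known fact, stated here for orientation rather than taken from the paper): a map is recursive in the halting problem exactly when it is computable in the limit.

```latex
% Shoenfield's limit lemma: f is recursive in the halting problem
% (i.e., f is a \Delta^0_2 map) iff some total recursive g computes
% it in the limit.
\[
  f \le_T \emptyset'
  \quad\Longleftrightarrow\quad
  \exists g\ \text{recursive}\;
  \forall n\ \exists s_0\ \forall s \ge s_0 \;\; g(n,s) = f(n),
  \qquad\text{i.e. } f(n) = \lim_{s\to\infty} g(n,s).
\]
```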
The use of modularity in the design of a new product, or the adoption of a product platform as the base for defining new solutions within a product family, offers a company the chance to meet diverse customer needs at low cost, thanks to economies of scale in all phases of the product's life cycle. At present, the concept of modularity in product design is becoming widely used in many industries, such as automobiles and consumer electronics. However, although modularity and mass customization have attracted the interest of industry and researchers alike, the greatest efforts have been focused on the theoretical side, while the related design support technologies have been only partially implemented. In this context, our intent is to develop highly reusable models that are able to reconfigure themselves on the basis of new functional requirements. The proposed approach is based on the definition of what we call self-configuring components and multiple-level functions. To describe the approach, a practical example related to the design of modules for woodworking machines is reported.
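A toy sketch of what a self-configuring component might look like in code is given below. Every name, variant, and requirement in it is invented for illustration (the chapter's multiple-level functions are not modelled); it shows only the basic idea of a module that selects its own configuration from a functional requirement.

```python
# Hypothetical sketch of a "self-configuring component": a module that
# picks its own variant from a table, driven by a functional requirement.
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    max_panel_width_mm: int
    spindle_power_kw: float

class CuttingModule:
    # Invented variant table for a woodworking-machine module.
    VARIANTS = [
        Variant("light", 1600, 4.0),
        Variant("standard", 2200, 5.5),
        Variant("heavy", 3200, 7.5),
    ]

    def configure(self, panel_width_mm: int) -> Variant:
        """Reconfigure: choose the smallest variant meeting the requirement."""
        for v in self.VARIANTS:
            if v.max_panel_width_mm >= panel_width_mm:
                return v
        raise ValueError("no variant satisfies the functional requirement")

print(CuttingModule().configure(2000).name)  # -> "standard"
```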
This paper presents general syntactic conditions ensuring the strong normalisation and the logical consistency of the Calculus of Algebraic Constructions, an extension of the Calculus of Constructions with functions and predicates defined by higher-order rewrite rules. On the one hand, the Calculus of Constructions is a powerful type system in which one can formalise the propositions and natural deduction proofs of higher-order logic. On the other hand, rewriting is a simple and powerful computation paradigm. The combination of the two allows, among other things, the development of formal proofs with a reduced size and more automation compared with more traditional proof assistants. The main novelty is to consider a general form of rewriting at the predicate-level that generalises the strong elimination of the Calculus of Inductive Constructions.
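A textbook-style illustration of predicate-level rewriting, in the spirit of what the abstract describes (though not necessarily an example from the paper itself), is the order on natural numbers defined as a Prop-valued function by rewrite rules, so that goals such as s 0 ≤ s (s 0) reduce by computation alone rather than by elimination principles:

```latex
% Example (reviewer's own, not quoted from the paper): a predicate
% defined by higher-order rewrite rules at the level of propositions.
\begin{align*}
  {\le} &: \mathrm{nat} \to \mathrm{nat} \to \mathrm{Prop}\\
  0 \le y &\longrightarrow \top\\
  \mathrm{s}\,x \le 0 &\longrightarrow \bot\\
  \mathrm{s}\,x \le \mathrm{s}\,y &\longrightarrow x \le y
\end{align*}
```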
In their quest to manage the complexity of offering greater product variety, firms in many industries are considering platform-based development of product families. Key in this approach is the sharing of components, modules, and other assets across a family of products. Current research indicates that companies are often choosing physical elements of the product architecture (i.e., components, modules, building blocks) for building platform-based product families. Other sources for platform potential are widely neglected. We argue that for complex products and systems with hierarchic product architectures and considerable freedom in design, a new platform type, the system layout, offers important commonality potential. This layout platform standardizes the arrangement of subsystems within the product family. This paper is based on three industry case studies, where a product family based on a common layout could be defined. In combination with segment-specific variety restrictions, this results in an effective, efficient, and flexible positioning of a company's products. The employment of layout platforms leads to substantial complexity reduction, and is the basis for competitive advantage, as it imposes a dominant design on a product family, improves its configurability, and supports effective market segmentation.
In an effort to improve customization for today's highly competitive global marketplace, many companies are utilizing product families and platform-based product development to increase variety, shorten lead times, and reduce costs. The key to a successful product family is the product platform from which it is derived either by adding, removing, or substituting one or more modules to the platform or by scaling the platform in one or more dimensions to target specific market niches. This nascent field of engineering design has matured rapidly in the past decade, and this paper provides a comprehensive review of the flurry of research activity that has occurred during that time to facilitate product family design and platform-based product development for mass customization. Techniques for identifying platform leveraging strategies within a product family are reviewed along with metrics for assessing the effectiveness of product platforms and product families. Special emphasis is placed on optimization approaches and artificial intelligence techniques to assist in the process of product family design and platform-based product development. Web-based systems for product platform customization are also discussed. Examples from both industry and academia are presented throughout the paper to highlight the benefits of product families and product platforms. The paper concludes with a discussion of potential areas of research to help bridge the gap between planning and managing families of products and designing and manufacturing them.
This chapter gives an estimate of the research value of word-for-word translation into a pidgin language, rather than into the full normal form of an output language.
Introduction
The basic problem in machine translation is that of multiple meaning, or polysemy. There are two lines of research that highlight this problem in that both set a low value on the information-carrying value of grammar and syntax, and a high one on the resolution of semantic ambiguity. These are:
1. matching the main content-bearing words and phrases with a semantic thesaurus that determines their meanings in context;
2. word-for-word matching translation into a pidgin language using a very large bilingual word-and-phrase dictionary.
This chapter examines the second.
The phrase ‘Mechanical Pidgin’ was first used by R. H. Richens to describe the output given at the beginning of Section 2 of this chapter (below), which, he said, was not English at all but a special language, with the vocabulary of English and a structure reminiscent of Chinese. Machine translation output is always a pidgin, whose characteristics per se are never investigated. Either the samples of this pidgin are post-edited into fuller English, or the nature of the output is explained away as low-level or rough machine translation, or some vague remark is made to the effect that pidgin machine translation is all right for most purposes.
To the question ‘What is a word?’ philosophers usually give, in succession (as the discussion proceeds), three replies:
‘Everybody knows what a word is.’
‘Nobody knows what a word is.’
‘From the point of view of logic and philosophy, it doesn't matter anyway what a word is, since the statement is what matters, not the word.’
In this paper I shall discuss these three reactions in turn, and dispute the last. Since it is part of my argument that the ways of thinking of several different disciplines must be correlated if we are to progress in our thinking as to what a word is, I shall try to exemplify as many differing contentions as possible by the use of the word ‘word’, since this word is a word which can be used in all senses of ‘word’, which many words cannot.
Two preliminary points about terminology need to be made clear. I am using the word ‘word’ here in the type sense as used by logicians, rather than in the token sense, as synonymous with ‘record of single occurrence of pattern of sound-waves issuing from the mouth’. Thus, when I write here ‘mouth’, ‘mouth’, ‘mouth’, I write only one word.
The second point is that I use in this paper, in different senses, the terms ‘Use’, ‘usage’ and ‘use’. The question as to how the words ‘usage’ and ‘use’ should be used is, as philosophers know, a very thorny one.
1. Current relativist conceptions of science depend widely, though vaguely, upon the insights of T. S. Kuhn (1962), and, in particular, upon his notion of a paradigm. This notion is being used by relativists to support the contention that, since scientific theory is paradigm-founded, and therefore context-based, there can be no one discernible process of scientific verification. However, as I have shown in an earlier paper (1970a), there is another, more exact conception of a Kuhnian paradigm to be considered: namely, that conception of it which says that it is either an analogically used artefact, or even sometimes an actual ‘crude analogy’, that is, an analogical figure of speech expressed in a string of words.
This alternative conception of paradigm, far from supporting a verification-deprived conception of science (which, for those of us philosophers who are also trying to do technological science, just seems a conception of science totally divorced from scientific reality) can, on the contrary, be used to enrich and amplify the most strictly verification-based philosophy of science that is known, namely the Braithwaitean conception of it as a verifiable hypothetico-deductive (H-D) system. For such a paradigm, even though, in unselfconscious scientific thinking, it is usually a crude and concrete conceptual structure, can yet be shown to yield a set of abstract attributes.
The purpose of this chapter is to present a philosophical model of real translation. ‘Translation’ is here used in its ordinary sense: in the sense, that is, in which we say that passages of Burke can be translated into Ciceronian Latin prose, or that the sentence ‘He shot the wrong woman’ is untranslatable into good French. The term ‘philosophical’, however, needs some explaining, since, so far as I know, no one has made a philosophical model of translation as yet. I shall call a model of translation ‘philosophical’ if it has the following characteristics:
It must not only throw some light on the problem of transformation within a language, but must deal also with the problem of reference to something. That is to say, it must relate the strings of language units in the various languages with which it deals to public and recognisable situations in everyday life. It is characteristic of philosophers that, unlike most linguists, they do not regard a text in language as self-contained.
It must deal in concepts, not only in words or terms. All philosophers believe in concepts, though they sometimes pretend not to.
It must face, and not evade, the problem of constructing a universal grammar, while yet recognising fully how greatly languages differ, and how peripheral is the whole problem of determining the nature of language.
The study of language, like the study of mathematical systems, has always been thought to be relevant to the study of forms of argument in science. Language as the scientist uses it, however, is assumed to be potentially interlingual, conceptual and classificatory. This fact makes current philosophical methods of studying language irrelevant to the philosophy of science.
An alternative method of analysing language is proposed. This is that we should take as a model for language the classification system of a great library. Such a classification system is described.
Classification systems of this kind, however, tend to break down because of the phenomena of profusion of meaning, extension of meaning and overlap of meaning in actual languages. The librarian finds that empirically based semantic aggregates (overlapping clusters of meanings) are forming within the system. These are defined as concepts. By taking these aggregates as units, the system can still be used to classify.
An outline sketch is given of a mathematical model of language, language being taken as a totality of semantic aggregates. Language, thus considered, forms a finite lattice. A procedure for retrieving information within the system is described.
The scientific procedures of phrase-coining, classifying and analogy-finding are described in terms of the model.
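As a toy rendering of this model, the sketch below treats semantic aggregates as finite sets of classification heads, with lattice meet and join as set intersection and union, and retrieval as containment. The vocabulary is invented for illustration; a real library classification scheme is of course vastly larger.

```python
# Toy version of the library-classification model of language:
# semantic aggregates as sets of heads, forming a (powerset) lattice.
AGGREGATES = {
    "PLANT":   frozenset({"life", "growth", "vegetation"}),
    "FARMING": frozenset({"growth", "vegetation", "earth", "labour"}),
    "EARTH":   frozenset({"earth", "matter", "world"}),
}

def meet(a, b):   # greatest common content of two aggregates
    return a & b

def join(a, b):   # smallest aggregate covering both
    return a | b

def retrieve(query_heads):
    """Retrieval: return aggregates containing every queried head."""
    q = frozenset(query_heads)
    return [name for name, heads in AGGREGATES.items() if q <= heads]

print(meet(AGGREGATES["PLANT"], AGGREGATES["FARMING"]))  # shared heads
print(retrieve({"earth"}))  # -> ['FARMING', 'EARTH']
```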
The point of relevance of the study of language to the philosophy of science
Two very general disciplines have always been thought especially relevant to our understanding of the nature of science.
Faced with the necessity of saying, in a finite space and in an extremely finite time, what I believe the thesaurus theory of language to be, I have decided on the following procedure.
First, I give, in logical and mathematical terms, what I believe to be the abstract outlines of the theory. This account may sound abstract, but it is currently being put to practical use. That is to say, with its help an actual thesaurus to be used for medium-scale mechanical translation (MT) tests, and consisting of specifications in terms of archeheads, heads and syntax markers, made upon words, is being constructed straight on to punched cards. The cards are multiply punched; a nuisance, but they have to be, since the thesaurus in question has 800 heads. There is also an engineering bottleneck about interpreting them: at present, if we wish to reproduce the pack, every reproduced card has to be written on by hand, which makes the reproduction an arduous business, and one that will become more and more arduous as the pack grows larger. If this interpreting difficulty can be overcome, however, we hope to be able to offer to reproduce this punched-card thesaurus mechanically, as we finish it, for any other MT group that is interested, so that, at last, repeatable, thesauric translations (or mistranslations) can be obtained.
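Purely as a reader's aid, here is a hypothetical rendering of the kind of specification each punched card carries. Every identifier below is invented, and the real pack encodes 800 heads, not the handful shown; the point is only that a word's entry pairs it with archeheads, heads and syntax markers.

```python
# Invented miniature of a punched-card thesaurus specification.
from dataclasses import dataclass, field

@dataclass
class ThesaurusEntry:
    word: str
    archeheads: set = field(default_factory=set)    # broadest classes
    heads: set = field(default_factory=set)         # one of ~800 heads
    syntax_markers: set = field(default_factory=set)

THESAURUS = {
    "plough": ThesaurusEntry(
        "plough",
        archeheads={"ACTION", "THING"},
        heads={"AGRICULTURE", "TOOL"},
        syntax_markers={"noun", "verb"},
    ),
}

# A mechanical, repeatable lookup replaces hand-interpreted cards:
# same pack, same specification, every time.
print(THESAURUS["plough"].heads)
```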
This chapter examines a first-stage translation from Latin into English with the aid of Roget's Thesaurus of a passage from Virgil's Georgics.
The essential feature of this program is the use of a thesaurus as an interlingua: the translation operations are carried out on a head language into which the input text is transformed and from which an output is obtained. The notion of ‘heads’ is taken from the concepts or topics under which Roget classified words in his thesaurus. These operations are of three kinds: semantic, syntactic and grammatical.
The general arrangement of the program is as follows:
1. Dictionary matching: the chunks of the input language are matched with the entries in a Latin interlingual dictionary, giving the raw material of the head language; this consists of heads representing the semantic, syntactic and grammatical elements of the input.
2. Operations on the semantic heads: these give a first-stage translation.
3. Operations on the syntactic heads: giving a syntactically complete, though unparsed, translation.
4. Operations on the grammatical heads: giving a parsed and correctly ordered output.
5. Cleaning-up operations: the output is ‘trimmed’ by, e.g., insertion of capital letters and removal of repetitions like ‘farmer-er’.
Only Stage 2 of the procedure is given in detail here.
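The overall arrangement can be pictured as a five-stage pipeline. The skeleton below uses invented function names and pass-through stubs, since only Stage 2 is specified in the chapter; it is a shape, not an implementation.

```python
# Skeleton of the five-stage arrangement; all names are invented,
# and every stage below Stage 1 is a pass-through stub.
def dictionary_match(chunks):        # Stage 1: chunks -> raw head language
    return [("chunk", c) for c in chunks]

def semantic_ops(raw):               # Stage 2: first-stage translation
    return raw

def syntactic_ops(sem):              # Stage 3: complete, though unparsed
    return sem

def grammatical_ops(syn):            # Stage 4: parsed and ordered
    return syn

def clean_up(out):                   # Stage 5: capitals, drop 'farmer-er' repeats
    return out

def translate(chunks):
    return clean_up(grammatical_ops(syntactic_ops(
        semantic_ops(dictionary_match(chunks)))))

print(translate(["AGRI-", "COL-", "-A"]))
```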
Information obtained from Stage 1
The Latin sentence to be translated was chunked as follows:
AGRI-COL-A IN-CURV-O TERR-AM DI-MOV-IT AR-ATRO
A number of these generated syntactic heads only. Those with semantic head entries are AGRI-, COL-, IN-, CURV-, TERR-, DI-, MOV- and AR-.
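An invented miniature of the Stage 1 dictionary match for this sentence might look as follows. The head names are Roget-style guesses, not the chapter's actual entries; chunks absent from the table stand for those generating syntactic heads only.

```python
# Toy Stage 1 lookup: chunk -> semantic heads (head names are guesses).
SEMANTIC = {
    "AGRI": {"AGRICULTURE"}, "COL": {"ABODE"},
    "IN": {"DIRECTION"},     "CURV": {"CURVATURE"},
    "TERR": {"LAND"},        "DI": {"DISJUNCTION"},
    "MOV": {"MOTION"},       "AR": {"AGRICULTURE"},
}

sentence = "AGRI-COL-A IN-CURV-O TERR-AM DI-MOV-IT AR-ATRO"
for word in sentence.split():
    for chunk in word.split("-"):
        heads = SEMANTIC.get(chunk)
        print(chunk, "->", heads if heads else "syntactic head only")
```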
Margaret Masterman was ahead of her time by some twenty years: many of her beliefs and proposals for language processing by computer have now become part of the common stock of ideas in the artificial intelligence (AI) and machine translation (MT) fields. She was never able to lay adequate claim to them because they were unacceptable when she published them, and so when they were written up later by her students or independently ‘discovered’ by others, there was no trace back to her, especially in these fields where little or nothing over ten years old is ever reread. Part of the problem, though, lay in herself: she wrote too well, which is always suspicious in technological areas. Again, she was a pupil of Wittgenstein, and a proper, if eccentric, part of the whole Cambridge analytical movement in philosophy, which meant that it was always easier and more elegant to dissect someone else's ideas than to set out one's own in a clear way. She therefore found her own critical articles being reprinted (e.g. chapter 11, below) but not the work she really cared about: her theories of language structure and processing.
The core of her beliefs about language processing was that it must reflect the coherence of language, its redundancy as a signal.
The purpose of the paper that I want here to present is to make a suggestion for computing semantic paragraph patterns.
I had thought that just putting forward this suggestion would involve putting forward a way of looking at language so different from that of everyone else present, either from the logical side or the linguistic side, that I would get bogged down in peripheral controversy to the extent of never getting to the point. I was going to start by saying, ‘Put on my tomb: “This is what she was trying for”.’ But it is not so.
I don't know what has happened, but I don't disagree with Yehoshua Bar-Hillel as much as I did.
And on the linguistic side I owe this whole colloquium an apology and put forward the excuse that I was ill. I ought to have mastered the work of Weinreich (1971). I am trying to. But it is not just that simple a matter to master a complex work in a discipline quite different from that which one ordinarily follows.
I may misinterpret, but it seems to me that the kind of suggestion I put forward in this paper could be construed as a crude way of doing the kind of thing Weinreich has asked for. But Yehoshua Bar-Hillel is actually very right when he wants to question all the time what real use the computer can be in this field. So don't be misled by the size of the output in this paper.