Conceptual design produces a number of functions the designed product is to fulfill, several solution principles (means) for each function, and multiple overall principle solutions (concepts). Besides concept synthesis, it is important to determine the (few) early solution properties that are of interest at the concept stage. Further activities are assessing the consequences of the chosen means and their instantiation, the effects of changes, and how decisions affect other elements. Using a quantitative functional representation can facilitate these tasks, but a balance is needed between product-dependent tools predicting many detailed properties, and product-independent, generally applicable tools with limited prediction capabilities. A balance between a closed, general set of predefined building blocks and extensibility by modeling application-specific, individual elements is also necessary. In this paper, a generally applicable conceptual design model is presented, which has been established by theoretical reasoning applied to a number of products. These products were the subjects of previous company-commissioned student projects. The resulting information model spans continuously from requirements to concepts and permits modeling desired functionality (functions), achieved functionality (means and their value choices), and explicit constraints (internal and external relations between parameters of requirements, functions, and means). To indicate its suitability in principle, the model has been implemented in an interactive, incremental prototype for computer support that permits modeling, storage, and reuse in a database. It can be concluded that the model permits explicit modeling of complex relations, automatic change propagation, and handling of many concept alternatives. Integrated, bidirectional, and continuous connections from requirements to concepts facilitate conceptual design, reuse, and documentation of the results, and allow changes to be made and their effects assessed easily. Incremental constraint networks are well established in, for example, configuration design and geometry modelers; the significance of this article is that it enables their use also for quantitative analysis of incomplete, evolving concepts in original design tasks that allow different principle solutions, and for various products of mechanical design.
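As a hypothetical illustration of the kind of incremental constraint network described here (the parameter names and the relation below are invented for this sketch and are not taken from the paper), a small parameter set with one relation recomputing a means parameter from requirement parameters could look as follows in Haskell:

import qualified Data.Map.Strict as M

-- Hypothetical sketch: parameters of requirements, functions and means are
-- named numeric values; a relation recomputes one target parameter from its
-- source parameters.
type Params = M.Map String Double

data Relation = Relation
  { target  :: String
  , sources :: [String]
  , compute :: [Double] -> Double
  }

-- Re-evaluate every relation whose sources are all known; repeat until the
-- parameter set no longer changes (incremental change propagation).
propagate :: [Relation] -> Params -> Params
propagate rels ps
  | ps' == ps = ps
  | otherwise = propagate rels ps'
  where
    ps' = foldl step ps rels
    step m r = case mapM (`M.lookup` m) (sources r) of
      Just xs -> M.insert (target r) (compute r xs) m
      Nothing -> m

-- Example: once a working pressure is chosen for a hydraulic means, the
-- required lifting force fixes the piston area; changing either input and
-- re-running 'propagate' updates the dependent parameter automatically.
example :: Params
example = propagate
  [ Relation "pistonArea" ["liftForce", "pressure"] (\[f, p] -> f / p) ]
  (M.fromList [("liftForce", 2.0e4), ("pressure", 1.0e7)])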
In engineering design, the end goal is the creation of an artifact, product, system, or process that fulfills some functional requirements at some desired level of performance. As such, knowledge of functionality is essential in a wide variety of tasks in engineering activities, including modeling, generation, modification, visualization, explanation, evaluation, diagnosis, and repair of these artifacts and processes. A formal representation of functionality is essential for supporting any of these activities on computers. The goal of Parts 1 and 2 of this Special Issue is to bring together the state of knowledge of representing functionality in engineering applications from both the engineering and the artificial intelligence (AI) research communities.
Design couples synthesis and analysis in iterative cycles, alternately generating solutions and evaluating their validity. The accuracy and depth of evaluation have increased markedly because of the availability of powerful simulation tools and the development of domain-specific knowledge bases. Efforts to extend the state of the art in evaluation have unfortunately been carried out in stovepipe fashion, depending on domain-specific views both of function and of what constitutes “good” design. Although synthesis as practiced by humans is an intentional process that centers on the notion of function, computational synthesis often eschews such intention for sheer permutation. Rather than combining synthesis and analysis to form an integrated design environment, current methods focus on comprehensive search for solutions within highly circumscribed subdomains of design. This paper presents an overview of the progress made in representing design function across abstraction levels proven useful to human designers. Through an example application in the domain of mechatronics, these representations are integrated across domains and throughout the design process.
By allowing the programmer to write code that can generate code at run-time, meta-programming offers a powerful approach to program construction. For instance, meta-programming can often be employed to enhance program efficiency and facilitate the construction of generic programs. However, meta-programming, especially in an untyped setting, is notoriously error-prone. In this paper, we aim at making meta-programming less error-prone by providing a type system to facilitate the construction of correct meta-programs. We first introduce some code constructors for constructing typeful code representation in which program variables are represented in terms of de Bruijn indices, and then formally demonstrate how such typeful code representation can be used to support meta-programming. With our approach, a particularly interesting feature is that code becomes a first-class value, which can be inspected as well as executed at run-time. The main contribution of the paper lies in the recognition and then the formalization of a novel approach to typed meta-programming that is practical, general and flexible.
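As an illustrative sketch only (the paper uses its own formalism; here Haskell GADTs stand in for it), a typeful code representation with de Bruijn-indexed variables, together with an evaluator that executes such code at run time, can be written as:

{-# LANGUAGE GADTs, DataKinds, TypeOperators #-}

-- Variables are de Bruijn indices carrying their type and typing context.
data Var ctx a where
  VZ :: Var (a ': ctx) a
  VS :: Var ctx a -> Var (b ': ctx) a

-- Typeful code representation: only well-typed object programs can be built.
data Exp ctx a where
  V   :: Var ctx a -> Exp ctx a
  Lam :: Exp (a ': ctx) b -> Exp ctx (a -> b)
  App :: Exp ctx (a -> b) -> Exp ctx a -> Exp ctx b
  Lit :: Int -> Exp ctx Int
  Add :: Exp ctx Int -> Exp ctx Int -> Exp ctx Int

-- Run-time environments matching the typing context.
data Env ctx where
  Nil  :: Env '[]
  Cons :: a -> Env ctx -> Env (a ': ctx)

lookupVar :: Var ctx a -> Env ctx -> a
lookupVar VZ     (Cons x _)  = x
lookupVar (VS v) (Cons _ xs) = lookupVar v xs

-- Code is a first-class value: it can be inspected (pattern-matched on)
-- and executed at run time by this type-preserving evaluator.
eval :: Env ctx -> Exp ctx a -> a
eval env (V v)     = lookupVar v env
eval env (Lam b)   = \x -> eval (Cons x env) b
eval env (App f x) = eval env f (eval env x)
eval _   (Lit n)   = n
eval env (Add x y) = eval env x + eval env y

-- Example: the code for \x -> x + 1, applied to 41 and run.
fortyTwo :: Int
fortyTwo = eval Nil (App (Lam (Add (V VZ) (Lit 1))) (Lit 41))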
We study isomorphisms of inductive types (that is, recursive types satisfying a condition of strict positivity) in an extensional simply typed $\lambda$-calculus with product and unit types. We first show that the calculus enjoys strong normalisation and confluence. Then we extend it with new conversion rules ensuring that all inductive representations of the product and unit types are isomorphic, and such that the extended reduction remains convergent. Finally, we define the notion of a faithful copy of an inductive type and a corresponding conversion relation that also preserves the good properties of the calculus.
The study of isomorphisms of types has, in the main, been carried out in an intuitionistic setting. We extend some of this work to classical logic for both call-by-name and call-by-value computations by means of polarised linear logic and game semantics. This leads to equational characterisations of these isomorphisms for all the propositional connectives.
Type isomorphisms are pairs of functions $f:A\rightarrow B$ and $g:B\rightarrow A$ that are mutually inverse with respect to convertibility. They can be used in the context of type checking to perform type checking modulo type isomorphisms, by adopting the rule $$\frac{a:A}{a:B}$$ whenever $f:A\rightarrow B$ and $g:B\rightarrow A$ have been declared as type isomorphisms. Type isomorphisms may be viewed as a special instance of coercions that provide embeddings of one type into another. Indeed, type systems for coercive subtyping feature a rule $$\frac{a:A}{a:B}$$ whenever there exists a coercion $f:A\rightarrow B$. By declaring $f$ and $g$ as coercions for every type isomorphism $f:A\rightarrow B$ and $g:B\rightarrow A$, one can simulate type-checking modulo type isomorphisms. However, the proposed encoding relies on the possibility of declaring coercions $f:A\rightarrow B$ and $g:B\rightarrow A$ simultaneously. Such coercions, which we call back-and-forth coercions, are only allowed provided $f$ and $g$ are mutually inverse. In principle, type isomorphisms, viewed as back-and-forth coercions, could be used in the context of proof assistants based on dependent type theory in order to relate equivalent representations of mathematical notions, for example, the polar and cartesian representations of complex numbers. However, the coercions that map one representation into another are not mutually inverse because of the intensional nature of dependent type theories. Consequently, the standard concept of type isomorphisms has limited applicability in the context of proof assistants.
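For instance (this concrete illustration is ours, not taken from the paper), the back-and-forth coercions between the polar and cartesian representations of a non-zero complex number can be written as
$$f(r,\theta) = (r\cos\theta,\; r\sin\theta), \qquad g(x,y) = \left(\sqrt{x^{2}+y^{2}},\; \operatorname{atan2}(y,x)\right),$$
but $g(f(r,\theta))$ returns the angle normalised to a canonical interval, so $g \circ f$ is not definitionally the identity: intensionally, $f$ and $g$ fail to be mutually inverse even though they are inverse up to the intended equivalence of representations.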
In order to circumvent this problem, we develop a computational interpretation of implicit coercions that allows for the definition of back-and-forth coercions without requiring them to be mutually inverse. We illustrate the usefulness of our approach in a number of formal developments that require us to navigate between different representations of mathematical objects or structures. We also discuss important meta-theoretical properties of our interpretation.
We develop a matching algorithm for an equational theory with multiplication, exponentiation and a unit element. The algorithm is proved consistent, complete and minimal using techniques based on initial algebras.
The first-order isomorphism problem is to decide whether two non-recursive types using product- and function-type constructors are isomorphic under the axioms of commutative and associative products, and currying and distributivity of functions over products. We show that this problem can be solved in $O(n \log^2 n)$ time and $O(n)$ space, where $n$ is the input size. This result improves upon the $O(n^2 \log n)$ time and $O(n^2)$ space bounds of the best previous algorithm. We also describe an $O(n)$ time algorithm for the linear isomorphism problem, which does not include the distributive axiom, thereby improving upon the $O(n \log n)$ time of the best previous algorithm for this problem.
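Written out in the standard formulation, the axioms in question are (with $\times$ for product types and $\rightarrow$ for function types)
$$A \times B = B \times A, \qquad (A \times B) \times C = A \times (B \times C),$$
$$(A \times B) \rightarrow C = A \rightarrow (B \rightarrow C), \qquad A \rightarrow (B \times C) = (A \rightarrow B) \times (A \rightarrow C);$$
the linear isomorphism problem omits the last (distributive) axiom.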
It is interesting to note that the very first papers related to isomorphism of types were written before the notion itself appeared. Their subjects were the study of equality of terms defined on numbers, the isomorphism of objects in certain categories and the invertibility of $\lambda$-terms, but not the isomorphism of types ‘as such’. One may cite the so-called ‘Tarski High School Algebra Problem’: whether all identities between terms built from $+, \times, \uparrow$, variables and constants are derivable from basic ‘high school equalities’, like $(xy)^z=x^zy^z$. The earliest publications related to this problem date from the 1940s, cf. Birkhoff (1940) – an extensive bibliography may be found in Burris and Yeats (2002).
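For reference (the customary list associated with the high school algebra problem, not reproduced from this paper), the basic ‘high school equalities’ over $+$, $\times$ and $\uparrow$ are usually given as
$$x + y = y + x,\quad (x+y)+z = x+(y+z),\quad x \cdot 1 = x,\quad x \cdot y = y \cdot x,\quad (x \cdot y) \cdot z = x \cdot (y \cdot z),$$
$$x \cdot (y + z) = x \cdot y + x \cdot z,\quad 1^{x} = 1,\quad x^{1} = x,\quad x^{y+z} = x^{y} \cdot x^{z},\quad (x \cdot y)^{z} = x^{z} \cdot y^{z},\quad (x^{y})^{z} = x^{y \cdot z}.$$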
We consider shifted equality sets of the form $EG(a, g_1, g_2) = \{\omega \mid g_1(\omega) = a\,g_2(\omega)\}$, where $g_1$ and $g_2$ are nonerasing morphisms and $a$ is a letter. We are interested in the family consisting of the languages $h(EG(J))$, where $h$ is a coding and $EG(J)$ is a shifted equality set. We prove several closure properties for this family. Moreover, we show that every recursively enumerable language $L \subseteq A^*$ is a projection of a shifted equality set, that is, $L = \pi_A(EG(a, g_1, g_2))$ for some (nonerasing) morphisms $g_1$ and $g_2$ and a letter $a$, where $\pi_A$ deletes the letters not in $A$. Then we deduce that recursively enumerable star languages coincide with the projections of equality sets.
We investigate the structure of “worst-case” quasi-reduced ordered binary decision diagrams (QROBDDs) and of the Boolean functions whose truth tables they represent: we suggest different ways to count and enumerate them. We then introduce a notion of complexity that leads to the concept of “hard” Boolean functions, that is, functions whose QROBDDs are “worst-case” ones. Finally, we exhibit the relation between hard functions and the Storage Access function (also known as the Multiplexer).
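The Storage Access (multiplexer) function referred to here can be stated as follows (standard definition, not specific to this paper): on $n = k + 2^{k}$ variables,
$$SA_n(a_{k-1},\dots,a_0,x_0,\dots,x_{2^{k}-1}) = x_{\langle a \rangle},$$
where $\langle a \rangle$ is the integer whose binary representation is $a_{k-1}\dots a_0$; the $k$ address bits select which of the $2^{k}$ data bits is returned.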
The paper presents an elementary approach for the calculation of the entropy of a class of languages. This approach is based on the consideration of roots of a real polynomial and is also suitable for calculating the Bernoulli measure. The class of languages we consider here is a generalisation of the Łukasiewicz language.
In this article, we exploit the reducibility of a polynomial in one variable to compute efficiently the ideal of algebraic relations among its roots.
We study deterministic one-way communication complexity of functions with Hankel communication matrices. Some structural properties of such matrices are established and applied to the one-way two-party communication complexity of symmetric Boolean functions. It is shown that the number of required communication bits does not depend on the communication direction, provided that neither direction needs maximum complexity. Moreover, in order to obtain an optimal protocol, it is in any case sufficient to consider only the communication direction from the party with the shorter input to the other party. These facts do not hold for arbitrary Boolean functions in general. Next, gaps between one-way and two-way communication complexity for symmetric Boolean functions are discussed. Finally, we give some generalizations to the case of multiple parties.
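The connection between the two notions is direct: if $f$ is a symmetric Boolean function on $n$ variables and the two parties hold $k$ and $n-k$ input bits respectively, then $f(x,y)$ depends only on the total number of ones $|x| + |y|$. Grouping inputs by Hamming weight (identical rows and columns collapse) therefore yields a communication matrix of the form
$$M_{i,j} = h(i+j), \qquad 0 \le i \le k,\; 0 \le j \le n-k,$$
which is constant along anti-diagonals, i.e. a Hankel matrix.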
In previous work (Gough and Way 2004), we showed that our Example-Based Machine Translation (EBMT) system improved with respect to both coverage and quality when seeded with increasing amounts of training data, so that it significantly outperformed the on-line MT system Logomedia according to a wide variety of automatic evaluation metrics. While it is perhaps unsurprising that system performance is correlated with the amount of training data, we address in this paper the question of whether a large-scale, robust EBMT system such as ours can outperform a Statistical Machine Translation (SMT) system. We obtained a large English-French translation memory from Sun Microsystems from which we randomly extracted a near 4K test set. The remaining data was split into three training sets, of roughly 50K, 100K and 200K sentence-pairs in order to measure the effect of increasing the size of the training data on the performance of the two systems. Our main observation is that contrary to perceived wisdom in the field, there appears to be little substance to the claim that SMT systems are guaranteed to outperform EBMT systems when confronted with ‘enough’ training data. Our tests on a 4.8 million word bitext indicate that while SMT appears to outperform our system for French-English on a number of metrics, for English-French, on all but one automatic evaluation metric, the performance of our EBMT system is superior to the baseline SMT model.
This paper presents a very simple and effective approach to using parallel corpora for automatic bilingual lexicon acquisition. The approach, which uses the Random Indexing vector space methodology, is based on finding correlations between terms based on their distributional characteristics. The approach requires a minimum of preprocessing and linguistic knowledge, and is efficient, fast and scalable. In this paper, we explain how our approach differs from traditional cooccurrence-based word alignment algorithms, and we demonstrate how to extract bilingual lexica using the Random Indexing approach applied to aligned parallel data. The acquired lexica are evaluated by comparing them to manually compiled gold standards, and we report overlap of around 60%. We also discuss methodological problems with evaluating lexical resources of this kind.
Parallel texts have become a vital element for natural language processing. We present a panorama of current research activities related to parallel texts, and offer some thoughts about the future of this rich field of investigation.