Can you tell the difference between talking to a human and talking to a machine? Is it possible to create a machine that can converse like a human? What is it, in fact, that even makes us human? Turing's Imitation Game, commonly known as the Turing Test, is fundamental to the science of artificial intelligence. Involving an interrogator conversing with hidden identities, both human and machine, the test strikes at the heart of any questions about the capacity of machines to behave as humans. While this subject area has shifted dramatically in the last few years, this book offers an up-to-date assessment of Turing's Imitation Game, its history, context, and implications, all illustrated with practical Turing tests. The contemporary relevance of this topic and the strong emphasis on example transcripts make this book an ideal companion for undergraduate courses in artificial intelligence, engineering, or computer science.
We survey the use and effect of decomposition-based techniques in qualitative spatial and temporal constraint-based reasoning, and clarify the notions of a tree decomposition, a chordal graph, and a partitioning graph, and their relation to a particular constraint property that has been extensively used in the literature, namely, patchwork. As a consequence, we prove that a recently proposed decomposition-based approach, presented in the study by Nikolaou and Koubarakis for checking the satisfiability of qualitative spatial constraint networks, lacks soundness. Therefore, the approach becomes quite controversial, as it does not seem to offer any technical advance at all, while the results of an experimental evaluation of it in a following work, presented in the study by Sioutis, become questionable. Finally, we present a particular tree decomposition that is based on the biconnected components of the constraint graph of a given large network, and show that it allows for cost-free utilization of parallelism for a qualitative constraint language that has patchwork for satisfiable atomic networks.
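To make the biconnected-component idea concrete, the following is a small, hedged sketch (not the authors' construction): the bags of the decomposition are the biconnected components of a connected constraint graph, and bags that share an articulation point are linked, which yields a tree decomposition whose separators are single vertices. The example graph and the use of networkx are illustrative assumptions.

```python
# Sketch: tree decomposition of a connected constraint graph whose bags are its
# biconnected components; bags sharing an articulation point are chained.
import networkx as nx

def biconnected_tree_decomposition(g: nx.Graph) -> nx.Graph:
    bags = [frozenset(c) for c in nx.biconnected_components(g)]
    tree = nx.Graph()
    tree.add_nodes_from(bags)
    # For each articulation point, chain the bags that contain it.
    for v in nx.articulation_points(g):
        touching = [b for b in bags if v in b]
        for a, b in zip(touching, touching[1:]):
            tree.add_edge(a, b)
    return tree

if __name__ == "__main__":
    # Hypothetical constraint graph: two triangles joined by a bridge-like path.
    g = nx.Graph([(1, 2), (2, 3), (1, 3), (3, 4), (4, 5), (5, 6), (4, 6)])
    for bag in biconnected_tree_decomposition(g).nodes:
        print(sorted(bag))
```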
Latency hinders a mobile robot teleoperator's ability to perform remote tasks, yet this effect is not well modeled. This paper models teleoperator steering behavior as a PD controller acting on projected lateral displacement, tuned to reflect user performance in a 31-subject user study under constant and variable latency (mean latencies between 0 and 750 ms). Additionally, we determined that operator performance under variable latency can be mapped to the expected performance under an equivalent constant latency. We then tested additional latency distributions in simulation and demonstrated equivalent steering performance across several different latency distributions.
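As a rough illustration of the modeling idea (not the paper's tuned model), the sketch below applies a discrete PD controller to a projected lateral displacement signal that is delayed before the controller sees it, emulating teleoperation latency. The gains, time step, and 500 ms latency are invented values.

```python
# Sketch: PD steering on projected lateral displacement with a delayed observation.
from collections import deque

KP, KD, DT = 1.2, 0.4, 0.1           # illustrative gains and 0.1 s control step
LATENCY_STEPS = int(0.5 / DT)        # 500 ms of latency expressed in control steps

def steering_commands(lateral_errors):
    """Map a sequence of projected lateral displacements to steering commands."""
    buffer = deque([0.0] * LATENCY_STEPS)    # observations not yet seen by the operator
    prev, commands = 0.0, []
    for e in lateral_errors:
        buffer.append(e)
        delayed = buffer.popleft()           # operator reacts to a stale measurement
        commands.append(KP * delayed + KD * (delayed - prev) / DT)
        prev = delayed
    return commands

print(steering_commands([0.0, 0.2, 0.4, 0.5, 0.4, 0.2, 0.0]))
```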
We consider the multi-armed bandit problem under a cost constraint. Successive samples from each population are i.i.d. with unknown distribution, and each sample incurs a known population-dependent cost. The objective is to design an adaptive sampling policy that maximizes the expected sum of n samples while the average cost does not exceed a given bound sample-path-wise. We establish an asymptotic lower bound on the regret of feasible uniformly fast convergent policies and construct a class of policies that achieve this bound. We also provide their explicit form under Normal distributions with unknown means and known variances.
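The following is an illustrative sketch of the setting, not the authors' policy construction: a UCB-style rule that only considers arms whose selection keeps the running average cost within the budget and then plays the feasible arm with the largest index. Rewards are drawn from Normal distributions with unit variance; all numbers are made up.

```python
# Sketch: budget-constrained UCB-style sampling with known per-arm costs.
import math
import random

def budgeted_ucb(means, costs, budget, horizon):
    k = len(means)
    pulls, sums, total_cost, reward = [0] * k, [0.0] * k, 0.0, 0.0
    for t in range(1, horizon + 1):
        # Arms whose cost keeps the average within budget; fall back to the cheapest arm.
        feasible = [i for i in range(k) if (total_cost + costs[i]) / t <= budget] \
                   or [min(range(k), key=lambda i: costs[i])]
        def index(i):
            if pulls[i] == 0:
                return float("inf")
            return sums[i] / pulls[i] + math.sqrt(2 * math.log(t) / pulls[i])
        arm = max(feasible, key=index)
        x = random.gauss(means[arm], 1.0)
        pulls[arm] += 1
        sums[arm] += x
        total_cost += costs[arm]
        reward += x
    return reward, total_cost / horizon

print(budgeted_ucb(means=[0.5, 0.9], costs=[1.0, 2.0], budget=1.5, horizon=10000))
```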
Optimization is an important step in the design and development of a planar parallel manipulator. For optimization processes, workspace analysis is a crucial and preliminary objective. Generally, the workspace analysis for such manipulators is carried out using a non-dimensional approach. For planar parallel manipulators of two degrees of freedom (2-DOF), a non-dimensional workspace analysis is very advantageous. However, it becomes very difficult in the case of 3-DOF and higher DOF manipulators because of the complex shape of the workspace. In this study, the workspace shape is classified as a function of the geometric parameters, and the closed-form area expressions are derived for a constant orientation workspace of a three revolute–revolute–revolute (3-RRR) planar manipulator. The approach is also shown to be feasible for different orientations of a mobile platform. An optimization procedure for the design of planar 3-RRR manipulators is proposed for a prescribed workspace area. It is observed that the closed-form area expression for all the possible shapes of the workspace provides a larger solution space, which is further optimized considering singularity, mass of the manipulator, and a force transmission index.
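The sketch below is only a rough numerical stand-in for the closed-form area expressions derived in the paper: each RRR leg is modeled, in a simplified way, as confining the platform reference point to an annulus around its base, and the constant-orientation workspace area is estimated by Monte Carlo sampling. Link lengths and base positions are hypothetical.

```python
# Sketch: Monte Carlo estimate of a constant-orientation workspace area
# modeled as the intersection of three annuli (a simplification of 3-RRR geometry).
import math
import random

BASES = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.8)]   # hypothetical base joint positions
R_MIN, R_MAX = 0.3, 1.6                         # inner/outer radii of each leg's annulus

def in_workspace(x, y):
    return all(R_MIN <= math.hypot(x - bx, y - by) <= R_MAX for bx, by in BASES)

def workspace_area(samples=200_000):
    xs = (-R_MAX, 2.0 + R_MAX)                  # bounding box enclosing all annuli
    ys = (-R_MAX, 1.8 + R_MAX)
    box = (xs[1] - xs[0]) * (ys[1] - ys[0])
    hits = sum(in_workspace(random.uniform(*xs), random.uniform(*ys)) for _ in range(samples))
    return box * hits / samples

print(f"estimated workspace area: {workspace_area():.3f}")
```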
Design rationale (DR) is an important category within design knowledge, and effective reuse of it depends on its successful retrieval. In this paper, an ontology-based DR retrieval approach is presented, which allows users to search by entering normal queries such as questions in natural language. First, an ontology-based semantic model of DR is developed based on the extended issue-based information system DR representation, in order to effectively utilize the semantics embedded in DR, and a database of ontology-based DR is constructed, which supports SPARQL queries. Second, two SPARQL query generation methods are proposed: the first generates initial SPARQL queries from natural language queries automatically using template matching, and the other generates initial SPARQL queries automatically from DR record-based queries. In addition, keyword extension and optimization are conducted to enhance the SPARQL-based retrieval. Third, a DR retrieval prototype system is implemented. The experimental results show the advantages of the proposed approach.
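To illustrate the template-matching idea, the sketch below turns a natural-language question into a SPARQL query over a toy DR graph using rdflib. The namespace, properties, and the single regex template are hypothetical and are not taken from the paper's ontology.

```python
# Sketch: template matching from a question to a SPARQL query over a toy DR graph.
import re
from rdflib import Graph, Literal, Namespace

DR = Namespace("http://example.org/dr#")       # hypothetical DR vocabulary
g = Graph()
g.add((DR.alt1, DR.answersIssue, Literal("bearing selection")))
g.add((DR.alt1, DR.hasArgument, Literal("lower friction at high speed")))

TEMPLATE = re.compile(r"why (?:was|is) (?P<issue>.+?) (?:chosen|decided)", re.I)

def to_sparql(question: str) -> str:
    m = TEMPLATE.search(question)               # only one toy template is handled here
    issue = m.group("issue")
    return f"""
        PREFIX dr: <http://example.org/dr#>
        SELECT ?arg WHERE {{
            ?alt dr:answersIssue "{issue}" .
            ?alt dr:hasArgument ?arg .
        }}"""

for row in g.query(to_sparql("Why was bearing selection decided?")):
    print(row.arg)
```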
Identifying failure paths and potentially hazardous scenarios resulting from component faults and interactions is a challenge in the early design process. The inherent complexity of large engineered systems leads to nonobvious emergent behavior, which may result in unforeseen hazards. Current hazard analysis techniques focus on single hazards (fault trees), single faults (event trees), or lists of known hazards in the domain (hazard identification). Early in the design of a complex system, engineers may represent their system as a functional model. A function failure reasoning tool can then exhaustively simulate qualitative failure scenarios. Some scenarios can be identified as hazardous by hazard rules specified by the engineer, but the goal is to identify scenarios representing unknown hazards. The incidence of specific subgraphs in graph representations of known hazardous scenarios is used to train a classifier that distinguishes hazardous from nonhazardous scenarios. The algorithm identifies the scenario most likely to be hazardous and presents it to the engineer. After viewing the scenario and judging its safety, the engineer may gain the insight needed to produce additional hazard rules. This collaborative process, combining the computer's strategic presentation of scenarios with the engineer's judgment, identifies previously unknown hazards. The feasibility of this methodology has been tested on a relatively simple functional model of an electrical power system with positive results. Related work applying function failure reasoning to a team of robotic rovers will provide data from a more complex system.
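A toy sketch of the learning step follows: scenario graphs are reduced to counts of small subgraph features (here simply labeled edges, a deliberate simplification of the paper's richer subgraph features) and a classifier is fit on hazard/non-hazard labels. The scenarios and labels below are invented for illustration.

```python
# Sketch: classify failure scenarios from counts of labeled-edge "subgraph" features.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def edge_counts(scenario_edges):
    counts = {}
    for e in scenario_edges:
        counts[e] = counts.get(e, 0) + 1
    return counts

scenarios = [
    ["battery->overheat", "overheat->fire"],
    ["switch->open", "lamp->off"],
    ["pump->stall", "pressure->spike", "pressure->spike"],
    ["sensor->drift"],
]
labels = [1, 0, 1, 0]                            # 1 = hazardous, 0 = benign (hypothetical)

vec = DictVectorizer()
X = vec.fit_transform([edge_counts(s) for s in scenarios])
clf = LogisticRegression().fit(X, labels)

candidate = ["battery->overheat", "pressure->spike"]
print("hazard probability:", clf.predict_proba(vec.transform([edge_counts(candidate)]))[0, 1])
```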
This paper presents a method for automatically generating new designs from a set of existing objects of the same class using machine learning. In this particular work, we use a custom parametric chair design program to produce a large set of chairs that are tested for their physical properties using ergonomic simulations. Design schemata are found from this set of chairs and used to generate new designs by placing constraints on the generating parameters used in the program. The schemata are found by training decision trees on the chair data sets. These are automatically reverse engineered by examining the structure of the trees and creating a schema for each positive leaf. By finding a range of schemata, rather than a single solution, we maintain a diverse design space. This paper also describes how schemata for different properties can be combined to generate new designs that possess all properties required in a design brief. The method is shown to consistently produce viable designs, covering a large range of our design space, and demonstrates a significant time saving over generate and test using the same program and simulations.
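The sketch below illustrates the reverse-engineering step in a compact way: a decision tree is trained and every path to a leaf predicted "good" becomes a schema, that is, a set of parameter bounds. The two chair parameters and the labels are invented purely to exercise the code; the paper uses ergonomic-simulation results.

```python
# Sketch: extract design schemata as parameter bounds from positive decision-tree leaves.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["seat_height", "back_angle"]
X = np.array([[40, 95], [45, 100], [50, 110], [55, 120], [60, 130], [35, 90]])
y = np.array([0, 1, 1, 1, 0, 0])                 # 1 = passes the hypothetical ergonomic test

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def schemata(node=0, constraints=()):
    t = tree.tree_
    if t.children_left[node] == -1:              # leaf node
        if np.argmax(t.value[node]) == 1:        # positive leaf -> one schema
            yield list(constraints)
        return
    name, thr = FEATURES[t.feature[node]], t.threshold[node]
    yield from schemata(t.children_left[node], constraints + (f"{name} <= {thr:.1f}",))
    yield from schemata(t.children_right[node], constraints + (f"{name} > {thr:.1f}",))

for schema in schemata():
    print(" AND ".join(schema))
```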
Dealing with component interactions and dependencies remains a core and fundamental aspect of engineering, where conflicts and constraints are solved on an almost daily basis. Failure to consider these interactions and dependencies can lead to costly overruns, failure to meet requirements, and lengthy redesigns. Thus, the management and monitoring of these dependencies remains a crucial activity in engineering projects and is becoming ever more challenging with the increase in the number of components, component interactions, and component dependencies, in both a structural and a functional sense. For these reasons, tools and methods to support the identification and monitoring of component interactions and dependencies continue to be an active area of research. In particular, design structure matrices (DSMs) have been extensively applied to identify and visualize product and organizational architectures across a number of engineering disciplines. However, the process of generating these DSMs has primarily relied on surveys, structured interviews, and/or meetings with engineers. As a consequence, there is a high cost associated with engineers' time, alongside the requirement to continually update the DSM structure as a product develops. This paper therefore investigates whether an automated and continuously evolving DSM can be generated by monitoring the changes in the digital models that represent the product, including models generated by computer-aided design, finite element analysis, and computational fluid dynamics systems. The paper shows that a DSM generated from the changes in the product models is consistent with the product architecture as defined by the engineers and with results from previous DSM studies. In addition, further levels of product architecture dependency were identified. A particular affordance of automatically generating DSMs is the ability to continually generate DSMs throughout the project. This paper demonstrates the opportunity for project managers to monitor emerging product dependencies alongside changes in modes of working between the engineers. The technique could be used to support existing product life cycle change management solutions, cross-company product development, and small-to-medium enterprises that do not have a product life cycle management solution.
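As a minimal sketch of the underlying idea (not the paper's pipeline), a DSM can be built by counting how often two model files change in the same working session. The change log below is a made-up stand-in for events harvested from CAD, FEA, and CFD systems.

```python
# Sketch: build a co-change DSM from a log of model files modified per working session.
from collections import Counter
from itertools import combinations

change_log = {                        # session id -> model files modified in that session
    "s1": ["wing.cad", "spar.fea"],
    "s2": ["wing.cad", "spar.fea", "rib.cad"],
    "s3": ["rib.cad", "skin.cfd"],
}

files = sorted({f for changed in change_log.values() for f in changed})
co_change = Counter()
for changed in change_log.values():
    for a, b in combinations(sorted(set(changed)), 2):
        co_change[(a, b)] += 1

# Print the DSM as a simple text matrix of co-change counts.
print("".ljust(10) + "".join(f.ljust(10) for f in files))
for a in files:
    row = [str(co_change.get(tuple(sorted((a, b))), 0) if a != b else "-").ljust(10) for b in files]
    print(a.ljust(10) + "".join(row))
```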
Optimized lightweight manufacturing of parts is crucial for the automotive and aeronautical industries in order to stay competitive and to reduce costs and fuel consumption. Aluminum is therefore a compelling material choice for meeting these challenges. Nevertheless, using only virgin aluminum is not satisfactory, because its extraction is energy- and labor-intensive and its production has a high environmental impact. For these reasons, the use of recycled aluminum alloys is recommended, provided their properties deliver the expected technical and environmental added value. This requires complete reengineering of the classical life cycle of aluminum-based products and of the collaboration practices in the global supply chain. The results from several interdependent disciplines all need to be taken into account for a global product/process optimization. Toward achieving this, a method for integrating sustainability assessment into product life cycle management and a platform for life cycle simulation integrating environmental concerns are proposed in this paper. The platform may be used as a decision support system in the early product design phase by simulating the life cycle of a product (from material selection to production and recycling phases) and calculating its impact on the environment.
Documents are a useful source of expert knowledge in organizations and can be used to foresee, at an earlier stage of a product's life cycle, potential issues and solutions that might occur in later stages. In this research, these stages are, respectively, design and assembly. Even when such documents are available online, it is difficult for users to access the knowledge they contain. It is therefore desirable to automatically extract this knowledge and store it in a computer-accessible and manipulable form. This paper describes an approach for the first step in this acquisition process: automatically identifying segments of documents that are relevant to aircraft assembly, so that they can be further processed for acquiring expert knowledge. Such identification of relevant segments is necessary to avoid processing unrelated information, which is costly and possibly distracting. The approach to extracting relevant segments has two steps. The first step is the identification of sentences that form a coherent segment of text, within which the topic does not shift. The second step is to classify segments that fall within the topics of interest for knowledge acquisition, that is, aircraft assembly in this instance. These steps filter out segments that are unrelated and therefore need not be processed for subsequent knowledge acquisition. The steps are implemented by understanding the contents of documents. Using methods of discourse analysis, in particular discourse representation theory, a list of discourse entities is obtained. The difference in discourse entities between sentences is used to distinguish between segments. The list of discourse entities in a segment is compared against a domain ontology for classification. The implementation and results of validation on sample texts for these steps are described.
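A crude sketch of the segmentation step is shown below: sentences are grouped into a segment while they keep sharing "entities," here simply lowercase content words, a stand-in for the discourse entities that discourse representation theory would provide. The sentences and the stop-word list are illustrative.

```python
# Sketch: segment sentences by overlap of simple entity sets between neighbors.
STOP = {"the", "a", "an", "is", "are", "of", "to", "in", "and", "it", "this"}

def entities(sentence):
    return {w.strip(".,").lower() for w in sentence.split()} - STOP

def segment(sentences):
    segments, current = [], [sentences[0]]
    for prev, sent in zip(sentences, sentences[1:]):
        if entities(prev) & entities(sent):      # shared entities -> same topic
            current.append(sent)
        else:                                    # entity shift -> start a new segment
            segments.append(current)
            current = [sent]
    segments.append(current)
    return segments

doc = [
    "The bracket is riveted to the frame.",
    "Each rivet in the bracket needs sealant.",
    "Suppliers must file the report by Friday.",
]
for seg in segment(doc):
    print(seg)
```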
Although there has been considerable computer-aided conceptual design research, most of the proposed approaches are domain specific and can only support conceptual design of energy-flow processing systems. Therefore, this research is devoted to the development of a general (i.e., domain-independent) and knowledge-based methodology that can search a wide multidisciplinary solution space for suitable solution principles for desired material-flow processing functions, without designers' biases toward familiar solution principles. It first proposes an ontology-based approach for representing desired material-flow processing functions in a formal and unambiguous manner. Then a rule-based approach is proposed to represent the functional knowledge of a known solution principle in a general and flexible manner. Thereafter, a simulation-based retrieval approach is developed, which can search for suitable solution principles for desired material-flow processing functions. The proposed approaches have been implemented as a computer-aided conceptual design system for testing. The conceptual design of a coin-sorting device demonstrates that our functional representation methodology enables the proposed computer-aided conceptual design system to effectively and precisely retrieve suitable solution principles for a desired material-flow processing function.
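A toy sketch of the retrieval idea follows: a desired material-flow processing function is stated as an input flow, an operation, and an output flow, and is matched against rule-encoded solution principles. The principles and vocabulary are invented for illustration and are not taken from the paper's knowledge base.

```python
# Sketch: match a desired material-flow processing function against rule-encoded principles.
DESIRED = {"input": "mixed coins", "operation": "separate", "output": "sorted coins"}

PRINCIPLES = [
    {"name": "sieving by diameter",
     "rule": lambda f: f["operation"] == "separate" and "mixed" in f["input"]},
    {"name": "magnetic separation",
     "rule": lambda f: f["operation"] == "separate" and "ferrous" in f["input"]},
    {"name": "conveyor transport",
     "rule": lambda f: f["operation"] == "transport"},
]

matches = [p["name"] for p in PRINCIPLES if p["rule"](DESIRED)]
print("candidate solution principles:", matches)
```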
Requirement planning is one of the most critical tasks in the product development process. Despite its significant impact on the outcomes of the design process, engineering requirement planning is often conducted in an ad hoc manner without much structure. In particular, the requirement planning phase suffers from a lack of quantifiable measures for evaluating the quality of the generated requirements and a lack of structure and formality in representing engineering requirements. The main objective of this research is to develop a formal Web Ontology Language (OWL) ontology for standard representation of engineering requirements. The proposed ontology uses explicit semantics that make it amenable to automated reasoning. To demonstrate how the proposed ontology can support requirement analysis and evaluation in engineering design, three possible services enabled by the ontology are introduced in this paper: information content measurement, specificity and completeness analysis, and requirement classification. The proposed ontology and its associated algorithms and tools are validated experimentally in this work.
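As a hedged illustration of one common way to score information content (not necessarily the metric defined in the paper), the sketch below assigns more bits to requirements that use terms that are rare relative to a requirements corpus. The tiny corpus is illustrative.

```python
# Sketch: frequency-based information content of a requirement relative to a corpus.
import math
from collections import Counter

corpus = [
    "the pump shall deliver 5 liters per minute",
    "the pump shall operate below 60 dB",
    "the housing shall resist corrosion",
]
term_counts = Counter(w for req in corpus for w in req.split())
total = sum(term_counts.values())

def information_content(requirement: str) -> float:
    """Sum of -log2 p(term) over terms, using corpus frequencies with add-one smoothing."""
    return sum(-math.log2((term_counts[w] + 1) / (total + len(term_counts)))
               for w in requirement.split())

print(information_content("the pump shall deliver 5 liters per minute"))
print(information_content("the device shall work well"))
```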
In decision-based design, the principal role of a designer is to make decisions, and decision support is crucial to augment this role. In this paper, we present an ontology that provides decision support from both the “construct” and the “information” perspectives, addressing the gap that existing research treats these two perspectives separately and therefore cannot provide effective decision support. The decision support construct in the ontology is the compromise decision support problem (cDSP), which is used to make multiobjective design decisions. The information for decision making is archived as cDSP templates and represented using a frame-based ontology to facilitate reuse, consistency maintenance, and rapid execution. To facilitate designers’ effective reuse of the populated cDSP template ontology instances, we identify three types of modification that can be made as design considerations evolve. In our earlier work, part of the utilization of the ontology (consistency checking) was demonstrated through a thin-walled pressure vessel redesign example. In this paper, we comprehensively present the ontology utilization, including consistency checking, trade-off analysis, and design space visualization, based on the pressure vessel example.
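For readers unfamiliar with the construct, a generic compromise DSP can be sketched in its Archimedean form roughly as follows; the symbols are textbook-style placeholders and are not taken from the paper's templates.

```latex
% Sketch of a generic Archimedean compromise DSP (illustrative notation).
\begin{align*}
\text{Given:}    \quad & \text{system variables } x,\ \text{goal targets } G_i,\ \text{constraints } g_j(x) \le 0 \\
\text{Satisfy:}  \quad & A_i(x) + d_i^{-} - d_i^{+} = G_i, \qquad d_i^{-} \cdot d_i^{+} = 0, \quad d_i^{-}, d_i^{+} \ge 0 \\
\text{Minimize:} \quad & Z = \sum_i w_i \left( d_i^{-} + d_i^{+} \right)
\end{align*}
```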