The complexity of mechanical design is reflected in the complexity of the design constraints which relate functional requirements to design parameters. Reformulations of the design constraints can significantly reduce this complexity. This is accomplished by a transformation to alternative design parameters, such as a critical ratio, a non-dimensional parameter, or a simple difference; e.g., the ratio of surface area to volume for heat transfer loss, the Reynolds number in fluid mechanics, or the velocity difference across a fluid coupling. We have developed a method by which the alternative parameters are chosen for physical significance and for the ability to create a more direct correspondence to functional behavior, as determined by measures of serial and block decomposability of the constraints. Rules have been developed for the creation of physically significant new parameters from the algebraic combination of the original parameters. The rules are based on engineering principles and rely on knowledge about what a parameter physically represents rather than other qualities such as dimensions. A computer-based system, called EUDOXUS, has been developed to automate this procedure. The system operates on a set of design constraints to produce sets of transformed constraints in terms of alternative design parameters. The method and its implementation have demonstrated successful results for many highly nonlinear and highly coupled parameterized designs from many mechanical engineering domains.
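To make the reformulation idea concrete, the following is a minimal sketch, independent of EUDOXUS and using hypothetical parameter values, of how a constraint that couples four raw design parameters collapses into a constraint on a single non-dimensional parameter, the Reynolds number.

```python
# Hypothetical illustration (not the EUDOXUS system itself): recasting a
# constraint on four coupled parameters as a constraint on one
# non-dimensional ratio, here the Reynolds number.

def reynolds_number(rho, v, d, mu):
    """Re = rho * v * d / mu for pipe flow."""
    return rho * v * d / mu

def flow_is_laminar_raw(rho, v, d, mu):
    # Original form: the constraint couples all four design parameters.
    return rho * v * d / mu < 2300.0

def flow_is_laminar_transformed(re):
    # Transformed form: the same constraint on a single alternative parameter.
    return re < 2300.0

if __name__ == "__main__":
    rho, v, d, mu = 998.0, 0.05, 0.02, 1.0e-3   # water in a 20 mm pipe (invented values)
    re = reynolds_number(rho, v, d, mu)
    assert flow_is_laminar_raw(rho, v, d, mu) == flow_is_laminar_transformed(re)
    print(f"Re = {re:.0f}, laminar: {flow_is_laminar_transformed(re)}")
```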
An expert system approach for designing air-conditioning systems for cars and trucks is presented. A brief introduction to the automotive application of the vapor-compression refrigeration cycle is provided as general background. The method presented uses an integrated approach combining the power of conventional analysis programs, databases, and model-based expert system technology. Some sample rules from the knowledge base have been included in the paper to illustrate the application of the domain knowledge and its interaction with algorithmic programs. The system architecture is very open and modular, and it lends itself to easy modifications and future expansions. Possibilities for system enhancements are also outlined in the paper. The approach presented in this paper provides substantial benefits to the automotive air-conditioning design engineer, particularly in the early stages of new vehicle platform planning and development. A pilot system has been successfully tested at General Motors for preliminary design of automotive air-conditioning systems.
Interval arithmetic has been extensively applied to systems of linear equations by the interval matrix arithmetic community. This paper demonstrates through simple examples that some of this work can be viewed as particular instantiations of an abstract “design operation,” the RANGE operation of the Labeled Interval Calculus formalism for inference about sets of possibilities in design. These particular operations promise to solve a variety of design problems that lie beyond the reach of the original Labeled Interval Calculus. However, the abstract view also leads to a new operation, apparently overlooked by the matrix mathematics community, that should also be useful in design; the paper provides an algorithm for computing it.
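As a minimal illustration of the interval-arithmetic setting, the sketch below bounds the solution of a single linear equation with interval coefficients; the values are invented, and this is not the Labeled Interval Calculus RANGE operation itself, which operates on full interval systems.

```python
# Minimal interval-arithmetic sketch: for a single linear equation a*x = b
# with a and b known only to intervals, enumerate the endpoint combinations
# to bound the set of possible x.

from itertools import product

def solve_interval_linear(a, b):
    """Bound x for a*x = b, with a = (alo, ahi), b = (blo, bhi), 0 not in a."""
    candidates = [bv / av for av, bv in product(a, b)]
    return min(candidates), max(candidates)

if __name__ == "__main__":
    # Invented example: stiffness known to lie in [2, 4] N/mm, load in [8, 12] N.
    lo, hi = solve_interval_linear((2.0, 4.0), (8.0, 12.0))
    print(f"deflection x in [{lo}, {hi}] mm")   # [2.0, 6.0]
```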
The application of truth maintenance techniques to the component selection phase of the engineering design process is described. The aims of the research include the selection of components that most closely match a set of requirements which include fluid characteristics, economic constraints, and company policies or preferences. In addition, we consider the storage and retrieval of requirement sets and corresponding component selections. The benefits of maintaining temporal knowledge are twofold: to enhance performance through reuse when new requirement sets are similar to previous sets, and to re-assess selections in the light of changes over time, for example, to the set of available components, to company policy, or, perhaps, to national safety regulations.
We propose a temporal truth maintenance system (TTMS) to support this management of selection knowledge over time.
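The reuse-and-reassessment idea can be sketched as follows; the class, requirement, and fact names are hypothetical and stand in for the paper's TTMS, which is not reproduced here.

```python
# Hypothetical sketch of the reuse idea behind a temporal TMS: cache component
# selections keyed by the requirement set, and invalidate cached selections
# when the facts they depended on (available components, company policy) change.

class SelectionCache:
    def __init__(self):
        self._entries = {}   # frozenset(requirements) -> (selection, dependencies)

    def record(self, requirements, selection, dependencies):
        self._entries[frozenset(requirements)] = (selection, set(dependencies))

    def retract(self, fact):
        """A component was withdrawn or a policy changed: drop dependent selections."""
        self._entries = {k: (sel, deps) for k, (sel, deps) in self._entries.items()
                         if fact not in deps}

    def lookup(self, requirements):
        entry = self._entries.get(frozenset(requirements))
        return entry[0] if entry else None

if __name__ == "__main__":
    cache = SelectionCache()
    reqs = {"fluid:oil", "pressure<=200bar"}
    cache.record(reqs, "pump-A12", dependencies={"catalogue:pump-A12", "policy:2024"})
    print(cache.lookup(reqs))          # "pump-A12" is reused for a matching request
    cache.retract("catalogue:pump-A12")
    print(cache.lookup(reqs))          # None: the selection must be re-derived
```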
Class-centered data models, such as the object-oriented data model, are inadequate for supporting engineering design product models because of their lack of support for object evolution, schema evolution, and semantic and user-defined relationships. Description logic overcomes these limitations by providing constructs for intensional description of classes, relationships, and objects. By combining description logic with object-oriented modelling concepts, design product schemas and data can be uniformly represented and modified throughout the design process.
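A minimal sketch of the intensional-classification idea follows; the class names and defining conditions are invented, and this is not a description logic reasoner. Because membership is computed from an object's current attribute values, an evolving design object can migrate between classes as its data changes.

```python
# Hypothetical sketch of intensional class definitions: membership in a class
# is decided by a condition over an object's attributes, so an evolving design
# object can migrate between classes during the design process.

DEFINITIONS = {
    # class name -> defining condition over the object's attribute dictionary
    "Fastener":      lambda o: o.get("joins", 0) >= 2,
    "HeavyFastener": lambda o: o.get("joins", 0) >= 2 and o.get("mass_kg", 0) > 1.0,
}

def classify(obj):
    return {name for name, condition in DEFINITIONS.items() if condition(obj)}

if __name__ == "__main__":
    bolt = {"joins": 2, "mass_kg": 0.05}
    print(classify(bolt))                 # {'Fastener'}
    bolt["mass_kg"] = 1.5                 # the object evolves during design
    print(classify(bolt))                 # {'Fastener', 'HeavyFastener'}
```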
This paper examines the architecture of back-propagation neural networks used as approximators by addressing the interrelationship between the number of training pairs and the number of input, output, and hidden layer nodes required for a good approximation. It concentrates on nets with an input layer, one hidden layer, and one output layer. It shows that many of the currently proposed schemes for selecting network architecture for such nets are deficient. It demonstrates in numerous examples that overdetermined neural networks tend to give good approximations over a region of interest, while underdetermined networks give approximations which can satisfy the training pairs but may give poor approximations over that region of interest. A scheme is presented that adjusts the number of hidden layer nodes in a neural network so as to give an overdetermined approximation. The advantages and disadvantages of using multiple output nodes are discussed. Guidelines for selecting the number of output nodes are presented.
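The counting argument behind the overdetermined/underdetermined distinction can be illustrated as in the sketch below; the specific node-adjustment scheme proposed in the paper may differ from this simplified version.

```python
# Illustrative check: a one-hidden-layer network is taken as overdetermined
# when the number of training pairs exceeds the number of adjustable weights
# and biases. (The paper's actual adjustment scheme may differ.)

def parameter_count(n_in, n_hidden, n_out):
    """Weights and biases of an n_in -> n_hidden -> n_out feedforward net."""
    return n_hidden * (n_in + 1) + n_out * (n_hidden + 1)

def largest_overdetermined_hidden(n_in, n_out, n_pairs):
    """Largest hidden-layer size that keeps the approximation overdetermined."""
    h = 1
    while parameter_count(n_in, h + 1, n_out) < n_pairs:
        h += 1
    return h

if __name__ == "__main__":
    n_in, n_out, n_pairs = 3, 1, 50           # invented example
    h = largest_overdetermined_hidden(n_in, n_out, n_pairs)
    print(f"{h} hidden nodes: {parameter_count(n_in, h, n_out)} parameters < {n_pairs} pairs")
```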
An interactive system, referred to as MECXPERT (Mechanism Expert), has been designed with the express purpose of assisting nonexpert design engineers in creating mechanisms for fulfilling specific motion-conversion and/or power-transmission requirements. The particular knowledge representation scheme chosen for this application comprises a hybrid formulation of a rule-based production system with a frame-based approach. The underlying control strategy is based on a series of special-purpose, domain-specific operators whose function is to move from one problem space to another through various stages or ‘states’ that comprise the mechanism design process.
This paper focuses on the representation of knowledge and its control within an expert system for creative mechanism design. An overview summarizing the reasons for developing such an expert system is provided, and the formulation of a problem is discussed through an example taken from the design of a variable-stroke internal-combustion engine mechanism.
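A rough sketch of the operator-driven, state-to-state control idea follows; the operator and state names are invented for illustration and are not MECXPERT's.

```python
# Illustrative sketch of state-to-state control: domain-specific operators
# move the design from one stage of the mechanism design process to the next.

OPERATORS = {
    # operator name -> (applicable state, resulting state); names are hypothetical
    "select_mechanism_type": ("requirements", "kinematic_type"),
    "synthesize_dimensions": ("kinematic_type", "dimensional_synthesis"),
    "evaluate_candidate":    ("dimensional_synthesis", "evaluation"),
}

def applicable(state):
    return [op for op, (pre, _) in OPERATORS.items() if pre == state]

def apply_operator(state, op):
    pre, post = OPERATORS[op]
    if state != pre:
        raise ValueError(f"{op} is not applicable in state {state}")
    return post

if __name__ == "__main__":
    state = "requirements"
    while applicable(state):
        op = applicable(state)[0]
        state = apply_operator(state, op)
        print(f"applied {op} -> {state}")
```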
In wood engineering design, an important task is the derivation of a construct from site-specific data. Human experts perform the task in two phases, first qualitatively and then quantitatively in a hierarchical fashion. COWEN (Computer Wood ENgineer) is a fully implemented research prototype expert system that performs the qualitative phase and makes two contributions to the technology of expert systems. The first contribution is a Knowledge Level specification of the task prior to considering Symbol Level implementation. This is important because expert systems have been defined as mostly symbolic processors in the literature. The second contribution is that this Knowledge Level specification has led to the conclusion that additional qualitative sciences, besides physics and geometry, are needed for an engineering task. This is an interesting discovery because qualitative reasoning research in Artificial Intelligence (AI) has approached engineering design from the viewpoints of physics and geometry only.
The concept of discretization of a design space is used to make initial selection of prior designs using specification matching, and to direct redesign with evaluation and iteration. A general paradigm for routine design has been developed and implemented in a system called IDS (Initial Design Selection). For a given design domain, certain characteristics are identified that allow all specifications and models (which represent prior designs) in that design domain to be described in terms of values, or the intervals in which these values may lie. These characteristics are seen as dimensions of a design space discretized into a finite number of partitions. Each partition is represented by a model, thought of as occupying its center. Each such model is associated with a deep model, which contains sufficient information for the modeled device, process, or system to be realized. Despite the fact that the models inhabiting the space are shallow, the paradigm comprises a relatively rich mathematical structure. This paper describes in detail a computational methodology to implement this domain-independent paradigm. The IDS paradigm presents a convenient and structured framework for acquiring and representing domain knowledge. This paper also briefly discusses an enhanced version of the system, which attempts iterative redesign directed by the particular mismatch between a specification and an otherwise promising model. To date, this methodology has been applied in a variety of design domains, including mechanism design, hydraulic component selection, assembly methods, and non-destructive testing methods.
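The specification-matching step can be sketched along the following lines; the partitions, dimensions, and mismatch measure below are invented for illustration and are not those used by IDS.

```python
# Hypothetical sketch of specification matching over a discretized design
# space: each partition is a model described by an interval per dimension; a
# specification selects the model whose intervals best contain its values.

MODELS = {
    "four-bar-linkage": {"stroke_mm": (10, 100), "load_N": (0, 500)},
    "slider-crank":     {"stroke_mm": (50, 300), "load_N": (0, 2000)},
    "cam-follower":     {"stroke_mm": (1, 30),   "load_N": (0, 200)},
}

def mismatch(spec, intervals):
    """Total distance by which the specification falls outside the partition."""
    total = 0.0
    for dim, value in spec.items():
        lo, hi = intervals[dim]
        total += max(lo - value, 0.0, value - hi)
    return total

def select_model(spec):
    return min(MODELS, key=lambda name: mismatch(spec, MODELS[name]))

if __name__ == "__main__":
    print(select_model({"stroke_mm": 120, "load_N": 800}))   # slider-crank
```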
A symbolic calculus for reasoning about process plans is proposed in this paper. The main focus of attention is the selection and sequencing of material removal operations for components in accordance with the design geometry. This is a central issue in automated process planning. The proposed symbolic calculus defines a computational formalism for symbolic manipulation of feature volumes, so that reasoning about volumetric removals can be treated in a logical manner by using well-defined procedures of algorithmic synthesis. This potentially encourages a more generic approach to the automation of outline and detailed process planning. The underlying philosophy is that a properly interpreted object topology upon the feature model allows the logical synthesis of volumetric removal sequences. The number of sequences is constrained by algorithms within the planning system that consider part geometry as expressed by features. This reduces the problem space associated with plan synthesis. Some of the geometrically viable sequences have the potential for further development to form viable machining removal sequences, or outline process plans.
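A toy example of constraining removal sequences by geometric precedence is sketched below; the paper's symbolic calculus of feature volumes is far richer than this, and the features and constraints shown are hypothetical.

```python
# Illustrative sketch of sequencing volumetric removals under geometric
# precedence constraints: enumerate only the removal orders consistent with
# "A must be removed before B".

from itertools import permutations

FEATURES = ["face_mill_top", "pocket", "drill_hole"]
# Hypothetical constraints derived from part geometry: the top face must be
# machined before the pocket, and the pocket before the hole at its floor.
PRECEDES = {("face_mill_top", "pocket"), ("pocket", "drill_hole")}

def feasible(sequence):
    position = {f: i for i, f in enumerate(sequence)}
    return all(position[a] < position[b] for a, b in PRECEDES)

if __name__ == "__main__":
    plans = [seq for seq in permutations(FEATURES) if feasible(seq)]
    for plan in plans:
        print(" -> ".join(plan))   # only the geometrically viable orders remain
```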
Qualitative physics, a subfield of artificial intelligence, adapts intuitive and non-numerical reasoning for descriptive analysis of physical systems. The application of a set-based qualitative algebra to matrix analysis (QMA) allows for the development of a qualitative matrix stiffness methodology for linear elastic structural analysis. The unavoidable introduction of arithmetic ambiguity requires the reinforcement of physical constraints complementary to standard matrix operations. The overall analysis technique incorporates such constraints within the set-based framework with logic programming. Truss, beam, and frame structures demonstrate constraint relationships, which prune spurious solutions resulting from qualitative arithmetic relations. Though QMA is not a panacea for all structural applications, it provides greater insight into new notions of physical analysis.
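A minimal sketch of a set-based sign algebra of the kind that underlies such an approach is given below; it is not the full QMA formalism. Qualitative addition of opposite signs yields the ambiguous set {+, 0, -}, which is exactly the ambiguity that the additional physical constraints must prune.

```python
# Minimal set-based qualitative (sign) algebra sketch: quantities are sets of
# signs, and adding "+" to "-" yields the ambiguous result {+, 0, -}.

POS, ZERO, NEG = "+", "0", "-"

def q_add(a, b):
    """Qualitative addition on sets of signs."""
    table = {
        (POS, POS): {POS}, (NEG, NEG): {NEG}, (ZERO, ZERO): {ZERO},
        (POS, ZERO): {POS}, (ZERO, POS): {POS},
        (NEG, ZERO): {NEG}, (ZERO, NEG): {NEG},
        (POS, NEG): {POS, ZERO, NEG}, (NEG, POS): {POS, ZERO, NEG},
    }
    return set().union(*(table[(x, y)] for x in a for y in b))

if __name__ == "__main__":
    print(q_add({POS}, {POS}))   # {'+'}
    print(q_add({POS}, {NEG}))   # ambiguous: {'+', '0', '-'}
```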
The coordination of intelligent, interacting agents is rapidly gaining importance as such systems are deployed under diverse conditions. When robots are used in teams rather than as individuals, their coordination can become more critical for system performance than their individual capabilities. The deployment strategies and communication modes play an important role in the coordination of these teams.
This paper examines a game-theoretic approach to deploying robotic teams in an unstructured environment. A simulation model is developed and used to compare the performance of gaming rules with a non-anticipatory deterministic deployment rule. The initial game-theoretic rule can be enhanced to exhibit both locally and globally adaptive characteristics. The new rule outperforms both the deterministic algorithm and the straightforward game-theoretic rule. This is achieved by adapting to trends in local regions of the environment as well as anticipating global eventualities.
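The flavor of such a simulation comparison can be conveyed by the toy model below; the payoffs are invented, and the adaptive rule is a simple bandit-style epsilon-greedy choice standing in for the paper's game-theoretic rules.

```python
# Toy simulation: a robot repeatedly chooses one of two regions; a fixed
# deterministic rule is compared with a simple adaptive rule that tracks
# observed rewards and so reacts when the productive region shifts mid-run.

import random

def run(choose, rounds=400, seed=1):
    rng = random.Random(seed)
    reward_prob = [0.8, 0.2]            # region 0 is productive at first...
    estimates, total = [0.5, 0.5], 0
    for t in range(rounds):
        if t == rounds // 2:
            reward_prob = [0.2, 0.8]    # ...then the environment changes
        region = choose(rng, estimates)
        reward = 1 if rng.random() < reward_prob[region] else 0
        estimates[region] += 0.1 * (reward - estimates[region])   # track the trend
        total += reward
    return total

def deterministic(rng, estimates):
    return 0                            # non-anticipatory: always deploy to region 0

def adaptive(rng, estimates):
    # epsilon-greedy: mostly exploit the best-looking region, occasionally explore
    return rng.randrange(2) if rng.random() < 0.1 else estimates.index(max(estimates))

if __name__ == "__main__":
    print("deterministic reward:", run(deterministic))
    print("adaptive reward:     ", run(adaptive))
```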
The status of research methodology employed by studies on the application of AI techniques to solving problems in engineering design, analysis, and manufacturing is poor. There may be many reasons for this status, including an unfortunate heritage from AI, a poor educational system, and researchers’ sloppiness. Understanding this status is a prerequisite for improvement. The study of research methodology can promote such understanding, but, most importantly, it can assist in improving the situation. Concepts from the philosophy of science are introduced, and models of world views of science are built on them. These world views are combined with research heuristics or research perspectives and criteria for evaluating research to create a layered model of research methodology. This layered model can serve to organize and facilitate a better understanding of future studies of research methodologies. Many of the issues involved in the study of AI and AIEDAM research methodology using this layered model are discussed.