This article presents the results of research on the applicability of multilayer perceptron neural networks to fault diagnosis, state estimation, and prediction in gas pipeline transmission networks. The influence of several factors on the accuracy of the multilayer perceptron was considered, with emphasis on the multilayer perceptron's function as a state estimator. The choice of the most informative features, the amount and sampling period of the training data sets, and different configurations of the multilayer perceptron were analyzed.
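As a minimal illustration of the state-estimator role described above, the sketch below trains a small multilayer perceptron to map boundary measurements to interior pipeline states. It assumes scikit-learn; the feature and target choices are hypothetical, not taken from the article.

```python
# Minimal sketch of an MLP used as a pipeline-state estimator.
# Assumes scikit-learn; features/targets are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical training data: boundary pressures/flows (inputs)
# versus pressures at three interior nodes (the estimated state).
X_train = rng.normal(size=(500, 6))
y_train = rng.normal(size=(500, 3))

estimator = make_pipeline(
    StandardScaler(),                       # scale features before training
    MLPRegressor(hidden_layer_sizes=(16,),  # one hidden layer
                 max_iter=2000, random_state=0),
)
estimator.fit(X_train, y_train)

# Estimate the unmeasured interior state from new boundary measurements.
x_new = rng.normal(size=(1, 6))
print(estimator.predict(x_new))
```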
This paper introduces an architecture for aggregating the output of different diagnostic tools. The diagnostic fusion tool deals with conflict resolution, where diagnostic tools disagree; temporal information discord, where the estimates of different tools are separated in time; differences in information updates, where the classifiers are updated at different rates; fault coverage discrepancies; and integration of a priori performance specifications. To this end, a hierarchical weight manipulation approach is introduced which creates and successively refines a fused output. The performance of the fusion tool is evaluated throughout its design, which allows the impact of added heuristics to be assessed and enables early tuning of parameters. Results obtained from diagnosing on-board aircraft engine faults are shown, demonstrating the fusion tool's operation.
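A minimal sketch of the weighting idea follows: each tool reports a confidence vector over a shared fault set, and the fused output down-weights tools by a-priori performance and by the staleness of their estimates. The fault names, weights, and decay rule are hypothetical illustrations, not the paper's algorithm.

```python
# Minimal sketch of weight-based fusion of diagnostic tool outputs.
# All names, weights, and the time-decay rule are hypothetical.
import numpy as np

FAULTS = ["bearing", "combustor", "sensor"]

def fuse(outputs, perf_weights, ages, half_life=10.0):
    """Fuse per-tool confidence vectors over a shared fault set.

    outputs      : (n_tools, n_faults) confidences in [0, 1]
    perf_weights : a-priori performance weight per tool
    ages         : seconds since each tool last updated its estimate
    """
    outputs = np.asarray(outputs, dtype=float)
    # Down-weight stale estimates (temporal discord, differing update
    # rates) with an exponential decay.
    time_w = 0.5 ** (np.asarray(ages) / half_life)
    w = np.asarray(perf_weights) * time_w
    fused = (w[:, None] * outputs).sum(axis=0) / w.sum()
    return dict(zip(FAULTS, fused))

# Three tools disagree on the same three faults:
tools = [[0.9, 0.1, 0.0],   # tool A, fresh, reliable
         [0.2, 0.7, 0.1],   # tool B, older
         [0.1, 0.1, 0.8]]   # tool C, stale
print(fuse(tools, perf_weights=[0.8, 0.6, 0.5], ages=[0.0, 5.0, 30.0]))
```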
Creative Stimulator (CreaStim) is an intelligent interface for pattern design that behaves as a semiactive partner to human designers rather than as a passive graphical or computational tool. By adjusting psychological differentials and/or design parameters, CreaStim helps designers explore innovative pattern designs and gain inspiration, producing different types of novel designs. This article describes the mechanism, technique, implementation, and testing of CreaStim. Its basic principle is catastrophe theory, which implies that sudden realization in the design thinking process may lead to creativity. Rather than simulating sudden realization itself, CreaStim tries to stimulate designers' creativity during the design process through its output. The core of CreaStim is a neural network-based imagining engine, a data repository, and learning strategies that take psychological factors into account. These psychological factors, considered one of the key influences on creative design, are derived from questionnaires completed by designers about existing successful designs. The repository contains not only a traditional database storing functional, economic, graphic, structural, and psychological attributes, but also methods, rule-based knowledge, and pattern-type knowledge; it is managed by an application program called the Design Template Group (DTG) manager. Trained on 12 successful pattern designs and 528 pseudo-examples produced and evaluated by the authors, CreaStim was implemented on a PC; an evaluation poll of five designers shows that designers are likely to draw inspiration from the produced patterns, some of which can even be adopted directly as design alternatives.
Barlow and Proschan presented some interesting connections between univariate classifications of life distributions and partial orderings, in which equivalent definitions for the increasing failure rate (IFR), increasing failure rate average (IFRA), and new better than used (NBU) classes were given in terms of convex, star-shaped, and superadditive orderings. Some related results are given by Ross and by Shaked and Shanthikumar. The object of the present article is to introduce a multivariate generalization of these partial orderings. Based on that concept of multivariate partial orderings, we also propose multivariate classifications of life distributions and present a study of more IFR-ness.
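For reference, the classical Barlow-Proschan equivalences the article builds on can be stated via the cumulative hazard H(t) = -log F̄(t), where F̄ = 1 - F is the survival function:

```latex
\begin{align*}
F \in \mathrm{IFR}  &\iff H \text{ is convex on } \{t : \bar F(t) > 0\},\\
F \in \mathrm{IFRA} &\iff t \mapsto H(t)/t \text{ is nondecreasing (star-shaped } H\text{)},\\
F \in \mathrm{NBU}  &\iff H(s+t) \ge H(s) + H(t) \text{ for all } s, t \ge 0
                      \text{ (superadditive } H\text{)}.
\end{align*}
```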
We consider the reflection of an independent superposition of a Brownian motion and a compound Poisson process with positive and negative jumps, which can be interpreted as a model for the content process of a storage system with different types of customers under heavy traffic. The distributions of the duration of a busy cycle and of the maximum content during a cycle are determined in closed form.
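In our own notation (not the paper's), one standard way to write such a model is to reflect the superposed net-input process at zero via the Skorokhod map:

```latex
\begin{align*}
X(t) &= \sigma B(t) + \mu t + \sum_{i=1}^{N(t)} J_i, \qquad X(0) = 0,\\
W(t) &= X(t) - \inf_{0 \le s \le t} X(s),
\end{align*}
% B: standard Brownian motion; N: Poisson process independent of B;
% J_i: i.i.d. jumps taking both positive and negative values;
% W: the reflected content process.
```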
This issue of AIEDAM focuses on AI in equipment service. Recently, there has been a strong and renewed emphasis on AI technologies that can be used to monitor products and processes, detect incipient failures, identify possible faults (in various stages of development), determine preventive or corrective action, and generate a cost-efficient repair plan and monitor its execution. This renewed emphasis stems from manufacturing companies' focus on the service market, where they hope to grow their market share by offering their customers novel and aggressive service contracts. This service market includes power generation equipment, aircraft engines, medical imaging systems, and locomotives, to name a few. In some of these new service offerings, the old parts-and-labor billing model is replaced by guaranteed uptime, which in turn shifts the motivation to keep equipment in working order onto the servicing company. Monitoring can be accomplished more efficiently, in part, by employing remote monitoring systems. Big strides have been taken in in-use monitoring of stationary equipment, such as manufacturing plants or high-end appliances, as well as mobile systems such as transportation systems (vehicles, aircraft, locomotives, etc.). While advances in hardware development make it possible to perform these tasks efficiently, there are new avenues for progress in the accompanying AI software techniques. Some of these approaches have their roots in efforts of years past, while others arise from new challenges. Characteristics of typical challenges for AI in monitoring and diagnosis (M&D) service can be categorized into input, model, and output. In particular, input questions try
Variance reduction techniques are often underused in simulation studies. In this article, we indicate how certain ones can be efficiently employed when analyzing queuing models. The first technique considered is that of dynamic stratified sampling; the second is the utilization of multiple control variates; the third concerns the replacement of random variables by their conditional expectations when trying to estimate the expected value of a sum of random variables.
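As a minimal sketch of the second technique (control variates) on a queueing model, the following assumes an M/M/1 queue simulated via the Lindley recursion and uses the sample mean of the service times, whose expectation 1/μ is known exactly, as the control; all parameters are illustrative, not from the article.

```python
# Control-variate sketch: estimate the mean delay of the first N
# customers of an M/M/1 queue, using the mean service time (known
# expectation 1/mu) as a control variate.
import numpy as np

rng = np.random.default_rng(1)
lam, mu, N, R = 0.9, 1.0, 50, 2000   # arrival/service rates, customers, replications

def one_run():
    A = rng.exponential(1 / lam, N)  # interarrival times
    S = rng.exponential(1 / mu, N)   # service times
    w, total = 0.0, 0.0
    for i in range(N):               # Lindley recursion for FIFO delays
        total += w
        w = max(w + S[i] - A[i], 0.0)
    return total / N, S.mean()       # (mean delay, control observation)

Y, C = np.array([one_run() for _ in range(R)]).T
beta = np.cov(Y, C)[0, 1] / np.var(C, ddof=1)  # estimated optimal coefficient
Y_cv = Y - beta * (C - 1 / mu)                 # E[C] = 1/mu is known
print("crude:  ", Y.mean(), "+/-", Y.std(ddof=1) / R**0.5)
print("control:", Y_cv.mean(), "+/-", Y_cv.std(ddof=1) / R**0.5)
```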
This article concerns Markov decision chains with finite state and action spaces, in which a control policy is graded via the expected total-reward criterion associated with a nonnegative reward function. Within this framework, a classical theorem guarantees the existence of an optimal stationary policy whenever the optimal value function is finite, a result obtained via a limit process using the discounted criterion. The objective of this article is to present an alternative approach, based entirely on the properties of the expected total-reward index, to establish such an existence result.
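For concreteness, the expected total-reward index has the standard form below (notation ours); the classical theorem asserts that an optimal stationary policy exists whenever V* is finite:

```latex
V(\pi, x) \;=\; \mathbb{E}_x^{\pi}\!\left[\sum_{t=0}^{\infty} R(X_t, A_t)\right],
\qquad
V^*(x) \;=\; \sup_{\pi} V(\pi, x),
% well defined (possibly infinite) since R \ge 0.
```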
We investigate the performance of channel assignment policies for cellular networks. The networks are given by an interference graph which describes the reuse constraints for the channels. In the first part, we derive lower bounds on the expected (weighted) number of blocked calls under any channel assignment policy over finite time intervals as well as in the average case. The lower bounds are solutions of deterministic control problems. As far as the average case is concerned, the control problem can be replaced by a linear program. In the second part, we consider the cellular network in the limit, when the number of available channels as well as the arrival intensities are linearly increased. We show that the network obeys a functional law of large numbers and that a fixed channel assignment policy which can be computed from a linear program is asymptotically optimal. Special networks like fully connected and star networks are considered.
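A plausible shape for the average-case linear program, in our own notation and under our own assumptions (the paper's exact formulation may differ), treats independent sets of the interference graph as jointly usable channel patterns:

```latex
% I ranges over independent sets of the interference graph,
% y_I = channels dedicated to pattern I, C = total channels,
% \lambda_i = offered load at cell i, \mu = per-channel service rate.
\begin{align*}
\min_{c,\,y}\quad & \sum_i w_i \,(\lambda_i - c_i)
  && \text{(weighted blocked load)}\\
\text{s.t.}\quad  & c_i \le \lambda_i, \qquad
                    c_i \le \mu \sum_{I \ni i} y_I \quad \forall i,\\
                  & \sum_I y_I \le C, \qquad y_I \ge 0,\ c_i \ge 0.
\end{align*}
```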
We consider a single-server queue that is initially empty and operates under the first-in–first-out service discipline. In this system, the delays (waiting times in queue) experienced by successive arriving customers form a transient process. We investigate its transient behavior by constructing a sample-path coupling of the transient process and a general (delayed) process. From the coupling, we obtain an identity that relates the sample paths of these two processes. This identity helps us to better understand the queue's approach to the stationary limit and to derive upper and lower bounds on the expected transient delay. In addition, we use a Brownian-motion model to approximate the identity, producing an approximation of the expected transient delay. The approximation turns out to be identical to the corresponding first moment of a reflected Brownian motion; thus, it is easy to compute, and its accuracy is supported by numerical experiments.
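A minimal sketch of the coupling idea: drive an initially empty queue and a delayed copy with the same interarrival and service times via the Lindley recursion, and observe that the two delay sequences coalesce once the longer path empties. Parameters are illustrative; this is not the paper's identity.

```python
# Sample-path coupling sketch: transient (empty start) vs. delayed
# copy of a single-server FIFO queue, driven by SHARED randomness.
import numpy as np

rng = np.random.default_rng(2)
lam, mu, N = 0.8, 1.0, 200

A = rng.exponential(1 / lam, N)      # shared interarrival times
S = rng.exponential(1 / mu, N)       # shared service times

def delays(w0):
    """FIFO delays via the Lindley recursion, starting from delay w0."""
    w, out = w0, []
    for a, s in zip(A, S):
        out.append(w)
        w = max(w + s - a, 0.0)
    return np.array(out)

transient = delays(0.0)              # initially empty system
delayed = delays(5.0)                # same randomness, nonzero initial delay

# The delayed path dominates the transient one; once it empties, the
# two paths agree forever after. (argmax yields 0 if they never meet
# within N customers.)
meet = np.argmax(transient == delayed)
print("paths coalesce at customer", meet)
```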
In this article, the time of the first occurrence of a rare event in a regenerating process is investigated. We obtain a bound on the deviation of the distribution of this first-occurrence time from an exponential distribution.
We study the scheduling of jobs in a system of two interconnected service stations, called Q1 and Q2, on m (m ≥ 2) identical machines available in parallel, so that every machine can process any job in Q1 or Q2. Preemption is allowed. Under certain conditions on the arrival, service, and interconnection processes, we determine a scheduling policy that minimizes the expected makespan.
When dealing with time-continuous processes, the discovered association rules may change significantly over time, which often reflects a change in the underlying process as well. Two questions therefore arise: what kind of deviation occurs in the association rules over time, and how can these temporal rules be presented efficiently? To address this problem of representation, we propose a method of visualizing temporal association rules in a virtual model with interactive exploration. The presentation form is a three-dimensional correlation matrix, and the visualization methods used are brushing and glyphs. Interactive functions for displaying rule attributes and exploring temporal rules are implemented using Virtual Reality Modeling Language v2 mechanisms. Furthermore, to indicate a rule's potential to the user, the statistical interestingness of each rule is evaluated by combining weighted characteristics of the rule and the rule matrix. A constraint-based association rule mining tool that creates the virtual model as its output is presented, together with the most relevant experiences from the development of the tool. The applicability of the overall approach has been verified by using the developed tool for data mining on a hot strip mill of a steel plant.
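As a minimal sketch of tracking how a rule changes over time, the following computes a rule's support and confidence per time window; the transactions and the rule itself are hypothetical stand-ins for mill data.

```python
# Support/confidence of an association rule per time window, as a
# simple way to expose temporal change. Data and rule are hypothetical.
from collections import defaultdict

transactions = [                      # (timestamp, set of items)
    (0, {"high_temp", "thick_slab", "defect"}),
    (1, {"high_temp", "defect"}),
    (2, {"thick_slab"}),
    (10, {"high_temp", "thick_slab"}),
    (11, {"high_temp"}),
    (12, {"high_temp", "thick_slab", "defect"}),
]

def rule_over_time(transactions, lhs, rhs, window=10):
    """Print support/confidence of rule lhs -> rhs per time window."""
    buckets = defaultdict(list)
    for t, items in transactions:
        buckets[t // window].append(items)
    for w in sorted(buckets):
        txs = buckets[w]
        n_lhs = sum(lhs <= items for items in txs)          # lhs holds
        n_both = sum((lhs | rhs) <= items for items in txs) # rule fires
        support = n_both / len(txs)
        confidence = n_both / n_lhs if n_lhs else float("nan")
        print(f"window {w}: support={support:.2f} confidence={confidence:.2f}")

rule_over_time(transactions, lhs={"high_temp"}, rhs={"defect"})
```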
Product configuration is the process of generating a product variant from a previously defined product family model and additional product specifications for this variant. The process of finding and sequencing the relevant operations for manufacturing this product is called process planning. This article combines the two principles in a new concept of process configuration, which solves the process planning task using product configuration methods. The second section develops characteristics for two process configuration concepts: interactive process configuration and automation-based process configuration. Following an overview of the implementation of a process configuration system, the results of a case study in the aluminum rolling industry are presented. The main benefits of the process configuration concept are a reduced knowledge-maintenance effort and increased problem-solving speed.
In equipment monitoring and diagnostics, it is very important to distinguish between a sensor failure and a system failure. In this paper, we develop a comprehensive methodology based on a hybrid system of AI and statistical techniques. The methodology, designed for monitoring complex equipment systems, validates the sensor data, associates a degree of validity with each measurement, isolates faulty sensors, estimates the actual values despite faulty measurements, and detects incipient sensor failures. The methodology consists of four steps: redundancy creation, state prediction, sensor measurement validation and fusion, and fault detection through residue change detection. Through these four steps, we use information from each sensor individually, from the sensor as part of a group of sensors, and from the immediate history of the process being monitored. The advantage of this methodology is that it can detect multiple sensor failures, both abrupt and incipient. It can also detect subtle sensor failures such as drift in calibration and degradation of the sensor. The four-step methodology is applied to data from a gas turbine power plant.
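A minimal sketch of steps two through four, under simplified assumptions (a moving-average predictor, inverse-variance fusion, and a one-sided CUSUM on the residue; all models and thresholds are hypothetical, not the paper's):

```python
# Predict each sensor from recent history, fuse prediction and
# measurement, and flag a fault when the residue mean shifts (CUSUM).
import numpy as np

rng = np.random.default_rng(3)
true = np.full(200, 10.0)
meas = true + rng.normal(0, 0.1, 200)
meas[120:] += np.linspace(0, 2.0, 80)    # incipient drift from t = 120

pred_var, meas_var, k, h = 0.05, 0.01, 0.05, 1.0
cusum = 0.0
for t in range(5, len(meas)):
    pred = meas[t - 5:t].mean()          # state prediction from history
    resid = meas[t] - pred               # residue for change detection
    cusum = max(0.0, cusum + resid - k)  # one-sided CUSUM on the residue
    w = (1 / meas_var) / (1 / meas_var + 1 / pred_var)
    fused = w * meas[t] + (1 - w) * pred # validated/fused estimate
    if cusum > h:
        print(f"drift flagged at t={t}, fused estimate {fused:.2f}")
        break
```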
We present the production versions of two EOQ-type models in heavy traffic. The output is interpreted as random demand and the input as deterministic production plus random returns. In the ON part of the cycle, the inventory content is a reflected Brownian motion, and in the OFF part, it is a Brownian motion with a negative drift. The ON/OFF periods generate an alternating renewal process, and the content-level process is a regenerative process. Two control policies are considered. In one policy, which is natural under conditions of continuous review, production is stopped when the content level in the ON period reaches a predetermined level q. In the other policy, which resembles periodic review, production is stopped when the ON time reaches a predetermined time t0.
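One way to write the two regimes in heavy traffic, with hypothetical production rate p and demand rate d (notation ours, not the paper's):

```latex
\begin{align*}
\text{ON:}\quad  & W(t) = w + (p - d)\,t + \sigma B(t) + L(t),
  && L \text{ increases only when } W = 0,\\
\text{OFF:}\quad & W(t) = w - d\,t + \sigma B(t),
  && d > 0 \text{ gives the negative drift}.
\end{align*}
```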
Product design and diagnosis are, today, worlds apart. Despite strong areas of overlap at the ontological level, traditional design process theory and practice do not recognize diagnosis as a part of the modeling process chain; neither do diagnosis knowledge engineering processes reference design modeling tasks as a source of knowledge acquisition. This paper presents the DAEDALUS knowledge engineering framework as a methodology for integrating design and diagnosis tasks, models, and modeling environments around a common Domain Ontology and Product Models Library. The approach organizes domain knowledge around the execution of a set of tasks in an enterprise product engineering task workflow. Each task employs a Task Application, which uses a customized subset of the Domain Ontology (the Task Ontology) to construct a graphical Product Model. The Ontology is used to populate the models with relevant concepts (variables) and relations (relationships), thus serving as a concept-dictionary-style mechanism for knowledge sharing and reuse across the different Task Applications. For inferencing, each task employs a local Problem-Solving Method (PSM) and a Model-PSM Mapping, which operate on the local Product Model to produce reasoning outcomes. The use of a common Domain Ontology across tasks and models facilitates semantic consistency of variables and relations in constructing Bayesian networks for design and diagnosis.
The approach is motivated by inefficiencies encountered in cleanly exchanging and integrating design FMEA and diagnosis models. Demonstration software under development is intended to illustrate how the DAEDALUS framework can be applied to knowledge sharing and exchange between Bayesian network-based design FMEA and diagnosis modeling tasks. Anticipated limitations of the DAEDALUS methodology are discussed, as is its relationship to Tomiyama's Knowledge Intensive Engineering Framework (KIEF). DAEDALUS is grounded in formal knowledge engineering principles and methodologies established during the past decade. Finally, the framework is presented as one possible approach for improved integration of generalized design and diagnostic modeling and knowledge exchange.
This article addresses computational synthesis systems that attempt to find a structural description that matches a set of initial functional requirements and design constraints with a finite sequence of production rules. It has previously been shown by the author that it is computationally difficult to identify a sequence of production rules that can lead to a satisficing design solution. As a result, computational synthesis, particularly with large volumes of selection information, requires effective design search procedures. Many computational synthesis systems utilize transformational search strategies. However, such search strategies are inefficient due to the combinatorial nature of the problem. In this article, the problem is approached using a completely different paradigm. The new approach encodes a design search problem as a Boolean (propositional) satisfiability problem, such that from every satisfying Boolean-valued truth assignment to the corresponding Boolean expression we can efficiently derive a solution to the original synthesis problem (along with its finite sequence of production rules). A major advantage of the proposed approach is the possibility of utilizing recently developed, powerful randomized search algorithms for solving Boolean satisfiability problems, which considerably outperform the most widely used satisfiability algorithms. The new design-as-satisfiability technique provides a flexible framework for stating a variety of design constraints, and also properly represents the theory behind modern constraint-based design systems.
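As a minimal sketch of the encoding idea on a toy problem: rule-selection choices become Boolean variables, design constraints become CNF clauses, and any satisfying assignment yields a feasible rule set. The solver here is a brute-force enumerator for self-containment; a real system would use a randomized SAT algorithm such as WalkSAT. The rules and constraints are hypothetical.

```python
# Design search encoded as Boolean satisfiability on a toy problem.
from itertools import product

VARS = ["r1", "r2", "r3"]          # "rule i is applied in the sequence"

# CNF clauses as lists of signed literals: ("r1", True) means r1 holds.
CNF = [
    [("r1", True), ("r2", True)],   # requirement needs r1 or r2
    [("r1", False), ("r3", True)],  # r1 only works if r3 prepares it
    [("r2", False), ("r3", False)], # r2 and r3 are incompatible
]

def satisfying_assignments(cnf, vars_):
    """Brute-force enumeration of all models of the CNF formula."""
    for values in product([False, True], repeat=len(vars_)):
        a = dict(zip(vars_, values))
        if all(any(a[v] == want for v, want in clause) for clause in cnf):
            yield a

for a in satisfying_assignments(CNF, VARS):
    print("feasible rule set:", [v for v in VARS if a[v]])
```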
For a two-node tandem fluid model with gradual input, we compute the joint steady-state buffer-content distribution. Our proof exploits martingale methods developed by Kella and Whitt. For the case of finite buffers, we use an insightful sample-path argument to extend an earlier proportionality result of Zwart to the network case.
The Waxman graphs are frequently chosen in simulations as topologies resembling communications networks. For the Waxman graphs, we present exact analytic expressions for the link density (average number of links) and the average number of paths between two nodes. These results show the similarity of Waxman graphs to the simpler class of random graphs Gp(N). The first result enables one to compare simulations performed on the Waxman graph with those on other graphs with the same link density. The average number of paths in Waxman graphs can be useful for dimensioning (or estimating) routing paths in networks. Although higher-order moments of the number of paths in Gp(N) are difficult to compute analytically, the probability distribution of the hopcount of a path between two arbitrary nodes seems well approximated by a Poisson law.
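For readers wishing to reproduce such simulations, a minimal sketch of Waxman graph generation follows: nodes are placed uniformly in the unit square, and each pair is linked with probability β exp(-d/(αL)), where L is the maximum inter-node distance. Conventions for α and β vary in the literature, and the parameter values here are illustrative.

```python
# Waxman random graph sketch: uniform node placement, distance-
# dependent link probability beta * exp(-d / (alpha * L)).
import itertools
import math
import random

def waxman(n, alpha=0.4, beta=0.4, seed=0):
    rnd = random.Random(seed)
    pts = [(rnd.random(), rnd.random()) for _ in range(n)]
    L = max(math.dist(p, q) for p, q in itertools.combinations(pts, 2))
    edges = [
        (i, j)
        for i, j in itertools.combinations(range(n), 2)
        if rnd.random() < beta * math.exp(-math.dist(pts[i], pts[j]) / (alpha * L))
    ]
    return pts, edges

pts, edges = waxman(50)
n = len(pts)
print("link density:", 2 * len(edges) / (n * (n - 1)))  # fraction of pairs linked
```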