An operational profile describes, in a probabilistic way, how software is utilized by its users. It makes the testing procedure more realistic and efficient. We consider a model in which the software is tested sequentially in all of the operations that it is designed to perform. The stochastic and deterministic model parameters involving costs and failures all depend on the operations. In particular, the failure process generated by each fault is quite general, and debugging is not necessarily perfect. Our aim is to find the optimal testing durations in all of the operations in order to minimize the total expected cost. This problem leads to an interesting nonlinear programming formulation that can be solved using well-known procedures in convex optimization.
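The per-operation trade-off described above can be illustrated with a toy cost model. This is not the paper's model: purely for illustration, we assume each operation has a linear testing cost c·t plus an expected field-failure cost d·exp(−λt), which is convex in t and admits a closed-form minimizer; the parameter names are hypothetical stand-ins.

```python
import math

def optimal_test_time(c, d, lam):
    """Minimize f(t) = c*t + d*exp(-lam*t) over t >= 0.
    c: testing cost per unit time, d: expected field-failure cost if
    untested, lam: failure-exposure rate (all illustrative parameters).
    Setting f'(t) = c - d*lam*exp(-lam*t) = 0 gives t* = ln(d*lam/c)/lam,
    clipped at 0 when the unconstrained optimum is negative."""
    if d * lam > c:
        return math.log(d * lam / c) / lam
    return 0.0

def total_expected_cost(profile, costs):
    """profile: operational-profile probabilities p_i;
    costs: per-operation (c_i, d_i, lam_i) triples.
    Returns the optimal testing durations and the total expected cost."""
    plan, total = [], 0.0
    for p, (c, d, lam) in zip(profile, costs):
        t = optimal_test_time(c, d, lam)
        plan.append(t)
        total += p * (c * t + d * math.exp(-lam * t))
    return plan, total
```

Because each per-operation term is convex and separable here, the overall program decomposes; the paper's formulation is more general and needs numerical convex optimization.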
For every problem mentioned by crew members in an aircraft log book, an associated repair action note is entered in the same log book by a maintenance technician after the problem has been handled. These hand-written repair notes, subsequently transcribed into a database, give an account of the actions undertaken by the technicians to fix the problems. Written in a free-text format with peculiar linguistic characteristics, including many arbitrary abbreviations and missing auxiliaries, they contain valuable information that can be used for decision support methods such as case-based reasoning. We use natural language techniques in our information extraction system to analyze the structure and contents of these notes in order to determine the pieces of equipment involved in a repair and what was done to them. Lexical information and domain knowledge are extracted from an electronic version of the illustrated parts catalog for the particular airplane, and are used at different stages of the process, from the morpholexical analysis to the evaluation of the semantic expression generated by the syntactical analyzer.
We use a Poisson imbedding technique to investigate the possibilities of generalizing some pathwise and stochastic monotonicity results from M/M/k queues to systems with monotone failure rate service time distributions. A dichotomy between the decreasing failure rate (DFR) and increasing failure rate (IFR) cases is revealed: in the DFR case, we show that the number of customers in the system is stochastically increasing if the system is idle at time 0, whereas in the IFR case, there is an alternating character that yields a bound between two identical systems with different initial conditions. We also explore how our methods work in a comparison between systems with different numbers of stations but the same maximal capacity.
Analysis of the assembly properties of a product is needed during the initial design stage in order to identify potential assembly problems, which affect product performance in later stages of the life cycle. Assemblability analysis and evaluation play a key role in assembly design, assembly operation analysis, and assembly planning. This paper develops a novel approach to assemblability and assembly sequence analysis and evaluation using concepts from fuzzy set theory and neuro-fuzzy integration. Assemblability is described by assembly-operation difficulty, which can be represented by a fuzzy number between 0 and 1; assemblability evaluation is therefore a fuzzy evaluation of assembly difficulty. The evaluation structure covers not only the geometric and physical characteristics of the assembled parts but also the assembly operation data needed to assemble them. The weight of each assemblability factor can be adjusted to match real assembly environments based on expert advice, so the approach is flexible enough to be used with various assembly methods and in different environments. It can be used in a knowledge-based design-for-assembly expert system with learning ability. Through integration with a CAD system, the developed system can effectively incorporate concurrent engineering knowledge into the preliminary design process, providing users with suggestions for improving a design and helping them obtain better design ideas. Applications in assembly design and planning show that the proposed approach and system are feasible.
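The idea of aggregating weighted difficulty factors into a single score in [0, 1] can be sketched very roughly as follows. This is a crude stand-in for the paper's neuro-fuzzy evaluation: the factor names and weights are invented for illustration, and the learned weighting of the real system is replaced by simple normalisation.

```python
def assembly_difficulty(factors, weights):
    """Weighted aggregation of per-factor difficulty scores in [0, 1].
    factors: {factor_name: difficulty score in [0, 1]} (illustrative names),
    weights: {factor_name: nonnegative weight}, adjustable to match a
    given assembly environment (e.g. from expert advice)."""
    total_w = sum(weights[k] for k in factors)
    return sum(weights[k] * score for k, score in factors.items()) / total_w
```

A real neuro-fuzzy system would learn these weights and use fuzzy membership functions rather than crisp scores; the normalised weighted mean only conveys the shape of the evaluation.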
We present methods and tools from the Soft Computing (SC) domain, which are used within a diagnostics and prognostics framework to accommodate the imprecision of real systems. SC is an association of computing methodologies that includes as its principal members fuzzy, neural, evolutionary, and probabilistic computing. These methodologies enable us to deal with the imprecise and uncertain data and incomplete domain knowledge typically encountered in real-world applications. We outline the advantages and disadvantages of these methodologies and show how they can be combined to create synergistic hybrid SC systems. We conclude the paper with a description of successful SC case-study applications to equipment diagnostics.
Long-term-care (LTC) insurance contracts provide the insured with different benefits for several nursing care levels, for a limited number of benefit eligibility periods. A common assumption in pricing these LTC contracts is that the insured will exercise the right to claim benefits as soon as the eligibility conditions are satisfied. This assumption, however, may contradict the insured's optimization, as it might be worthwhile not to claim when in low care levels and, by doing so, save the option of claiming the higher (more expensive) care levels in the future. We term this option of the insured the deferral option. The consequence of traditional pricing (i.e., ignoring the deferral option) is unexpected losses to the insurer. The factors affecting the deferral option's value are the risk of death, the discount factor, the benefit levels of the different care levels, and the transition probabilities between the different care levels.
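The deferral option can be made concrete with a deliberately simplified two-level model, not the paper's: an insured at the low care level either claims a low-benefit annuity now or defers, hoping to transition to the high care level (with a hypothetical per-period probability, conditional on survival). All parameters below are illustrative stand-ins for the factors the abstract lists (death risk, discount factor, benefit levels, transition probabilities).

```python
def annuity(b, n, v, surv):
    # Present value of benefit b paid for n eligibility periods while
    # alive: per-period discount factor v, survival probability surv.
    return sum(b * (v * surv) ** k for k in range(n))

def deferral_gain(b_low, b_high, n, v, surv, q_up, horizon):
    """Value of deferring a low-level claim for up to `horizon` periods
    versus claiming immediately, in a toy two-care-level model.
    q_up: per-period probability (conditional on survival) of moving to
    the high care level; all parameters are illustrative."""
    claim_now = annuity(b_low, n, v, surv)

    def defer(t):
        if t == horizon:
            return claim_now  # stop waiting: claim at the low level
        cont = q_up * annuity(b_high, n, v, surv) + (1 - q_up) * defer(t + 1)
        return max(claim_now, v * surv * cont)

    return defer(0) - claim_now
```

A positive gain means the insured is better off deferring, which is exactly the behaviour that causes losses under traditional pricing; with no chance of reaching the higher level (q_up = 0), deferral is worthless.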
We consider a batch scheduling problem in which the processing time of a batch of jobs equals the maximum of the processing times of all jobs in the batch. This is the case, for example, for burn-in operations in semiconductor manufacturing and other testing operations. Processing times are assumed to be random, and we consider minimizing the makespan and the flow time. The problem is much more difficult than the corresponding deterministic problem, and the optimal policy may have many counterintuitive properties. We prove various structural properties of the optimal policy and use these to develop a polynomial-time algorithm to compute the optimal policy.
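The batching rule described above (a batch takes as long as its longest job) is easy to state in code. The sketch below is for the deterministic special case only, using a simple longest-first grouping heuristic; it is not the paper's optimal policy, which handles random processing times and is computed by their polynomial-time algorithm.

```python
def makespan(batches):
    # Burn-in rule: each batch's processing time is the maximum of
    # the processing times of the jobs it contains.
    return sum(max(batch) for batch in batches)

def longest_first_batches(times, capacity):
    """Group jobs into full batches in longest-first order, a natural
    heuristic for the deterministic problem: long jobs share a batch,
    so their processing times overlap instead of adding up."""
    s = sorted(times, reverse=True)
    return [s[i:i + capacity] for i in range(0, len(s), capacity)]
```

For example, with jobs [7, 3, 5, 2, 6, 1] and batch capacity 3, longest-first batching gives {7, 6, 5} and {3, 2, 1} with makespan 7 + 3 = 10, whereas batching in arrival order gives 7 + 6 = 13.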
The paper describes the task of performing efficient decision-theoretic troubleshooting of electromechanical devices. In general, this task is NP-complete, but under fairly strict assumptions a greedy approach will yield an optimal sequence of actions, as discussed in the paper. This set of assumptions is weaker than the set proposed by Heckerman et al. (1995). However, the printing-system domain, which motivated the research and which is described in detail in the paper, does not meet the requirements for the greedy approach, and a heuristic method is used instead. The method takes the value of identifying the fault into account and also performs a partial two-step look-ahead analysis. We compare the results of the heuristic method with optimal sequences of actions and find only minor differences between the two.
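Under the classical single-fault assumptions (one fault present, each action fixes it with a known probability at a known cost, costs independent of order), the greedy rule orders repair actions by descending probability-to-cost ratio. The sketch below illustrates that rule; it is not the paper's two-step look-ahead heuristic, and the action data are invented.

```python
def greedy_sequence(actions):
    """actions: list of (name, p_fix, cost) triples (illustrative data).
    Under the single-fault assumptions, sorting by p/c descending
    yields an optimal troubleshooting sequence."""
    return sorted(actions, key=lambda a: a[1] / a[2], reverse=True)

def expected_cost(seq):
    # Expected cost of performing actions in order until the fault is
    # fixed; assumes a single fault whose p's sum to 1 over the sequence.
    total, p_unfixed = 0.0, 1.0
    for name, p, c in seq:
        total += p_unfixed * c   # pay for this action only if still broken
        p_unfixed -= p
    return total
```

For equal-cost actions the rule reduces to trying the most likely fix first, which matches intuition; the paper's heuristic goes further by valuing fault identification and looking two steps ahead.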
This paper presents STAL, a variant of Typed Assembly Language with constructs and types to support a limited form of stack allocation. As with other statically-typed low-level languages, the type system of STAL ensures that a wide class of errors cannot occur at run time, and therefore the language can be adapted for use in certifying compilers where security is a concern. Like the Java Virtual Machine Language (JVML), STAL supports stack allocation of local variables and procedure activation records, but unlike the JVML, STAL does not pre-suppose fixed notions of procedures, exceptions, or calling conventions. Rather, compiler writers can choose encodings for these high-level constructs using the more primitive RISC-like mechanisms of STAL. Consequently, some important optimizations that are impossible to perform within the JVML, such as tail call elimination or callee-saves registers, can be easily expressed within STAL.
We introduce Horn linear logic as a comprehensive logical system capable of handling the typical AI problem of making a plan of the actions to be performed by a robot so that it can reach one of a set of final situations from a given initial situation. In contrast to the undecidability of propositional Horn linear logic, the planning problem is proved to be decidable for a reasonably wide class of natural robot systems.
The planning problem is proved to be EXPTIME-complete for robot systems that allow actions with non-deterministic effects. Fixing a finite signature, that is, a finite set of predicates and their finite domains, we obtain a polynomial-time procedure for making plans for the robot system over this signature.
The planning complexity is reduced to PSPACE for the robot systems with only pure deterministic actions.
As honest numerical parameters in our algorithms we use the length of the description of a planning task ‘from W to Z˜’ and the Kolmogorov descriptive complexity of AxT, the set of possible actions.
We introduce a language based upon lambda calculus with products, coproducts and strictly positive inductive types that allows the definition of recursive terms. We present an implementation (foetus) of a syntactic check that ensures that all such terms are structurally recursive, i.e. that recursive calls appear only with arguments structurally smaller than the input parameters of the terms considered. To ensure the correctness of the termination checker, we show that all structurally recursive terms are normalizing with respect to a given operational semantics. To this end, we define a semantics on all types and a structural ordering on the values in this semantics, and prove that all values are accessible with respect to this ordering. Finally, we point out how to carry out this proof predicatively using set-based operators.
We define the class of divisibility monoids that arise as quotients of the free monoid Σ* modulo certain equations of the form ab = cd. These form a much larger class than free partially commutative monoids, and we show, under certain assumptions, that the recognizable languages in these divisibility monoids coincide with the c-rational languages. The proofs rely on Ramsey's theorem, distributive lattice theory and Hashiguchi's rank function generalized to these monoids. We obtain Ochmański's theorem on recognizable languages in free partially commutative monoids as a consequence.
Certain ‘Finite Structure Conditions’ on a geometric theory are shown to be sufficient for its classifying topos to be a presheaf topos. The conditions assert that every homomorphism from a finite structure of the theory to a model factors via a finite model, and they hold in cases where the finitely presentable models are all finite.
The conditions are shown to hold for the theory of strongly algebraic (or SFP) information systems and some variants, as well as for some other theories already known to be classified by presheaf toposes.
The work adheres to geometric constructivism throughout, and in consequence provides ‘topical’ categories of domains (internal in the category of toposes and geometric morphisms) with an analogue of Plotkin's double characterization of strongly algebraic domains, by sets of minimal upper bounds and by sequences of finite posets.
This paper concerns the elimination of higher-type quantifiers and gives two theorems. The first theorem shows that quantifiers in formulae of a specific form can be eliminated. The second theorem shows that quantifiers in formulae of a similar form cannot be eliminated, that is, such formulae have no equivalent first-order formula; the proof is based on the Ehrenfeucht game. These theorems are important for the design of an interpreter of a ν act, which is a representation of a mathematical action. Moreover, these theorems hold even if the universe is assumed to be finite.
We show that some natural refinements of the Straubing and Brzozowski hierarchies correspond (via the so-called leaf languages) step by step to similar refinements of the polynomial-time hierarchy. This extends a result of Burtschik and Vollmer on the relationship between the Straubing and the polynomial hierarchies. In particular, this applies to the Boolean hierarchy and the plus-hierarchy.
Word and tree codes are studied in a common framework, that of polypodes, which are sets endowed with a substitution-like operation. Many examples are given and basic properties are examined. The code decomposition theorem is valid in this general setup.
The characteristic parameters Kw and Rw of a word w over a finite alphabet are defined as follows: Kw is the minimal natural number such that w has no repeated suffix of length Kw, and Rw is the minimal natural number such that w has no right special factor of length Rw. In a previous paper, published in this journal, we studied the distributions of these parameters, as well as the distribution of the maximal length of a repetition, among the words of each length on a given alphabet. In this paper we give the exact values of these distributions in a special case; these values give upper bounds on the distributions in the general case. Moreover, we study the most frequent and the average values of the characteristic parameters, and of the maximal length of a repetition, over the set of all words of length n.
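The two parameters can be computed by brute force for small words, assuming the standard definitions: a suffix is repeated if it occurs more than once as a factor of w (counting overlaps), and a factor u is right special if ua and ub are both factors of w for two distinct letters a and b. The following sketch is illustrative, not from the paper.

```python
def occurrences(w, u):
    # Number of (possibly overlapping) occurrences of u as a factor of w.
    return sum(w.startswith(u, i) for i in range(len(w) - len(u) + 1))

def K(w):
    # Kw: minimal k such that the length-k suffix of w occurs only once in w.
    for k in range(1, len(w) + 1):
        if occurrences(w, w[-k:]) == 1:
            return k
    return len(w)

def R(w):
    # Rw: minimal k such that w has no right special factor of length k
    # (u is right special if ua and ub are factors for distinct letters a, b).
    k = 0
    while True:
        special = any(
            len({w[j + k] for j in range(len(w) - k)
                 if w[j:j + k] == w[i:i + k]}) > 1
            for i in range(len(w) - k)
        )
        if not special:
            return k
        k += 1
```

For w = aabab, the suffix b occurs twice and ab occurs twice, but bab occurs once, so Kw = 3; the empty word and a are right special while no factor of length 2 is, so Rw = 2.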
The class of weak parallel machines is interesting because it contains some realistic parallel machine models, especially those suitable for pipelined computations. We prove that a modification of the bulk synchronous parallel (BSP) machine model, called decomposable BSP (dBSP), belongs to the class of weak parallel machines if restricted properly. We also correct some earlier results about pipelined parallel Turing machines.