Let X1, …, Xn be independent exponential random variables with respective hazard rates λ1, …, λn, and let Y1, …, Yn be independent exponential random variables with common hazard rate λ. Denote by Xn:n, Yn:n and X1:n, Y1:n the corresponding maximum and minimum order statistics. The sample range Xn:n−X1:n is proved to be larger than Yn:n−Y1:n in the usual stochastic order if and only if λ satisfies an explicit threshold condition determined by λ1, …, λn. Further, this usual stochastic order is strengthened to the hazard rate order for n=2. However, a counterexample reveals that the ordering can be strengthened neither to the hazard rate order nor to the reversed hazard rate order in the general case. The main result substantially improves related ones obtained by Kochar and Rojo and by Khaledi and Kochar.
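As a quick illustrative check of the kind of comparison the abstract describes (a Monte Carlo sketch, not the paper's proof; the rates 0.5, 1.0, 2.0 below are arbitrary choices), one can estimate the survival functions of the two sample ranges by simulation:

```python
import random

def sample_range(rates, trials=20000, seed=1):
    """Empirical samples of max - min for independent exponentials."""
    rng = random.Random(seed)
    out = []
    for _ in range(trials):
        xs = [rng.expovariate(r) for r in rates]
        out.append(max(xs) - min(xs))
    return out

def survival(samples, t):
    """Empirical P(range > t)."""
    return sum(1 for s in samples if s > t) / len(samples)

het = sample_range([0.5, 1.0, 2.0])  # heterogeneous hazard rates
hom = sample_range([1.0, 1.0, 1.0])  # common hazard rate lambda = 1
# X >=_st Y (usual stochastic order) means P(X > t) >= P(Y > t) for all t,
# so one compares the two empirical survival functions pointwise.
for t in (0.5, 1.0, 2.0):
    print(t, survival(het, t), survival(hom, t))
```

The comparison is only suggestive, of course: a simulation can refute but never establish a stochastic ordering.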
Two critical issues in product design exploration are addressed: the balance between stylistic consistency and innovation, and the control of the design process under a great diversity of requirements. To address these issues, a view of product design exploration is first established in which design exploration is characterized not only as a problem-solving activity but also as a problem-finding activity. A computational framework is developed based on this view; it embodies the belief that the two activities go hand in hand in accomplishing design tasks in an interactive design environment. The framework integrates two key computational techniques, shape grammars and evolutionary computing, to address the two critical issues above. For stylistic consistency, this paper focuses on computational techniques for balancing the conflict between stylistic consistency and innovation with shape grammars. For control of the design process, the development of the framework emphasizes the practical concerns of monitoring the process through the various activities, from preparatory work to the implementation of shape grammars. To evaluate the effectiveness of the framework, experiments were set up to reflect the practical situations that designers have to deal with. The system generates a number of models from scratch, together with numerical analyses that designers can evaluate effectively. This reduces designers' time and allows them to concentrate on higher-level design activities such as evaluating designs and making design decisions.
Traditional failure modes and effects analysis (FMEA) methods lack sufficient semantics and structure to provide full traceability between the failure modes and the effects of a complex system. To overcome this limitation, this paper proposes a formal failure knowledge representation model combined with the structural decomposition of the complex system. The model defines failure modes as inherent properties of the physical entities at different hierarchical levels, and employs the individual color, unified color, and Boolean matrix of polychromatic sets to represent the failure modes, their interrelationships, and their relations to the physical system. This method is a structure-based modeling technique that provides a simple yet comprehensive framework for organizing the failure modes and their causes and effects more systematically and completely. Using an iterative search process operated on the reasoning matrices, the end effects on the entire system can be derived automatically, which allows both single and multiple failures to be considered. An example is embedded in the description of the methodology for better understanding. Because of the powerful mathematical modeling capability of polychromatic sets, the approach presented in this paper makes significant progress in FMEA formalization.
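In its simplest form, an iterative search over Boolean reasoning matrices amounts to computing a transitive closure of a direct cause-effect matrix. The following minimal sketch (not the polychromatic-sets formalism itself; the three-mode example is made up) illustrates how end effects can be derived automatically:

```python
def end_effects(cause_matrix):
    """Transitive closure (Warshall's algorithm) of a Boolean matrix.
    cause_matrix[i][j] = True means failure mode i directly causes mode j.
    Returns reach, where reach[i][j] = True iff i eventually leads to j."""
    n = len(cause_matrix)
    reach = [row[:] for row in cause_matrix]
    for k in range(n):
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    return reach

# Toy chain: mode 0 causes mode 1, mode 1 causes mode 2 (the end effect).
M = [[False, True,  False],
     [False, False, True],
     [False, False, False]]
R = end_effects(M)
print(R[0][2])  # True: mode 0 propagates to end effect 2
```

Multiple simultaneous failures correspond to taking the union of the reachability rows of the failed modes.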
We study the performance of an M/DK/1 queue under the Fair Sojourn Protocol (FSP). We use a Markov process with mixed real- and measure-valued states to characterize the queuing process of the system and of its related processor-sharing queue. The infinitesimal generator of the Markov process is derived. By classifying customers according to their service times, using techniques from multiclass queuing systems, and borrowing recently developed heavy-traffic results for processor-sharing queues, we derive approximations for the average waiting time of jobs.
We consider the problem of routing calls dynamically in a multiskill call center. Calls from different skill classes are offered to the call center according to a Poisson process. The agents in the center are grouped according to their heterogeneous skill sets that determine the classes of calls they can serve. Each agent group serves calls with independent exponentially distributed service times. We consider two scenarios. The first scenario deals with a call center with no buffers in the system, so that every arriving call either has to be routed immediately or has to be blocked and is lost. The objective in the system is to minimize the average number of blocked calls. The second scenario deals with call centers consisting of only agents that have one skill and fully cross-trained agents, where calls are pooled in common queues. The objective in this system is to minimize the average number of calls in the system. We obtain nearly optimal dynamic routing policies that are scalable with the problem instance and can be computed online. The algorithm is based on one-step policy improvement using the relative value functions of simpler queuing systems. Numerical experiments demonstrate the good performance of the routing policies. Finally, we discuss how the algorithm can be used to handle more general cases with the techniques described in this article.
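For background on the first scenario's objective of minimizing blocked calls: the blocking probability of a single loss group of servers is given by the classical Erlang-B formula. The following standard recursion (illustrative background only, not the article's policy-improvement algorithm) computes it stably:

```python
def erlang_b(servers, offered_load):
    """Erlang-B blocking probability via the standard stable recursion:
    B(0) = 1;  B(k) = a*B(k-1) / (k + a*B(k-1)),  a = offered load."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# 5 agents offered 3 Erlangs of traffic:
print(round(erlang_b(5, 3.0), 4))  # 0.1101
```

Relative value functions of such single-skill loss systems are the building blocks that one-step policy improvement combines into a routing policy for the multiskill center.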
We consider a Markovian stochastic fluid flow model in which the fluid level has a lower bound of zero and a positive upper bound. The behavior of the process at the boundaries is modeled by parameters that differ from those in the interior and allow a range of desired boundary behaviors to be modeled; we illustrate this with examples. We establish formulas for several time-dependent performance measures of significance to a number of applied probability models. These results are achieved with techniques applied directly within the fluid flow model, which leads to useful physical interpretations; these are presented.
Consider a three-person game that proceeds in stages. The state of the game is given by the integer numbers of chips that the players hold, say x=(x1, x2, x3) with M=x1+x2+x3 fixed. At each stage, player i places ai chips in the pot, where ai is an integer between 1 and xi. (Player i has already been eliminated from the game if xi=0.) The winner of the pot is then immediately chosen in such a way that player i wins with probability proportional to wiai, for each i with xi>0. The idea is that if player i bets more, then he is more likely to win, but this is modified by weights that parameterize the players’ abilities.
Each player is trying to maximize his probability of taking all the chips (i.e., of reaching xi=M). In the two-person game, it is known that a Nash equilibrium is for each player to adopt the strategy σ of playing timidly (ai=1) or boldly (ai=xi) according to whether the game is in his favor or not (assuming the other also plays σ). In this article, we investigate whether this is also the form of a Nash equilibrium in the three-person game when the weights are of the form (w1, w2, w3)=(w, w, 1−2w) with 0<w<1/2. It turns out that this is true if w<1/3, but not true if w>1/3 and M≥8.
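The pot mechanics are easy to simulate. The sketch below illustrates only the rules (two players; the timid and bold strategies are fixed rather than state-dependent, unlike the equilibrium strategy σ). Note that with equal weights each player's chip count is a martingale regardless of strategy, so a player starting with half the chips should win about half the time:

```python
import random

def play(x, w, strategies, rng):
    """Simulate one game; strategies[i](x, i) returns a bet in [1, x[i]].
    The pot winner is chosen with probability proportional to w_i * a_i."""
    x = list(x)
    while max(x) < sum(x):  # stop once one player holds all M chips
        bets = [strategies[i](x, i) if x[i] > 0 else 0 for i in range(len(x))]
        pot = sum(bets)
        weights = [w[i] * bets[i] for i in range(len(x))]
        winner = rng.choices(range(len(x)), weights=weights)[0]
        for i in range(len(x)):
            x[i] -= bets[i]
        x[winner] += pot
    return x.index(max(x))

def timid(x, i):  # always bet one chip
    return 1

def bold(x, i):   # bet the whole stack
    return x[i]

rng = random.Random(0)
wins = sum(play((4, 4), (0.5, 0.5), [timid, bold], rng) == 0
           for _ in range(5000))
print(wins / 5000)  # near 1/2 by the martingale argument
```

Replacing the fixed strategies with the favor-dependent σ, and moving to three players with weights (w, w, 1−2w), gives a direct way to explore the regimes w<1/3 and w>1/3 numerically.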
A novel 4-dof 2SPS+SPR parallel kinematic machine is proposed, and its kinematics, statics, and workspace are studied systematically. First, the geometric constraint equations are established, and the inverse displacement kinematics is analyzed. Second, the poses of the active/constrained forces are determined, and the formulae for solving inverse/forward velocities are derived. Third, the formulae for solving inverse/forward accelerations are derived. Finally, a workspace is constructed and its active/constrained forces are solved. The analytical results are verified to be consistent with those obtained from a simulation mechanism.
This paper presents a novel model of snake-like robots based on a spatial linkage mechanism. Reasonable structural parameters of the mechanism are obtained by performing a kinematic simulation. The kinematics of the spatial linkage mechanism is then developed, and the motor angles of the robot for performing lateral undulation are analyzed based on the Serpenoid curve. The torque of the servomotors at each moment is also obtained. The experiments detailed in this paper confirm that the robot is able to realize several motion modes, including lateral undulation, left and right turning motions, and uplifting of the head.
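Lateral undulation from the Serpenoid curve is commonly discretized by giving joint i the relative angle φi(t) = α sin(ωt + iβ). A minimal sketch of that discretization follows (the parameter values are arbitrary, not the paper's):

```python
import math

def serpenoid_angles(n_joints, t, alpha=0.5, omega=2.0, beta=0.6):
    """Joint angles (radians) for lateral undulation at time t, using the
    standard serpenoid discretization phi_i(t) = alpha*sin(omega*t + i*beta)
    for i = 0..n_joints-1. alpha sets amplitude, beta the phase lag
    between consecutive joints, omega the temporal frequency."""
    return [alpha * math.sin(omega * t + i * beta) for i in range(n_joints)]

angles = serpenoid_angles(8, t=0.0)
print([round(a, 3) for a in angles])
```

Sampling this function over time yields the motor angle trajectory; turning is typically added as a constant bias term on each joint.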
In this paper we present a language for programming with higher-order modules. The language HML is based on Standard ML in that it provides structures, signatures and functors. In HML, functors can be declared inside structures and specified inside signatures; this is not possible in Standard ML. We present an operational semantics for the static semantics of HML signature expressions, with particular emphasis on the handling of sharing. As a justification for the semantics, we prove a theorem about the existence of principal signatures. This result is closely related to the existence of principal type schemes for functional programming languages with polymorphism.
Interpreting η-conversion as an expansion rule in the simply-typed λ-calculus maintains the confluence of reduction in a richer type structure. This use of expansions is supported by categorical models of reduction, where β-contraction, as the local counit, and η-expansion, as the local unit, are linked by local triangle laws. The latter form reduction loops, but strong normalization (to the long βη-normal forms) can be recovered by ‘cutting’ the loops.
In my previous Functional Pearls article (Bird, 1992), I proved a theorem giving conditions under which an optimization problem could be implemented by a greedy algorithm. A greedy algorithm is one that picks a ‘best’ element at each stage. Here, we return to this theorem and extend it in various ways. We then use the theory to solve an intriguing problem about unravelling sequences into a smallest number of ascending subsequences.
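A concrete greedy solution to the unravelling problem (an illustrative sketch, not taken from the article, and reading "ascending" as non-decreasing): extend the subsequence whose last element is the largest one not exceeding the next input, and start a new subsequence only when none qualifies.

```python
def unravel(xs):
    """Greedily cover xs with ascending (non-decreasing) subsequences.
    For each element, extend the subsequence whose last element is the
    largest one still <= x; otherwise start a new subsequence. A standard
    greedy argument shows this minimizes the number of subsequences."""
    subs = []
    for x in xs:
        best = None
        for s in subs:
            if s[-1] <= x and (best is None or s[-1] > best[-1]):
                best = s
        if best is None:
            subs.append([x])
        else:
            best.append(x)
    return subs

print(unravel([3, 1, 2, 4]))  # [[3, 4], [1, 2]]
```

By a Dilworth-style duality, the minimum number of ascending subsequences equals the length of the longest strictly descending subsequence, which gives an easy optimality check.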
The tree-drawing problem is to produce a ‘tidy’ mapping from elements of a tree to points in the plane. In this paper, we derive an efficient algorithm for producing tidy drawings of trees. The specification, the starting point for the derivations, consists of a collection of intuitively appealing criteria satisfied by tidy drawings. The derivation shows constructively that these criteria completely determine the drawing. Indeed, the criteria completely determine a simple but inefficient algorithm for drawing a tree, which can be transformed into an efficient algorithm using just standard techniques and a small number of inventive steps.
The algorithm consists of an upwards accumulation followed by a downwards accumulation on the tree, and is further evidence of the utility of these two higher-order tree operations.
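The two accumulations can be sketched generically: an upwards accumulation relabels each node from its children's results (bottom-up), and a downwards accumulation relabels each node from the path back to the root (top-down). This minimal sketch illustrates only the two operations, not the tidy-drawing algorithm itself:

```python
class Tree:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def up_accumulate(t, f):
    """Upwards accumulation: new label = f(old label, child results),
    computed bottom-up (e.g. subtree sizes, subtree widths)."""
    kids = [up_accumulate(c, f) for c in t.children]
    return Tree(f(t.label, [k.label for k in kids]), kids)

def down_accumulate(t, g, acc):
    """Downwards accumulation: new label = fold of labels on the root
    path (e.g. depths, accumulated horizontal offsets)."""
    here = g(acc, t.label)
    return Tree(here, [down_accumulate(c, g, here) for c in t.children])

t = Tree(1, [Tree(2), Tree(3, [Tree(4)])])
sizes = up_accumulate(t, lambda _, rs: 1 + sum(rs))      # subtree sizes
depths = down_accumulate(t, lambda d, _: d + 1, 0)       # node depths
print(sizes.label, [c.label for c in sizes.children])    # 4 [1, 2]
print(depths.label, [c.label for c in depths.children])  # 1 [2, 2]
```

In the drawing algorithm the upwards pass computes relative positions of subtrees and the downwards pass converts them to absolute coordinates.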
An enumerator is a closed λ-term E such that for every M ∈ Λ0 there is an n ∈ ℕ with E⌜n⌝ =β M. Here Λ0 is the set of closed λ-terms, ℕ is the set of natural numbers, and the ⌜n⌝ are the Church numerals λfx.f^n x. Such an E is called reducing if, moreover, E⌜n⌝ can be chosen to β-reduce to M. An ingenious recursion-theoretic proof by Statman will be presented, showing that every enumerator is reducing. I do not know any direct proof.
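Church numerals are easy to model concretely; the sketch below encodes ⌜n⌝ as a Python closure and recovers n by counting applications (an illustration of the notation only, not of Statman's proof):

```python
def church(n):
    """The Church numeral <n> = (lambda f x. f^n x), as a closure."""
    def numeral(f):
        def apply(x):
            for _ in range(n):
                x = f(x)   # apply f exactly n times
            return x
        return apply
    return numeral

def unchurch(c):
    """Recover the natural number by counting applications of successor."""
    return c(lambda k: k + 1)(0)

print(unchurch(church(3)))  # 3
# Using numerals as iterators: compute 3 + 2 by iterating successor.
succ = lambda x: x + 1
three_plus_two = church(3)(succ)(church(2)(succ)(0))
print(three_plus_two)       # 5
```
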
A new program transformation method is presented. It is a further refinement of supercompilation in which the supercompiler is not applied directly to the function to be transformed, but to a metafunction, namely an interpreter that computes this function using its definition and an abstract (i.e., variable-containing) input. It is shown that this method makes possible transformations that direct application of the supercompiler cannot perform. Examples include merging iterative loops, function inversion, and the transformation of deterministic algorithms into non-deterministic ones, and vice versa.
This article describes theoretical and practical aspects of an implemented self-applicable partial evaluator for the untyped lambda-calculus with constants and a fixed point operator. To the best of our knowledge, it is the first partial evaluator that is simultaneously higher-order, non-trivial, and self-applicable.
Partial evaluation produces a residual program from a source program and some of its input data. When given the remaining input data the residual program yields the same result that the source program would when given all its input data. Our partial evaluator produces a residual lambda-expression given a source lambda-expression and the values of some of its free variables. By self-application, the partial evaluator can be used to compile and to generate stand-alone compilers from a denotational or interpretive specification of a programming language.
An essential component of our self-applicable partial evaluator is the use of explicit binding-time information. We use this to annotate the source program, marking as residual the parts for which residual code is to be generated and marking as eliminable the parts that can be evaluated using only the data known during partial evaluation. We give a simple criterion, well-annotatedness, that can be used to check that the partial evaluator can handle the annotated higher-order programs without committing errors.
Our partial evaluator is simple, is implemented in a side-effect free subset of Scheme, and has been used to compile and to generate compilers and a compiler generator. In this article we examine two machine-generated compilers and find that their structures are surprisingly natural.
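The residual/eliminable distinction can be illustrated with a toy specializer (far simpler than, and unrelated to, the self-applicable evaluator described above; all names below are invented for the example). Specializing power with respect to a known exponent unfolds the recursion on n now (eliminable) and keeps the multiplications by the unknown x in the generated code (residual):

```python
def power(x, n):
    """The source program: both inputs dynamic."""
    return 1 if n == 0 else x * power(x, n - 1)

def specialize_power(n):
    """Partially evaluate power with n known: the recursion on n is
    eliminable (unfolded here), while multiplication by the unknown x
    is residual (emitted into the generated program)."""
    body = "1"
    for _ in range(n):
        body = f"x * ({body})"
    code = f"lambda x: {body}"
    return eval(code), code

cube, residual = specialize_power(3)
print(residual)  # lambda x: x * (x * (x * (1)))
print(cube(2))   # 8
```

The residual program agrees with the source program on all remaining inputs, which is exactly the correctness condition for partial evaluation stated above.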