One-class classification (OCC) algorithms aim to build classification models when the negative class is either absent, poorly sampled or not well defined. This unique situation constrains the learning of efficient classifiers by defining the class boundary with knowledge of the positive class alone. The OCC problem has been considered and applied under many research themes, such as outlier/novelty detection and concept learning. In this paper, we present a unified view of the general problem of OCC by proposing a taxonomy based on the availability of training data, the algorithms used and the application domains. We further delve into each category of the proposed taxonomy and present a comprehensive literature review of OCC algorithms, techniques and methodologies, with a focus on their significance, limitations and applications. We conclude by discussing some open research problems in the field of OCC and presenting our vision for future research.
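To make the OCC setting concrete, the sketch below trains a one-class support vector machine, one of the algorithm families such surveys cover, on positive examples alone; the data and hyperparameters are illustrative assumptions, not taken from the paper.

```python
# Minimal one-class classification sketch: fit a boundary around the positive
# class only, then flag test points outside it as outliers/novelties.
# Synthetic data and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
positives = rng.normal(loc=0.0, scale=1.0, size=(200, 2))  # only positive class

model = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05)      # nu bounds the outlier fraction
model.fit(positives)

test = np.array([[0.1, -0.2],   # near the training cloud
                 [4.0, 4.0]])   # far from it
print(model.predict(test))      # +1 = positive class, -1 = outlier; expected [ 1 -1]
```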
In proof-theoretic semantics of intuitionistic logic it is well known that elimination rules can be generated from introduction rules in a uniform way. If introduction rules discharge assumptions, the corresponding elimination rule is a rule of higher level, which allows one to discharge rules occurring as assumptions. In some cases, these uniformly generated elimination rules can be equivalently replaced with elimination rules that only discharge formulas or do not discharge any assumption at all—they can be flattened in a terminology proposed by Read. We show by an example from propositional logic that not all introduction rules have flat elimination rules. We translate the general form of flat elimination rules into a formula of second-order propositional logic and demonstrate that our example is not equivalent to any such formula. The proof uses elementary techniques from propositional logic and Kripke semantics.
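For orientation, the textbook case where flattening succeeds is implication: the higher-level elimination rule generated from the introduction rule for → can be replaced by the familiar flat general elimination rule, which discharges only the formula B. The rule below is this standard example; the paper's contribution is an introduction rule for which no such flat replacement exists.

```latex
% Flat (general) elimination rule for implication: given A -> B and A,
% together with a derivation of C from the discharged assumption B,
% conclude C.
\[
  \frac{A \to B \qquad A \qquad
        \begin{array}{c} [B] \\ \vdots \\ C \end{array}}
       {C}
\]
```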
This paper explores the contributions of Answer Set Programming (ASP) to the study of an established theory from the field of Second Language Acquisition: Input Processing. The theory describes default strategies that learners of a second language use in extracting meaning out of a text based on their knowledge of the second language and their background knowledge about the world. We formalized this theory in ASP, and as a result we were able to determine opportunities for refining its natural language description, as well as directions for future theory development. We applied our model to automating the prediction of how learners of English would interpret sentences containing the passive voice. We present a system, PIas, that uses these predictions to assist language instructors in designing teaching materials.
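One of the default strategies the theory describes is the First Noun Principle: learners tend to process the first noun of a sentence as the agent. The Python sketch below is our own illustration of the kind of prediction this yields for English passives; it is not the PIas system, which is implemented in ASP.

```python
# Hypothetical illustration of the First Noun Principle: a learner relying on
# this default strategy assigns the agent role to the first noun, which leads
# to a misinterpretation of English passives. Not the PIas implementation.
def predicted_interpretation(first_noun, verb, second_noun, passive):
    # Default strategy: first noun = agent, regardless of voice.
    learner = {"agent": first_noun, "patient": second_noun, "verb": verb}
    # Target-like parse: in a passive, the by-phrase noun is the agent.
    correct = {"agent": second_noun if passive else first_noun,
               "patient": first_noun if passive else second_noun,
               "verb": verb}
    return learner, correct

learner, correct = predicted_interpretation("the dog", "bite", "the cat", passive=True)
print(learner)   # {'agent': 'the dog', ...}  predicted learner misreading
print(correct)   # {'agent': 'the cat', ...}  target interpretation
```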
I develop a formal logic in which quantified arguments occur in the argument positions of predicates. This logic also incorporates negative predication, anaphora and converse relation terms; that is, additional syntactic features of natural language. In these and other respects, it represents the logic of natural language more adequately than does any version of Frege’s Predicate Calculus. I first introduce the system’s main ideas and illustrate them by means of translations of natural-language sentences. I then develop a formal system built on these principles, the Quantified Argument Calculus or Quarc. I provide a truth-value assignment semantics and a proof system for the Quarc. I next demonstrate the system’s power through a variety of proofs; I prove its soundness; and I comment on its completeness. I then extend the system to modal logic, again providing a proof system and a truth-value assignment semantics. I proceed to show how the Quarc versions of the Barcan formulas, of their converses and of necessary existence come out straightforwardly invalid, which I argue is an advantage of the modal Quarc over modal Predicate Logic as a system intended to capture the logic of natural language.
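For reference, the Barcan formula (BF) and its converse (CBF) as standardly stated in modal Predicate Logic are shown below; the claim above is that the Quarc analogues of both, along with necessary existence, come out invalid.

```latex
% Standard modal Predicate Logic statements (not Quarc notation):
\[
  \mathrm{BF}\colon\ \forall x\,\Box\varphi \to \Box\forall x\,\varphi
  \qquad\qquad
  \mathrm{CBF}\colon\ \Box\forall x\,\varphi \to \forall x\,\Box\varphi
\]
```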
We give a framework for delimited control with multiple prompts, in the style of Parigot's λμ-calculus, built through a series of incremental extensions starting from the pure λ-calculus. Each language inherits the semantics and reduction theory of its parent, giving a systematic way to describe each level of control. For each language of interest, we fully characterize its semantics in terms of a reduction semantics, an operational semantics, a continuation-passing style transform, and an abstract machine. Furthermore, the control operations are expressed in terms of fine-grained primitives that can be used to build well-known, higher-level control operators. To illustrate the expressive power provided by the various languages, we show how other computational effects can be encoded in terms of these control operators.
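As a minimal, language-agnostic illustration of the continuation-passing style transform mentioned above (ours, not the paper's calculus), the Python sketch below shows a direct-style function, its CPS counterpart, and how an explicit continuation acts as a control operator by discarding the rest of the computation.

```python
# Direct style: control flow is implicit in the call stack.
def product(xs):
    result = 1
    for x in xs:
        result *= x
    return result

# Continuation-passing style: every step passes "the rest of the computation"
# as a function k, making control explicit and programmable.
def product_cps(xs, k, abort):
    if not xs:
        return k(1)
    if xs[0] == 0:
        return abort(0)          # jump out: skip all pending multiplications
    return product_cps(xs[1:], lambda r: k(xs[0] * r), abort)

print(product([2, 3, 4]))                                        # 24
print(product_cps([2, 3, 4], k=lambda r: r, abort=lambda r: r))  # 24
print(product_cps([2, 0, 4], k=lambda r: r, abort=lambda r: r))  # 0, via early exit
```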
This paper presents a novel one-degree-of-freedom (1-DOF) single-loop reconfigurable 7R mechanism with multiple operation modes (SLR7RMMOM), composed of seven revolute (R) joints, obtained by adding a revolute joint to the overconstrained Sarrus linkage. The SLR7RMMOM can switch from one operation mode to another without disconnection and reassembly, and is a non-overconstrained mechanism. The algorithm for the inverse kinematics of the serial 6R mechanism using kinematic mapping is adopted for the kinematic analysis of the SLR7RMMOM. First, a numerical method is applied, and an example is given to show that there are 13 sets of solutions for the SLR7RMMOM for each input angle. Among these, nine sets are real solutions, which are verified using both a computer-aided design (CAD) model and a prototype of the mechanism. Then an algebraic approach is also used to analyse the mechanism, and the same results are obtained as with the numerical one. Both the numerical and algebraic approaches show that the SLR7RMMOM has three operation modes: a translational mode and two 1-DOF planar modes. The transitional configurations among the three modes are also identified.
We address the problem of whether the relation induced by a one-rule length-preserving rewrite system is rational. We partially answer a conjecture of Éric Lilin, who conjectured in 1991 that a one-rule length-preserving rewrite system is a rational transduction if and only if the left-hand side u and the right-hand side v of the rule of the system are not quasi-conjugate or are equal; that is, if u and v are distinct, there do not exist words x, y and z such that u = xyz and v = zyx. We prove the ‘only if’ part of this conjecture and identify two non-trivial cases where the ‘if’ part is satisfied.
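To make the definition concrete, a small instance of quasi-conjugacy (our illustration):

```latex
% With x = a, y = b, z = c, the words
\[
  u = xyz = abc, \qquad v = zyx = cba,
\]
% are quasi-conjugate.
```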
Various large-scale suppliers frequently use web-based spot markets, along with discount stores and foreign distributors, for inventory liquidation. Recognizing the potential benefits of such practices, we consider a multi-period integrated replenishment and liquidation problem for a capacitated supplier facing stochastic demand from a spot market along with its primary market (with higher priority contractual customers). In each period, the supplier must decide: (i) how much to produce, and (ii) if there are excess units left after sales to the primary market, how many of these to liquidate. We show that the optimal policy is characterized by two quantities: the critical produce-up-to level and the critical retain-up-to level. We establish bounds for these two quantities. We identify two practical benchmark policies and establish thresholds on the unit revenue earned from the spot market such that one of the two benchmark policies is optimal. We provide closed form expressions to determine these thresholds for the infinite horizon problem under specific conditions on the available production capacity. In general, it is difficult, if not impossible, to theoretically determine these thresholds in closed form for the finite horizon problem. Hence, we report results of a computational study to gain insights regarding the behavior of the optimal policy with respect to the spot market revenue. Our computational results also quantify the benefits of the optimal policy relative to the benchmark policies and examine the effects of demand correlation.
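The structure of the optimal policy can be sketched in a few lines; the parameter names and demand values below are hypothetical, and the paper's bounds and thresholds are not reproduced. In each period the supplier produces up to the critical produce-up-to level S (capacity permitting), serves the primary market first, and then liquidates any excess above the critical retain-up-to level R on the spot market.

```python
# One period of the two-critical-number policy (illustrative only):
# S = critical produce-up-to level, R = critical retain-up-to level.
def one_period(inventory, S, R, capacity, primary_demand, spot_demand):
    production = min(max(S - inventory, 0), capacity)  # produce up to S, capped
    stock = inventory + production
    primary_sales = min(stock, primary_demand)         # priority customers first
    stock -= primary_sales
    excess = max(stock - R, 0)                         # liquidate down to R
    spot_sales = min(excess, spot_demand)
    stock -= spot_sales
    return stock, production, primary_sales, spot_sales

print(one_period(inventory=3, S=10, R=4, capacity=5,
                 primary_demand=2, spot_demand=9))     # -> (4, 5, 2, 2)
```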
We present a new incremental algorithm for minimising deterministic finite automata. It runs in quadratic time for any practical application and may be halted at any point, returning a partially minimised automaton. Hence, the algorithm may be applied to a given automaton at the same time as it is processing a string for acceptance. We also include some experimental comparative results.
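At the heart of incremental minimisation lies a pairwise equivalence test on states that remains valid if interrupted. The Python sketch below implements a standard union-find (Hopcroft-Karp style) equivalence test; the data representation is our assumption, not the paper's.

```python
# Test whether two DFA states accept the same language, merging tentatively
# equivalent states with union-find. delta maps (state, symbol) -> state and
# must be total over the given alphabet; accepting is the set of final states.
def states_equivalent(delta, alphabet, accepting, p, q):
    parent = {}
    def find(s):
        while parent.get(s, s) != s:
            parent[s] = parent.get(parent[s], parent[s])  # path halving
            s = parent[s]
        return s
    stack = [(p, q)]
    while stack:
        a, b = stack.pop()
        if (a in accepting) != (b in accepting):
            return False                       # distinguishable: reject
        ra, rb = find(a), find(b)
        if ra == rb:
            continue                           # already merged
        parent[rb] = ra                        # tentatively merge the classes
        for sym in alphabet:
            stack.append((delta[(a, sym)], delta[(b, sym)]))
    return True

# Example: states 0 and 1 both accept exactly the strings ending in 'a'.
delta = {(0, 'a'): 2, (0, 'b'): 0, (1, 'a'): 2, (1, 'b'): 1,
         (2, 'a'): 2, (2, 'b'): 0}
print(states_equivalent(delta, 'ab', accepting={2}, p=0, q=1))  # True
```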
Biomimetic design applies biological analogies to solve design problems and has been known to produce innovative solutions. However, when designers are asked to perform biomimetic design, they often have difficulty recognizing analogies between design problems and biological phenomena. Therefore, this research aims to investigate designer behaviors that either hinder or promote the use of analogies in biomimetic design. A verbal protocol study was conducted on 30 engineering students working in small teams while participating in biomimetic design sessions. A coding scheme was developed to analyze cognitive processes involved in biomimetic design. We observed that teams were less likely to apply overall biological analogies if they tended to recall existing solutions that could be easily associated with specific superficial or functional characteristics of biological phenomena. We also found that the tendency to evaluate ideas, which reflects critical thinking, correlates with the likelihood of identifying overall biological analogies. Insights from this paper may contribute toward developing generalized methods to facilitate biomimetic design.
Designing and implementing typed programming languages is hard. Every new type system feature requires extending the metatheory and implementation, which are often complicated and fragile. To ease this process, we would like to provide general mechanisms that subsume many different features. In modern type systems, parametric polymorphism is fundamental, but intersection polymorphism has gained little traction in programming languages. Most practical intersection type systems have supported only refinement intersections, which increase the expressiveness of types (more precise properties can be checked) without altering the expressiveness of terms; refinement intersections can simply be erased during compilation. In contrast, unrestricted intersections increase the expressiveness of terms, and can be used to encode diverse language features, promising an economy of both theory and implementation. We describe a foundation for compiling unrestricted intersection and union types: an elaboration type system that generates ordinary λ-calculus terms. The key feature is a Forsythe-like merge construct. With this construct, not all reductions of the source program preserve types; however, we prove that ordinary call-by-value evaluation of the elaborated program corresponds to a type-preserving evaluation of the source program. We also describe a prototype implementation and applications of unrestricted intersections and unions: records, operator overloading, and simulating dynamic typing.
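To convey what a Forsythe-like merge enables, the Python sketch below models a merge as a value carrying several branches and dispatches on the argument's runtime type. This runtime dispatch merely illustrates the behaviour (here, operator overloading); it is not the paper's elaboration, which is type-directed and produces ordinary λ-calculus terms.

```python
# A "merge" bundles several implementations; application picks the branch
# whose declared type matches the argument. Illustrative runtime analogue of
# intersection-typed overloading, not the compile-time elaboration.
class Merge:
    def __init__(self, *branches):
        self.branches = branches            # (type, function) pairs

    def __call__(self, arg):
        for ty, fn in self.branches:
            if isinstance(arg, ty):
                return fn(arg)
        raise TypeError(f"no branch for {type(arg).__name__}")

# Morally of type (int -> int) & (str -> str):
succ_or_shout = Merge((int, lambda n: n + 1),
                      (str, lambda s: s + "!"))
print(succ_or_shout(41))    # 42
print(succ_or_shout("hi"))  # hi!
```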
A common way of dynamically scheduling jobs in a manufacturing system is by implementing dispatching rules. The issues with this method are that the performance of these rules depends on the state of the system at each moment, and that no “ideal” single rule exists for all the possible states the system may be in. Therefore, it would be interesting to use the most appropriate dispatching rule in each situation. To achieve this goal, a scheduling approach based on machine learning can be used: by analyzing the previous performance of the system (training examples), knowledge is obtained that can be used to decide which dispatching rule is the most appropriate at each moment in time. In this paper, a literature review of the main machine-learning-based scheduling approaches from the last decade is presented.
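A minimal version of this learning loop: encode snapshots of the shop state as feature vectors, label each with the dispatching rule that performed best in past simulations, and train a classifier to pick a rule online. The feature names, labels and data below are illustrative assumptions.

```python
# Learn a mapping from system state to the best dispatching rule.
# Features (illustrative): [queue_length, mean_slack, utilisation].
# Labels: the dispatching rule that minimised tardiness in past simulations.
from sklearn.tree import DecisionTreeClassifier

X = [[12, 1.5, 0.90],   # congested shop, tight slack
     [ 3, 8.0, 0.40],   # light load, loose slack
     [10, 2.0, 0.85],
     [ 2, 9.0, 0.30]]
y = ["EDD", "SPT", "EDD", "SPT"]  # earliest due date / shortest processing time

clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(clf.predict([[11, 1.8, 0.88]]))  # expected: ['EDD']
```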
The goal of machining scheme selection (MSS) is to select the most appropriate machining scheme for a previously designed part, for which the decision maker must take several aspects into consideration. Because many of these aspects may be conflicting, such as time, cost, quality, profit, and resource utilization, the problem is rendered a multiobjective one. Consequently, we consider a multiobjective optimization problem of MSS in this study, where production profit and machining quality are to be maximized while production cost and production time are minimized, simultaneously. This paper presents a new discrete particle swarm optimization method, which can be widely applied in MSS to find the set of Pareto-optimal solutions for multiobjective optimization. To deal with multiple objectives and enable the decision maker to make decisions according to different demands on each evaluation index, an analytic hierarchy process is implemented to determine the weight values of the evaluation indices. A case study is included to demonstrate the feasibility and robustness of the hybrid algorithm. The case study shows that the multiobjective optimization model can simply, effectively, and objectively select the optimal machining scheme according to the different demands on the evaluation indices.
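The analytic hierarchy process step can be illustrated as follows: build a pairwise comparison matrix over the evaluation indices and take its principal eigenvector as the weight vector. The judgments in the matrix below are made up for illustration, not taken from the paper.

```python
# AHP weight computation: principal eigenvector of a pairwise comparison
# matrix over the indices (profit, quality, cost, time). Entry A[i][j] says
# how much index i matters relative to index j; judgments are illustrative.
import numpy as np

A = np.array([[1,   3,   2,   4 ],
              [1/3, 1,   1/2, 2 ],
              [1/2, 2,   1,   3 ],
              [1/4, 1/2, 1/3, 1 ]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)          # Perron eigenvalue of a positive matrix
w = np.abs(eigvecs[:, principal].real)
w /= w.sum()                                 # normalise to weights summing to 1
print(dict(zip(["profit", "quality", "cost", "time"], w.round(3))))
```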
In a 1976 paper published in Science, Knuth presented an algorithm to sample (non-uniformly) self-avoiding walks crossing a square of side k. From this sample, he constructed an estimator for the number of such walks. The quality of this estimator is directly related to the (relative) variance of a certain random variable $X_k$. From his experiments, Knuth suspected that this variance was extremely large (so that the estimator would not be very efficient). But how large? For the analogous Rosenbluth algorithm, which samples unconfined self-avoiding walks of length n, the variance of the corresponding estimator is believed to be exponential in n.
A few years ago, Bassetti and Diaconis showed that, for a sampler à la Knuth that generates walks crossing a k × k square and consisting of North and East steps, the relative variance is only $O(\sqrt k)$. In this note we go one step further and show that, for walks consisting of North, South and East steps, the relative variance jumps to $2^{k(k+1)}/(k+1)^{2k}$. This is exponential in the average length of the walks, which is of order $k^2$. We also obtain partial results for general self-avoiding walks crossing a square, suggesting that the relative variance could be exponential in $k^2$ (which is again the average length of these walks).
Knuth's algorithm is a basic example of a widely used technique called sequential importance sampling. The present paper, following the paper by Bassetti and Diaconis, is one of very few examples where the variance of the estimator can be found.
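The mechanics are easy to state for the Bassetti-Diaconis case of North/East walks crossing a k × k square: grow the walk step by step, choosing uniformly among the allowed steps, and weight each completed walk by the product of the numbers of choices made. The weight is an unbiased estimator of the number of walks, which in this case is the binomial coefficient C(2k, k), so the sketch below can check itself.

```python
# Sequential importance sampling for North/East walks from (0,0) to (k,k).
# Each sample's weight (product of choice counts) is an unbiased estimator of
# the number of such walks, known to be C(2k, k).
import math, random

def sample_weight(k, rng):
    x = y = 0
    weight = 1
    while (x, y) != (k, k):
        steps = []
        if x < k: steps.append((1, 0))   # East still possible
        if y < k: steps.append((0, 1))   # North still possible
        weight *= len(steps)             # accumulate the number of choices
        dx, dy = rng.choice(steps)
        x, y = x + dx, y + dy
    return weight

rng = random.Random(0)
k, n = 5, 100_000
estimate = sum(sample_weight(k, rng) for _ in range(n)) / n
print(estimate, "vs exact", math.comb(2 * k, k))  # exact value is 252
```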
This paper addresses the fulfillment of requirements related to case-based reasoning (CBR) processes for system design. Considering that CBR processes are well suited for problem solving, the proposed method concerns the definition of an integrated CBR process in line with system engineering principles. After the definition of the requirements that the approach has to fulfill, an ontology is defined to capitalize design knowledge within concepts. Based on the ontology, models are provided for representing requirements and solutions. Next, a recursive CBR process suitable for system design is provided. Uncertainty and designer preferences, as well as ontological guidelines, are considered during the requirements definition, the retrieval of compatible cases, and the solution definition steps. This approach is designed to give flexibility within the CBR process as well as to provide guidelines to the designer. Questions such as the following are jointly addressed: how to guide the designer to be sure that the requirements are correctly defined and suitable for the retrieval step, how to retrieve cases when no similarity measures are available, and how to enlarge the search scope during the retrieval step to obtain a sufficient panel of solutions. Finally, an example of system engineering in the aeronautic domain illustrates the proposed method. A testbed has been developed to evaluate the performance of the retrieval algorithm, and a software prototype has been implemented to test the approach. The outcome of this work is a recursive CBR process suitable for engineering design and compatible with standards. Requirements are modeled by means of flexible constraints, where the designer preferences are used to express the flexibility. Similar solutions can be retrieved even if similarity measures between features are not available. Simultaneously, ontological guidelines are used to guide the process and to help the designer express her/his preferences.
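The retrieval step can be pictured with a simple preference-weighted similarity over case features. The sketch below is purely illustrative: the flexible-constraint machinery and ontological guidelines of the paper are not reproduced, and the feature names, weights and similarity function are our assumptions.

```python
# Toy case retrieval: score stored cases against a new requirement using a
# preference-weighted similarity per feature, then rank. Purely illustrative.
def numeric_sim(a, b, scale):
    return max(0.0, 1.0 - abs(a - b) / scale)

def retrieve(query, cases, weights, scales, top=3):
    def score(case):
        total = sum(weights[f] * numeric_sim(query[f], case[f], scales[f])
                    for f in weights)
        return total / sum(weights.values())
    return sorted(cases, key=score, reverse=True)[:top]

cases = [{"id": "C1", "mass": 120, "power": 5.0},
         {"id": "C2", "mass": 300, "power": 9.0},
         {"id": "C3", "mass": 140, "power": 4.2}]
query   = {"mass": 130, "power": 4.5}
weights = {"mass": 0.6, "power": 0.4}   # designer preferences
scales  = {"mass": 200, "power": 10.0}  # normalisation ranges
print([c["id"] for c in retrieve(query, cases, weights, scales)])  # ['C3', 'C1', 'C2']
```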
Recently, new types of layouts have been proposed in the literature to handle a large number of products. Among these is the fractal layout, which aims at minimizing routing distances. Researchers have already focused on its design; however, we have noticed that the current approach usually executes the allocation of fractal cells on the shop floor several times to find the best allocation, which may be a significant disadvantage for a large number of fractal cells owing to its combinatorial nature. This paper proposes a criterion based on similarity among fractal cells, developed and implemented in a Tabu search heuristic, in order to allocate the cells on the shop floor in a feasible computational time. Once the proposed procedure is modeled, the operations of each workpiece are separated into n subsets and submitted to simulation. The results (traveling distance and makespan) are compared to those of a distributed layout and a functional layout. The results show, in general, a trade-off: when the total routing distance decreases, the makespan increases. With the proposed method, depending on the value of the segregated fractal cell similarity, it is possible to reduce both performance measures. Finally, we conclude that the proposed procedure is quite promising because the allocation of fractal cells requires reduced central processing unit (CPU) time.
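A generic Tabu search skeleton for an allocation problem of this kind is sketched below; the cost function, neighbourhood and tenure are illustrative stand-ins, and the similarity-based criterion proposed in the paper is not reproduced.

```python
# Generic Tabu search over permutations (e.g. fractal cells -> shop-floor
# slots). The cost function here is a stand-in; swap moves define the
# neighbourhood, and a short tabu tenure prevents cycling.
import itertools, random

def tabu_search(cost, n, iters=200, tenure=7, seed=0):
    rng = random.Random(seed)
    current = list(range(n))
    rng.shuffle(current)
    best, best_cost = current[:], cost(current)
    tabu = {}                                    # move -> iteration it expires
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            neighbour = current[:]
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            c = cost(neighbour)
            if tabu.get((i, j), 0) <= it or c < best_cost:  # aspiration criterion
                candidates.append((c, (i, j), neighbour))
        if not candidates:
            continue
        c, move, current = min(candidates, key=lambda t: t[0])
        tabu[move] = it + tenure                 # forbid repeating this swap for a while
        if c < best_cost:
            best, best_cost = current[:], c
    return best, best_cost

# Stand-in cost: total distance of each cell from its preferred slot.
pref = [3, 1, 4, 0, 2]
cost = lambda perm: sum(abs(p - pref[i]) for i, p in enumerate(perm))
print(tabu_search(cost, n=5))  # typically finds ([3, 1, 4, 0, 2], 0)
```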