The author has written articles and papers on the possibilities of differentiated spaces in the composition of electroacoustic music (Emmerson 1994, 1998). He extends this into a more practical discussion of the spaces used for the presentation of electroacoustic music (acousmatic music and ‘live electronic’ music), sound installations and other sonic art. The move into more informal ‘club’ environments is not without controversy. The ‘sampling’ approach to the very act of listening and ‘consuming’ sonic art has challenged traditional concert hall presentation. This paper brings these various possibilities together in plans (at once conceptual but also intended to have practical application) for a multi-space ‘Sound House’: a centre for the performance of the sonic arts. This centre is socially embedded within interpersonal human interaction and is not to be found in the current performance possibilities of the Internet – though it may be connected to others of its kind through this means.
This paper will deal with the changes that have come about in the description of musical knowledge and with the ensuing needs in this field in the era of decentralisation. Throughout almost all the twentieth century, musical practices continued to be expressed by a system of cultural mediations that proved to be a practical impediment to the emergence of non-conventional cultures. Electronic music in particular, with its corrosive tendencies, though spread and supported by remarkable composers, has ended up being devoured by academic immune systems. Now that the diffusion of Net computing has induced people to intervene in musical material, a poetics based on interference is spreading. By gaining ground on grammatical and self-referenced poetics, this trend has gradually become a palpable fact, and music perceives itself as both individual writing and a production of social meaning. At present, a globe-net-transfer of sound material passes through different contexts and spaces, and seems to be adapting itself to different social speeds. Through the Net we can, on the one hand, replace, manipulate and recontextualise musical parameters until a different significance emerges; at the same time, randomising and hybridising musical objects can partially change our perception of the same musical events. On the other hand, online music databases, audio browsers and musical queries may open the way to overthrowing, reorganising and personalising music description. This could occur at different levels and to different degrees of complexity, both as a social event and as an active, user-based combination of musical structures.
It is recommended that we collaborate with science and industry in the search for technological innovation in media-related fields, in order to help design a future environment that will benefit a healthy musical culture. Since those of us who are already immersed in the technology are accustomed to its less attractive attributes as well as its potentials, we can consult with colleagues about essential characteristics of the old-fashioned world which we and they do not want to lose. With our collective imaginations, it should be possible to preserve many of these aspects by transforming them, with sensitivity, to fit new contexts. A few diverse examples are given of features that might be considered both attractive and endangered. Suggestions ranging from the practical to the whimsical illustrate some possibilities for injecting the essence of these experiences into a more technological world, thereby enriching it.
A common way of dynamically scheduling jobs in a flexible manufacturing system (FMS) is by means of dispatching rules. The problem with this method is that the performance of these rules depends on the state the system is in at each moment, and no single rule exists that is better than the rest in all the possible states that the system may be in. It would therefore be interesting to use the most appropriate dispatching rule at each moment. To achieve this goal, a scheduling approach which uses machine learning can be used. By analyzing the previous performance of the system (training examples) with this technique, knowledge is obtained that can be used to decide which dispatching rule is most appropriate at each moment in time. In this paper, a review of the main machine learning-based scheduling approaches described in the literature is presented.
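The idea of learning which rule to apply from past system states can be sketched as follows. This is a minimal illustration, not any specific approach from the reviewed literature: the state features, rule names, and training data are all hypothetical, and a simple nearest-neighbour classifier stands in for whatever learning algorithm a real system would use.

```python
# Sketch: learn which dispatching rule to apply from past system states,
# using a 1-nearest-neighbour classifier over (state -> best rule) examples.
import math

# Each training example: hypothetical system-state features (queue length,
# mean slack, machine utilisation) and the rule that performed best there.
TRAINING = [
    ((12, 0.5, 0.90), "SPT"),  # congested, low slack -> shortest processing time
    ((3, 4.0, 0.40), "EDD"),   # light load, slack matters -> earliest due date
    ((10, 0.8, 0.85), "SPT"),
    ((2, 3.5, 0.30), "EDD"),
    ((6, 1.0, 0.70), "CR"),    # mid load -> critical ratio
]

def select_rule(state):
    """Return the dispatching rule of the nearest training state."""
    return min(TRAINING, key=lambda ex: math.dist(ex[0], state))[1]

print(select_rule((11, 0.6, 0.88)))  # a congested state -> SPT
```

In practice the training examples would come from simulating or logging the FMS under each rule, and a more expressive learner (decision tree, neural network) would replace the nearest-neighbour lookup.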
For a normal sample with unknown mean, the almost universally used estimator of the variance, σ2, is “the sample variance.” This estimator is the minimum variance unbiased estimator of σ2, but it is inadmissible under squared error loss. It is dominated by the maximum likelihood estimator, which is also inadmissible. We consider a class of estimators and compare these estimators under a class of loss functions which we call “log symmetric.”
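For concreteness, the estimators at issue differ only in their divisor; the following are standard facts about the normal model, not spelled out in the abstract:

```latex
% Estimators of \sigma^2 from a normal sample X_1,\dots,X_n, unknown mean:
\[
  S^2 = \frac{1}{n-1}\sum_{i=1}^{n}(X_i-\bar X)^2 \quad\text{(unbiased)},
  \qquad
  \hat\sigma^2_{\mathrm{ML}} = \frac{1}{n}\sum_{i=1}^{n}(X_i-\bar X)^2 .
\]
% Under squared error loss the MLE has uniformly smaller risk than S^2,
% and is in turn dominated by the divisor-(n+1) estimator
\[
  \tilde\sigma^2 = \frac{1}{n+1}\sum_{i=1}^{n}(X_i-\bar X)^2 ,
\]
% which is the best estimator of the form c\sum_i (X_i-\bar X)^2 under
% that loss.
```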
Techniques for machine condition monitoring and diagnostics are gaining acceptance in various industrial sectors. They have proved to be effective in predictive or proactive maintenance and quality control. Along with the fast development of computer and sensing technologies, sensors are being increasingly used to monitor machine status. In recent years, the fusion of multisensor data has been applied to diagnose machine faults. In this study, multiple sensors are used to collect signals of rotating imbalance vibration of a test rig. The characteristic features of each vibration signal are extracted with an auto-regressive (AR) model. Data fusion is then implemented with a Cascade-Correlation (CC) neural network. The results clearly show that multisensor data-fusion-based diagnostics outperforms single-sensor diagnostics with statistical significance.
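The AR feature-extraction step can be illustrated with a small sketch. The signal here is a synthetic recurrence, not the paper's test-rig data, and a plain least-squares fit stands in for whatever AR estimation the study used; the fitted coefficients are the features a fusion classifier would consume.

```python
# Sketch (illustrative signal, not the paper's data): extract AR(2)
# coefficients from a vibration-like signal by least squares. The
# coefficients (a1, a2) serve as characteristic features per sensor.

def ar2_features(x):
    """Fit x[t] ~ a1*x[t-1] + a2*x[t-2]; solve the 2x2 normal equations."""
    s11 = sum(v * v for v in x[1:-1])                   # sum x[t-1]^2
    s22 = sum(v * v for v in x[:-2])                    # sum x[t-2]^2
    s12 = sum(a * b for a, b in zip(x[1:-1], x[:-2]))   # sum x[t-1]*x[t-2]
    b1 = sum(a * b for a, b in zip(x[2:], x[1:-1]))     # sum x[t]*x[t-1]
    b2 = sum(a * b for a, b in zip(x[2:], x[:-2]))      # sum x[t]*x[t-2]
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det)

# Synthetic signal generated by a known AR(2) recurrence,
# x[t] = 1.2 x[t-1] - 0.5 x[t-2], so the fit should recover (1.2, -0.5).
sig = [1.0, 0.8]
for _ in range(60):
    sig.append(1.2 * sig[-1] - 0.5 * sig[-2])

a1, a2 = ar2_features(sig)
print(round(a1, 3), round(a2, 3))
```

Feeding such coefficient vectors from several sensors into one classifier is the data-fusion step; the study uses a Cascade-Correlation network for that stage.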
The efficient use of critical machines or equipment in a manufacturing system requires reliable information about their current operating conditions. This information is often used as a basis for machine condition monitoring and fault diagnosis—which essentially is an endeavor of knowledge extraction. Rough set theory provides a novel approach to knowledge acquisition, especially when dealing with vagueness and uncertainty. It focuses on the discovery of patterns in incomplete and/or inconsistent data. However, rough set theory requires the data analyzed to be discrete. This paper proposes a novel approach to the treatment of continuous-valued attributes in multi-concept classification for mechanical diagnosis using rough set theory. Based on the proposed approach, a prototype system called RClass-Plus has been developed. RClass-Plus is validated using a case study on mechanical fault diagnosis. Details of the validation are described.
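The basic rough-set machinery behind such a system can be sketched on a toy table. The attribute values, cut points, and decision labels below are hypothetical, and the single-threshold binning stands in for the paper's actual treatment of continuous attributes; what the sketch shows is the discretisation step followed by the standard lower/upper approximations.

```python
# Sketch (toy diagnosis table, hypothetical values): discretise continuous
# features, then compute rough-set lower/upper approximations of the
# "fault" concept from the resulting indiscernibility classes.

def discretise(v, cut):
    return "high" if v >= cut else "low"

raw = [  # (vibration, temperature, decision)
    (0.9, 78, "fault"), (0.2, 40, "ok"), (0.8, 75, "fault"),
    (0.7, 42, "ok"), (0.7, 44, "fault"),
]
table = [((discretise(v, 0.5), discretise(t, 60)), d) for v, t, d in raw]

# Indiscernibility classes: objects with identical discretised attributes.
classes = {}
for i, (attrs, _) in enumerate(table):
    classes.setdefault(attrs, set()).add(i)

target = {i for i, (_, d) in enumerate(table) if d == "fault"}
# Lower approximation: classes wholly inside the concept (certain members).
lower = set().union(*(c for c in classes.values() if c <= target))
# Upper approximation: classes that touch the concept (possible members).
upper = set().union(*(c for c in classes.values() if c & target))

print(sorted(lower), sorted(upper))
```

Objects 3 and 4 are indiscernible after binning yet carry different decisions, so they fall in the upper but not the lower approximation; this boundary region is exactly the inconsistency that rough set theory is designed to expose.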
We consider two single-server queues in tandem. The customers receive service at the first station and then they proceed to the second. There are no intermediate waiting positions and so the blocking phenomenon is observed. The arriving customers who find the first station unavailable (busy or blocked) behave like retrial customers. We consider the case of “constant retrial policy.” For such a system, we obtain the system state probabilities in steady state in terms of generating functions. We also present some interesting measures of the system performance and directly deduce results for the corresponding ordinary (without retrials) model. We consider, finally, the special case of exponential service times as an example.
A crucial early stage in the engineering design process is the conceptual design phase, during which an initial solution design is generated. The quality of this initial design has a great bearing on the quality and success of the produced artefact. Typically, the knowledge required to perform this task is only acquired through many years of experience, and so is often at a premium. This has led to a number of attempts to automate this phase using intelligent computer systems. However, the knowledge of how to generate designs has proved difficult to acquire directly from human experts, and as a result, is often unsatisfactory in these systems. The application of inductive machine learning techniques to the acquisition of this sort of knowledge has been advocated as one approach to overcoming the difficulties surrounding its capture. Rather than acquiring the knowledge from human experts, the knowledge would be inferred automatically from a set of examples of the design process. This paper describes the authors' investigations into the general viability of this approach in the context of one particular conceptual design task, that of the design of fluid power circuits. The analysis of a series of experiments highlights a number of issues that would seem to arise regardless of the working domain or particular machine learning algorithm used. These issues, presented and discussed here, cast serious doubts upon the practicality of such an approach to knowledge acquisition, given the current state of the art.
This paper presents a method for the automatic deduction of priority lists of input information as well as for the extraction of task relations from the available design knowledge. The method is based on multiple extensive searches of the design space and produces a final priority list for the instantiation of the primary design entities. If followed, this list ensures the generation of the most decisive design information at the very beginning of the design process. Additionally, the method can produce a priority list of design tasks, which represents the order of completion of these tasks. Finally, it offers a representation platform for tracking the evolution of the design process. First, a brief overview of the current literature is presented, after which the method is presented in detail. Entities, descriptors, and tasks are used for the representation of the design knowledge. They are linked in order to form multiple design relations, formally represented by digraphs. Simple set relations and graph theories are used as mathematical background to the method. Design experience is also taken into account through a weighting process of the primary design entities. Finally, an example of a belt conveyor design is presented, followed by a discussion of the results and some general conclusions. The method may be considered as a design-assisting tool that dynamically processes pieces of design knowledge and suggests corresponding design paths. Additionally, it relates the design tasks in an ordered form. Its extension—currently under elaboration—is expected to treat systematically the problem of identifying and handling design knowledge inconsistencies.
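Deriving a completion order from task relations expressed as a digraph can be sketched with a topological sort. The task names below are hypothetical (loosely inspired by the belt conveyor example), and Kahn's algorithm stands in for the paper's search- and weighting-based method; it shows only the core idea of turning precedence edges into an ordered task list.

```python
# Sketch (hypothetical task names): derive a completion order for design
# tasks from precedence relations via a topological sort of the digraph.
from collections import deque

# Edge (a, b): task a must be completed before task b.
edges = [("load spec", "belt selection"), ("load spec", "pulley sizing"),
         ("belt selection", "drive sizing"), ("pulley sizing", "drive sizing")]

def priority_list(edges):
    succ, indeg = {}, {}
    for a, b in edges:
        succ.setdefault(a, []).append(b)
        indeg[b] = indeg.get(b, 0) + 1
        indeg.setdefault(a, 0)
    ready = deque(sorted(t for t, d in indeg.items() if d == 0))
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in succ.get(t, []):   # releasing t unblocks its successors
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return order

print(priority_list(edges))
```

A real implementation would break ties among simultaneously ready tasks using the experience-derived entity weights rather than arbitrary order.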
Steady-state availability has long been a popular descriptor of effectiveness for repairable systems because it captures both the operability and repairability aspects of the system. A related measure of effectiveness is the availability of continuous service, which is particularly relevant for safety critical applications. In this article, two different measures of this quantity are described for a repairable system whose state is described by an ergodic finite-state-space continuous-time Markov chain. Using these ideas, formulas for computing system long-run mean time between failures and the long-run system reliability function are derived.
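For orientation, the standard long-run quantities for an ergodic continuous-time Markov chain partitioned into up states $U$ and down states $D$ are the following (a textbook renewal-reward formulation; the article's two measures of continuous-service availability refine this):

```latex
% Stationary distribution \pi, generator Q = (q_{ij}).
\[
  A = \sum_{i \in U} \pi_i \qquad \text{(steady-state availability)}
\]
\[
  \nu = \sum_{i \in U} \sum_{j \in D} \pi_i\, q_{ij}
  \qquad \text{(long-run failure frequency)}
\]
% One full up-down cycle occurs per failure, so
\[
  \mathrm{MTBF} = \frac{1}{\nu},
  \qquad
  \text{mean up time per cycle} = \frac{A}{\nu}.
\]
```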
Using a weak derivation approach to gradient estimation, we consider the problem of pricing an American call option on stock paying dividends at discrete times. Similar simulation-based sensitivity estimators were introduced earlier by Fu and Hu (1995), who used smoothed perturbation analysis. We improve upon their results by presenting an estimator with a uniformly lower variance. In addition, we reduce the multidimensional optimization problem of pricing an option with multiple ex-dividend dates to a one-dimensional one. Numerical examples indicate that this approach saves a considerable amount of computation time. Our estimator holds uniformly for a class of payoff functions, and applications to other types of options will be addressed in the article.
Tandem queues with finite buffers have been widely discussed as basic models of communication and manufacturing systems. The cycle time is an important measure in such systems. In this article, we consider finite tandem queues with communication blocking and general service-time distributions. We introduce an order on pairs of random variable sets to give effective upper bounds for the expected cycle times.
It is well known that mixtures of decreasing failure rate (DFR) distributions are always DFR. It turns out that, very often, the failure rates of mixtures of increasing failure rate (IFR) distributions can decrease, at least in some intervals of time. Usually, this property can be observed asymptotically as t → ∞. In this article, several types of underlying continuous IFR distribution are considered. Two models of mixing are studied: additive and multiplicative. The limiting behavior of a mixture failure rate function is analyzed. It is shown that the conditional characteristics (expectation and variance) of the mixing parameter are crucial for the limiting behavior. Several examples are presented and possible generalizations are discussed.
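The object under study is the mixture failure rate, which can be written as a conditional expectation over the mixing parameter (standard notation; the abstract does not restate it):

```latex
% T has conditional failure rate \lambda(t,\theta); \Theta has mixing
% density \pi(\theta), with posterior \pi(\theta \mid t) given survival.
\[
  \lambda_m(t) \;=\; \mathbb{E}\bigl[\lambda(t,\Theta) \,\big|\, T > t\bigr]
  \;=\; \int \lambda(t,\theta)\, \pi(\theta \mid t)\, d\theta .
\]
% Conditioning on survival shifts the posterior toward the more robust
% subpopulations, which is why a mixture of IFR distributions can have an
% eventually decreasing failure rate. The boundary case of two
% exponentials with rates \lambda_1 < \lambda_2 illustrates the mechanism:
\[
  \lambda_m(t)
  = \frac{p\,\lambda_1 e^{-\lambda_1 t} + (1-p)\,\lambda_2 e^{-\lambda_2 t}}
         {p\, e^{-\lambda_1 t} + (1-p)\, e^{-\lambda_2 t}}
  \;\searrow\; \lambda_1 \quad (t \to \infty).
\]
```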
We present a performance analysis of a multiaddress call extraction server. The extraction server splits a multiaddress call into individual calls. The operation mechanisms of the server are complicated, and it is difficult to apply the standard Markov chain technique directly. Assuming the server is offered saturated input, we propose a simple probabilistic method to give the probability distribution related to the number of extracted calls. Applying the method, we also obtain the throughput of the extraction server. Assuming an unsaturated input, we evaluate the mean waiting time of an individual call using a diffusion approximation. Numerical simulations are performed and compared with our results. These results indicate the accuracy of our modeling approach.
Let X1,..., Xn be independent exponential random variables with possibly different scale parameters. Kochar and Korwar [J. Multivar. Anal. 57 (1996)] conjectured that, in this case, the successive normalized spacings are increasing according to hazard rate ordering. In this article, we prove this conjecture in the case of a single-outlier exponential model when all except one of the parameters are identical. We also prove that the spacings are more dispersed and larger in the sense of hazard rate ordering when the vector of scale parameters is more dispersed in the sense of majorization.
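For reference, the normalized spacings and the ordering in question are defined as follows (standard definitions, not restated in the abstract):

```latex
% Order statistics X_{(1)} \le \dots \le X_{(n)}, with X_{(0)} := 0.
\[
  D_i \;=\; (n - i + 1)\bigl(X_{(i)} - X_{(i-1)}\bigr),
  \qquad i = 1, \dots, n .
\]
% Hazard rate order: X \le_{hr} Y iff r_X(t) \ge r_Y(t) for all t.
% The conjecture asserts D_1 \le_{hr} D_2 \le_{hr} \dots \le_{hr} D_n
% for independent exponentials with unequal scale parameters. (In the
% i.i.d. exponential case the D_i are i.i.d. exponential, so the
% ordering holds trivially with equality.)
```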
In his 1903 Principles of Mathematics, Bertrand Russell mentioned possible definitions of conjunction, disjunction, negation and existential quantification in terms of implication and universal quantification, exploiting impredicative universal quantifiers over all propositions. In his 1965 Ph.D. thesis, Dag Prawitz showed that these definitions hold in intuitionistic second order logic. More recently, these definitions have been used to represent logic in various impredicative type theories. This treatment of logic is distinct from the more standard Curry–Howard representation of logic in a dependent type theory.
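The definitions in question are the standard impredicative (second-order) ones, with $X$ ranging over all propositions:

```latex
\[
\begin{aligned}
  \bot           &\;:=\; \forall X.\, X \\
  \neg A         &\;:=\; A \to \forall X.\, X \\
  A \wedge B     &\;:=\; \forall X.\, (A \to B \to X) \to X \\
  A \vee B       &\;:=\; \forall X.\, (A \to X) \to (B \to X) \to X \\
  \exists x.\, A &\;:=\; \forall X.\, (\forall x.\,(A \to X)) \to X
\end{aligned}
\]
```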
The main aim of this paper is to compare, in a purely logical, non type-theoretic setting, this Russell–Prawitz representation of intuitionistic logic with other possible representations. It turns out that associated with the Russell–Prawitz representation is a lax modal operator, which we call the Russell–Prawitz modality, and that any lax modal operator can be used to give a translation of intuitionistic logic into itself that generalises both the double negation interpretation, double negation being a paradigm example of a lax modality, and the Russell–Prawitz representation.
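Concretely, the Russell–Prawitz modality and its relation to double negation can be stated as follows (with $X$ again ranging over propositions; the lax-modality laws are listed here for orientation):

```latex
\[
  J A \;:=\; \forall X.\, (A \to X) \to X
\]
% J validates the lax-modality laws:
%   A \to J A,  \quad  J J A \to J A,  \quad  (A \to B) \to (J A \to J B).
% Fixing X := \bot instead of quantifying over it yields double negation,
%   \neg\neg A \;=\; (A \to \bot) \to \bot,
% the paradigm example of a lax modality mentioned in the abstract.
```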
This paper presents an extension of the simply typed λ-calculus that allows iteration and case reasoning over terms of functional types that arise when using higher order abstract syntax. This calculus aims at being the kernel for a type theory in which the user will be able to formalize logics or formal systems using the LF methodology, while taking advantage of new induction and recursion principles, extending the principles available in a calculus such as the Calculus of Inductive Constructions. The key idea of our system is the use of modal logic S4. We present here the system, its typing rules and reduction rules. The system enjoys the decidability of typability, soundness of typed reduction with respect to the typing rules, and the Church–Rosser and strong normalization properties, and it is a conservative extension of the simply typed λ-calculus. These properties entail the preservation of the adequacy of encodings.
We reconsider the foundations of modal logic, following Martin-Löf's methodology of distinguishing judgments from propositions. We give constructive meaning explanations for necessity and possibility, which yields a simple and uniform system of natural deduction for intuitionistic modal logic that does not exhibit anomalies found in other proposals. We also give a new presentation of lax logic and find that the lax modality is already expressible using possibility and necessity. Through a computational interpretation of proofs in modal logic we further obtain a new formulation of Moggi's monadic metalanguage.