Structured objects are items with defined properties that are to be represented in a computer system. Research in Knowledge Representation (KR) and in Database Design (DB) has produced languages for describing structured objects. Although different in the particular means for defining properties, both areas share the goal of representing a part of the world in a structured way. Moreover, the rise of object-centred formalisms over the last decade has contributed significantly to the convergence of these languages.
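As a purely illustrative sketch, a structured object can be rendered in a modern object-centred language as a typed item whose properties are declared up front; the Employee class and its properties below are hypothetical and not drawn from any particular KR or DB formalism.

```python
# Illustrative only: a minimal frame-like structured object, loosely in the
# spirit of object-centred KR and DB schema languages (names are hypothetical).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Employee:
    """A structured object: a typed item with defined properties."""
    name: str                              # mandatory, string-valued property
    salary: float                          # numeric property with a simple range constraint
    manager: Optional["Employee"] = None   # property referring to another structured object

    def __post_init__(self):
        if self.salary < 0:
            raise ValueError("salary must be non-negative")

alice = Employee(name="Alice", salary=52000.0)
bob = Employee(name="Bob", salary=48000.0, manager=alice)
```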
Distributed Artificial Intelligence systems, in which multiple agents interact to improve their individual performance and to enhance the systems' overall utility, are becoming an increasingly pervasive means of conceptualising a diverse range of applications. As the discipline matures, researchers are beginning to strive for the underlying theories and principles which guide the central processes of coordination and cooperation. Here agent communities are modelled using a distributed goal search formalism, and it is argued that commitments (pledges to undertake a specific course of action) and conventions (means of monitoring commitments in changing circumstances) are the foundation of coordination in multi-agent systems. An analysis of existing coordination models which use concepts akin to commitments and conventions is undertaken before a new unifying framework is presented. Finally, a number of prominent coordination techniques which do not explicitly involve commitments or conventions are reformulated in these terms to demonstrate their compliance with the central hypothesis of this paper.
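The sketch below is one possible reading of that distinction, with class names, the sample convention and the world representation invented for illustration rather than taken from the paper's distributed goal search formalism: a commitment records an agent's pledge to a goal, while a convention is a rule that re-examines the commitment whenever circumstances change.

```python
# A hedged sketch of the commitment/convention distinction described above;
# all names and the world representation are invented for illustration.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Commitment:
    """A pledge by an agent to pursue a specific course of action (here, a goal)."""
    agent: str
    goal: str
    active: bool = True

# A convention monitors a commitment against the current world state and
# decides whether it should be retained or dropped.
Convention = Callable[[Commitment, Dict[str, str]], str]

def drop_if_achieved_or_unattainable(c: Commitment, world: Dict[str, str]) -> str:
    """A common-sense convention: release a commitment once its goal is settled."""
    return "drop" if world.get(c.goal) in ("achieved", "unattainable") else "retain"

def apply_conventions(commitments: List[Commitment],
                      conventions: List[Convention],
                      world: Dict[str, str]) -> None:
    """Re-assess every active commitment whenever circumstances change."""
    for c in commitments:
        if c.active and any(conv(c, world) == "drop" for conv in conventions):
            c.active = False

commitments = [Commitment("agent-1", "deliver-part-A")]
apply_conventions(commitments, [drop_if_achieved_or_unattainable],
                  world={"deliver-part-A": "achieved"})
print(commitments[0].active)   # False: the convention released the commitment
```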
Assuring the reliability of knowledge-based systems has become an important issue in the development of the knowledge engineering discipline. There has been a workshop devoted to these topics at most of the major AI conferences (IJCAI, AAAI and ECAI) for the last five years, and the 1994 European Conference on Artificial Intelligence (ECAI-94) in Amsterdam was no exception. The focus of the meeting was on validation techniques for KBS, where validation is defined as the process of determining if a KBS meets its users' requirements; implicitly, validation includes verification, which is the process of determining if a KBS has been constructed to comply with certain formally-specified properties, such as consistency and irredundancy. The Amsterdam workshop was an intimate meeting, and the fifteen attendees were predominantly from European institutions. In spite of—or perhaps because of—this intimacy, the workshop succeeded in highlighting many of the significant trends and issues within its area of concern. The purpose of this short article is to review the trends and issues in question, drawing upon the contributions made during the workshop.
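To make the verification notion concrete, the toy check below (not drawn from any workshop contribution; the rule base is invented) flags one simple form of redundancy: a rule whose conditions strictly subsume those of a more general rule with the same conclusion.

```python
# A toy verification check: flag a rule as redundant when a more general rule
# with the same conclusion already covers it. The rule base itself is invented.
rules = [
    {"if": {"fever", "cough"}, "then": "flu"},
    {"if": {"fever", "cough", "fatigue"}, "then": "flu"},   # subsumed by rule 0
    {"if": {"rash"}, "then": "measles"},
]

def redundant_pairs(rules):
    """Return pairs (i, j) where rule j is made redundant by the more general rule i."""
    return [(i, j)
            for i, ri in enumerate(rules)
            for j, rj in enumerate(rules)
            if i != j and ri["then"] == rj["then"] and ri["if"] < rj["if"]]

print(redundant_pairs(rules))   # [(0, 1)]
```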
A central area of application for knowledge-based systems is giving consultative advice to the user. Such systems engage the user in a dialogue in the process of collecting enough information to be able to infer a conclusion from the knowledge base. Traditionally, then, the main initiative in the consultation process has been allocated to the system.
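A minimal sketch of such a system-driven consultation might look as follows; the rules, goals and question wording are invented purely for illustration, and the point is only that the system asks for exactly the facts it still needs before attempting a conclusion.

```python
# A minimal sketch of the system-driven consultation style characterised above;
# the rules, question wording and goals are invented purely for illustration.
RULES = {
    "recommend_umbrella": [("raining", True)],
    "recommend_sunscreen": [("raining", False), ("sunny", True)],
}

def consult(goal, facts):
    """Try to establish a goal, asking the user only for facts not yet known."""
    for condition, required in RULES[goal]:
        if condition not in facts:                 # the system holds the initiative
            answer = input(f"Is it {condition}? (y/n) ")
            facts[condition] = answer.strip().lower().startswith("y")
        if facts[condition] != required:
            return False
    return True

if __name__ == "__main__":
    facts = {}
    for goal in RULES:                             # consult until a conclusion is reached
        if consult(goal, facts):
            print(f"Conclusion: {goal}")
            break
    else:
        print("No conclusion could be inferred from the answers given.")
```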
The purpose of this paper is to evaluate and compare three of the most powerful expert system tools available: KEE from Intellicorp, Knowledge Craft from The Carnegie Group Inc., and ART from Inference Corporation.
These three tools are industrial development environments which are fully supported and well beyond research prototypes. They were implemented on Lisp machines initially, but will soon be available on conventional computers. The three systems are very flexible and offer many ways of representing knowledge.
The first part of the paper is a technical overview of each tool. The second part presents their common features. The third part discusses the advantages and drawbacks of each, according to types of application.
This work has been partially sponsored by the Commission of the European Communities within Esprit Project 932, entitled ‘Knowledge-based Real-Time Supervision in Computer Integrated Manufacturing’. We are grateful to Didier Gaget-Soufflot (Graphael, France) and Bernard Raust (CGE/TITN) for their participation in the evaluation of KEE and ART respectively.
The relentless growth of the computer industry over more than 30 years has been driven by a series of major innovations. High-level languages; solid-state logic; compatible machine ranges; disk storage; time-sharing; data communications; virtual machine architectures; the use of LSI and solid-state memory; text processing; personal computers; successful packaged software: each advance in its turn has opened up new markets and set off a new spurt of expansion. A vital conclusion of this study is that expert systems promise to be another such major innovation.
By A. Jentzen, P. E. Kloeden and A. Neuenkirch (Johann Wolfgang Goethe Universität). Edited by Felipe Cucker (City University of Hong Kong), Allan Pinkus (Technion - Israel Institute of Technology, Haifa) and Michael J. Todd (Cornell University, New York).
Itô stochastic calculus is a mean-square or L² calculus, with transformation rules that differ from those of deterministic calculus. In particular, in view of its definition, an Itô integral is not as robust to approximation as the deterministic Riemann integral. This has some critical implications for the development of effective numerical schemes for stochastic differential equations (SDEs). Higher order numerical schemes for both strong and weak convergences have been derived using stochastic Taylor expansions, but most proofs in the literature assume the uniform boundedness of all relevant partial derivatives of the coefficient functions, which is highly restrictive. Problems also arise if solutions are restricted to certain regions, e.g. must be positive and the coefficient functions involve square roots.
Pathwise convergence is an alternative to the strong and weak convergences of the literature. It arises naturally in many important applications and, in fact, numerical calculations are carried out pathwise. Pathwise convergent numerical schemes for both SDEs and random ordinary differential equations (RODEs) will be discussed here. In particular, we see that a strong Taylor scheme for SDEs of order γ converges pathwise with order γ − ε for every arbitrarily small ε > 0 under the usual assumptions. We also introduce modified Taylor schemes which converge pathwise in a similar way, even when the coefficient derivatives are not uniformly bounded or are restricted to certain regions. In particular, these schemes can be applied without difficulty to volatility models in finance.
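As an illustration of these ideas (not code from the chapter itself), the sketch below applies the simplest strong Taylor scheme, the Euler-Maruyama scheme of strong order 0.5, pathwise to geometric Brownian motion, whose known exact solution allows the error along a single simulated Brownian path to be measured; all parameter values are arbitrary.

```python
# Hedged illustration: the Euler-Maruyama scheme (the simplest strong Taylor
# scheme, strong order 0.5) applied pathwise to geometric Brownian motion
# dX = mu*X dt + sigma*X dW, whose exact solution
# X_t = X_0 * exp((mu - sigma^2/2)*t + sigma*W_t) lets us measure the error
# along a single simulated Brownian path. Parameter values are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, x0, T, n = 0.05, 0.2, 1.0, 1.0, 2**12
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)       # Brownian increments for one path
W = np.cumsum(dW)

x = np.empty(n + 1)                             # Euler-Maruyama iterates
x[0] = x0
for k in range(n):
    x[k + 1] = x[k] + mu * x[k] * dt + sigma * x[k] * dW[k]

t = np.linspace(dt, T, n)                       # grid points t_1, ..., t_n
exact = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * W)
print("pathwise sup-norm error:", np.max(np.abs(x[1:] - exact)))
```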
High oscillation is everywhere and it is difficult to compute. The conjunction of these two statements forms the rationale of this volume and it is therefore appropriate to deliberate further upon them.
Rapidly oscillating phenomena occur in electromagnetics, quantum theory, fluid dynamics, acoustics, electrodynamics, molecular modelling, computerised tomography and imaging, plasma transport, celestial mechanics – and this is a partial list! The main reason for the ubiquity of these phenomena is the presence of signals or data at widely different scales. Typically, the slowest signal is the carrier of important information, yet it is overlaid with signals, usually of smaller amplitude but of considerably smaller wavelength (cf. the top of Fig. 1). This presence of different frequencies renders both analysis and computation considerably more challenging. Another example of problems associated with high oscillation is provided by the wave packet at the bottom of Fig. 1 and by other phenomena which might appear dormant (or progress sedately, at a measured pace) for a long time, only to demonstrate suddenly (and often unexpectedly) much more hectic behaviour.
The difficulty implicit in high oscillation becomes a significant stumbling block once we attempt to produce reliable numerical results. In principle, the problem can be alleviated by increasing the resolution of the computation (the step size, spatial discretization parameter, number of modes in an expansion, the bandwidth of a filter), since high oscillation is, after all, an artefact of resolution: zoom in sufficiently and all signals oscillate slowly.
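The small experiment below (illustrative only; the integrand, frequencies and grid sizes are arbitrary) makes this point for the composite trapezoidal rule applied to the integral of cos(ωx) over [0, 1], whose exact value sin(ω)/ω is known: at a fixed resolution the error grows with the frequency ω, and only a much finer grid, i.e. many more points per wavelength, restores accuracy.

```python
# Illustrative experiment only; frequencies and grid sizes are arbitrary.
# The trapezoidal rule is applied to I(omega) = int_0^1 cos(omega*x) dx,
# whose exact value sin(omega)/omega is known, to show how a fixed resolution
# fails as omega grows and how refining the grid restores accuracy.
import numpy as np

def trapezoid_rule(omega, n):
    """Composite trapezoidal approximation of int_0^1 cos(omega*x) dx on n panels."""
    x = np.linspace(0.0, 1.0, n + 1)
    y = np.cos(omega * x)
    h = 1.0 / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

for omega in (10.0, 100.0, 1000.0):
    exact = np.sin(omega) / omega
    for n in (100, 10_000):
        err = abs(trapezoid_rule(omega, n) - exact)
        print(f"omega = {omega:7.1f}   n = {n:6d}   error = {err:.2e}")
```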