In the coming years, the Web is expected to evolve from a structure containing information resources with little or no explicit semantics to one having a rich semantic infrastructure. The key defining feature intended to distinguish the future Semantic Web from today's Web is that Web content will be usable by machines (i.e. software agents). Meaning needs to be communicated between agents that advertise and/or require the ability to perform tasks on the Web. Agents also need to determine the meaning of passive (i.e. non-agent) information resources on the Web in order to perform these tasks.
This paper describes an approach to ontology negotiation between agents supporting intelligent information management. Ontologies are declarative (data-driven) expressions of an agent's “world”: the objects, operations, facts and rules that constitute the logical space within which an agent performs. Ontology negotiation enables agents to cooperate in performing a task, even if they are based on different ontologies.
Our objective is to increase the opportunities for “strange agents” – that is, agents not necessarily developed within the same framework or with the same contextual operating assumptions – to communicate in solving tasks when they encounter each other on the Web. In particular, we have focused on information search tasks.
We have developed a protocol that allows agents to discover ontology conflicts and then, through incremental interpretation, clarification and explanation, establish a common basis for communicating with each other. We have implemented this protocol in a set of Java classes that can be added to a variety of agents, irrespective of their underlying ontological assumptions. We have demonstrated the use of the protocol, through this implementation, in a test-bed that includes two large scientific archives: NASA's Global Change Master Directory and NOAA's Wind and Sea Index.
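The clarification loop the abstract describes can be sketched in a few lines. This is an illustrative Python sketch under invented names (the actual ONP implementation is a set of Java classes, and the real vocabularies are far richer): when one agent cannot interpret a term, it requests the other agent's definition and maps the foreign term onto the local term whose definition overlaps most.

```python
# Hypothetical sketch of an ontology-negotiation exchange (not the authors'
# Java implementation): agents detect a term mismatch and resolve it through
# a clarification request. Vocabularies and definitions are invented.

class Agent:
    def __init__(self, name, vocabulary):
        self.name = name
        self.vocabulary = vocabulary   # local term -> definition (keyword set)
        self.translations = {}         # learned foreign term -> local term

    def interpret(self, term):
        """Return a local reading of a term, or None on an ontology conflict."""
        if term in self.vocabulary:
            return term
        return self.translations.get(term)

    def clarify(self, term):
        """Answer a clarification request with the term's definition."""
        return self.vocabulary.get(term, set())

    def negotiate(self, other, term):
        """On a conflict, ask the other agent for a definition and adopt the
        local term whose own definition overlaps that definition most."""
        known = self.interpret(term)
        if known:
            return known
        definition = other.clarify(term)
        best = max(self.vocabulary,
                   key=lambda t: len(self.vocabulary[t] & definition))
        if self.vocabulary[best] & definition:
            self.translations[term] = best   # remember for future messages
            return best
        return None

archive = Agent("GCMD", {"sea-surface-temperature":
                         {"ocean", "temperature", "surface"}})
searcher = Agent("searcher", {"SST":
                              {"ocean", "temperature", "surface", "satellite"}})

# The searcher does not know the archive's term, so it negotiates a mapping.
mapped = searcher.negotiate(archive, "sea-surface-temperature")
```

Once the mapping is established, subsequent messages using the foreign term are interpreted directly, which is the incremental flavour of the protocol: each resolved conflict enlarges the shared basis for communication.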
This paper presents an overview of different methods for resolving ontology mismatches and motivates the Ontology Negotiation Protocol (ONP) as a method that addresses some problems with other approaches. Much remains to be done. The protocol must be tested in larger and less familiar contexts (for example, numerous archives that have not been preselected) and it must be extended to accommodate additional forms of clarification and ontology evolution.
This article presents and motivates an extended ontology knowledge model which represents explicitly semantic information about concepts. This knowledge model results from enriching the standard conceptual model with semantic information which precisely characterises the concept's properties and expected ambiguities, including which properties are prototypical of a concept and which are exceptional, the behaviour of properties over time and the degree of applicability of properties to subconcepts. This enriched conceptual model permits a precise characterisation of what is represented by class membership mechanisms and helps knowledge engineers to determine, in a straightforward manner, the meta-properties holding for a concept. Meta-properties are recognised to be the main tool for a formal ontological analysis that allows us to build ontologies with a clean and untangled taxonomic structure.
This enriched semantics can prove useful to describe what is known by agents in a multi-agent system, and might facilitate the use of reasoning mechanisms on the knowledge that instantiates an ontology. These mechanisms can be used to solve ambiguities that can arise when agents with heterogeneous ontologies have to interoperate in order to perform a task.
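As a rough illustration of the enriched conceptual model, each property of a concept can carry explicit semantic annotations of the kinds the abstract lists. The field names below are invented for the sketch, not the paper's own knowledge model.

```python
# A minimal sketch (names invented) of an enriched property descriptor:
# each concept property records whether it is prototypical or exceptional,
# how it behaves over time, and its degree of applicability to subconcepts.

from dataclasses import dataclass

@dataclass
class PropertyAnnotation:
    name: str
    prototypical: bool    # typical of the concept (False = exceptional)
    mutability: str       # behaviour over time, e.g. "permanent", "changeable"
    applicability: float  # degree to which the property holds for subconcepts

# Flying is prototypical of birds but exceptional (absent) for penguins.
bird_flies = PropertyAnnotation("flies", prototypical=True,
                                mutability="permanent", applicability=0.9)
penguin_flies = PropertyAnnotation("flies", prototypical=False,
                                   mutability="permanent", applicability=0.0)
```

Making such distinctions explicit is what lets a knowledge engineer read off meta-properties directly, rather than inferring them from an undifferentiated list of attributes.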
Robustness is a key issue for natural language processing in general and parsing in particular, and many approaches have been explored in the last decade for the design of robust parsing systems. Among those approaches is shallow or partial parsing, which produces minimal and incomplete syntactic structures, often in an incremental way. We argue that with a systematic incremental methodology one can go beyond shallow parsing to deeper language analysis, while preserving robustness. We describe a generic system based on such a methodology and designed for building robust analyzers that tackle deeper linguistic phenomena than those traditionally handled by the now widespread shallow parsers. The rule formalism allows the recognition of n-ary linguistic relations between words or constituents on the basis of global or local structural, topological and/or lexical conditions. It offers the advantage of accepting various types of inputs, ranging from raw to chunked or constituent-marked texts, so for instance it can be used to process existing annotated corpora, or to perform a deeper analysis on the output of an existing shallow parser. It has been successfully used to build a deep functional dependency parser, as well as for the task of co-reference resolution, in a modular way.
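The flavour of such a rule can be conveyed with a toy example. The sketch below is illustrative only and not the system's actual rule formalism: a single rule recognises a binary SUBJECT relation over chunked input using a purely local topological condition.

```python
# Illustrative toy rule (not the described formalism): recognise a SUBJECT
# relation between a noun chunk and an immediately following verb chunk.

def find_subjects(chunks):
    """chunks: list of (label, text) pairs, e.g. ("NP", "the parser")."""
    relations = []
    for left, right in zip(chunks, chunks[1:]):
        if left[0] == "NP" and right[0] == "VP":
            relations.append(("SUBJ", left[1], right[1]))
    return relations

# Input in already-chunked form, as produced by a shallow parser.
chunked = [("NP", "the parser"), ("VP", "produces"), ("NP", "dependencies")]
```

Because the rule conditions on chunk labels rather than raw words, the same style of rule can run over raw, chunked or constituent-marked input, which is the property the abstract emphasises.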
The growing availability of textual sources has led to an increase in the use of automatic knowledge acquisition approaches from textual data, as in Information Extraction (IE). Most IE systems use knowledge explicitly represented as sets of IE rules, usually manually acquired. Recently, however, the acquisition of this knowledge has been addressed by applying a wide variety of Machine Learning (ML) techniques. Within this framework, new problems arise in relation to the way of selecting and annotating positive examples, and sometimes negative ones, in supervised approaches, or the way of organizing unsupervised or semi-supervised approaches. This paper presents a new IE-rule learning system that deals with these training set problems and describes a set of experiments testing this capability of the new learning approach.
Academic work on agents and ontologies is often oblivious to the complexities and realities of enterprise computing. At the same time, practitioners of enterprise computing, although adept at building robust, real-life enterprise applications, are often unaware of the academic body of work and of the opportunities for applying novel approaches of academic origin. Enterprise applications are very complex systems designed to support critical business operations. This article outlines the technical and business foundations of enterprise application software and briefly discusses viable opportunities for agent and ontology research.
Topic analysis is important for many applications dealing with texts, such as text summarization or information extraction. However, it can be done with great precision only if it relies on structured knowledge, which is difficult to produce on a large scale. In this paper, we propose using bootstrapping to solve this problem: a first topic analysis based on a weakly structured source of knowledge, a collocation network, is used for learning explicit topic representations that then support a more precise and reliable topic analysis.
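The bootstrapping idea can be made concrete with a toy sketch. Everything below is invented for illustration (the paper's collocation network is large and automatically built): a weakly structured network of word associations supplies a first, rough topic labelling, and those labels are then used to learn explicit topic signatures that a second, more reliable pass could exploit.

```python
# Toy sketch of bootstrapped topic analysis (illustrative data and names):
# pass 1 labels texts via a collocation network; the labels are then used
# to learn explicit topic representations (word-frequency signatures).

from collections import Counter

collocations = {  # hypothetical collocation network: topic word -> associates
    "ontology": {"agent", "semantics"},
    "parser": {"syntax", "grammar"},
}

def first_pass(sentence):
    """Label a text with the topic whose collocates it overlaps most."""
    words = set(sentence.split())
    return max(collocations,
               key=lambda t: len(words & (collocations[t] | {t})))

docs = ["agent semantics on the web", "grammar rules for syntax"]
labels = [first_pass(d) for d in docs]

# Learn explicit topic signatures from the first-pass labelling.
signatures = {t: Counter() for t in collocations}
for doc, topic in zip(docs, labels):
    signatures[topic].update(doc.split())
```

The learned signatures contain words (such as "rules" above) that the original network never listed, which is exactly the gain the bootstrap is meant to deliver.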
This paper explores the effectiveness of index terms more complex than the single words used in conventional information retrieval systems. Retrieval is done in two phases: in the first, a conventional retrieval method (the Okapi system) is used; in the second, complex index terms such as syntactic relations and single words with part-of-speech information are introduced to rerank the results of the first phase. We evaluated the effectiveness of the different types of index terms through experiments using the TREC-7 test collection and 50 queries. The retrieval effectiveness was improved for 32 out of 50 queries. Based on this investigation, we then introduce a method to select effective index terms by using a decision tree. Further experiments with the same test collection showed that retrieval effectiveness was improved in 25 of the 50 queries.
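The two-phase scheme can be sketched as follows. The scoring functions are stand-ins invented for illustration: a plain word-overlap count replaces the Okapi ranking, and adjacent-word bigrams stand in for syntactic-relation terms; the real system uses genuine parser output.

```python
# Hedged sketch of two-phase retrieval (invented scoring, not Okapi):
# phase one ranks by single-word matches; phase two reranks using complex
# index terms, here approximated as adjacent-word pairs.

def phase_one(query_terms, docs):
    """Rank documents by the number of matching single-word terms."""
    scores = {d: len(query_terms & set(text.split()))
              for d, text in docs.items()}
    return sorted(scores, key=scores.get, reverse=True)

def rerank(ranked, docs, relation_terms):
    """Reorder phase-one results by matches on complex (pair) terms;
    Python's stable sort keeps ties in phase-one order."""
    def bonus(d):
        words = docs[d].split()
        return sum(pair in relation_terms for pair in zip(words, words[1:]))
    return sorted(ranked, key=bonus, reverse=True)

docs = {
    "d1": "information retrieval with index terms",
    "d2": "index of retrieval terms information",
}
query = {"information", "retrieval", "index", "terms"}

first = phase_one(query, docs)                     # tied on single words
final = rerank(first, docs, {("retrieval", "terms")})
```

Both documents contain the same single words, so phase one cannot separate them; only the relational term "retrieval terms" does, which mirrors the abstract's point that complex index terms add discriminating power the first phase lacks.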
Robustness has traditionally been stressed as a generally desirable property of any computational model and system. The human natural language interpretation device exhibits this property as the ability to deal with odd sentences. However, the difficulty of explaining robustness theoretically within linguistic modelling suggests the adoption of an empirical notion. In this paper, we propose an empirical definition of robustness based on the notion of performance. Furthermore, a framework for controlling parser robustness in the design phase is presented. The control is achieved through the adoption of two principles: modularisation, typical of software engineering practice, and the availability of domain-adaptable components. The methodology has been adopted for the production of CHAOS, a pool of syntactic modules that has been used in real applications. This pool of modules enables a large-scale validation of the notion of empirical robustness, on the one hand, and of the design methodology, on the other, over different corpora and two different languages (English and Italian).
It is now more than ten years since researchers in the US Knowledge Sharing Effort envisaged a future where complex systems could be built by combining knowledge and services from multiple knowledge bases and the first agent communication language, KQML, was proposed (Neches et al., 1991). This model of communication, based on speech acts, a declarative message content representation language and the use of explicit ontologies defining the domains of discourse (Genesereth & Ketchpel, 1994), has become widely recognised as having great benefits for the integration of disparate and distributed information sources to form an open, extensible and loosely coupled system. In particular, this idea has become a key tenet in the multi-agent systems research community.
The IEEE Standard Upper Ontology (IEEE, 2001) is an effort to create a large, general-purpose, formal ontology. The ontology will be an open standard that can be reused for both academic and commercial purposes without fee, and it will be designed to support additional domain-specific ontologies. The effort is targeted for use in automated inference, semantic interoperability between heterogeneous information systems and natural language processing applications. The effort was begun in May 2000 with an e-mail discussion list, and since then there have been over 6000 e-mail messages among 170 subscribers. These subscribers include representatives from government, academia and industry in various countries. The effort was officially approved as an IEEE standards project in December 2000. Recently a successful workshop was held at IJCAI 2001 to discuss progress and proposals for this project (IJCAI, 2001).
This paper examines a perceived desire amongst software agent application and platform developers to have the ability to send domain-specific objects within inter-agent messages. If this feature is to be supported without departing from the notion that agents communicate in terms of knowledge, it is important that the meaning of such objects be well defined. Using an object-oriented metamodelling approach, the relationships between ontologies and agent communication and content languages in FIPA-style agent systems are examined. It is argued that for use with distributed multi-agent systems, ontologies should describe the nature of object identity and reference for each defined concept, and a UML profile supporting these modelling capabilities is presented. Finally it is shown how, given an ontology in UML, an ontology-specific object-oriented content language can be generated, allowing object structures (viewed in the abstract as UML object diagrams) to be used within message content to represent propositions, definite descriptions or (for classes without identity) value expressions.
Agent-Oriented Software Engineering (AOSE) is rapidly emerging in response to urgent needs in both software engineering and agent-based computing. While these two disciplines coexisted without remarkable interaction until some years ago, today there is rich and fruitful interaction among them and various approaches are available that bring together techniques, concepts and ideas from both sides. This article offers a guide to the broad body of literature on AOSE. The guide, which is intended to be of value to both researchers and practitioners, is structured according to key issues and key topics that arise when dealing with AOSE: methods and frameworks for requirements engineering, analysis, design, and implementation; languages for programming, communication and coordination and ontology specification; and development tools and platforms.
Numerous argumentation systems have been proposed in the literature. Yet there often appears to be a shortfall between proposed systems and possible applications. In other words, argumentation systems seem to need further development before they can be used widely in decision support or knowledge management. I believe that this shortfall can be bridged by taking a hybrid approach. Whilst formal foundations are vital, incorporating some of the practical ideas found in informal approaches may make the resulting hybrid systems more useful. In informal approaches, there is often an emphasis on graphical notation, with symbols that relate more closely to the real-world concepts being modelled. There may also be the incorporation of an argument ontology oriented to the user's domain. Furthermore, informal approaches can give greater consideration to how users interact with the models, for example by allowing users to edit arguments and to weight influences on graphs representing arguments. In this paper, I discuss some of the features of argumentation, review some key formal argumentation systems, identify some of the strengths and weaknesses of these formal proposals and finally consider some ways to develop formal proposals into hybrid argumentation systems. To focus the discussion, I consider some applications, in particular the analysis of structured news reports.
The concept of holonic systems has its roots in the desire to understand the structure of natural systems (e.g. living organisms and social organisations) and, in particular, their ability to behave in a stable yet flexible manner in the face of change. It is not surprising that the lessons learned from these natural systems could help with the design and control of complex man-made systems. However, a key issue is how to translate holonic concepts to real industrial environments. For example, one of the key holonic concepts, the holon, can be described as a self-contained, autonomous and cooperative entity; when deciding how to implement holons, software agents appear to be the logical choice. In this paper, we summarise the presentations and discussions from a workshop held at the recent International Conference on Autonomous Agents that focused on this issue and brought together researchers from both the holonic systems and multi-agent systems communities.
Argumentation concepts have been applied to numerous knowledge engineering endeavours in recent years. For example, a variety of logics have been developed to represent argumentation in the context of a dialectical situation such as a dialogue. In contrast to the dialectical approach, argumentation has also been used to structure knowledge. This can be seen as a non-dialectical approach. The Toulmin argument structure has often been used to structure knowledge non-dialectically yet most studies that apply the Toulmin structure do not use the original structure but vary one or more components. Variations to the Toulmin structure can be understood as different ways to integrate a dialectical perspective with a non-dialectical one. Drawing the dialectical/non-dialectical distinction enables the specification of a framework called the generic actual argument model that is expressly non-dialectical. The framework enables the development of knowledge-based systems that integrate a variety of inference procedures, combine information retrieval with reasoning and facilitate automated document drafting. Furthermore, the non-dialectical framework provides the foundation for simple dialectical models. Systems based on our approach have been developed in family law, refugee law, determining eligibility for government legal aid, copyright law and e-tourism.
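The Toulmin structure the abstract refers to can be rendered as a simple record, shown below with Toulmin's classic worked example. This sketch is illustrative of the original six-component layout only, not of the paper's generic actual argument model or of any of its variations.

```python
# The original Toulmin argument layout as a plain record (illustrative
# sketch; the paper's non-dialectical framework varies this structure).

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ToulminArgument:
    claim: str                      # the conclusion being argued for
    data: List[str]                 # grounds offered in support of the claim
    warrant: str                    # rule licensing the step from data to claim
    backing: Optional[str] = None   # support for the warrant itself
    qualifier: str = "presumably"   # strength with which the claim is advanced
    rebuttal: Optional[str] = None  # conditions under which the claim fails

# Toulmin's classic example.
arg = ToulminArgument(
    claim="Harry is a British subject",
    data=["Harry was born in Bermuda"],
    warrant="A man born in Bermuda will generally be a British subject",
    backing="the British Nationality Acts",
    rebuttal="unless both his parents were aliens",
)
```

Structured this way, each component is individually addressable, which is what makes the non-dialectical use of the layout (indexing knowledge, driving retrieval, drafting documents) straightforward to implement.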
The use of agents in electronic commerce has been explored extensively over the past several years. A large majority of this effort has been directed at commerce in which businesses transact directly with consumers (B2C). However, transactions between businesses (B2B) are far more prevalent than B2C transactions. Research in which agents are used for B2B can be classified into five basic areas: service discovery, mediation, negotiation, process management (be it workflow or supply-chain management) and evaluation. At the 2001 International Bi-Conference Sessions on Agent-Based Approaches to B2B Interoperability (AgentB2B), practitioners were invited to present their research and industry efforts in each of these areas. This paper summarises the work and conclusions presented at these two events.
We survey the main results on computability and totality in Scott–Eršov domains as well as their applications to the theory of functionals of higher types and the semantics of functional programming languages. A new density theorem is proved and applied to show the equivalence of the hereditarily computable total continuous functionals with the hereditarily effective operations over a large class of base types.