The computational background to digital signal processing (DSP) involves several techniques of numerical analysis. The techniques of particular value are:
solutions to linear systems of equations
finite difference analysis
numerical integration
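As a concrete illustration of the first item, many DSP problems (deconvolution, least-squares filter design) reduce to solving a linear system Ax = b once written in matrix form. The following is a minimal sketch using NumPy; the particular matrix is an arbitrary example, not taken from the text.

```python
import numpy as np

# A small linear system A x = b of the kind that arises when a DSP
# problem is written in matrix form. The tridiagonal A is typical of
# systems produced by finite difference analysis.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])

x = np.linalg.solve(A, b)            # direct solution via LU factorisation
residual = np.max(np.abs(A @ x - b))
print(residual < 1e-12)              # the computed x satisfies A x = b
```

For large or structured systems an iterative solver would normally replace the direct factorisation, but the matrix formulation is the same.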
A large number of DSP algorithms can be written in terms of a matrix equation or a set of matrix equations. Hence, computational methods in linear algebra are an important aspect of the subject. Many DSP algorithms can be classified in terms of a digital filter. Two important classes of digital filter are used in DSP, as follows.
Convolution filters are nonrecursive filters. They use linear processes that operate on the data directly.
Fourier filters operate on data obtained by computing the discrete Fourier transform of a signal. This is accomplished using the fast Fourier transform algorithm.
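The relationship between the two filter classes can be sketched as follows: a convolution in real space equals a multiplication of discrete Fourier transforms, provided both sequences are zero-padded to the full output length. This is a minimal illustration with an arbitrary signal and kernel, not an example from the text.

```python
import numpy as np

# Fourier-space filtering: convolve a signal with a filter kernel by
# multiplying their discrete Fourier transforms. Zero-padding to length
# len(f) + len(g) - 1 makes the circular convolution computed by the
# FFT agree with ordinary (linear) convolution.
f = np.array([1.0, 2.0, 3.0, 4.0])    # signal
g = np.array([0.25, 0.5, 0.25])       # filter kernel
n = len(f) + len(g) - 1

F = np.fft.fft(f, n)
G = np.fft.fft(g, n)
fourier_result = np.fft.ifft(F * G).real

direct_result = np.convolve(f, g)     # real-space (convolution filter) result
print(np.allclose(fourier_result, direct_result))   # True
```

For long signals the FFT route is far cheaper: direct convolution costs O(N^2) operations, while the FFT-based version costs O(N log N).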
Digital filters
Digital filters fall into two main categories:
real-space filters
Fourier-space filters
Real-space filters
Real-space filters are based on some form of ‘moving window’ principle. A sample of data from a given element of the signal is processed, typically giving a single output value. The window is then moved on to the next element of the signal and the process repeated. A common real-space filter is the finite impulse response (FIR) filter.
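The moving-window principle can be sketched directly: at each position the window of input samples is weighted by the filter coefficients and summed to give one output value. The function below is an illustrative sketch, with an arbitrary two-tap averaging kernel; it is not code from the text.

```python
import numpy as np

# A minimal FIR ('moving window') filter: at each output position n the
# window of recent input samples is weighted by the coefficients h and
# summed, then the window moves on one sample.
def fir_filter(x, h):
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(len(h)):
            if n - k >= 0:            # window truncated at the signal edge
                y[n] += h[k] * x[n - k]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
h = np.array([0.5, 0.5])              # two-tap moving average
print(fir_filter(x, h))               # [0.5 1.5 2.5 3.5 4.5]
```

In practice a library routine would be used, but the explicit double loop makes the windowing visible: the inner loop is exactly the convolution sum of a nonrecursive filter.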
The benefits of using ontologies have been recognised in many areas such as knowledge and content management, electronic commerce and, recently, the emerging field of the Semantic Web. These new applications can be seen as a great success of research in ontologies. On the other hand, moving into real applications brings new challenges that need to be addressed on a principled level rather than for specific applications. This special issue is devoted to less well-explored topics that have come into focus recently in response to the new problems we face when trying to use ontologies in heterogeneous distributed environments, including peer-to-peer and pervasive computing systems.
This paper explores the hypothesis that ontologies can be used to improve the capabilities and performance of on-board route planning for autonomous vehicles. We name a variety of general benefits that ontologies may provide, and list numerous specific ways that ontologies may be used in different components of our chosen infrastructure: the 4D/RCS system architecture developed at NIST. Our initial focus is on simple roadway driving scenarios where the controlled vehicle encounters objects in its path. Our approach is to develop an ontology of objects in the environment, in conjunction with rules for estimating the damage that would be incurred by collisions with the different objects in different situations. Automated reasoning is used to estimate collision damage; this information is fed to the route planner to help it decide whether to avoid the object. We describe our current experiments and plans for future work.
This document describes COBRA-ONT, an ontology for supporting pervasive context-aware systems. COBRA-ONT, expressed in the Web Ontology Language OWL, is a collection of ontologies for describing places, agents and events and their associated properties in an intelligent meeting-room domain. This ontology is developed as a part of the Context Broker Architecture (CoBrA), a broker-centric agent architecture that provides knowledge sharing, context reasoning and privacy protection supports for pervasive context-aware systems. We also describe an inference engine for reasoning with information expressed using the COBRA-ONT ontology and the ongoing research in using the DAML-Time ontology for context reasoning.
We introduce a simple method to build Lexicalized Hidden Markov Models (L-HMMs) for improving the precision of part-of-speech tagging. This technique enriches the contextual language model by taking into account an empirically obtained set of selected words. The evaluation was conducted with different lexicalization criteria on the Penn Treebank corpus using the TnT tagger. This lexicalization obtained about a 6% reduction of the tagging error on unseen test data, without reducing the efficiency of the system. We have also studied how the use of linguistic resources, such as dictionaries and morphological analyzers, improves the tagging performance. Furthermore, we have conducted an exhaustive experimental comparison showing that Lexicalized HMMs yield results that are better than or similar to other state-of-the-art part-of-speech tagging approaches. Finally, we have applied Lexicalized HMMs to the Spanish corpus LexEsp.
The new frontier of research on Information Extraction from texts is portability without any knowledge of Natural Language Processing. The market potential is in principle very large, provided that a suitably easy-to-use and effective methodology is available. In this paper we describe LearningPinocchio, a system for adaptive Information Extraction from texts that has achieved good commercial and scientific success. Real-world applications have been built and evaluation licenses have been released to external companies for application development. In this paper we outline the basic algorithm behind the scenes and present a number of applications developed with LearningPinocchio. We then report on an evaluation performed by an independent company. Finally, we discuss the general suitability of this IE technology for real-world applications and draw some conclusions.
Plain lists of collocations, as provided to date by most approaches to automatic acquisition of collocations from corpora, are useful as a resource for dictionary construction. However, their use is rather limited in NLP applications such as Text Generation, Machine Translation and Text Summarization unless they are enriched by information on the grammatical function of the collocation elements and on the semantics of the collocations as multiword units. In this article, we describe an approach to a fine-grained classification of verb-noun bigrams according to a semantically motivated typology of collocations and illustrate this with Spanish material. The typology of collocations that underlies our classification is based on verb-noun Lexical Functions (LFs) from Explanatory Combinatorial Lexicology. In the first stage of the approach, the program learns the semantic features of each LF from training data. In the second stage, it examines the semantic features of verb-noun candidate bigrams and compares them with the features of all the LFs taken into account. A candidate whose features are sufficiently similar to those of a specific LF is considered to be an instance of this LF. The semantic features of both the training material and the candidate bigrams are derived from the hyperonymy hierarchies provided by EuroWordNet. In the experiments carried out to validate the approach, we achieved an average $f$-score of about 70%.
Ontologies provide potential support for knowledge and content management on a P2P platform. Although we can design ontologies beforehand for an application, it is argued that in P2P environments static or predefined ontologies cannot satisfy the ever-changing requirements of all users. We therefore propose that every user should make proposals for the kind of ontology that best suits their needs. Collecting all these proposals (or votes) drives the drift of ontologies. This paper introduces OntoVote, a scalable distributed vote-collecting mechanism based on application-level broadcast trees, and describes how OntoVote can be applied to ontology drift on a P2P platform by discussing several problems involved in the voting process.
We have produced an ontology specifying a model of computer attack. Our ontology is based upon an analysis of over 4000 classes of computer intrusions and their corresponding attack strategies and is categorised according to system component targeted, means of attack, consequence of attack and location of attacker. We argue that any taxonomic characteristics used to define a computer attack be limited in scope to those features that are observable and measurable at the target of the attack. We present our model as a target-centric ontology that is to be refined and expanded over time. We state the benefits of forgoing dependence upon taxonomies in favour of ontologies for the classification of computer attacks and intrusions. We have specified our ontology using the DARPA Agent Markup Language+Ontology Inference Layer and have prototyped it using DAMLJessKB. We illustrate the benefits of utilising an ontology in lieu of a taxonomy by presenting a use-case scenario of a distributed intrusion detection system.
Ontologies are entering widespread use in many areas such as knowledge and content management, electronic commerce and the Semantic Web. In this paper we show how the use of ontologies has helped us overcome some important problems in the development of pervasive computing environments. We have integrated ontologies and Semantic Web technology into our pervasive computing infrastructure. Our investigations have shown that Semantic Web technology can be integrated into our CORBA-based infrastructure to augment several important services. This work suggests a number of requirements for future research in the development of ontologies, reasoners, languages and interfaces.
We think of match as an operator that takes two graph-like structures (e.g. database schemas or ontologies) and produces a mapping between elements of the two graphs that correspond semantically to each other. The goal of this paper is to propose a new approach to matching, called semantic matching. As its name indicates, in semantic matching the key intuition is to exploit the model-theoretic information, which is codified in the nodes and the structure of graphs. The contributions of this paper are (i) a rational reconstruction of the major matching problems and their articulation in terms of the more generic problem of matching graphs, (ii) the identification of semantic matching as a new approach for performing generic matching and (iii) a proposal for implementing semantic matching by testing propositional satisfiability.
IVR [interactive voice response] technology is at a point now where consumers almost cannot tell the difference between talking to a person and talking to a computer.