Knowledge-based systems have often been criticised for the limited theoretical base upon which they are constructed. This view asserts that systems are often developed in an ad hoc, individual way that leads to unmaintainable, unreliable and non-rigorous systems. The last decade, however, has seen an increased effort to produce methodologies to counter this view as well as continued research into validation and verification techniques. This paper presents a brief discussion of some of the important research in knowledge-based system life cycles and development methods. Methodologies are considered and are discussed in light of two sets of quality assurance criteria.
Technology influences all art, and therefore all music, including composition, performance and listening. It always has, and it always will. For example, technical developments in materials, mechanics and manufacturing were important factors that permitted the piano to supersede the harpsichord as the primary Western concert keyboard instrument by about 1800. And with each new technical development, new performance issues have been introduced. Piano performance technique is quite different from harpsichord technique, and composers responded to these differences with new musical ideas and gestures. The multiple relationships between technology and composer and performer are dynamic and of paramount importance to each party, and a true consideration of any aspect of music requires that all three areas be examined. This has always been a part of music, and so these relationships are inherently important within computer music. The difference is that electronic technology has caused a fundamental change for all aspects of music, a change as pivotal in the history of Western music as the shift from oral to written preservation of music over a thousand years ago, and the accessibility provided by printed music five hundred years ago. In computer music, all parties are always acutely aware of the presence and influence of machine technology in both the visual and audible realms.
This paper takes a systemic perspective on interactive signal processing and introduces the author's Audible Eco-Systemic Interface (AESI) project. It starts with a discussion of the paradigm of ‘interaction’ in existing computer music and live electronics approaches, and develops following bio-cybernetic principles such as ‘system/ambience coupling’, ‘noise’, and ‘self-organisation’. Central to the paper is an understanding of ‘interaction’ as a network of interdependencies among system components, and as a means for dynamical behaviour to emerge upon the contact of an autonomous system (e.g. a DSP unit) with the external environment (the room or other space hosting the performance). The author describes the design philosophy of his current work with the AESI (whose DSP component was implemented as a signal patch in KYMA 5.2), touching on compositional implications (not only live electronics situations, but also sound installations).
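As an illustration only, the following minimal Python sketch (not the AESI itself, whose DSP component is a KYMA signal patch) shows one way a system/ambience coupling of the kind described might be simulated; the class name, the loudness feature and the adaptation rule are all invented.

```python
# Hypothetical sketch of system/ambience coupling: a DSP-like unit whose
# control parameter adapts to features measured from the "room", while its
# own output is fed back into that room. Not the AESI implementation.
import random

class EcoSystemicUnit:
    def __init__(self):
        self.gain = 0.5            # internal control parameter

    def listen(self, room_signal):
        # crude "feature extraction": average magnitude of the room signal
        return sum(abs(s) for s in room_signal) / len(room_signal)

    def adapt(self, loudness):
        # self-regulation: back off when the room is loud, open up when quiet
        target = 1.0 - min(loudness, 1.0)
        self.gain += 0.1 * (target - self.gain)

    def emit(self, n=64):
        # output depends on the adapted parameter
        return [self.gain * (random.random() * 2 - 1) for _ in range(n)]

unit = EcoSystemicUnit()
room = [0.0] * 64                                   # initially silent room
for step in range(20):
    ambience = [random.uniform(-0.2, 0.2) for _ in range(64)]   # room noise
    unit.adapt(unit.listen(room))
    out = unit.emit()
    room = [a + o for a, o in zip(ambience, out)]   # output re-enters the room
    print(f"step {step:2d}  gain={unit.gain:.3f}")
```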
Most scholars writing on the use of samplers express anxiety over the dissolution of boundaries between human-generated and automated musical expression, and focus on the copyright infringement issues surrounding sampling practices without adequately exploring samplists' musical and political goals. Drawing on musical examples from various underground electronic music genres and on interviews with electronic musicians, this essay addresses such questions as: What is a sampler, and how does the sampling process resonate with or diverge from other traditions of instrument-playing? How do electronic musicians use the ‘automated’ mechanisms of digital instruments to achieve nuanced musical expression and cultural commentary? What are some political implications of presenting sampled and processed sounds in a reconfigured compositional environment? By exploring these issues, I hope to counter the over-simplified, uninformed critical claims that sampling is a process of ‘theft’ and ‘automation’, and instead offer insight into the myriad and complex musical and political dimensions of sampling in electronic music production.
Seeking new forms of expression in computer music, a small number of laptop composers are braving the challenges of coding music on the fly. Not content to submit meekly to the rigid interfaces of performance software like Ableton Live or Reason, they work with programming languages, building their own custom software and tweaking or writing the programs themselves as they perform. Often this activity takes place within an established language for computer music like SuperCollider, but there is no reason to stop errant minds pursuing their innovations in general scripting languages like Perl. This paper presents an introduction to the field of live coding, that is, real-time scripting during laptop music performance, and to the improvisatory power and risks involved. We look at two test cases: the command-line music of slub, which utilises Perl and REALbasic amongst a grab-bag of technologies, and Julian Rohrhuber's Just In Time library for SuperCollider. We try to give a flavour of an exciting but hazardous world at the forefront of live laptop performance.
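To give a hedged, language-agnostic picture of the basic live-coding gesture, here is a small Python sketch (not slub's Perl setup or Rohrhuber's Just In Time library): a scheduler keeps calling whatever the function `pattern` currently is, so re-evaluating its definition changes the output while it plays.

```python
# Hypothetical sketch of the basic live-coding gesture: a scheduler keeps
# calling whatever `pattern` currently is, so redefining `pattern` at the
# prompt changes the "music" without stopping it. Notes are printed, not played.
import threading, time

def pattern(beat):
    return 60 + (beat % 4) * 2        # the "score" as a function of the beat

def scheduler(bpm=120, beats=16):
    for beat in range(beats):
        note = pattern(beat)          # late binding picks up redefinitions
        print(f"beat {beat:2d} -> note {note}")
        time.sleep(60.0 / bpm)

player = threading.Thread(target=scheduler, daemon=True)
player.start()

time.sleep(2)                          # ...performer edits code while it plays...
def pattern(beat):                     # re-evaluated definition takes effect live
    return 72 - (beat % 8)

player.join()
```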
Over the last ten years a community interested in context has emerged. Brézillon (1999) gave a survey of the literature on context in artificial intelligence. There is now a series of conferences on context, a website and a mailing list. The number of web pages containing the word “context” has increased tenfold in the last five years. Being among the instigators of the use of context in real-world applications, I present in this paper the evolution of my thinking over recent years and the results that have been obtained, including a representation formalism based on contextual graphs and the use of this formalism in a real-world application called SART. I present how procedures, practices and context are intertwined, as identified in the SART application and in different domains. I root my view of context in the artificial intelligence area and give a general presentation of this view under three aspects – external knowledge, contextual knowledge and proceduralised context – together with its implementation in contextual graphs. I discuss how reasoning based on procedures and practices is carried out in the formalism of contextual graphs, and show how incremental acquisition of practices is integrated into this formalism.
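As a rough, hypothetical illustration of what a contextual-graph-like representation can look like (not Brézillon's formalism itself), the sketch below encodes branch nodes that test contextual elements, so that traversal under a given context yields one practice, i.e. one sequence of actions; the incident-handling example is invented, loosely inspired by the SART domain.

```python
# Hypothetical sketch of a contextual-graph-like structure: branch nodes test
# a contextual element, and traversal under a given context selects one
# practice (a sequence of actions). Not Brézillon's own formalism.
Action = str

def traverse(node, context):
    """Walk the graph, collecting the actions of the selected practice."""
    if isinstance(node, Action):
        return [node]
    if isinstance(node, list):                      # sequence of sub-nodes
        return [a for sub in node for a in traverse(sub, context)]
    element, branches = node                        # contextual element node
    return traverse(branches[context[element]], context)

# A toy procedure for handling a train incident (all values invented):
graph = [
    "assess incident",
    ("incident_type", {
        "fire":      ["alert fire brigade", "stop traffic"],
        "breakdown": [("rush_hour", {True:  "reroute trains",
                                     False: "send repair team"})],
    }),
    "write report",
]

print(traverse(graph, {"incident_type": "breakdown", "rush_hour": True}))
# ['assess incident', 'reroute trains', 'write report']
```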
The current trend in the production of information appliances is the human-centred, customer-centred approach, in which technology serves human needs invisibly and unobtrusively. The emphasis shifts from application programs to users, their tasks and their workplaces, so that computation often moves off the desktop and becomes embedded in the world around us. In this scenario the role of the visual interface becomes crucial since, as far as the customer is concerned, the interface is the product. In this paper we briefly survey the most recent results in the field of advanced visual interfaces, with a focus on users' needs and ways to serve them.
Users of online search engines often find it difficult to express their need for information in the form of a query. However, if users can identify examples of the kind of documents they require, they can employ a technique known as relevance feedback. Relevance feedback covers a range of techniques intended to improve a user's query and facilitate retrieval of information relevant to the user's information need. In this paper we survey relevance feedback techniques. We study both automatic techniques, in which the system modifies the user's query, and interactive techniques, in which the user has control over query modification. We also consider specific interfaces to relevance feedback systems and characteristics of searchers that can affect the use and success of relevance feedback systems.
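One classic automatic technique in this family is Rocchio-style query modification; the sketch below is a minimal illustration over term-weight dictionaries, with conventional default values for alpha, beta and gamma rather than figures taken from the survey.

```python
# Minimal sketch of Rocchio-style automatic query modification over term
# vectors represented as dicts. The alpha/beta/gamma values are conventional
# defaults, not figures from the survey.
from collections import defaultdict

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    new_query = defaultdict(float)
    for term, w in query.items():
        new_query[term] += alpha * w
    for doc in relevant:                       # pull towards relevant documents
        for term, w in doc.items():
            new_query[term] += beta * w / len(relevant)
    for doc in nonrelevant:                    # push away from non-relevant ones
        for term, w in doc.items():
            new_query[term] -= gamma * w / len(nonrelevant)
    # negative weights are usually clipped to zero
    return {t: w for t, w in new_query.items() if w > 0}

query = {"jaguar": 1.0}
relevant = [{"jaguar": 0.8, "cat": 0.6, "wildlife": 0.5}]
nonrelevant = [{"jaguar": 0.7, "car": 0.9, "engine": 0.4}]
print(rocchio(query, relevant, nonrelevant))
```

In an interactive variant, the user would mark the relevant and non-relevant documents and could inspect or veto the added terms before the modified query is run.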
We present an Augmented Template-Based approach to text realization that addresses the requirements of real-time, interactive systems such as dialog systems and intelligent tutoring systems. Template-based approaches are easier to implement and use than traditional approaches to text realization, and they can generate texts more quickly. However, traditional template-based approaches with rigid templates are inflexible and difficult to reuse. Our approach augments them by adding several types of declarative control expressions and an attribute grammar-based mechanism for processing missing or inconsistent slot fillers. As a result, augmented templates can be made more general than traditional ones, yielding templates that are more flexible and reusable across applications.
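As a hedged illustration of the general idea (not the authors' attribute-grammar mechanism), the sketch below shows a template whose slots carry declarative conditions and fallbacks for missing or inconsistent fillers; the slot names and repair rules are invented.

```python
# Hypothetical sketch of an augmented template: slots carry declarative
# conditions and fallbacks for missing or inconsistent fillers. Slot names
# and rules are invented; this is not the authors' formalism.
def realize(template, slots):
    parts = []
    for piece in template:
        if isinstance(piece, str):               # literal text
            parts.append(piece)
            continue
        name, condition, fallback = piece        # augmented slot
        value = slots.get(name)
        if value is None or not condition(value):
            value = fallback(slots)              # repair missing/bad filler
        parts.append(str(value))
    return " ".join(parts)

feedback_template = [
    ("student", lambda v: bool(v), lambda s: "You"),
    "answered",
    ("n_correct", lambda v: isinstance(v, int) and v >= 0, lambda s: "some"),
    "of the",
    ("n_total", lambda v: isinstance(v, int) and v > 0, lambda s: "assigned"),
    "questions correctly.",
]

print(realize(feedback_template, {"student": "Alice", "n_correct": 7, "n_total": 10}))
print(realize(feedback_template, {"n_correct": -1}))   # missing/inconsistent slots
```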
This paper introduces an abductive framework for updating knowledge bases represented by extended disjunctive programs. We first provide a simple transformation from abductive programs to update programs, which are logic programs specifying changes on abductive hypotheses. Then, extended abduction, which was introduced by the same authors as a generalization of traditional abduction, is computed by the answer sets of update programs. Next, different types of updates, namely view updates and theory updates, are characterized by abductive programs and computed by update programs. The task of consistency restoration is also realized as a special case of these updates. Each update problem is comparatively assessed from the viewpoint of computational complexity. The paper thus provides a uniform framework for different types of knowledge base updates, in which each update is computed using existing procedures of logic programming.
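To convey the flavour of extended abduction in a deliberately simplified setting, the sketch below performs a brute-force search over hypotheses that may be both added and removed so that an observation becomes derivable and consistency is preserved. It works on plain propositional Horn rules, not on the extended disjunctive programs and answer sets of the actual framework, and all rules and facts are invented.

```python
# Brute-force propositional sketch of the flavour of extended abduction:
# hypotheses may be both added and removed so that the observation becomes
# derivable while consistency is kept. The real framework uses extended
# disjunctive programs and answer sets; this toy Horn-clause search does not.
from itertools import chain, combinations

rules = [({"power", "lamp_ok"}, "light"),        # body -> head
         ({"light"}, "room_lit")]
abducibles = {"power", "lamp_ok", "fuse_blown"}
current_hypotheses = {"fuse_blown"}              # currently assumed facts
constraints = [{"power", "fuse_blown"}]          # sets that may not hold together
observation = "room_lit"

def closure(facts):
    facts, changed = set(facts), True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def consistent(facts):
    return not any(c <= facts for c in constraints)

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

best = None
for add in map(set, subsets(abducibles - current_hypotheses)):
    for remove in map(set, subsets(current_hypotheses)):
        facts = closure((current_hypotheses - remove) | add)
        if observation in facts and consistent(facts):
            if best is None or len(add) + len(remove) < len(best[0]) + len(best[1]):
                best = (add, remove)

print("add:", best[0], "remove:", best[1])
# e.g. add: {'power', 'lamp_ok'}  remove: {'fuse_blown'}
```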
A principal aim of this journal is to bridge the gap between traditional computational linguistics research and the implementation of practical applications with potential real-world use. This new column aims to help with that bridge-building by providing a window onto what is happening with speech and language technologies in the industrial and commercial worlds, and a discussion forum for topics of interest in the technology transfer space.
The problem of integrating knowledge from multiple and heterogeneous sources is a fundamental issue in current information systems. To cope with this problem, the concept of a mediator has been introduced as a software component that provides intermediate services, linking data resources and application programs and making the heterogeneity of the underlying systems transparent. In designing a mediator architecture, we believe that an important aspect is the definition of a formal framework by which one can model integration in a declarative style. For this purpose, a logical approach seems very promising. Another important aspect is the ability to model both static integration aspects, concerning query execution, and dynamic ones, concerning data updates and their propagation among the various data sources. Unfortunately, as far as we know, no formal proposals for logically modeling mediator architectures from both a static and a dynamic point of view have yet been developed. In this paper, we extend the framework for amalgamated knowledge bases, presented in Subrahmanian (1994), to deal with dynamic aspects. The language we propose is based on the Active U-Datalog language (Bertino et al., 1998), and extends it with annotated logic and amalgamation concepts from Kifer and Subrahmanian (1992) and Subrahmanian (1987). We model the sources of information and the mediator (also called the supervisor) as Active U-Datalog deductive databases, thus modeling queries, transactions, and active rules, interpreted according to the PARK semantics (Gottlob et al., 1996). By using active rules, the system can efficiently perform update propagation among different databases. The result is a logical environment, integrating active and deductive rules, for performing queries and update propagation in a heterogeneous mediated framework.
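As a purely schematic illustration (plain Python rather than Active U-Datalog or annotated logic), the sketch below shows the architectural idea of a mediator that answers queries over several sources and fires active rules to propagate an update from one source to another; the sources, relations and rule are invented.

```python
# Schematic sketch of the mediator idea: queries are answered over several
# sources, and active rules fire on updates to propagate changes. This is a
# plain-Python illustration, not Active U-Datalog or the PARK semantics.
class Mediator:
    def __init__(self, sources):
        self.sources = sources          # name -> {relation: set of tuples}
        self.active_rules = []          # callbacks fired on update events

    def on_update(self, rule):
        self.active_rules.append(rule)

    def query(self, relation):
        # static integration: union of the relation across all sources
        return set().union(*(src.get(relation, set()) for src in self.sources.values()))

    def update(self, source, relation, tup):
        # dynamic integration: apply the update, then fire active rules
        self.sources[source].setdefault(relation, set()).add(tup)
        for rule in self.active_rules:
            rule(self, source, relation, tup)

def mirror_employees(mediator, source, relation, tup):
    # invented active rule: keep the 'hr' source's employee relation in sync
    if relation == "employee" and source != "hr":
        mediator.sources["hr"].setdefault("employee", set()).add(tup)

m = Mediator({"hr": {"employee": {("ann",)}}, "sales": {"employee": {("bob",)}}})
m.on_update(mirror_employees)
m.update("sales", "employee", ("carol",))
print(m.query("employee"))              # union over both sources
print(m.sources["hr"]["employee"])      # carol has been propagated to hr
```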
Most information systems that deal with natural language texts do not tolerate much deviation from their idealized and simplified model of language. Spoken dialog, however, is notoriously ungrammatical. Because the MAREDI project focuses in particular on the automatic analysis of scripted dialogs, we needed to develop a robust capacity to analyze transcribed spoken language. This paper summarizes the current state of our work. It presents the main elements of our approach, which is based on exploiting surface markers as the best route to the semantics of the conversation being modelled. We highlight the foundations of our particular conversational model and give an overview of the MAREDI system. We then discuss its three key modules: a connectionist network to recognize speech acts, a robust syntactic analyzer, and a semantic analyzer.
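A schematic sketch of a three-module pipeline of the kind described follows; the stubs are invented stand-ins keyed on surface markers, not MAREDI's connectionist speech-act recognizer, robust parser or semantic analyzer.

```python
# Schematic sketch of a three-module analysis pipeline of the kind described.
# The stubs are invented stand-ins, not the MAREDI modules themselves.
def recognize_speech_act(utterance):
    # stand-in for the connectionist classifier: keys on surface markers
    if utterance.rstrip().endswith("?"):
        return "question"
    if utterance.lower().startswith(("please", "could you")):
        return "request"
    return "statement"

def robust_parse(utterance):
    # stand-in for the robust syntactic analyzer: tolerates fragments
    tokens = utterance.strip(" ?!.").split()
    return {"tokens": tokens, "fragment": len(tokens) < 3}

def semantic_analysis(speech_act, parse):
    # stand-in for the semantic analyzer: a shallow representation
    return {"act": speech_act, "content": parse["tokens"], "partial": parse["fragment"]}

for utterance in ["Could you open the window", "the window?", "It is cold in here."]:
    parse = robust_parse(utterance)
    print(semantic_analysis(recognize_speech_act(utterance), parse))
```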
We aim to find the minimal set of fragments that achieves maximal parse accuracy in Data Oriented Parsing (DOP). Experiments with the Penn Wall Street Journal (WSJ) treebank show that counts of almost arbitrary fragments within parse trees are important, leading to improved parse accuracy over previous models tested on this treebank. We isolate a number of dependency relations which previous models neglect but which contribute to higher accuracy. We show that the history of statistical parsing models displays a tendency towards using more and larger fragments from the training data.
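As a small illustration of what "fragments within parse trees" means, the sketch below enumerates, for a toy tree, the connected subtrees rooted at each node in which any nonterminal child may either be cut off (left as a frontier node) or expanded further; the tree and counting scheme are invented for illustration, and the enumeration grows quickly in general.

```python
# Small sketch of DOP-style fragments: connected subtrees of a parse tree in
# which each child of an included node is either cut off (left as a frontier
# nonterminal) or expanded further.
from itertools import product

def fragments(tree):
    """Yield every fragment rooted at this nonterminal node, as nested tuples."""
    label, *children = tree
    options = []
    for child in children:
        if isinstance(child, str):                  # terminal child: always kept
            options.append([child])
        else:                                        # nonterminal: cut off or expand
            options.append([child[0]] + list(fragments(child)))
    for combo in product(*options):
        yield (label,) + combo

def all_fragments(tree):
    """Collect fragments rooted at every nonterminal node of the tree."""
    if isinstance(tree, str):
        return []
    found = list(fragments(tree))
    for child in tree[1:]:
        found.extend(all_fragments(child))
    return found

tree = ("S", ("NP", "she"), ("VP", ("V", "saw"), ("NP", "stars")))
frags = all_fragments(tree)
print(len(frags), "fragments, for example:")
for frag in frags[:4]:
    print(" ", frag)
```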
The GEIG metric for quantifying the accuracy of parsing became influential through the Parseval programme, but many researchers have seen it as unsatisfactory. The Leaf-Ancestor (LA) metric, first developed in the 1980s, arguably comes closer to formalizing our intuitive concept of relative parse accuracy. We support this claim via an experiment that contrasts the performance of alternative metrics on the same body of automatically-parsed examples. The LA metric has the further virtue of providing straightforward indications of the location of parsing errors.
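To make the leaf-ancestor idea concrete, here is a simplified sketch: each word is scored by comparing its lineage (the node labels from the word up to the root) in the candidate parse against the gold parse via a normalised edit distance, and the per-word scores are averaged. Sampson's actual LA metric includes refinements (such as marking constituent boundaries within lineages) that this sketch omits, and the example trees are invented.

```python
# Simplified sketch of the leaf-ancestor idea: score each word by comparing
# its lineage (labels from the word up to the root) in candidate vs gold,
# using a normalised edit distance, then average over words.
def lineages(tree, path=()):
    """Map each word to the sequence of labels above it, root first."""
    label, *children = tree
    result = []
    for child in children:
        if isinstance(child, str):
            result.append((child, path + (label,)))
        else:
            result.extend(lineages(child, path + (label,)))
    return result

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def leaf_ancestor_score(gold, candidate):
    scores = []
    for (word, g), (_, c) in zip(lineages(gold), lineages(candidate)):
        scores.append(1 - edit_distance(g, c) / (len(g) + len(c)))
    return sum(scores) / len(scores)

gold = ("S", ("NP", "she"), ("VP", ("V", "saw"), ("NP", "stars")))
cand = ("S", ("NP", "she"), ("VP", ("V", "saw"), ("ADVP", "stars")))
print(round(leaf_ancestor_score(gold, cand), 3))   # a mislabelled NP lowers the score
```

A per-word score of this kind also points directly at where the parse went wrong, which is the locational advantage the abstract mentions.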