Richly annotated dialogue corpora are essential for new research directions in statistical learning approaches to dialogue management, context-sensitive interpretation, and context-sensitive speech recognition. In particular, large dialogue corpora annotated with contextual information and speech acts are urgently required. We explore how existing dialogue corpora (usually consisting of utterance transcriptions) can be automatically processed to yield new corpora where dialogue context and speech acts are accurately represented. We present a conceptual and computational framework for generating such corpora. As an example, we present and evaluate an automatic annotation system which builds ‘Information State Update’ (ISU) representations of dialogue context for the Communicator (2000 and 2001) corpora of human–machine dialogues (2,331 dialogues). The purposes of this annotation are to generate corpora for reinforcement learning of dialogue policies, for building user simulations, for evaluating different dialogue strategies against a baseline, and for training models for context-dependent interpretation and speech recognition. The automatic annotation system parses system and user utterances into speech acts and builds up sequences of dialogue context representations using an ISU dialogue manager. We present the architecture of the automatic annotation system and a detailed example to illustrate how the system components interact to produce the annotations. We also evaluate the annotations, with respect to the task completion metrics of the original corpus and in comparison to hand-annotated data and annotations produced by a baseline automatic system. The automatic annotations perform well and largely outperform the baseline automatic annotations in all measures. The resulting annotated corpus has been used to train high-quality user simulations and to learn successful dialogue strategies. The final corpus will be made publicly available.
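To make the annotation idea concrete, below is a minimal Python sketch of an ISU-style context update applied to parsed speech acts. The field names and speech-act labels here are illustrative assumptions only; they are not the actual Communicator annotation scheme or the authors' dialogue manager.

```python
# Minimal sketch of an Information State Update (ISU) pass over a
# transcribed dialogue. Slot names and act labels are hypothetical.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class InformationState:
    turn: int = 0
    filled_slots: dict = field(default_factory=dict)      # e.g. dest_city
    last_system_act: Optional[str] = None
    history: list = field(default_factory=list)           # speech-act trace

def update(state: InformationState, speaker: str, act: str,
           slot: Optional[str] = None, value: Optional[str] = None) -> None:
    """Apply one parsed speech act to the context and log it."""
    state.turn += 1
    if speaker == "system":
        state.last_system_act = act
    if act == "provide_info" and slot is not None:
        state.filled_slots[slot] = value
    state.history.append((state.turn, speaker, act, slot, value))

# Annotating one exchange yields a context snapshot after each utterance.
state = InformationState()
update(state, "system", "request_info", slot="dest_city")
update(state, "user", "provide_info", slot="dest_city", value="Boston")
print(state.filled_slots)   # {'dest_city': 'Boston'}
```

Sequences of such snapshots, one per utterance, are what a corpus annotated with dialogue context amounts to in this scheme.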
Digraphic ciphers based on linear transformations—matrices
We have seen in the previous chapters that various techniques associated with the frequencies of individual letters and their combinations enable the cryptanalyst to cope with different kinds of substitution ciphers. Conceivably this may remain true even for more sophisticated methods of cryptography so long as the unit of cryptography remains a single letter. Perhaps the way for the cryptographer to prevent the cryptanalyst's successes with letter frequencies might be to make the unit of encipherment a group of letters instead of just one. A system of cryptography in which a group of n plain text letters is replaced as a unit by a group of n cipher letters is called a polygraphic system.
In the simplest case, n = 2, the system is called digraphic. Each pair of plain text letters is replaced by a cipher digraph.
There are many different ways to set up the plain-cipher relationships for a digraphic system. For example, a 26 × 26 square can be constructed with the 26² = 676 possible digraphs entered randomly into the cells of the square. Normal alphabets across the top of the square and down the left side serve as plain language coordinates. The cipher equivalent of the plain digraph P₁P₂ is found in the cell on row P₁ and in column P₂. A portion of such a square is shown in Figure 4.1. For example, the cipher equivalents for AC, BE, CD are RA, AS, YE.
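As an illustration of the digraphic square just described, the following Python sketch builds a randomly filled 26 × 26 table and enciphers a message two letters at a time. The square is generated with a fixed seed so the example is reproducible, but its contents will not match the specific square of Figure 4.1.

```python
# A sketch of the digraphic square described above: a 26 x 26 table
# whose 676 cells hold the possible cipher digraphs in random order.
import random
import string

ALPHA = string.ascii_uppercase
random.seed(0)  # fixed seed; this square is NOT the one in Figure 4.1

# All 676 possible digraphs, shuffled, then dealt into a 26 x 26 grid.
cells = [a + b for a in ALPHA for b in ALPHA]
random.shuffle(cells)
square = {(ALPHA[r], ALPHA[c]): cells[26 * r + c]
          for r in range(26) for c in range(26)}

def encipher(plaintext: str) -> str:
    """Replace each plain digraph P1P2 with the cell at row P1, column P2."""
    text = "".join(ch for ch in plaintext.upper() if ch.isalpha())
    if len(text) % 2:
        text += "X"  # pad an odd-length message (a common convention)
    return "".join(square[(text[i], text[i + 1])]
                   for i in range(0, len(text), 2))

print(encipher("ACBECD"))  # three plain digraphs in, three cipher digraphs out
```

The matrix-based digraphic ciphers promised by the chapter title replace this arbitrary table with a linear transformation, so that the cipher digraph is computed from the plain digraph rather than looked up.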
Mark Davison examines several legal models designed to protect databases, considering in particular the EU Directive, the history of its adoption and its transposition into national laws. He compares the Directive with a range of American legislative proposals, as well as the principles of misappropriation that underpin them. The book also contains a commentary on the appropriateness of the various models in the context of moves for an international agreement on the topic. This book will be of interest to academics and practitioners, including those involved with databases and other forms of new media.
In this paper we reflect on the performer–instrument relationship by turning towards the thinking practices of the French philosopher Maurice Merleau-Ponty (1908–1961). Merleau-Ponty’s phenomenological idea of the body as being at the centre of the world highlights an embodied position in the world and bestows significance onto the body as a whole, onto the body as a lived body. In order to better understand this two-way relationship of instrument and performer, we introduce the notion of the performative layer, which emerges through strategies for dealing with discontinuities, breakdowns and the unexpected in network performance.
This article examines differing approaches to the definition, classification and modelling of interactive music systems, drawing together both historical and contemporary practice. Concepts of shared control, collaboration and conversation metaphors, mapping, gestural control, system responsiveness and separation of interface from sound generator are discussed. The article explores the potential of interactive systems to facilitate the creation of dynamic compositional sonic architectures through performance and improvisation.
The use of a laptop computer for musical performance has become widespread in the electronic music community. It brings with it many issues pertaining to the communication of musical intent. Critics argue that performances of this nature fail to engage audiences because many performers use the mouse and/or computer keyboard to control their musical works, leaving no visual cues to guide the audience as to the correlation between performance gestures and musical outcomes. The author will argue that interfaces need to communicate something of their task and that cognitive affordances (Gibson 1979) associated with the performance interface become paramount if the musical outcomes are to be perceived as clearly tied to real-time performance gestures – in other words, if the audience are to witness the creation of the music in that moment as distinct from the manipulation of pre-recorded or pre-sequenced events. Interfaces of this kind lend themselves particularly to electroacoustic and computer music performance, where timbre, texture and morphology may be paramount.
Composers, musicians and computer scientists have begun to use software-based agents to create music and sound art in both linear and non-linear (non-predetermined form and/or content) idioms, with some robust approaches now drawing on various disciplines. This paper surveys recent work: agent technology is first introduced, a theoretical framework for its use in creating music/sound art works put forward, and an overview of common approaches then given. Identifying areas of neglect in recent research, a possible direction for further work is then briefly explored. Finally, a vision for a new hybrid model that integrates non-linear, generative, conversational and affective perspectives on interactivity is proposed.
This paper explores the differences in the design and performance of acoustic and new digital musical instruments, arguing that with the latter there is an increased encapsulation of musical theory. The point of departure is the phenomenology of musical instruments, which leads to the exploration of designed artefacts as extensions of human cognition – as scaffolding onto which we delegate parts of our cognitive processes. The paper succinctly emphasises the pronounced epistemic dimension of digital instruments when compared to acoustic instruments. Through the analysis of material epistemologies it is possible to describe the digital instrument as an epistemic tool: a designed tool with such a high degree of symbolic pertinence that it becomes a system of knowledge and thinking in its own terms. In conclusion, the paper draws together the phenomenological and epistemological arguments and points to issues in the design of digital musical instruments that are germane because of their strong aesthetic implications for musical culture.
We present an experimental study on articulation in bowed strings that provides important elements for a discussion about sound synthesis control. The study focuses on bow acceleration profiles and transient noises, measured for different players for the bowing techniques détaché and martelé. We found that the maxima of these profiles are not synchronous and that the temporal shifts depend on the bowing technique. These results allow us to bring out important mechanisms in sound and gesture articulation. In particular, they reveal a potential shortcoming of mapping strategies that use simple frame-by-frame data-stream procedures. We propose instead to treat input control data as time functions and to take gesture co-articulation processes into account.
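The contrast between the two mapping strategies can be shown with a small sketch on synthetic data (this is not the authors' measurement pipeline): a frame-by-frame mapping reacts to each acceleration sample in isolation, whereas a time-function view extracts features of the whole stroke profile, such as where its maximum falls.

```python
# Frame-by-frame mapping vs. a time-function view of one bow stroke,
# illustrated with a synthetic acceleration profile.
import numpy as np

t = np.linspace(0.0, 0.5, 500)                 # one bow stroke, 0.5 s
accel = np.exp(-((t - 0.12) ** 2) / 0.002)     # synthetic acceleration profile

# Frame-by-frame: every sample is mapped directly to a synthesis
# parameter (here, a gain), with no memory of the stroke as a whole.
frame_gain = 0.8 * accel / accel.max()
print(f"gain at t = 0.25 s: {frame_gain[250]:.4f}")

# Time-function view: features of the whole profile, such as the time
# of the acceleration maximum, can drive the synthesis instead.
peak_time = t[np.argmax(accel)]
print(f"acceleration peak at {peak_time:.3f} s into the stroke")
```

The frame-by-frame gain cannot express where in the stroke a transient belongs; the profile-level feature can, which is the kind of co-articulation information the study argues mapping strategies should retain.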
In the majority of discussions surrounding the design of digital instruments and real-time performance systems, notions such as control and mapping are seen from a classical systems point of view: the former is often seen as a variable from an input device or perhaps some driving signal, while the latter is considered as the liaison between input and output parameters. At the same time there is a large body of research regarding gesture in performance that is concerned with the expressive and communicative nature of musical performance. While these views are certainly central to a conceptual understanding of ‘instrument’, it can be limiting to consider them a priori as the only proper model, and to mediate one’s conception of digital instrument design by fixed notions of control, mapping and gesture. As an example of an alternative way to view instrumental response, control structuring and mapping design, this paper discusses the concept of gesture from the point of view of the perception of human intentionality in sound and how one might consider this in interaction design.
Throughout the short history of interactive digital music, there have been frequent calls for a new language of interaction that incorporates and acknowledges the unique capabilities of the computational medium. In this paper we suggest that a conceptualisation of possible modes of performance–time interaction can only be sensibly approached in light of the ways that computers alter the social–artistic interactions that are precursive to performance. This conceptualisation hinges upon a consideration of the changing roles of composition, performer and instrument in contemporary practice. We introduce the term behavioural object to refer to software that has the capacity to act as the musical and social focus of interaction in digital systems. Whilst formative, this term points to a new framework for understanding the role of software in musical culture. We discuss the potential for behavioural objects to contribute actively to musical culture through two types of agency: performative agency and memetic agency.
Mobile phones offer an attractive platform for interactive music performance. We provide a theoretical analysis of the sensor capabilities via a design space and show concrete examples of how different sensors can facilitate interactive performance on these devices. These sensors include cameras, microphones, accelerometers, magnetometers and multitouch screens. The interactivity through sensors in turn informs aspects of live performance as well as composition, through persistence, scoring, and mapping to musical notes or abstract sounds.
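As one hypothetical example of such a mapping, the sketch below quantises accelerometer tilt to MIDI notes in a pentatonic scale; the raw sensor values are passed in directly, standing in for whatever sensor API a real mobile platform provides.

```python
# Accelerometer tilt quantised to MIDI notes in a pentatonic scale.
# The (ax, ay, az) arguments stand in for a real mobile sensor API.
import math

PENTATONIC = [0, 2, 4, 7, 9]  # scale degrees (semitones above the root)

def tilt_to_midi(ax: float, ay: float, az: float, root: int = 60) -> int:
    """Map the device's pitch angle (-90..90 degrees) onto two octaves."""
    pitch_deg = math.degrees(math.atan2(ay, math.hypot(ax, az)))
    position = (pitch_deg + 90.0) / 180.0          # normalise to 0..1
    step = min(int(position * 10), 9)              # 10 steps over 2 octaves
    octave, degree = divmod(step, len(PENTATONIC))
    return root + 12 * octave + PENTATONIC[degree]

print(tilt_to_midi(0.0, 0.0, 9.81))   # 72: device flat, middle of the range
print(tilt_to_midi(0.0, 9.81, 0.0))   # 81: tilted fully up, top of the range
```

Swapping the scale table or the quantisation step is enough to move between discrete notes and near-continuous control of abstract sounds.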
This article presents a theoretical framework for the design of expressive musical instruments: the Musical Interface Technology Design Space (MITDS). The activities of imagining, designing and building new musical instruments; performing, composing and improvising with them; and analysing the whole process in an effort to better understand the interface, our physical and cognitive associations with it, and the relationship between performer, instrument and audience can only be seen as an ongoing, iterative work in progress. It is long-term evolutionary research, as each generation of a new musical instrument requires inventiveness and years of dedication to the practice and mastery of its performance system (comprising the interface, the synthesis and the mappings between them). Many revisions of the system may be required in order to develop musical interface technologies that enable truly expressive performances. The MITDS provides a conceptual framework for describing, analysing, designing and extending the interfaces, mappings, synthesis algorithms and performance techniques of interactive musical instruments. It gives designers a theoretical base to draw upon when creating technologically advanced performance systems, and can be seen as a set of guidelines for analysis and a taxonomy of design patterns for interactivity in musical instruments. The MITDS focuses mainly on human-centred design approaches to real-time control of the multidimensional parameter spaces in musical composition and performance, where the primary objective is to close the gap between human gestures and complex synthesis methods.
The reliability of any model-based failure detection and isolation (FDI) method depends on the amount of uncertainty in the system model. Recently, it has been shown that the use of joint torque sensing results in a simplified manipulator model that excludes hard-to-identify link dynamics and other nonlinearities such as friction, backlash, and flexibility. In this paper, we show that applying the simplified model in a fault detection algorithm increases the reliability of the fault monitoring system against modeling uncertainty. The proposed FDI filter is based on a smooth velocity observer of degree 2n, where n is the number of manipulator joints. Neither velocity measurements nor assumptions about the smoothness of faults are used in the fault detection process. The paper focuses on actuator faults and investigates the effect of torque sensor noise on threshold selection. The FDI filter is further improved to be robust against an unknown bias in the torque sensor reading. The effects of position sensor noise and position sensor faults are also investigated. A simulation example on a 6-degree-of-freedom manipulator illustrates the performance of the proposed FDI method.
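The residual idea behind this kind of model-based FDI can be illustrated with a one-joint, discrete-time toy example (a sketch only, not the paper's degree-2n filter): an observer driven by the torque reading and the position error predicts the joint state, and an actuator fault shows up as a residual exceeding a threshold set during fault-free operation.

```python
# Toy residual-based fault detection for a single torque-sensed joint.
# All parameters are illustrative; this is not the paper's FDI filter.
import numpy as np

rng = np.random.default_rng(0)
dt, inertia, gain = 0.001, 0.05, 50.0   # time step, joint inertia, observer gain
q = dq = q_hat = dq_hat = 0.0
residuals = []

for k in range(4000):
    tau = 0.2 * np.sin(2 * np.pi * k * dt)        # measured joint torque
    fault = 0.15 if k > 2000 else 0.0             # actuator fault from t = 2 s
    # True joint: the fault acts on the actuator but is absent from the
    # torque reading, so plant and model disagree after t = 2 s.
    dq += dt * (tau + fault) / inertia
    q += dt * dq
    q_meas = q + rng.normal(0.0, 1e-4)            # noisy position sensor
    # Observer driven by the torque reading and position error only
    # (no velocity measurement, as in the approach described above).
    e = q_meas - q_hat
    q_hat += dt * (dq_hat + gain * e)
    dq_hat += dt * (tau / inertia + gain * e)
    residuals.append(abs(e))

threshold = 5.0 * max(residuals[:2000])           # set in the fault-free phase
print("fault detected:", max(residuals[2000:]) > threshold)
```

The position noise inflates the fault-free residuals and hence the threshold, which is the trade-off between sensor noise and detection sensitivity that the paper's threshold-selection analysis addresses.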