Complementary filtering is a frequency-based method used to design data-processing algorithms that exploit signals with complementary spectra. The technique is mostly used in sensor fusion architectures, but it may also be effective in the design of state estimators. In spite of its potential in several areas of robotics, the complementary filtering paradigm is underused compared with alternative time-domain methods. The first part of the paper reviews the basics of complementary filtering in sensor data processing and linear-system state estimation. The second part describes how to exploit the main ideas of complementary filtering to design a depth controller for an actuator-redundant autonomous underwater vehicle (AUV). Unlike the state-space methods commonly used to design control solutions for actuator-redundant systems, the proposed approach fully exploits knowledge of the frequency characteristics of the actuators. Simulation results demonstrate the effectiveness of the proposed solution.
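The core idea can be sketched with the textbook attitude-estimation case (a hypothetical illustration only; the function name, the sensor pair and the parameter `tau` are assumptions, not the paper's AUV depth controller): a gyroscope rate is reliable at high frequency but drifts, while an accelerometer-derived angle is reliable at low frequency but noisy, so the two paths are blended through complementary high-pass and low-pass filters whose transfer functions sum to one.

```python
def complementary_filter(gyro_rates, accel_angles, dt, tau):
    """Fuse gyro rates (rad/s) with accelerometer angles (rad).

    tau sets the crossover frequency: alpha = tau / (tau + dt)
    weights the high-passed (integrated gyro) path, and (1 - alpha)
    weights the low-passed (accelerometer) path.
    """
    alpha = tau / (tau + dt)
    angle = accel_angles[0]  # initialise from the low-frequency sensor
    estimates = []
    for rate, acc in zip(gyro_rates, accel_angles):
        # Blend the integrated gyro (high-pass path) with the
        # accelerometer angle (low-pass path).
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        estimates.append(angle)
    return estimates
```

Because the two filters are complementary by construction, no spectral content is lost at the crossover: the gyro bias is attenuated by the low-pass action while accelerometer noise is attenuated by the high-pass action.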
Artificial Intelligence is a rich and still-developing field with a number of musical applications. This paper surveys the use of Artificial Intelligence in music in the pages of Organised Sound, from the first issue to the latest, at the time of writing. Traditionally, Artificial Intelligence systems for music have been designed with note-based composition in mind, but the research we present here finds that Artificial Intelligence has also had a significant impact in electroacoustic music, with contributions in the fields of sound analysis, real-time sonic interaction and interactive performance-driven composition, to cite but three. Two distinct categories emerged in the Organised Sound papers: on the one hand, philosophically and/or psychologically inspired, symbolic approaches and, on the other hand, biologically inspired approaches, also referred to as Artificial Life approaches. The two approaches are not mutually exclusive in their use, and in some cases are combined to achieve ‘best of both’ solutions. That said, as Organised Sound is uniquely positioned in the electroacoustic music community, it is somewhat surprising that work addressing important compositional issues such as musical form and structure, which Artificial Intelligence can be readily applied to, is not more present in these pages.
The paper considers the problem of motion planning and posture control of multiple n-link doubly nonholonomic mobile manipulators in an obstacle-cluttered, bounded workspace. The workspace contains an arbitrary number of fixed obstacles (disks, rods and curves), artificial obstacles and moving obstacles. Coordinating multiple n-link doubly nonholonomic mobile manipulators under such constraints is therefore a challenging navigation and steering problem that few papers have considered in the past. Our approach to developing the controllers, which are novel decentralized nonlinear acceleration controllers, is based on a Lyapunov control scheme that is not only intuitively understandable but also allows a simple yet rigorous development of the controllers. Via this scheme, we show that all types of obstacles can be avoided, that the manipulators reach a neighborhood of their goal and that their final orientation approximates the desired orientation. Computer simulations illustrate these results.
Musique concrète has become a well-known term and concept. The history of the concept permits us to understand how it evolved and how its inventor, Pierre Schaeffer, felt about it and its impact. The term has gone through several transformations, the main one being into the term acousmatique, which implied a change of perspective in the reach of the concept. Today musique concrète remains an active concept: it is no longer generally applied to describe contemporary musical creation, yet it is embedded in musical thought and theory.
This article reflects on significant technological, musical and cultural processes that have taken place since Varèse envisioned the concept of organised sound: understanding these processes may provide a strategic vantage point from which to gain insight into our evolving attitude to modern aesthetic and technological development and thus formulate informed strategic decisions concerning our relationship with the latter. The discussion begins by focusing on technological processes, including accelerated change, cycles of enhancement, obsolescence, retrieval and reversal, and the non-neutrality of technology. This is followed by a focus on musical issues resulting from the decoupling of sound control from sound production, interactivity, the expansion towards time-based media and the possible active role of audiences in the realisation of music. Discussion is then posited within the wider cultural context of modernism and the postmodern era, the eroding popular/art divide and the obsolescence of music as a cultural artefact resulting from technological change. This leads to the conclusion, which identifies tentative learning outcomes that may be helpful in formulating future strategic decisions.
This article discusses, in a broad sense, technology-based music from its early radio beginnings to current participation practices, and seeks to place both technical and musical development within broader trends of social development. The introduction of new technologies in industry, composition, mediation and consumption has lastingly changed the way most of us listen to, participate in and make use of music in our daily lives. Electronic aesthetics has finally, following a development of nearly a hundred years, started to fulfil its initial promise of becoming widely accepted and popular outside the narrow circles of musical expertise – a ‘democratic’ music unhindered by the hierarchies of the fine arts in their different configurations. But has it really fulfilled the original promise? Or is it rather that both the music and its promise have changed over the years?
One thing is certain: our pre-adaptation to aesthetic experiences has undergone extreme changes over the last twenty years or so. A paradigm shift brought about by digital media and distribution, as well as the networking of things, has directed large parts of humanity towards a new existence at the intersection of technology and humanity, an existence where cyborgian qualities increase day by day.
The author reflects on the past decades with reference to predictions of several paradigm shifts offered in the 1990s, including a broad range of issues covering acoustics, psychoacoustics, the role of the composer, compositional models, environmental sound perception, soundscape composition, and the integration of music and context. Contemporary developments that were not predicted, such as the proliferation of compressed audio, the rapid development of sound studies, the elimination of electroacoustic music from state-funded broadcasting, and the proliferation of mobile listening and online sound databases, are also discussed.
This article puts forth the author’s views on a panorama of contemporary influences that presage the future of higher education. The author describes how these views have shaped a new curriculum and pedagogy at Rensselaer Polytechnic Institute.
This article describes the potential of apps as a platform for composers of electroacoustic music to present their work. It traces a history of changing concepts of structure in music from objects and symmetrical forms to dynamic systems that provide a basis for interactive instruments. Early examples of interactive instruments are presented and discussed. The opening up of music to all sounds is discussed in the context of the origins and growth of global communities within the music world. The structure of an app format for electroacoustic compositions is described and examples of various music apps are discussed from the perspective of the electroacoustic app format.
In this contribution to the twentieth anniversary issue of Organised Sound, I first recall the reasons for dedicating a periodical to organised sound which I mentioned in my answer to inquiries from Cambridge University Press before the journal existed. The digital era is still in its infancy and it will permit a host of new ways to organise sound. I suggest in particular the development of an intimate relationship between acoustic and digital sounds; the extension of structural notations of sound and music and their exploitation for novel musical transformation; the amplification of the use of functions and combinations of functions. I evoke some issues discussed in Organised Sound, elaborating in more detail some problematic topics: analysis and reconstruction of computer music, live electronic music performance, perception of musical sound and sensory aesthetics.
The question of notation in electroacoustic music has arisen episodically throughout its development. With the advent of digital techniques, now universally used, with the growth of musicological studies of the field, and with the surge of interactive devices linking computers and performers, the need for various forms of representing sounds and data has never been so strong.
In celebration of the twentieth anniversary of Organised Sound, the author browses past issues of the journal to explore its multidisciplinary facets and potential for ‘knowledge transfer’ to cognate areas. It is pointed out that despite the journal’s apparent ‘house style’, the written texts contain subtle variations according to disciplinary concerns and author’s perspective, and should be understood as one strand of a multi-modal form of expression, to be augmented by remembered aural and bibliographic references and associated conference discussions as well as the more obvious visual and sonic accompanying material.
The aim of this article is to review the last twenty years of ‘machine listening’ to sound and music, and to suggest a balanced approach to the human–machine relationship for the future. How might machine listening, and ideas of data storage, retrieval and presentation based on music information retrieval (MIR), enhance both our embodied experience of the music and its more reflective study (analysis)? While the issues raised may be pertinent to almost any music, the focus will remain on electroacoustic music in its many forms, whether for interactive composition, performance or analytical endeavour. I suggest a model of listening with – that is, alongside – machines in such a way that our skills may be enhanced. What can we share with machines to mutual advantage?
This paper proposes expanding telematic electroacoustic music practice through the consideration of affective computing and integration with complex data streams. Current telematic electroacoustic music practice, despite the distances involved, is largely embedded in older music/sonic arts paradigms. For example, it is dominated by using concert halls, by concerns about the relationship between people and machines, and by concerns about geographically distributed cultures and natural environments. A more suitable environment for telematic sonic works is found in the inter-relationship between ‘players’ and broader contemporary networked life – one embedded in multiple real-time informational data streams. These streams will increase rapidly with the expansion of the Internet of Things (IoT), and with the increasing deployment of algorithmic decision-making and machine learning software. While collated data streams, such as news feeds, are often rendered visually, they are also partly interpreted through embodied cognition that is similar to music and sonic art interpretation. A meeting point for telematic electroacoustic music and real-time data sonification is in affective composition/performance models and data sonification. These allow for the sonic exploration of participants’ place in a matrix of increasingly networked relationships.
This paper proposes an alternative approach to the analysis and design of interaction in real-time performance systems. It draws on the idea that the connection between the human engagement with the interface itself (digital or analogue) and the resultant rich media output forms a proposed experiential dimension containing both technical and somatic considerations. The proposed dimension is characterised by its materiality and is referred to by the author as the techno-somatic dimension. The author proposes that the materiality of the techno-somatic dimension may be usefully examined as part of a reconsideration of the nature of interaction in systems where the input characteristics of the performer’s actions, the musician’s gesture, the dancer’s movements and so on are analysed and also drive the rich media content of the work in real time. The author suggests that such a techno-somatic dimension exists in all human engagement with technologies, analogue or digital. Furthermore, the author proposes that design and analysis efforts for new interactive systems should focus on the techno-somatic dimension: if this dimension is designed with care to produce a detailed and nuanced experience for the user, design specifications for the interface will follow automatically, and such an interface will exhibit the somatic and functional characteristics needed to produce the desired materiality and actional intentionality. For the purposes of this discussion, the author focuses principally on musical interfaces.