
Towards common ground in notation for augmented instruments

Published online by Cambridge University Press:  23 February 2026

Cristohper Ramos Flores*
Affiliation:
ENES Morelia, UNAM, Mexico
Jorge Rodrigo Sigal Sefchovich
Affiliation:
ENES Morelia, UNAM, Mexico
Corresponding author: Cristohper Ramos Flores; Email: aiwiy@hotmail.com

Abstract

Music notation has evolved to accommodate music and musical instruments as they have changed over time. However, the rapid advancement of musical technology has not been accompanied by a corresponding development and consensus in notation. This paper examines the challenges faced by notation in representing music written for augmented instruments. We contend that a novel understanding of musical works is necessary and propose a work-concept that recognises the significance of the technology – medium – that composers develop alongside their creations. We emphasise the role of the score within this work-concept model and present an instrumental augmentation system as a case study. Finally, we propose notation guidelines for augmented instruments and argue that standardising notation could facilitate the discovery of common ground that guides the development of augmented instruments and music written for them.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

Figure 1. (A) Molino’s model and (B) Nattiez’s model.


Figure 2. Tripartite model. The interactions flow in any direction.


Figure 3. Prototype of one of the Kuturani versions. It features a (1) gyroscope/accelerometer, (2) microphone, (3) Teensy 4.0 and audio board, (4, 6) input and output gain controls, (5) rotary encoder, (7) MEMS microphone (optional), (8) transducer, (9) LED display, (10) push buttons and (11) audio amplifier.


Figure 4. Two mapping strategies.


Figure 5. A sequential mapping strategy can map multiple events to one sounding result. It can also map the events to multiple parallel or sequential results.
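To make the caption's idea concrete, here is a minimal sketch of a sequential mapping strategy: a series of gestural events is matched in order and, once complete, mapped to one or several sounding results. All names (`SequentialMapper`, the event and result labels) are illustrative assumptions, not identifiers from the Kuturani system.

```python
class SequentialMapper:
    """Hypothetical sketch: map an ordered sequence of input events
    to one (or many) sounding results, as in the Figure 5 caption."""

    def __init__(self, trigger_sequence, results):
        self.trigger_sequence = list(trigger_sequence)  # events that must arrive in order
        self.results = list(results)                    # one or more mapped outputs
        self._progress = 0                              # how far into the sequence we are

    def feed(self, event):
        """Advance the match; return the mapped results once the full sequence arrives."""
        if event == self.trigger_sequence[self._progress]:
            self._progress += 1
            if self._progress == len(self.trigger_sequence):
                self._progress = 0
                return list(self.results)  # many events -> one or many results
        else:
            # restart matching (the mismatching event may itself start a new sequence)
            self._progress = 1 if event == self.trigger_sequence[0] else 0
        return []

# Illustrative usage: two events in sequence trigger a single sounding result.
mapper = SequentialMapper(["pad_down", "tilt_left"], ["granular_burst"])
out = []
for ev in ["pad_down", "tilt_left", "pad_down"]:
    out.extend(mapper.feed(ev))
print(out)
```

The same structure extends to the caption's second case: supplying several entries in `results` yields multiple parallel results, and chaining mappers yields sequential ones.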


Figure 6. Notation symbols used to represent pitch, duration, dynamics, spatialisation and timbre for the category of ‘Notation for expected sound’.


Figure 7. Example of ‘Notation for expected sound’ category in a musical context.


Figure 8. An example of the notation following the ‘Notation for interaction’ guidelines.


Figure 9. The positions of the sensors’ staff lines correspond to their placement on the instrument, with sensors near the high-pitch area of the acoustic instrument on top and sensors closer to the low-pitch area of the instrument on the bottom.


Figure 10. The notation follows the ‘logic of physical action’. The hand descends to press the sensor, and so does the notation, even though the collected data value increases.


Figure 11. First five minutes of the performer 1 part of Michael Pisaro’s Ricefall (top) and Composition 3A-D for augmented instrument by Cristohper R.F. (bottom).