
Capture and express, question and understand: Gloves in gestural electronic music performance

Published online by Cambridge University Press:  05 May 2022

Jan Schacher*
Affiliation:
Institute for Computer Music and Sound Technology, Zurich University of the Arts, Zurich, Switzerland Centre for Music and Technology, Sibelius Academy, University of the Arts Helsinki, Helsinki, Finland

Abstract

Gesture-based musical performance with on-body sensing represents a particular case of wearable connection. Gloves and hand-sensing interfaces connected to real-time digital sound production and transformation processes enable empty-handed, expressive musical performance styles. In this article, the origins and developments of this practice, as well as a specific use case, are investigated. Taking the technical, cognitive, and cultural dimensions of this media performance as a foundation, a reflection on the value, limitations, and opportunities of computational approaches to movement translation and analysis is carried out. The insights uncover how the multilayered, complex artistic situations produced by these performances are rich in intersections and represent potent amplifiers for investigating corporeal presence and affective engagement. This makes it possible to identify the problems and opportunities of existing research approaches and the core issues to be solved in the domains of movement, music, interaction technology, and performance research.

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

Figure 1. A performance of “new islands” by the author in the process of annotation. Detailed descriptions and auto-ethnographical trace collections allow for an in-depth understanding of the effectiveness of composition and development strategies, of bodily states and proprioception, as well as the identification of decision moments in the flow of the performance (Software: Piecemaker to Go “pm2go” by Motionbank [Forsythe, 2013], used here in 2016 and since superseded by Motionbank’s online “Motion Bank System”).


Figure 2. System overview for “new islands”: The performer is equipped with a wireless headset microphone and a pair of wireless sensor gloves; in addition, a wireless sensor staff and other materials such as woodblocks and metals are used. The other half of the system consists of the receivers, an audio card, and software systems for sound transformation processes. The sound outcomes influence the performer’s perception and decisions, thus closing the action-perception loop.


Figure 3. Screenshot of the software system as used in the performance of “new islands” in January 2016. Note the two compass displays with calibration on the lower left, the two vertical audio signal chains on the right (dark blue boxes, each parameter mappable), and the network, parsing, actions, gestures, compass, mapping, and timeline data-treatment modules at the center (collapsed, not showing details). Dynamic audio signal routing and sensor parameter mapping are controlled by a preset system, displayed as a large number and controllable by the gloves or a time system.
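The preset-driven routing described in the caption can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the original software: the preset contents, sensor names (e.g., `accel_x`, `flex_index`), and audio parameter names are illustrative assumptions; only the idea of a selectable preset that routes incoming sensor values to sound-transformation parameters comes from the source.

```python
# Hypothetical sketch of preset-based sensor-to-parameter mapping.
# All names below are illustrative assumptions, not the original system.

PRESETS = {
    1: {"accel_x": "granular.density", "flex_index": "filter.cutoff"},
    2: {"accel_x": "delay.feedback", "flex_index": "reverb.mix"},
}


def map_sensors(preset_id: int, sensor_values: dict) -> dict:
    """Route incoming glove sensor values to audio parameters
    according to the currently selected preset."""
    routing = PRESETS[preset_id]
    return {
        param: sensor_values[sensor]
        for sensor, param in routing.items()
        if sensor in sensor_values
    }


# Switching the preset number re-routes the same sensors to new targets,
# which is how one glove gesture can control different processes per scene.
out = map_sensors(1, {"accel_x": 0.42, "flex_index": 0.8})
print(out)  # {'granular.density': 0.42, 'filter.cutoff': 0.8}
```

In a real-time system, the preset number would itself be mappable (here it is a plain function argument), mirroring the caption's note that the preset is controllable by the gloves or a time system.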