
Effort inference and prediction by acoustic and movement descriptors in interactions with imaginary objects during Dhrupad vocal improvisation

Published online by Cambridge University Press:  05 July 2022

Stella Paschalidou*
Affiliation:
Hellenic Mediterranean University, School of Music and Optoacoustic Technologies, Department of Music Technology and Acoustics, Greece
*Author for correspondence: Stella Paschalidou, Hellenic Mediterranean University, Greece. Email: pashalidou@hmu.gr

Abstract

In the context of electronic musical instruments (EMIs), Mulder proposed the concept of "sound sculpting," in which imaginary objects are manually sculpted to produce sounds. Although promising, the approach had limitations: driven by pure intuition, it mapped only the objects' geometrical properties to sound, while effort, often regarded as a key factor of expressivity in music performance, was neglected. This paper aims to enhance such digital interactions by accounting for the perceptual measure of effort conveyed through well-established gesture-sound links in the ecologically valid conditions of non-digital music performance. It therefore reports on a systematic exploration of effort in Dhrupad vocal improvisation, in which singers are often observed to engage with melodic ideas by manipulating intangible, imaginary objects with their hands. The focus is on devising formalized descriptions to infer the amount of effort that such interactions are perceived to require and to classify gestures as interactions with elastic versus rigid objects, based on original multimodal data collected in India for this study. Results suggest that a good part of the variance in both effort levels and gesture classes can be explained by a small set of statistically significant acoustic and movement features extracted from the raw data, and they lead to rejecting the null hypothesis that effort is unrelated to the musical context. This may have implications for how EMIs could benefit from effort as an intermediate mapping layer, and it naturally opens a discussion on whether physiological data may offer a more intuitive measure of effort in wearable technologies.
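The abstract describes two kinds of model: linear models (LM) regressing perceived effort levels on acoustic and movement features, and generalized linear models (GLM) classifying gestures as interactions with elastic versus rigid objects. The sketch below illustrates this general modelling pattern on synthetic data; the feature names, weights, and data are entirely hypothetical and do not reproduce the paper's actual feature set or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-gesture descriptors (illustrative only, not the paper's
# features): e.g. pitch slope, loudness, hand speed, hand span.
n = 200
X = rng.normal(size=(n, 4))

# Synthetic ground truth: effort (roughly a 1-10 scale) as a noisy linear
# combination of the features.
true_w = np.array([1.5, -0.8, 2.0, 0.5])
effort = X @ true_w + 5.5 + rng.normal(scale=0.5, size=n)

# LM: ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, effort, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((effort - pred) ** 2) / np.sum((effort - effort.mean()) ** 2)

# GLM: binomial logistic regression (elastic=1 vs rigid=0), fitted by
# plain gradient descent on the log-loss.
labels = (X @ np.array([2.0, 0.0, -1.0, 0.5]) > 0).astype(float)
w = np.zeros(5)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(A @ w)))   # predicted class probabilities
    w -= 0.1 * A.T @ (p - labels) / n    # gradient step on mean log-loss
acc = np.mean(((1.0 / (1.0 + np.exp(-(A @ w)))) > 0.5) == labels)

print(f"LM R^2 = {r2:.3f}, GLM accuracy = {acc:.3f}")
```

In practice such models would be fitted with a statistics package (e.g. R's `lm`/`glm` or Python's statsmodels), which also supplies the significance tests the paper relies on; the hand-rolled version above only shows the shape of the two problems.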

Information

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

Figure 1. Zia Fariduddin Dagar in concert, 16.06.2007 (Mana, 2007).


Figure 2. Typical equipment setup, photo and layout from Palaspe/Panvel, school of Zia Fariduddin Dagar.


Figure 3. The sequential mixed methodology that was followed. Gesture images taken from Mulder (1998).


Table 1. Data overview for vocalists Afzal Hussain and Lakhan Lal Sahu


Figure 4. Critical pitches defined by three values (minPre-max-minPost) for double-sloped melodic glides.


Table 2. Best idiosyncratic LM


Figure 5. Boxplots for the best (idiosyncratic) models LM 1 and 2 for singers Hussain (top) and Sahu (bottom), respectively, displaying the positive or negative correlation between each feature and the effort level, as well as the degree of confusion in the data distribution across effort level values (color coded for 1–10).


Table 3. Most overlapping LM


Figure 6. Boxplots for the generic models LM 3 and 4 for singers Hussain (top) and Sahu (bottom), respectively, displaying the positive or negative correlation between each feature and the effort level, as well as the degree of confusion in the data distribution across effort level values (color coded for 1–10).


Table 4. Best idiosyncratic GLM in classifying interactions with rigid versus elastic objects


Figure 7. Boxplots for the best (idiosyncratic) models GLM 1 and 2 for vocalists Hussain (top) and Sahu (bottom), respectively, displaying the positive or negative correlation between each feature and the gesture classes, as well as the degree of confusion in the data distribution across classes (color coded for elastic vs. rigid).


Table 5. Most overlapping GLM in classifying interactions with rigid versus elastic objects


Figure 8. Boxplots for the generic models GLM 3 and 4 for vocalists Hussain (top) and Sahu (bottom), respectively, displaying the positive or negative correlation between each feature and the gesture classes, as well as the degree of confusion in the data distribution across classes (color coded for elastic vs. rigid).


Figure 9. Schematic of mapping movement and acoustic features to high–medium–low effort levels according to the more generic model LM 3.


Table A1. Abbreviations of all features that appear in the successful models presented in the analysis