Parameter mapping sonification is the most widely used technique for representing multi-dimensional data in sound. However, it is known to be unreliable for detecting information in some types of data. This unreliability is generally attributed to the co-dependency of the psychoacoustic dimensions used in the mapping.
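The core idea of parameter mapping sonification can be illustrated with a minimal sketch: each data dimension is linearly scaled into the range of one acoustic parameter. The function names, parameter ranges, and dimension-to-parameter assignments below are purely illustrative, not drawn from the article.

```python
# Minimal sketch of parameter mapping sonification (illustrative only):
# each data dimension is scaled into the range of one acoustic parameter.

def scale(x, lo, hi, out_lo, out_hi):
    """Linearly map x from [lo, hi] to [out_lo, out_hi]."""
    if hi == lo:
        return out_lo
    return out_lo + (x - lo) / (hi - lo) * (out_hi - out_lo)

def sonify(data):
    """Map 2-D data points to (frequency in Hz, amplitude) pairs.

    Dimension 0 -> pitch (220-880 Hz), dimension 1 -> loudness (0.1-1.0).
    """
    xs = [d[0] for d in data]
    ys = [d[1] for d in data]
    events = []
    for x, y in data:
        freq = scale(x, min(xs), max(xs), 220.0, 880.0)
        amp = scale(y, min(ys), max(ys), 0.1, 1.0)
        events.append((freq, amp))
    return events

events = sonify([(0.0, 5.0), (1.0, 10.0), (2.0, 7.5)])
```

Note that pitch and loudness interact perceptually (louder tones can be heard as slightly different in pitch, and vice versa), which is exactly the kind of psychoacoustic co-dependency the abstract identifies as a source of unreliability.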
The most common approach to overcoming this limitation, which posits its perceptual basis in a theory of embodied cognition, involves techniques that afford interactive exploration of the data using gross body gestures. In some circumstances, such exploration is not possible and, even when it is, it may be neither necessary nor sufficient.
This article explores other possible reasons for the unreliability of parameter mapping sonification and, drawing on the experience of expressive musical performance, suggests that the problem lies not in the parametric approach per se, nor in the lack of interactivity, but in the extent to which the parameters employed contribute to coherent gestalts. A method for achieving such coherence, relying on the use of micro-gestural information, is proposed. While this proposal is speculative, the use of such gestural inflections is well known in music performance, is supported by findings in neuroscience, and lends itself to empirical testing.