1. Introduction
This article aims to discuss the problems regarding analytical studies of spatialised music, especially acousmatic music. This genre, and electroacoustic music in general, is thriving: the practice is well developed, and its tradition goes back more than 70 years. Over this time, a plethora of stylistic and compositional approaches to its sound organisation have been created, studied, analysed and discussed. This is also true regarding the spatial aspect of sound: electroacoustic composers have always felt the need to organise sounds not only in the spectral domain, but also in the spatial one. There are many examples of compositions and composers that have used spatialisation extensively, from both the instrumental and electroacoustic domains. It has historically been more challenging to use spatialisation in instrumental compositional practice, where instrumentalists or singers had to be positioned physically in space to achieve a convincing spatial effect. In the electroacoustic and acousmatic domain, thanks to loudspeakers and diffusion systems, this was achieved in a much simpler way, making spatialisation one of the most important aspects of both musical practice and the listening experience. However, while its use was prompted by both aesthetic and technological developments, the musicological understanding of spatialisation is still underdeveloped. Space, spatialisation and spatial audio form one of the microcosms under the umbrella of electroacoustic music. In this field, we are very good at creating techniques, technologies, compositional practices or ways of presenting our music, but somewhat restrained when trying to develop a scholarship that frames it all.
This article focuses on three problems of spatial musicology, namely the vast and sometimes confusing terminology, the role of technology, and the analysis itself. While these may seem separate issues, they are very much interlinked: the problem of terminology is connected to how technology has evolved over the years, with terms coined alongside new means of reproduction and software systems. Likewise, the problem of analysis itself concerns both terminology and technology: the way we speak about our spatial experience is mediated by technology, but any analysis must also consider our perception and reception of the composer’s musical intentions.
I will introduce two ‘listening-based’ philosophical approaches that can be used as analytical processes to describe and categorise spatialisation more easily. Spatial reduced listening is an extension of the Schaefferian écoute réduite (reduced listening), where the listening experience (as analysis) does not prioritise the combination of sound and its spatial position in time but focuses only on the latter. Spatial relativism, conversely, argues that it is more important to notice that sound moves (or not!) than to follow its precise trajectory: this allows for a simpler and more straightforward way to analyse space from an organic rather than a Euclidean perspective (Catena and Frisk 2024).
2. The three main problems of spatialisation analysis in acousmatic music
While the electroacoustic arts as practices are flourishing, the related field of the musicological analysis of spatialisation is still lagging. This is even more noticeable in acousmatic music, where space takes a central role in its aesthetic (Smalley 1997). The three issues discussed here are arguably what hold back the musicological side of spatialisation in acousmatic music. The terminology to describe and define spatialisation is very often confusing; its technology is over-discussed and often interferes with the musicology, fragmenting the compositional thinking; the analysis itself needs to join technology and terminology, which is currently challenging, but must also coherently consider the musical aspects of spatialisation and its dramaturgy. Moreover, there is no shared language to describe and notate spatialisation, nor any formalised way to listen to space analytically. Therefore, the development of a coherent musicological framework for the investigation and research of spatiality in acousmatic music is currently problematic.
2.1. The problem of terminology
It is always complicated to talk and write about space, spatiality, spatial audio and spatialisation both in acousmatic music and in the broader field of sonic arts. In just this last sentence I have used four terms that seem to refer to the same general concept, but that without context may be misinterpreted. A plethora of terms are used to describe the distribution, motion and organisation of sound in space and what this means for us as listening beings.
The word ‘space’ itself generally means ‘physical extent or area; extent in two or three dimensions’ or, more simply, ‘the interval between two or more points’. However, it is a multidisciplinary term, since it can be found very commonly in several other fields of study, such as architecture or astronomy. The confusion that this term may generate can also be observed in the musical field itself; for example, ‘pitch space’ refers to the perceived distance between sets of pitches or chords (Lerdahl 1988), but it bears no connection with spatialisation (such a connection may be constructed, but this is not relevant here). Similarly, ‘timbre space’ represents a perceived distance between two dissimilar timbres (McAdams 2013). This confusion is well detailed by Ojala (2009: 345), referring to the abundance of names and terms found in the field for describing musical spatiality: acoustic space, external space, implied space, compositional space, informational space, logical space, virtual acoustic space and many more. Ojala also points out how the problem is worsened when many of these terms refer to the same meaning (e.g., natural, physical, external and sound space, all referring to the acoustical environment) or, conversely, how a single term may refer to a multiplicity of meanings, for instance ‘musical space’ (Ojala 2009). One could even argue that Denis Smalley has done a ‘disservice’ by over-analysing and conflating spaces in his ‘Space-form and acousmatic image’ (Smalley 2007). Anything has become spatial – social spaces, acoustic spaces, pitch spaces, personal spaces and so on – when most of the time these terms may not even refer to space in musical terms, and certainly not to spatialisation. I will return to Smalley’s work in section 2.3.
The term ‘spatial audio’ is also subject to varying interpretations, even more so with the explosion of home theatres and affordable multichannel systems. Holbrook defines it as a ‘set of tools and methods for how to represent and control sound material through a process of spatialisation’ (Holbrook 2019). However, ‘spatial audio’ is a term taken for granted most of the time. Holbrook’s definition tends to point towards the technologies for spatial audio rather than spatial audio itself: he describes the means but not the end. He also provides a definition for ‘spatialisation’, describing it as the ‘intentional placement of sounds in a given space’ (Holbrook 2019). While I agree with Holbrook’s inclusion of intentionality in the act of spatialisation, the definition does not encompass the motion of sound nor its wider distribution (which I later describe as ‘occupancy’). Consequently, I would rather define ‘spatialisation’ as ‘the conscious compositional act of moving or placing sounds inside or outside a given space with an intended musical aim’ (Catena and Frisk 2024). Following this logic, we can redefine spatial audio as the product of a spatialisation process that results in spatialised sound. While spatial listening is ‘one of our most deeply embedded cognitive capabilities’ (Kendall and Ardila 2007), it is the conscious act of spatialisation, through synthesis, recording or diffusion, that produces ‘spatial audio’: without this process, everything we hear would be spatial audio and there would be no need for the term. However, today we also assume that ‘spatial audio’ refers to immersive sound scenes, reproduced either over multichannel systems (e.g., in home theatres and concert halls) or through binaural renderings. Finally, the term ‘spatiality’ can be defined as the physical attributes of sound in space and how they are perceived, described in acoustic rather than musical terms.
In acousmatic practice, ‘space’ is employed in two distinct ways: either ‘composed’ (Smalley 1997), fixed and revealed through sound in the physical space where the composition is reproduced, or ‘diffused’ live, where an interpreter enlarges the composition from a stereophonic (or even multiphonic) version onto a broader multichannel system (Harrison 1998). These two aesthetic approaches have a tremendous impact on the way space is considered compositionally. The live diffusion of a composition is a personal interpretation and orchestration of sound, projecting movements and positions inside a physical space through manual control of the mixing console (Vande Gorne 2002). Even before its diffusion, a composer needs to consider the spatial and acoustic implications of a live distribution of their composition in a larger space.
The ‘composed space’, instead, revolves around impressing a spatial image onto a medium (today, digital media such as a hard drive) and does not need any interpreter; rather, it is fixed and played back from a multichannel speaker system. The acoustic attributes of the space are taken into consideration from another perspective; for example, in deciding where a piece should be played, given reverberation times, room dimensions or speaker positioning. In this case, we talk about the ‘listening space’ (Smalley 1997), defined as the physical space in which the acousmatic composition (or any music, really) is heard and experienced.
My particular interest, and by extension the scope of this article, lies in a broader interpretation of the ‘composed space’, since it embodies a fixed, careful and sophisticated approach to spatial organisation, even outside electroacoustic and acousmatic music: I refer to it as ‘organised space’. What Smalley does not include in his writing about ‘composed space’ (Smalley 1997) is that instrumental and vocal music can also have ‘scored’ spatialisation: in my analysis of Mozart’s Notturno for Four Orchestras, K. 286 (1777), I discussed how the composer organises and exploits space to structurally form the piece (Catena 2022), but there are more examples dating from the fourteenth and fifteenth centuries up to contemporary music (Solomon 2007). While I will here focus on acousmatic music, it is important to recognise that instrumental and vocal music can share the same approach to spatialisation. The term ‘Organised Space’, consequently, can be defined as space that is consciously composed, written and experienced, regardless of the specific approach to its organisation.
2.2. The problem of technology
Technology in the field of spatial audio defines the tools and practical means that a sonic artist or composer possesses to develop and express their own musical ideas through space. It is, by definition, what allows practitioners to physically distribute sound in an acoustic environment: this is one of the most crucial aspects to consider when thinking about acousmatic music, given its reliance on music technology. This includes both hardware and software tools: from speaker setups and physical controllers to computer software and plugins. In fact, it is thanks to technological progress that we have so many aesthetic and practical approaches to the spatial parameter. With the introduction of the loudspeaker into musical practice, sound diffusion and sound spatialisation were made immensely easier compared with purely instrumental composition. Stockhausen’s Gesang der Jünglinge (1956) and Kontakte (1958–60), and Xenakis’s Concret PH (1958) in the Philips Pavilion in Brussels, are notable examples of the early use of loudspeakers (or groups of them) to achieve a musical intent through spatialisation (Solomon 2007). Conversely, the later introduction of the Acousmonium to enlarge stereophonic or multichannel compositions (Barrett 2002; Stavropoulos 2007) over a wider concert space featuring many different spatially positioned speakers highlights the historic importance of the spatial parameter in acousmatic music. This technical and technological development continued during the 1970s: John Chowning’s research and experiments on localisation cues were implemented in software applications, resulting in iconic pieces such as Turenas (1972) and Stria (1978); and Gerzon conceived Ambisonics, where encoded audio signals are used to represent a sound-field rather than having each channel describe sound coming from a discrete direction, thus enabling decoding onto an arbitrary speaker layout. Nowadays, a great number of different spatialisation technologies are available, even freely: the already cited Ambisonics in ‘Higher Order’ form (which means greater spatial accuracy) (Zotter and Frank 2019), DBAP (Distance-Based Amplitude Panning) (Lossius et al. 2009), VBAP (Vector Base Amplitude Panning) (Pulkki 2001), Wave Field Synthesis (Baalman 2010) and many more. Concert halls and multichannel systems that adopt these technologies have multiplied over the years: from the historic BEAST (Birmingham ElectroAcoustic Sound Theatre) (Wilson and Harrison 2010) or GRM’s Acousmonium (DeSantos et al. 1997) to Game of Life’s portable Wave Field Synthesis system (Kang 2024) or Pesaro’s Sonosfera, hosting a full sixth-order Ambisonics reproduction system. Since the last issue of Organised Sound on spatialisation and multichannel music in 2010 (15(3)), for example, another major proprietary spatialisation technology has been released: Dolby Atmos (Sergi 2013). Not only that, but we are currently witnessing the rise of VR/AR/XR technologies for musical interaction (Turchet et al. 2021): these will include the electroacoustic arts – indeed they already do – undoubtedly influencing the way in which composers deploy sound through space.
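To make concrete the distinction just described between channel feeds and an encoded sound-field, the following minimal Python sketch encodes a mono source into first-order Ambisonics and decodes it onto an arbitrary horizontal ring of loudspeakers with a naive sampling decoder. It is purely illustrative: it assumes the ACN/SN3D convention (conventions vary, e.g., FuMa), the function names are my own, and it does not reproduce any of the systems or tools cited above, which are far more sophisticated.

```python
import numpy as np


def encode_foa(mono, azimuth, elevation):
    """Encode a mono signal into first-order Ambisonics (ACN order, SN3D norm).

    The four channels (W, Y, Z, X) describe a sound-field rather than
    feeds for particular loudspeakers. Positive azimuth is to the left."""
    w = mono                                         # omnidirectional component
    y = mono * np.sin(azimuth) * np.cos(elevation)   # left-right dipole
    z = mono * np.sin(elevation)                     # up-down dipole
    x = mono * np.cos(azimuth) * np.cos(elevation)   # front-back dipole
    return np.stack([w, y, z, x])


def decode_sampling(bformat, speaker_azimuths):
    """Naive 'sampling' decode onto a horizontal ring of speakers: each speaker
    re-samples the encoded field in its own direction (arbitrary 0.5 scaling).
    Real decoders add psychoacoustic weighting (e.g., max-rE)."""
    w, y, z, x = bformat
    return np.stack([0.5 * (w + x * np.cos(a) + y * np.sin(a))
                     for a in speaker_azimuths])


# A test tone panned 45 degrees to the left, decoded to a quad ring of speakers.
t = np.linspace(0, 1, 48000, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)
bformat = encode_foa(tone, azimuth=np.radians(45), elevation=0.0)
quad_feeds = decode_sampling(bformat, speaker_azimuths=np.radians([45, 135, 225, 315]))
```

The point of the sketch is simply that the intermediate representation is layout-independent: the same four encoded channels can be decoded to four, eight or more loudspeakers.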
In the last 20 years, Higher Order Ambisonics (HOA) has become accessible for real-time sound processing, and microphone arrays that employ this technology are commercially available; for example, the Eigenmike (Elko et al. 2023). Similarly, new reproduction technologies such as the IKO and other icosahedral loudspeakers (Zotter et al. 2017) are becoming increasingly available, both commercially and even for DIY users (Drack et al. 2020), sparking a wide range of new approaches to sound spatialisation. This proliferation is indeed a positive and welcome evolution that may enable the development of new techniques and ideas for the use of spatialisation in acousmatic music. However, the lack of a musicological underpinning that helps to categorise, understand and align the musical thinking behind the technology itself is risky: compositional approaches to spatialisation are already very fragmented, and this fragmentation will inevitably worsen.
Most of today’s research in the field of spatial audio deals with how we use these technologies and their acoustical fidelity (Blesser and Salter 2009) or, in artistic practice, with strategies for composing sound in space. There are many examples that detail how composers use or have used spatialisation tools and technologies in their artistic practice, but, on the other hand, there is little to no analytical work regarding spatialisation in acousmatic music (or in spatial music broadly speaking). Currently, there is no shared and coherent framework for such analysis nor a consistent terminology to describe spatialisation. This ‘technological imbalance’ is also noted by Sharma et al.:
Electroacoustic music hosts two diametrically opposite cultures: on one side we find the exact sciences of acoustics, informatics, and engineering all of which define conditions of sound production, the very instruments of executing any compositional design. On the other side we find the culture of music appreciation by the ear. Whereas the first aspect is heavily loaded with well-defined verbal concepts that are shared among a community of specialists, the aural, musical aspect that embodies musical thought and projects it to the audience is almost devoid of a consistent terminology as far as electroacoustic music is concerned. (Sharma et al. 2015)
The view of spatialisation and spatial audio is currently too focused on the technical and technological side, while the musical and musicological studies are very much underdeveloped. Consequently, it is very common for musicians and composers not to have a shared ‘language’ when referring to the spatial parameter of sound. Relying on the technological side of the spatialisation of sound to find such a language would be short-sighted, given its speed of development; on the other hand, our aural perception and reception, and their phenomenological categorisation, are much more grounded in the real-world experience of listeners. Moreover, it would be impossible to ‘adjust’ the analytical approach depending on the technology used in every single piece. First, technical information is sometimes difficult or impossible to obtain – for example, the composer has died, has lost the information or is unwilling to share it; second, every new technology would require a new musicological perspective, which is not feasible. It would not be useful to base an analytical framework on the fact that a piece uses object-based spatialisation, amplitude panning or even a particular multichannel speaker layout; conversely, the resulting spatial reception by the listeners is much more valuable. Furthermore, mixed technological approaches to spatialisation are very common, adding to the difficulty of such analysis.
Should we ‘ignore’ technology altogether, then? Smalley argues that abstaining from taking technology into account is necessary, but he does so in spectromorphological terms:
In spectromorphological thinking we must try to ignore the electroacoustic and computer technology used in the music’s making. Surrendering the natural desire to uncover the mysteries of electroacoustic sound-making is a difficult but necessary and logical sacrifice. (Smalley 1997)
Is this even possible while listening to spatial music and, even more so, to acousmatic music? Arguably, we cannot fully detach ourselves from the technological aspects of the music: it is impossible not to notice how many speakers are part of the reproduction system or their layout in the room, for instance. However, it would be ideal to avoid technological listening (Nyström 2021) in any form while analysing spatialisation.
2.3. The problem of analysis itself
Electroacoustic and acousmatic music analysis is essentially a listening practice. We can gather valuable information about music only through repeated listenings, understanding its organisation either intuitively or through a framework that helps this process. The most obvious example of this is Smalley’s ‘spectromorphology’ theory: by ‘explaining sound-shapes’, he provides a structured analytical method that can be used to study a wide variety of electroacoustic musics (Smalley 1997). This has proved academically invaluable, as shown by the fact that the original Organised Sound article is currently the most cited.
Pierre Schaeffer, the pioneer of musique concrète, introduced another fundamental concept into electroacoustic music studies: a basic unit used to describe sound events, called the sound object (objet sonore). Like Smalley’s ‘spectromorphology’, Schaeffer’s sound object and his Traité des objets musicaux have assumed primary importance in electroacoustic music studies. These two bodies of scholarship are proof of how frameworks for analysis can reshape the way we study and understand music and its parameters. It is beyond doubt that both Smalley’s and Schaeffer’s theories have profoundly challenged the way timbre has been regarded, albeit in two separate historical periods.
This has not yet been the case in the musicology of spatialisation: there is no established ‘spatial object’ that is widely used and accepted, even though there have been some attempts to formalise analytical approaches. Smalley’s writing on ‘space-form’, for instance, features arguably the most extensive glossary on spatiality in acousmatic music. Smalley’s approach in this work is very ‘ecological’: he states that this ‘is only fitting since we cannot separate space itself from what produces it, nor from our experiences of space in nature and culture’ (Smalley 2007: 54). However, while I agree that this is true in a natural environment (i.e., not in a concert setting), acousmatic composers often intentionally make music to unsettle our perception and our senses, or to construct an impossible spatial scenario. Another crucial point made by Smalley is that ‘space-form can only be found rewarding if it has taken on a significant formative role in the music, and this is not the case with every acousmatic work: we have to pick and choose’ (ibid.). I would go further and say that spatiality in an acousmatic work must have a significant role, but also that the method of analysis must be adequate to the type of spatialisation employed in the work. Smalley’s ‘space-form’ can be fitting for soundscape works that are more open-ended and less ‘prescriptive’, as shown in his Orbieu analysis for instance, but may not be the best choice for more ‘structured’ works. This is because it does not provide concrete examples of the analysis of spatialisation in a compositional framework, nor does it describe the relationship between spatialisation and spectromorphology. For instance, Smalley details his idea of ‘vectorial space’ as ‘space traversed by the trajectory of a sound, whether beyond or around the listener’ (ibid.: 56); but how and why is that sound moving? Does the movement assume a compositional importance? Why was a specific sound chosen by the composer to follow that trajectory, or to move in the first place? In many acousmatic works these may be fundamental questions.
Smalley declines to use an ‘object-based’ approach by saying:
I am no longer happy with relying primarily on an investigative process that elaborates a taxonomy of spectromorphologies, and then proceed to try and work out how they are related and act over time. Such a methodology is based on inherited traditional assumptions handed down from tonal music – that we uncover building blocks of musical material (themes, motives), and attentively follow their transformations and development, arriving at a view of how material progress creates the dynamic tensions of temporal experience. (Ibid.: 54)
However, if an acousmatic work is built around the relationships between a specific spatial motion and its spectromorphology, for instance, then we absolutely need to describe and understand the building blocks of that musical event. The development of technologies for spatialisation confirms this: we are moving to the point where industry standards for spatial audio (e.g., Dolby Atmos) are ‘object-based’, and those that are not (e.g., Higher Order Ambisonics) still aim to recreate a strong point-source impression. While I previously argued that the technologies for spatialisation should be ignored during analysis, they can showcase trends and practices that may provide useful information for the development of an analytical framework for spatialisation. However, this is not to say that I would avoid Smalley’s framework altogether, but rather that a different perspective – one that complements ‘space-form’ – is useful in filling the analytical gap that currently exists in this musicological field.
This is why in Catena (2022) I suggested extending both the Schaefferian sound object and Godøy’s gestural-sonorous object (Godøy 2006) to accommodate the need for an atomic unit to describe spatialisation, which I called the spatial sonorous object (Catena 2022). This unit needs to be intentional, constituted in our mind and consciousness by our mental activity (Godøy 2006), but also sonorous, rooted in sound as its primary material of existence. The spatial sonorous object can be defined as any coherent sound movement or placement that holds a musical function and can be identified by its sounding properties (Catena 2022).
By its definition, the spatial sonorous object can be described by three primary components:
- Spatial morphologies: movements, placements or occupancies in the spatial scene. This aspect refers to the perceivable positions of sounds in space. The main difference between them is the presence or absence of motion: a spatialised sound either moves or it does not. If it does not move, it is possible to appreciate either its wider diffusion in the listening space (occupancy) or a strong sense of localisation at a specific point (placement) (Catena and Frisk 2024). ‘Occupancy’ can also be described as the complete or partial loss of directionality, where the sound comes diffusely from around the listener without a strong sense of localisation, as distinct from the concept of ‘placement’.
- Spatial function: the purpose and role of spatial morphologies inside the compositional structure. The concept of function applied here is similar to that of traditional tonal harmony, where each chord has a purpose inside a hierarchical system. While there is no such hierarchical system in acousmatic music, the roles that spatial sonorous objects may possess are comparable to those of chords in tonal music: they may be structural (starting or ending a musical passage or section) or directional (increasing or releasing tension). But spatial functions can also be referential, where spatial motions and placements reference a perceived characteristic of the sonic material, or even secondary, where the spatial morphologies are only a complementing element in the musical framework (Catena 2024).
- Sound-space association: how sound influences spatialisation and vice versa. This is not to be confused with Smalley’s ‘source-bonded space’, which only concerns the possible link formed between the perceived space and its spectromorphology. Sound-space associations integrate this link by also investigating the conscious musical act (i.e., spatialisation) enacted by the composer. This association is concerned with listeners’ expectations (i.e., we expect that a sound moves or that it is placed in a certain position) but can also be entirely imagined by the composer while spatialising the sound. For instance, we expect high-pitched sounds to be placed spatially higher, but this expectation can also be subverted by the composer. A schematic sketch of these three components follows this list.
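As a purely illustrative sketch – not a formal specification from any of the publications cited above – these three components could be captured as a simple annotation structure during analysis. The Python below paraphrases the category names defined in this section; the label and timing fields are my own additions for bookkeeping.

```python
from dataclasses import dataclass
from enum import Enum


class SpatialMorphology(Enum):
    MOVEMENT = "movement"    # the sound perceivably moves
    PLACEMENT = "placement"  # static and strongly localised at a specific point
    OCCUPANCY = "occupancy"  # static and diffused, little or no directionality


class SpatialFunction(Enum):
    STRUCTURAL = "structural"    # opens or closes a passage or section
    DIRECTIONAL = "directional"  # increases or releases tension
    REFERENTIAL = "referential"  # references a perceived trait of the sonic material
    SECONDARY = "secondary"      # merely complements other musical elements


class SoundSpaceAssociation(Enum):
    EXPECTED = "expected"      # matches the listener's expectation
    UNEXPECTED = "unexpected"  # subverts the listener's expectation


@dataclass
class SpatialSonorousObject:
    """A coherent sound movement or placement that holds a musical function
    and can be identified by its sounding properties."""
    label: str
    start_s: float  # approximate start time of the event, in seconds
    end_s: float
    morphology: SpatialMorphology
    function: SpatialFunction
    association: SoundSpaceAssociation
```

Section 4 gives a worked instance of this kind of annotation for the introduction of Internal Combustion.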
The idea of the spatial sonorous object is detailed and further discussed in Catena (2024), where I explain the methodology used for its conceptualisation, based on case studies and examples from both instrumental (Mozart and Xenakis) and acousmatic music (Wishart and Harrison). The concept underlying spatial sonorous objects is that we should not only investigate the perceivable properties of spatialisation (i.e., spatial morphologies), but also consider compositional choices (the why of spatialisation), describing how they influence the musical experience that is eventually projected to the listeners.
With this in mind, we can also see how the technology used to spatialise sound is not taken into consideration: while it is true that the tools employed to make the music are important from an ‘artisanal’ perspective, from a listening and phenomenological one they become secondary. It is the phenomenon of sound, its spatial behaviour and its reception that need to be analysed. Thoresen and Hedman describe the attitude that also underlies the spatial sonorous object:
Schaeffer’s approach to the world of sound is characterised by a phenomenological attitude: it seeks to describe and reflect upon experience, rather than explain; it posits the actual life world experience of sound as its primary object of research; it clarifies a number of different listening intentions by which the same physical object may be constituted as various objects in the listener’s mind. (Thoresen and Hedman 2007)
Similarly, the concept of the spatial sonorous object seeks to describe rather than explain. I regard the descriptive and explanatory stages of analysis as two very separate levels: one observes and categorises the listening experience, while the other explores musical ideas, meanings and relationships in the compositional environment. More importantly, it is the life world experience of sound and space that becomes the focus of research and analysis. When the three parameters of the spatial sonorous object are taken into consideration holistically, the object becomes more than the sum of its parts.
As already described in the previous sections, this article focuses only on fixed multichannel acousmatic music. In this genre, spatialisation and space are composed in the studio, the laboratory of the electronic musician. It is in this ‘aseptic’ place that acousmatic music is generally created; consequently, this is also true for the spatialisation of the piece. One of the most common problems with acousmatic music reproduction is the acoustical and physical setting of the room in which the concert will take place. Too much reverb and the sounds will be muddy and confused; too little and they may sound ‘dead’: the spatial intentions of the piece could be lost here as well. While analysing acousmatic music, we should strive to eliminate all variables that may hinder this stage, and acoustics is indeed one of these variables. Headphone listening is becoming viable, but it is not yet realistic enough to replace multichannel systems: music must be listened to and analysed in situ, or its effectiveness may diminish and its musical intentions be lost.
Another key aspect that is often underestimated in both analytical and compositional practice is the dramaturgy of music. This concerns why we spatialise, not just how, linking back to the issue of the aesthetic versus the technological.
3. Spatial reduced listening and spatial relativism as analytical processes
I will introduce here two listening modes for the analysis of spatialisation that I have proposed and employed recently: spatial reduced listening and spatial relativism. These two processes were prompted by a research question that I asked myself for years while working with spatial audio: is it only ‘space for space’s sake’? Do composers spatialise sound just because they have access to large multichannel systems and spatial audio tools, or can these perhaps be the trigger for a musical aesthetic? To answer this question, the modes I am introducing will prove useful: by listening selectively and by being analytically ‘pragmatic’, it becomes possible to describe spatial events in the most general way possible, while also avoiding the technological problem described earlier. In doing so, the spatial sonorous object becomes a feasible analytical tool not only in the acousmatic domain, but also for other musical genres that are written on a fixed support (e.g., a musical score).
3.1. Spatial reduced listening
The Schaefferian theory of écoute réduite (reduced listening) has already been introduced here, and it has been studied and critiqued since its inception and publication. As Chion points out, reduced listening is an ‘anti-natural’ process, as it treats sound for its own sake (as an object), rather than satisfying the instinctive curiosity about cause and meaning (Chion 1983: 33). While this may seem an odd listening mode, I argue that it is a needed and beneficial means of analysis. From a spatialisation perspective, it is fundamental to be able to focus on a single aspect of the spatial event to understand its parts individually at first, and then approach it holistically. The features of the spatial sonorous object should be identified, investigated individually and then combined into a single entity that describes the spatial event. It is only by focusing on each feature individually, listening selectively, that it is possible to dissect the spatial object and reassemble it later for analysis and contextualisation.
This brings us to a very important distinction that needs to be made in this analytical phase. As previously noted, this framework seeks to describe more than to explain. In other words, the spatial sonorous object is a low-level descriptor of spatial events, taking into consideration only experiential and listening aspects that are readily apparent and perceivable by the listeners: this is an ontological classification. With the spatial sonorous object, we are trying to investigate the very nature and existence of spatialisation in the musical context. However, while this is indeed the case for spatial morphologies and sound-space associations, a certain level of explanation must be in place for spatial functions. A degree of interpretation and understanding of ‘high-level’ compositional structure is needed to fully understand and describe the role of a spatial event musically. This establishes an analytical hierarchy, from the fully descriptive to the fully explanatory: the more descriptive the analysis, the more spatial reduced listening will be necessary (Figure 1).

Figure 1. Graph of the analytical framework. By ‘ordinary listening’ we mean a listening mode where sound is treated as a vehicle, the opposite of ‘reduced listening’ (Chion 1983: 33).
Spatial morphologies and sound-space associations occupy the lower level of this diagram: the movements, placements and relationships between sound and its motion can be regarded as fully descriptive and as such need spatial reduced listening. They can be understood and categorised even outside a musical structure, and only their perceived shape (for spatial morphologies) or relation (for sound-space associations) is needed for their description. For example, we do not need to contextualise the moving sound of a rapidly passing train within a musical structure: we can easily identify that it is moving and that this is something we expect that sound to do in a real-world scenario.
While the spatial sonorous object has already been discussed, the higher analytical level concerns the relationships between such objects inside a compositional structure: this can be a topic for further research in the future. Their musical connection concerns the dramaturgy of the music, the why of the spatialisation, which I consider as important as their ontological classification. However, not only are the relationships between spatial sonorous objects musically relevant, but studying the details and implications of their spatial behaviour is also crucial: for instance, how the quality of the motion influences other compositional choices (e.g., rhythm or pitch organisation), how sound-space associations can be used to ‘trick’ the listeners, and more.
3.2. Spatial relativism
One of the key questions posed in the editorial of Organised Sound 15(3) on multichannel music and audio has inspired much of my work:
The idea of continuing the paradigm of diffusion, even for multichannel works, suggests that, far from being spatially fixed (or absolutist in terms of the location of a sound or the perception of an idealised listener in an ideal location), one might adopt a more relativist approach, where the important image for the listener to perceive is not that the sound moves from here to there, but that it moves. (Harrison and Wilson 2010)
In fact, I proposed another concept called spatial agnosticism (Catena and Frisk 2024) that takes a relativist approach to spatial composition, where the spatial morphologies are ‘independent from speaker layouts and encoding technologies’ (ibid.). While the idea of spatial relativism was, I believe, originally introduced from a compositional and creative standpoint, it becomes very useful analytically, too. Spatial relativism is arguably rooted in human perception: without any visual cues, it becomes arduous to precisely track spatial trajectories, with very few exceptions, such as circular or side-to-side motions (Schumacher et al. 2022). Trajectories with a star, square or spiral shape are found to be correctly perceived on average 25 per cent of the time, or even less (ibid.). These perceptual ‘challenges’ involve many aspects, from pitch and timbre to spatial settings and acoustics; on a musical level, however, they should encourage a relativist approach to spatialisation, both compositionally and analytically. If we cannot fully and precisely perceive spatial motions, are they even relevant as such? To quote Harrison and Wilson again, is it more important that the sound moves from here to there, or that it moves? For instance, Trevor Wishart gives interesting examples of ‘spatial counterpoint’ and potential combinations of motions (Wishart 1996: 203), but takes a very ‘geometric’ or ‘Euclidean’ approach. Wishart’s sound movements are often very intricate (ibid.: 215), making it impossible to perceive the clear path of the sound source in a real-world musical scenario. In my work Travelling without moving, for example, I composed many spatial trajectories that fly over the listeners and that pass from left to right and vice versa, but that would convey the sense of motion I wished to communicate even if the whole spatial setting were rotated by 90 degrees. This approach highlights the meaning and musical importance of the spatial scene by not focusing on geometrical coordinates unless they have a very specific dramaturgical and compositional role. Similarly, spatial relativism can be applied to static sound, whether very localised or diffused: I have described these as placements or occupancies, depending on their spatial width (Catena and Frisk 2024). How crucial is their positioning in space? Is it more important that they are not moving or that they come from a very specific location? A similar relativist approach can also serve an alternative musicological attitude: analysing similar spatial events as a group, rather than every single object, makes the study of a spatial scene much easier.
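To show how such a relativist reading could be operationalised – a hypothetical reduction of my own devising, not a method drawn from the studies cited above – the sketch below collapses a detailed trajectory annotation (sampled azimuths and apparent source widths) into the three categories used in this article. The threshold values are arbitrary placeholders, not perceptual data.

```python
import numpy as np


def relativist_descriptor(azimuths_deg, spreads_deg,
                          motion_threshold_deg=30.0,
                          occupancy_threshold_deg=180.0):
    """Collapse a detailed spatial trajectory into a relativist category.

    azimuths_deg: sampled source azimuths over the event's duration.
    spreads_deg:  sampled apparent source widths over the same span.
    """
    unwrapped = np.unwrap(np.radians(azimuths_deg))
    total_swing_deg = np.degrees(unwrapped.max() - unwrapped.min())
    mean_spread_deg = float(np.mean(spreads_deg))

    if mean_spread_deg >= occupancy_threshold_deg:
        return "occupancy"  # diffuse: directionality is largely lost
    if total_swing_deg >= motion_threshold_deg:
        return "movement"   # it moves; the exact path is not recorded
    return "placement"      # static and localised


# A lateral car pass versus a diffuse, enveloping sound:
print(relativist_descriptor(np.linspace(-60, 60, 50), np.full(50, 20.0)))  # movement
print(relativist_descriptor(np.zeros(50), np.full(50, 300.0)))             # occupancy
```

The design choice mirrors the argument above: precise coordinates are deliberately discarded, and only the fact of motion, localisation or diffusion is retained for analysis.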
4. Analytical example: Jonty Harrison’s ‘Internal Combustion’
As an example, Jonty Harrison’s Internal Combustion is very fitting not only for showcasing the use of the spatial sonorous object but also for exemplifying spatial reduced listening and spatial relativism. Internal Combustion is part of a four-work cycle called ReCycle, where each composition focuses on one of the four elements – earth, fire, wind or water – all with environmental references. Internal Combustion is about fire, and uses recordings of trains, ships, bicycles, motorbikes, car engines and traffic to highlight the damage that some of these means of transport inflict upon the environment (Harrison 2006). Apart from an introduction, the piece is divided into four main sections (echoing the cycle of a four-stroke internal combustion engine), each with its own distinct sonic signature. The sections are separated by dramatic ‘entrances’ or ‘departures’: strong structural elements that mark formal openings and closures.
- The introduction (0:00–1:15) is characterised by the sounds of passing cars, moving from one end of the listening space to the other. The fact that the sounds of cars move – as a real car would – is to be expected from a spatialisation and perceptual perspective. This introductory section is brought to a sharp close by a strong braking sound.
- The first section (1:15–3:15) presents the sound of ignition positioned all around the listener, contrasted with completely different sonic material, as if something had caught fire. In this section only short and rapid movements are present, and the spatial organisation exploits positioning to obtain envelopment. This clash between movement and placement, static and dynamic, is one of the key factors in Internal Combustion and recurs throughout the piece. At 3:10, the ringing of a bicycle bell interrupts the rumbling of the car engine, hinting at the environmental issues that the composer wants to communicate.
- The second section (3:28–6:06) gradually fades in from the previous one. It is characterised by highly processed sounds roaming the listening space along several trajectories. Again, several interruptions from bicycle bells, honking cars and various ‘concrete’ sounds create a spatial contrast between their static placement and the trajectories of the synthesised sounds. This contrast is terminated by the arrival of a train – plausibly the Basel train described by the composer in the programme notes (Harrison 2006) – which occupies the whole listening space.
- The third section (6:06–8:05) mixes concrete and processed sounds. There is a clear alternation between a crackling sound – like the crackling of fire – sprinkled throughout the listening space, and more organically moving processed and unprocessed sonic material. This opposition happens regularly throughout the section, once again contrasting spatial trajectories with statically positioned sounds. The section is closed by a decelerating and rotating motor sound.
- The fourth and last section (8:05–end) starts with frantic ignition sounds, as if someone were furiously trying to start their car. This distressed texture fades out and, after the departure of an aircraft, crackling fire sounds enter the scene: the timbral storytelling seems to hint at a broken-down car that has caught fire, leaving its passenger on foot. The crackling sound eventually leaves the scene in favour of a soundscape, gradually fading out into silence.
In all sections of the piece and for every spatial scene, it is more beneficial to study the overall characteristics of the events and group them as a single spatial sonorous object than to investigate every trajectory or placement by itself. From a spatial relativist perspective, it is more musically relevant to understand and note that the sounds are moving than to record the specific and precise trajectory of each movement. Consequently, it is easy to recognise and describe spatial sonorous objects within Internal Combustion, and the introduction is a very fitting example. The sound of passing cars is constant throughout the entire section, and while the specific trajectories of each movement change slightly, they share a similar type of lateral motion. It is clear that the spatial morphology of this spatial sonorous object implements motion: is it crucial to define whether the car sound moves from left to right or vice versa? In this case, not at all. Going a step further, we can also identify its sound-space association: it is apparent that the sound of cars passing by, with its correlated spatial motion, is to be expected from a perceptual perspective. Finally, it is possible to describe its spatial function as referential: the spatial motion is used to refer to the behavioural aspect of cars passing by and their expected movement. It is always crucial to keep in mind that with this analytical process we are focusing on spatialisation and the ways in which sound correlates with the compositional act of distributing and moving sound in a listening space; in other words, we are using spatial reduced listening. We can conclude that the first spatial sonorous object of Internal Combustion is moving: it has a referential function and an expected sound-space association. The closing sound of the introduction – an approaching car that suddenly brakes, occupying the whole spatial scene – is an interesting and different example of a spatial sonorous object. The same analytical method can be used in this case, but with contrasting results. The spatial morphology of this object is defined by a lack of both motion and directionality, where the same sound comes from all eight speakers, and consequently from all directions: this fits the definition of occupancy. It is also a closure, since it formally ends the introduction and opens a new section of the work: this aspect matches the definition of a structural function. Finally, its spatial morphology is that of a diffused sound, whereas a car rapidly approaching a listener would have a strong directional character, creating a contrast between expectation and spatialisation. Its sound-space association, therefore, is unexpected. In summary, the second spatial sonorous object occupies the spatial scene, has a structural function and its sound-space association is unexpected.
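Tying this back to the sketch in section 2.3, the two objects identified above could be annotated as follows. The timings are approximate, derived from the description of the introduction, and the structure is again purely illustrative.

```python
# Spatial sonorous objects of the introduction of Internal Combustion
# (timings approximate, in seconds; vocabulary as defined in section 2.3).
introduction_objects = [
    {"label": "passing cars",
     "span_s": (0, 75),
     "morphology": "movement",      # lateral motions across the listening space
     "function": "referential",     # refers to the expected behaviour of passing cars
     "association": "expected"},
    {"label": "braking sound (closure)",
     "span_s": (73, 75),
     "morphology": "occupancy",     # diffused over all speakers, no clear direction
     "function": "structural",      # formally closes the introduction
     "association": "unexpected"},  # an approaching car would normally be directional
]

for obj in introduction_objects:
    print(f"{obj['label']}: {obj['morphology']}, {obj['function']}, {obj['association']}")
```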
The same analytical process can be applied to the whole piece, yielding a new perspective and producing interesting findings. One of the most striking aspects to emerge from this analysis is that the spatial experience of Internal Combustion is built upon the stark contrast between static positioning and dynamic motion. Often, during the piece, the static placement of sound sources – namely crackling sounds or concrete material such as bicycle bells or car ignitions – is contrasted with moving sources such as passing cars or heavily processed sounds following several spatial trajectories. It is possible to contextualise this difference thanks to the process of spatial reduced listening: the positions and movements are abstracted from the sound material that mediates them (i.e., we focus only on the movements and positions), making it possible to analyse the spatial events holistically.
5. Conclusions and looking at the future
Spatial reduced listening and spatial relativism are two listening modes concerned with investigating the behaviour of musical spatial events in a manner that focuses only on the listening experience and avoids technological aspects. They are an attempt to unify a very fragmented and diversified spatialisation musicology under the common banner of the listening experience. This field, already under the umbrella of the all-encompassing electroacoustic musicology, is too often left aside, and spatialisation is regarded only as a ‘space for space’s sake’ practice with no common or shared terminology, language or notation. While the current focus of spatial audio is too often the technology, and thus the maker, why do we not shift the attention to the listener, and thus the taker? How do listeners appreciate spatialisation musically? This musicological field must also delve into the ‘experiential’ and needs to develop appropriate analytical tools to find a common ground: the spatial sonorous object tries to set a precedent for this kind of analysis.
Where do we go from here? We are currently in a time when technology allows composers to realise their ideas in any way desirable: computational limitations have not been a problem for at least 15 years, and very affordable multichannel systems and spatial audio technologies are now available. It is time to develop a language of spatialisation, one that balances technology, technique and musical experience. The natural evolution of such a language is a notation to describe it (Catena and Dorigatti 2024). With a shared notation, research into spatial dramaturgy and compositional intent becomes much easier. We return to the ‘scholarship’: to develop one and, consequently, to facilitate the understanding of ‘Organised Space’, we need both the framework and the language to outline it all.
Acknowledgements
This research is part of the author’s PhD project, and it is currently funded by the Midlands4Cities (M4C) doctoral consortium and by the Arts and Humanities Research Council (AHRC).