
Xenomusicology

Published online by Cambridge University Press:  16 February 2026

Nick Collins*
Affiliation:
Durham University, UK

Abstract

Speculative xenomusicology explores alternative music theories, imagining the physical and cognitive affordances of alien musical life. Exoplanets are actively studied in astronomy, and though there is no direct evidence of xenobiology, particularly of more advanced musical intelligences, potential alien music may still be considered in advance in the same way that exobiologists speculate on the conditions for alien life. In particular, a generative system is presented which creates imagined xenomusic based on altering human memory constraints and links the organisation of the sound to the parallel generation of an alien language. Microtonal pitch, complex rhythm, timbral material and spatialisation within putative alien architectures are all considered. This alien ‘analysis by synthesis’ can provide new musical adventures and new understanding of the possibilities of music theoretical space, regardless of any eventual ontological resolution of xenocultures.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

1. Introduction

Xenomusicology might be defined as the study of non-human and potentially extraterrestrial music, undertaken to gain new perspectives on the biological and cultural assumptions of human music-making. This article explores the seemingly remote possibility of alien music (‘xenomusic’) from an inescapably anthropomorphic basis. Yet the theoretical consideration of xenomusic is potentially productive in opening up new paths in musical space, forcing a critical reconsideration of human cognitive and physical affordances, generalisations of human music theory and the problematic status of any musical universal (Wallin et al. 2000; Savage et al. 2015). Indeed, an over 50-year-old article by von Hoerner (1974) tackled the issue of musical universals with respect to the potential for overlap between alien and human music, although its theoretical basis was limited to equal tempered pitch materials (positing a universal preference for scales of 5, 12 and 31 notes per octave) and not conceived in terms of extended electronic music resources. Whilst any putative real alien music would likely be extremely far divorced from our own sensory apparatus and may not involve a standard medium of sound propagation, the translation of such an abstract entity to human perception is an interesting case of composer-led sonification (Mermikides 2025). To imagine a more distant alien music, the foundations of music as a time-based art form must be abstracted in new ways. This is a rather different undertaking from the clichéd sci-fi media music of Hollywood and TV (Summers 2013), whatever the historic twentieth-century cultural associations of electronic music and other worlds (Taylor 2001). It is a project that befits a wider investigation of ‘organised sound’.

Unfamiliar exoplanets and unknown xenobiology may lead to music transmitted in the vibrations of alternative physical media and materials, intended for different auditory conditions and physiological apparatus, even at different cognitive rates, all impacting on the perceptibility to human ears of xenomusical information. Exoplanets may have rather different acoustic situations, for example, with higher fluid loading effects and faster speeds of sound in denser atmospheres impacting on formants and vibrational fundamentals. On the surface of Venus, if you could speak without immediate dissolution amidst the sulphuric acid, you would have heavier vocal folds for a deeper source tone but higher formants from the higher speed of sound for a child’s vocal timbre (Leighton and Petculescu 2009a, 2009b). With a sparse Martian atmosphere, sound can disappear within 10 metres (Leighton and Petculescu 2009a).

Whilst the full modelling of exoplanet acoustics, alien architectural acoustics and alien physiological and cognitive affordances is challenging, we can proceed pragmatically by dropping certain preconditions of human music theory and altering certain constraints of human cognition. We can also keep using certain standard techniques of computer music, such as frequency modulation synthesis or convolution reverb, making the assumption that these are mathematical universals of signal processing and would be discovered by alien electronic musicians. Below, we utilise a potential physical universal, the stiff string equation (Fletcher and Rossing 2012). We don’t diverge from standard acoustic equations too far, in particular not worrying about propagation distances (the listening position is as close by as needed), and assume that a translation to human perceptible audio fluctuation is possible, even if the information may not be homomorphic to human cultural conventions.

There is a reasonable amount of previous work in sonification and astronomy (Diaz-Merced et al. 2011; Ballora 2014; Misdariis et al. 2022; Zanella et al. 2022). Case studies have included sonification of orbits of solar system bodies (Tomlinson et al. 2017), exoplanetary data (Quinton et al. 2020), star field mechanics (Bardelli et al. 2021) and gravitational waves (St George et al. 2018; Valle and Korol 2022). These translations from astrophysical phenomena to the human sensory apparatus are highly mediated, from incredibly low frequencies on a cosmic scale, to areas of the electromagnetic spectrum entirely outside of human sensory experience. Similarly, alien music may subsist far from standard atmospheric pressure air molecule collisions. As necessary, we posit that some translation through sonification techniques has taken place to enable hearing xenomusic. We map through to frequencies within the human hearing range and to rhythmic rates at least partly perceivable by human listeners, whose perception favours a 2–3 second perceptual present (Pöppel and Wittman 1999; Pöppel 2004; London 2012).

Astronomical phenomena have often inspired music in general (Fraknoi 2012). A fascination with space has suffused many works incorporating electronic resources, from Johanna Beyer’s Music of the Spheres (1938), scored for three electrical instruments, to Laurie Spiegel’s Harmonices Mundi (1977), a sonic realisation of Kepler’s early seventeenth-century solar system data, as featured on the Voyager Golden Record (Nelson and Polansky 1993). Other examples might include Joe Meek’s studio effects on the highly successful satellite homage instrumental Telstar by The Tornados (1962), Robert Normandeau’s electroacoustic suite Clair de Terre (‘Earthlight’, 1996), Stockhausen’s 8-channel parts for Sirius (1977), the avant-garde afrofuturism of Sun Ra and the techno pioneers (van Veen 2013), or the overt rave era name checking of Orbital, Star Sounds Orchestra, and The Prodigy’s Out of Space (1992), itself sampling the key lines ‘I’m gonna send him to outer space to find another race’ from Max Romeo’s Chase The Devil (1976). Electronic music saw remarkable growth during the post-WW2 space age, and an inescapable association between high technology and electroacoustic music was forged in the popular imagination (Taylor 2001), well evidenced by the ambiguous and ambitious electronic sound design and ‘electronic tonalities’ of the sci-fi B-movie Forbidden Planet (1956) composed by Louis and Bebe Barron (Wierzbicki 2005), or the more recent soundtrack to Arrival (2016) composed by Jóhann Jóhannsson.

The well-known Search for Extraterrestrial Intelligence (SETI), the offshoot Communication with Extraterrestrial Intelligence (CETI) and Messaging ExtraTerrestrial Intelligence (METI) consider both detection of and potential communication with an alien civilisation (Sagan and Drake 1975; Tarter 2001; Cabrol 2016). Music as a communicative demonstration to reach alien species underlies the Voyager Record (Nelson and Polansky 1993) and has been taken as a compositional impetus to ‘sonic xenolinguistic expressions’ by Willard Van De Bogart (2018). The domain of xenolinguistics concerns the study of possible alien languages (alien communication protocols) and the design of languages that may be maximally effective in communicating human concepts to other species (Vakoch and Punske 2023; Oberhaus 2024). Just as Hans Freudenthal’s LINCOS project is designed to gradually convey more complicated mathematical concepts leading up to small models of human societal structures, the SETI researcher Vakoch (2010) has considered the attempt to convey the complexity of human aesthetics to aliens, through increasingly complicated musical signals carried by radio waves. Vakoch argues that human music could be demonstrated from small musical examples up to more complicated pieces, though he is not conversant with more extended concepts of sound-based music and restricts himself to basic pitch materials, in the tradition of the ARP synthesiser/mothership duet of Close Encounters of the Third Kind (1977). He admits that ‘even without comprehending the musical intent of such a message, the recipients might gain some understanding of human information-processing capabilities’ and acknowledges that human-specific cognitive-physical constraints are at work.

Vakoch does not reach as sceptical a position as the philosopher Gregory Stacey, who questions whether aliens would ever find our music comprehensible (Stacey 2024). Stacey bases his argument on the difficulty of bridging human and alien imagination and emotion, notwithstanding Xenakis’ interest in non-emotional intellectual music (Xenakis 1992), or Ian Cross’ ‘floating intentionality’ (Cross 2003) allowing for very different subjective emotional responses in a listener. Like many philosophers of music who fixate on Western classical examples, he prioritises the status of pitch materials above the timbral play of much electroacoustic music and does not sufficiently consider that advanced signal processing, presumably available to an alien civilisation as universal engineering principles, could reveal significant structure of interest in a musical signal. Nonetheless, he acknowledges we can learn more about human music through the attempt to reach alien minds, a primary goal of the present work.

The evolutionary psychology of music seeks justifications for the importance of music in human culture, including social bonding, sexual selection, a safe zone for aural training and emotion regulation (Forde Thompson 2009); such evolutionarily critical aspects may well provide a common link to alien thought. Whatever role emotional signals play in alien physiology and musical practice (or whether they attempt to suppress emotion like Star Trek’s fictional Vulcans), such factors as reproduction, social cohesion and communicative training are likely to be universal drivers for advanced life. Inasmuch as just-so stories in evolutionary psychology are speculative in the absence of surviving hard evidence, just-so stories can be spun about alien mores in exo-evolutionary psychology.

In human cognition, the interconnection of music and language is well established, though separate processing modules do co-exist in the brain (Patel 2008). Foundational musilanguage is a training ground for adult speech through motherese and neonate babbling and may have been an early developmental state of language in the evolutionary past (Mithen 2006). It makes sense to consider if xenolanguage (whether acoustically delivered or not) may have a similar parallel to xenomusic; in the present project, a common memory constraint is used to generate both a musical system and a xenolanguage. Because the project is configured as both generative composition and an investigation of the bounds of music theoretic and alien cognitive space, it is not an entirely scientific inquiry but a combination of musical inspiration from an exoplanetary theme with a consideration of new possible generalisations of music.

The article now discusses a specific software project to implement a xenomusic/xenolanguage generator in Section 2, before a general discussion in Section 3 and a short summary conclusion.

2. The xenophone software

A generative music system was created to investigate xenomusic. It is implemented in the SuperCollider audio synthesis programming language (McCartney 2002; Wilson, Cottle and Collins 2025) and available as open-source software under the GNU GPL 3 licence from GitHub (https://github.com/sicklincoln/Xenophone), alongside an accompanying mini-album of example outputs (https://sicklincoln.bandcamp.com/album/xenomusic). Being a generative system, the output is different on each run, enabling a sampling of many potential alien cultural possibilities.

The project consists of two parts: 1) a music generator founded in a model of time and frequency content based in shorter or longer spans of working memory than a human norm (though its output is certainly interpretable by human ears) and 2) a highly abstracted alien language generator used to make ‘written’ documentation of the musical tradition; the symbols are translated from alien symbols to ASCII to give some chance of human appreciation, but the grammar and vocabulary are highly unusual, again founded in different orders of working memory allowing for smaller or larger numbers of units and grammatical constructs. A given generation uses common cognitive memory constraints for the music and language components; each generation is posited as a particular alternative alien culture. A ‘universe’ mode is included with the software to drive it through an unending list of xenocultures over time. The human baseline for memory has been suggested as 7 ± 2 items in classic work by Miller (1956), though a reduced capacity such as 4 ± 1 has been more recently proposed in the context of linguistics (Green 2017).

Realtime playback and non-realtime (NRT) rendering modes are provided in the software. The NRT mode allows rapid creation of a corpus of outputs, used below for a comparative analysis of the timbral space of software generations against existing electronic music. The software can also generate video frames, by creating long tracts of alien language and gradually displaying it a character at a time, to accompany live playback of the associated music or create promotional videos: an example video is available on YouTube (https://www.youtube.com/watch?v=HLLSm9mdlsw).

The software and example album have been publicly available since 2023 and were a featured part of Bob Sturm’s 2023 AI Music Generation Challenge which investigated how algorithmic composition systems could be founded in alternative musical cultures (https://github.com/boblsturm/aimusicgenerationchallenge2023). A concert including three short xenomusical pieces took place on 11 December 2024 as part of the First International Conference in AI Music Studies (https://boblsturm.github.io/aimusicstudies2024/).

We provide a few more details of the musical and linguistic output generation in Sections 2.1 and 2.2, respectively, and an evaluation of the outputs in Section 2.3.

2.1. The music generator

The music generator is a probabilistic rules-based system. There is no extant corpus of alien recordings to learn from, so the compositional task involves speculative modelling, based on certain xenocognitive and music theoretic assumptions. The creation of an output piece is made dependent on an integer M, corresponding to working memory size or the number of different parameter levels a listener is sensitive to; M is varied away from a human norm. Each new alien music has its own randomly generated tuning system, rhythmic grid, and level of polyphony. There is an in-built assumption that a metrical frame is in operation giving a fixed overall repetition rate; it is possible to conceive of an alien music culture breaking such an assumption, for example, by alternating or constantly varying metrical frames, or having a cognitive capacity to cope with multiple simultaneous frames. Nonetheless, some baseline assumptions are fixed here in order to proceed to the current xenomusic. Since the code is open and the assumptions are explicit, later researchers can consider revising any of the axioms. The overall algorithm for music generation is broken down here into steps:

1. Choose a memory size M from 2 to 30 items (humans would be 7 ± 2 or 4 ± 1).

2. Choose a set of M differently perceptible durations.

3. Make a metrical cycle, using selection from the basic durations.

4. Set up an importance grid over the cycle with both positive (favour) and negative (avoid) ratings at each non-isochronous timestep.

5. Choose a number of voices from 2 to 15 (level of polyphony).

6. Generate rhythms for different voices (with the possibility to invert the importance grid). Importance is downweighted for a grid position the more voices have already chosen that position, to promote counterpoint. Each run through the cycle, subtle variations are possible.

7. Choose a number of discernible frequencies (based on memory size M).

8. Choose some frequency subsets for different sections of the piece.

9. Choose pitch selection principles, including options for serialism and random selection.

10. Generate music per cycle, utilising the rhythms and pitch selection criteria for each voice.

There is an option to adjust the timescale of generation more slowly or more quickly, further outside of human norms.
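As an illustration of the skeleton of this procedure, the following SuperCollider fragment sketches a drastically simplified version of the steps above (no importance grid, no sectional frequency subsets, no serial ordering); it is a minimal sketch written for this article, not an excerpt from the Xenophone codebase, and all specific values and ranges beyond those listed above are assumptions.

(
// minimal sketch of the generation skeleton described above; NOT the Xenophone source code
var m = rrand(2, 30);                                   // step 1: working memory size M
var durations = Array.fill(m, { rrand(0.08, 1.5) });    // step 2: M distinguishable durations (beats at default tempo)
var cycle = Array.fill(rrand(3, 2 * m), { durations.choose });  // step 3: a metrical cycle from those durations
var numVoices = rrand(2, 15);                           // step 5: level of polyphony
var freqs = Array.fill(m, { exprand(60.0, 4000.0) });   // step 7: M discernible frequencies (no 12TET assumption)
var voices = Array.fill(numVoices, {
	// steps 6 and 10, heavily simplified: each voice gets a shuffled copy of the cycle
	// and draws pitches at random from the shared frequency set (no importance grid here)
	Pbind(
		\freq, Prand(freqs, inf),
		\dur, Pseq(cycle.scramble, inf),
		\amp, 0.2 / numVoices
	)
});
Ppar(voices).play;   // assumes the audio server is booted
)

Each run of the fragment, like each run of the real system, yields a different tuning, cycle and level of polyphony.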

Synthesis is via a set of resonant filterbanks (with modes randomly picked) for more percussive sounds, and stretched string (stiff string equation) or frequency modulation algorithms for pitched voices. The justification for these particular sound synthesis algorithms is that aliens making electronic music would surely discover such mathematical principles too, regardless of their home world acoustics. Each voice can be percussive or tonal, but there is a chance that the whole output piece will be tonal only, or percussive only. Global effects are applied, including the potential for convolution reverb with an impulse response selected from a catalogue supplied (readily changed in the codebase).
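By way of orientation only, generic SuperCollider analogues of two of these synthesis families (a resonant filterbank percussion voice and a frequency modulation pitched voice) might look as follows; these are not the Xenophone SynthDefs, and the names \xenoPerc and \xenoFM, along with all parameter ranges, are invented for this sketch.

(
// illustrative analogues only, not the Xenophone synthesis code
SynthDef(\xenoPerc, { |out = 0, amp = 0.2|
	var modes = Array.fill(8, { exprand(100.0, 8000.0) });         // randomly picked modal frequencies
	var rings = Array.fill(8, { rrand(0.1, 1.0) });                // ring times per mode
	var sig = Klank.ar(`[modes, nil, rings], Impulse.ar(0)) * amp; // excite the resonant filterbank once
	DetectSilence.ar(sig, doneAction: 2);                          // free the synth when it rings out
	Out.ar(out, sig ! 2)
}).add;

SynthDef(\xenoFM, { |out = 0, freq = 440, amp = 0.2, dur = 1|
	var env = EnvGen.kr(Env.perc(0.01, dur), doneAction: 2);
	var sig = PMOsc.ar(freq, freq * rrand(1.0, 3.0), rrand(1.0, 6.0)) * env * amp; // simple FM voice
	Out.ar(out, sig ! 2)
}).add;
)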

Because there is a relatively free microtonal choice of frequency content, and the set of rhythmic durations can be quite large with longer time cycles, the output can easily sound complex to a human listener.

2.2. The language generator

As well as the primary function of music generation, the algorithmic scope is broadened to consider the creation of accompanying documentation of the culture. A link between music and language is posited based on a common cognitive processing constraint, the working memory size M. Just as alien music may not be based on atmospheric pressure changes, alien communication might take place through many media. Nonetheless, memory size provides an information-theoretic basis for symbolic processing that can be output through many possible physical, chemical and biological channels. For human consumption of the alternative xenolanguage, text and image are utilised here for convenience.

A xenolanguage is devised afresh alongside each music generation, based on these steps:

1. Make a set of ‘letters’ (2 to 50 different symbols, but not more than memory size M).

2. Make a set of 500 to 10,000 ‘words’ from letters (each word has 1 to 10 letters).

3. Choose from 2 to 10 (but not more than memory size M) ‘types’ of word (with randomised weightings for usage).

4. Create a (non-recursive) grammar setting permissible sentence constructions, as lists of ‘types’ of ‘word’ (2 to 20 type slots per sentence, 2 to 200 sentence constructions).

5. Choose a number of ‘tenses’, each giving 1–3 character modifiers to words (3 to 20 tenses, but not more than memory size M).

6. Generate sentences according to a selected sentence template from the grammar and tense, with words from the appropriate type at each position, modified by the current tense.

We sidestep the potential complexity of general xenolanguages, which may have a greater degree of generative recursion, concentrating on a basic hierarchical creation of output sentences from a common pool of letters, words, types and sentence templates. This pragmatic approach avoids the arbitrarily long sentences possible with some recursive formal grammars in the Chomsky hierarchy and is more akin to usage-based grammar (Diessel 2017). It is clear that the number of tokens of various types could be further changed, guided by the memory size capacity posited for a given culture.
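A compressed SuperCollider sketch of this letters-to-sentences pipeline is given below; it omits the tense modifiers and weighted type usage, uses the Roman alphabet purely for human readability, and is written for illustration rather than taken from the Xenophone source.

(
// minimal sketch of the letters -> words -> types -> sentence-template pipeline
var m = rrand(2, 30);                                       // the shared working memory size M
var letters = "abcdefghijklmnopqrstuvwxyz".keep(rrand(2, min(26, m)));
var words = Array.fill(rrand(500, 10000), {
	Array.fill(rrand(1, 10), { letters.choose }).join       // each word is 1 to 10 letters
});
var numTypes = rrand(2, min(10, m));
var types = words.collect { numTypes.rand };                // assign each word a random type
var templates = Array.fill(rrand(2, 200), {                 // permissible sentence constructions
	Array.fill(rrand(2, 20), { numTypes.rand })             // each template is a list of type slots
});
var sentence = templates.choose.collect { |type|
	words.select { |w, i| types[i] == type }.choose         // fill each slot with a word of that type
}.join(" ");
sentence.postln;
)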

Figure 1 compares output text generated from four different runs (four different xenolanguages), plotted as images generated by the programme. For a given generated xenolanguage, the writing can run in any direction, forwards and backwards or up and down, in some cases at angles away from the horizontal or vertical. Each run is also accompanied by sending a large block of generated text (a few pages’ worth) to the SuperCollider print window. Image generation can also be carried out a few characters at a time to create an image sequence suitable for turning into a video of the gradual writing of a long block of alien text in the xenolanguage. Note that the rhythmic revelation of the text is not tied to the specific xenomusic generator’s output, and the primary point of connection of the xenolanguage and xenomusic is the working memory constraint M.

Figure 1. Four examples of xenolanguage outputs.

2.3. Evaluation of xenomusic with respect to human norms

To describe typical software generations qualitatively, the xenomusic outputs are often spacious and disconcerting soundscapes showing some affinity to 1950s experimental electronic music, and some timbral overlap, through frequency modulation and Karplus–Strong style physical modelling, with US electroacoustic music such as that of John Chowning and David Jaffe (Chafe 1999). Whether or not strict serialist principles underlie the generation, the parameter-based paradigm necessarily undertaken in a rules-based system of this type automatically brings in an austerity, reminiscent of Stockhausen’s widened concept of serialist music as music that systematically parameterises different musical dimensions (Maconie 2005). The pacing of outputs can vary from slow development to frenzied movement. The musical language is inherently microtonal, and there is no theoretical concept of equal temperament tuning built in (in opposition to the claims of von Hoerner 1974).

To analyse the outputs more quantitatively, 1,000 volume-normalised two-minute outputs of the xenomusic software were created through the NRT rendering mode. They were then subject to MIR audio content analysis using the SCMIR SuperCollider Music Information Retrieval library (Collins 2011), alongside 1,000 general pieces from electronic music history (from the years 1950–99), sampled at random from an existing dataset (Collins, Manning and Tarsitani 2018). The 2,000 works were subject to audio analysis with respect to timbral descriptors, harmonic/percussive balance, and rhythmic and metrical content, with features averaged within works to obtain 32 audio feature means per piece. Feature values were normalised using interquartile range normalisation, which is more robust to outliers than min–max normalisation, with quartiles obtained from an initial sweep through the corpus.
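For reference, one common form of interquartile-range (robust) normalisation is sketched below in SuperCollider; the exact variant used in the analysis pipeline is not specified here, and the quartile values in the example line are invented.

(
// one common form of interquartile-range normalisation: centre on the median, divide by the IQR
var iqrNormalise = { |x, q1, median, q3| (x - median) / (q3 - q1) };
iqrNormalise.(0.7, 0.2, 0.5, 0.9).postln;   // example feature value 0.7 with assumed quartiles -> 0.2857...
)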

Because the electronic music corpus is highly diverse, from the historic sound-based music of musique concrète and elektronische Musik, to decades of electroacoustic music, and decades of synthesiser instrumentals, synth pop and electronic dance music, it covers many timbral preoccupations, and both rhythmic beat-based and more arrhythmic music, as well as some 12TET pitch materials and some more microtonally diverse. This makes it a strong comparison case, unrestricted to a single silo of human music-making.

Figure 2 plots the location of the 1,000 xenomusic pieces in comparison to the electronic music history corpus works, in a 3D space formed from three of the more discriminative timbral features: the midrange spectral power (energy from a band-pass filter with centre frequency 3000 Hz and one-octave bandwidth, with −3 dB points at 2121.3 Hz and 4242.6 Hz), the spectral centroid and the 50 per cent spectral percentile. Whilst there are some points of contact between the corpora, many of the xenomusics are seen to inhabit new areas of timbral space; the outputs tended to be less bright than general corpus works, perhaps a result of a less percussive basis in general. Figure 3 illustrates the separability of the two corpora with respect to a timbral feature, the second Mel-Frequency Cepstral Coefficient (MFCC, a measure of spectral envelope; see Lerch 2022), the relative balance of percussive to tonal information, and the (average) density of onsets within two-second windows in each piece. Most pieces separate well, though there is a region of overlap.

Figure 2. 3D scatter plot of xenomusic pieces against electronic music corpus pieces with respect to three timbral features.

Figure 3. Scatterplot in 3D of xenomusic vs electronic music corpus with respect to the second MFCC, percussive/tonal balance and onset density.
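The quoted −3 dB points follow from a one-octave band geometrically centred on 3000 Hz; a quick check is given below, with an illustrative band-pass call in the comment (a generic analogue for orientation, not the SCMIR feature extractor itself).

(
// band edges of a one-octave band geometrically centred on 3000 Hz
(3000 / 2.sqrt).postln;   // 2121.32...
(3000 * 2.sqrt).postln;   // 4242.64...
// a comparable band-pass measurement might use, for example,
// BPF.ar(sig, 3000, (4242.6 - 2121.3) / 3000), i.e. rq = bandwidth / centre frequency
)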

An analysis was carried out of proximity to other works based on Euclidean distance between the 32-dimensional feature vectors representing each piece; of the top 10 closest pieces for a given work, 77 per cent of the electronic music corpus and 88.5 per cent of the xenomusic had no work from the opposite corpus nearby. The actual proximate works from the corpora supported the qualitative report above; earlier electronic music and certain FM-rich electroacoustic works were the main culprits in appearing nearer to xenomusic outputs. In a few cases, more unlikely proximities were uncovered: Kraftwerk’s The Model was close to one xenomusic output consisting of just percussive sounds with pitched filtering, at a medium tempo that sounded slightly like the main riff of the track; in the main, though, the more rhythmic works in the electronic corpus were avoided. Works by Bernard Parmegiani and Daphne Oram from the 1960s and 1970s did appear in multiple proximate cases, but the majority of the 2 million dissimilarity scores between the two corpora (as opposed to within corpora) were high. Listening to xenomusic outputs highlighted some interesting, relatively austere microtonal electronic works with a refreshing pacing unlike more familiar rhythmic structures (the reader is encouraged to listen to the example-outputs mini-album linked at the close of the article and form their own judgement). That there are some cases of similarity between the corpora despite finding some novel timbral-rhythmic locations indicates that there is more to explore in terms of xenomusical generation.
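A minimal SuperCollider sketch of this kind of nearest-neighbour query is shown below; the feature vectors here are random placeholders rather than the actual analysis data, and the code is illustrative only.

(
// Euclidean distance over per-piece feature vectors and a top-10 nearest-neighbour query
var corpus = Array.fill(100, { Array.fill(32, { 1.0.rand }) });   // 100 placeholder 32-feature vectors
var query = Array.fill(32, { 1.0.rand });                         // a placeholder query piece
var dist = { |a, b| ((a - b).squared.sum).sqrt };                 // Euclidean distance
var nearest = corpus.collect { |v, i| [dist.(query, v), i] }
	.sort { |x, y| x[0] < y[0] }
	.keep(10);                                                    // ten closest pieces as [distance, index]
nearest.postln;
)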

3. Discussion

The observable universe is estimated to contain 100 billion galaxies, each with on average 100 billion stars (Turner 2009). Planet density is at least one planet per star system, on average (Cassan et al. 2012). This gives a potential 10^23 planets on which life could ever have arisen. Of course, 10 per cent or less are in a basic habitable zone, and a much smaller proportion will have any chance of microbial life evolving, let alone more complex organisms able to appreciate organised vibration as a life-affirming activity. But even dropping many orders of magnitude, the numbers could suggest quite a few potential extraterrestrial musical civilisations. Some professional astronomers estimate the density of advanced civilisations at zero to one per galaxy (Cabrol 2016), which still leads to billions of potential alternative musical civilisations, even if contact is implausible.

The universe mode of the xenophone software creates a new ‘civilisation’ or ‘planet’s music’ every 30 seconds to 5 minutes in an infinite listening mode. If we assume 50 are created per hour, 438,300 are generated per year. Optimistically assuming 10^11 extraterrestrial civilisations, on average one per galaxy (whether or not some empires span many worlds with sufficient musical homogeneity to substantially reduce the variety per planet), the software would take 228,154 years to play back in realtime enough other worlds of music to cover these.
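The arithmetic can be checked in a couple of lines of SuperCollider (the 50-per-hour rate is the assumption stated above).

(
// a quick check of the figures quoted above
(50 * 24 * 365.25).postln;                    // 438300 xenocultures per year at 50 per hour
(1e11 / (50 * 24 * 365.25)).round.postln;     // approximately 228154 years for 10^11 civilisations
)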

There may be a negligible chance of two sufficiently advanced civilisations arising within a close enough distance at the same time to make contact practical (Cabrol 2016). Even if contact is never made, the theoretical consideration of alien music is both an investigation of possible music outside of Earth cultures and a mirror from which to examine the preconditions and biases of human musical cognitive preference. Particularly for electroacoustic music, used to operating at the margins of organised sound, we can stretch the bounds of our theoretical frames as we play with cognitive, physiological and acoustic constraints. For the strangest exoacoustics and perceptual rates and densities far removed from our standard operating limits, there will remain a mediating layer of sonification mapping back to human audition, but this does not mean we cannot reach new zones of interesting musical behaviour through the study.

Expansion of the xenomusical generator described here could proceed in many directions. Exobiologists recognise an acute interlinking of the evolution of complex life and the planetary environment in which that life evolves (ibid.). A more complex generator would model the local star system’s physics, a specific planet’s geology, chemistry and biology, before tackling the evolutionary physiology and psychology of one or more lifeforms involved in alien music-making. In Section 2, the xenolanguage and xenomusic generators were coupled by the common working memory constraint M but could be more deeply intertwined. Alongside the perceptual and physiological affordances of another species, alien acoustics would be an interesting point of inquiry, perhaps investigated by acoustic ray tracing in strange alien architectures. We might also imagine that future human beings may expand their own auditory capabilities, through technological modifications (such as cochlear implants with greater frequency range than 20 Hz–20 kHz) or in distant evolutionary time as the species encounters and adapts within other world environments, opening up new sonification routes for non-standard acoustic, electromagnetic, gravitational or other as yet unknown communicative media.

4. Future xenomusicology

This work is an initial step into generative xenomusicology (alien analysis by synthesis), but many future developments can be envisaged. The field of zoomusicology on Earth (Doolittle and Gingras 2015), examining the musical capacity of animal species such as birds or humpback whales, may provide further ideas on going beyond human music-making to a more general musical biology. The current project is a human imagining of possible abstractions of music sufficiently mathematical that they might also serve as constructive starting points on other worlds. We can imagine an alien musicologist trying to imagine human music, or an alien imagining a human trying to make alien music, and deeper levels yet. We might proceed from the physics of an exoplanet (informed by the catalogue of exoplanets) to its geography and acoustics, then to its xenobiology, musical species and xenoculture; and the modelled culture may be a far richer hubbub of ideas, akin to the mass production of music on Earth, just within as yet unknown music theories. The reader may have many ideas of alternative compositional paths in response to the assumptions in this text, such as considering alien music that eschews any notion of a singular metrical framework or returning cycle, exploring new kinds of rhythmically free or polymetrically complicated structure. Stranger rhythmic structures highlight alternative experiences of time that may underlie alien phenomenology; with or without scaling to human cognitive rates, they are experienced as otherworldly music and pinpoint new listening experiences.

The present project is an exercise of imagination in possible alternative music theories. In ethical terms, it is hard to conceive of the appropriation of music for which there is no direct scientific evidence of existence, let alone actual recordings. Nonetheless, real alien contact would be an area absolutely replete with ethical issues and implications for the whole of humanity; from a musical perspective, xenomusicology is ethnomusicology on a grand universal scale, further challenging existing discussion of and contention around musical universals. Alien species would deviate from human physiology to such an extent that any basic psychoacoustics in human communication would need revisiting, whilst certain musimathematical principles, especially in electronic music, may still prove points of surprising connection. To consider such possibilities in advance is the justification for projects such as this, and to do so far in advance of actual alien contact is prudent; even if humanity turns out to be alone in the cosmos as a more advanced musical intelligence, the invention of alien music as a way to reconsider our own musical behaviours may make us less lonely in the long term. Whilst navigating a speculative tendency, the xenomusicological project reflects back on our understanding of the capacity and limits of human musical endeavour, mapping out new angles on musical creativity and listening.

Acknowledgements

The author thanks the two anonymous reviewers of the manuscript for helpful comments and suggestions for revision.

Xenophone software

Code of the generative system (SuperCollider 3 under GNU GPL 3) and example xenodocuments: https://github.com/sicklincoln/Xenophone.

Xenomusic album

A short 18-minute album of 14 pieces of example xenomusic from the project is available for free listening on bandcamp: https://sicklincoln.bandcamp.com/album/xenomusic.

References

Ballora, M. 2014. Sonification Strategies for the Film Rhythms of the Universe. The 20th International Conference on Auditory Display (ICAD-2014).
Bardelli, S., Ferretti, C., Ludovico, L. A., Presti, G. and Rinaldi, M. 2021. A Sonification of the zCOSMOS Galaxy Dataset. International Conference on Human-Computer Interaction. Cham: Springer International Publishing, 171–88.
Cabrol, N. A. 2016. Alien Mindscapes—A Perspective on the Search for Extraterrestrial Intelligence. Astrobiology 16(9): 661–76. https://doi.org/10.1089/ast.2016.1536.
Cassan, A., Kubas, D., Beaulieu, J. P., Dominik, M., Horne, K. and Greenhill, J. 2012. One or More Bound Planets Per Milky Way Star from Microlensing Observations. Nature 481(7380): 167–69. https://doi.org/10.1038/nature10684.
Chafe, C. 1999. A Short History of Digital Sound Synthesis by Composers in the U.S.A. Creativity and the Computer, Rencontres Musicales Pluridisciplinaires, Lyon.
Collins, N. 2011. SCMIR: A SuperCollider Music Information Retrieval Library. Proceedings of ICMC2011, International Computer Music Conference, Huddersfield.
Collins, N., Manning, P. and Tarsitani, S. 2018. A New Curated Corpus of Historical Electronic Music: Collation, Data and Research Findings. Transactions of the International Society for Music Information Retrieval 1(1): 34–43. https://doi.org/10.5334/tismir.5.
Cross, I. 2003. Music as Biocultural Phenomenon. Annals of the New York Academy of Sciences (The Neurosciences and Music) 999: 106–11. https://doi.org/10.1196/annals.1284.010.
Diaz-Merced, W. L., Candey, R. M., Brickhouse, N., Schneps, M., Mannone, J. C., Brewster, S. and Kolenberg, K. 2011. Sonification of Astronomical Data. Proceedings of the International Astronomical Union 7(S285): 133–36. https://doi.org/10.1017/S1743921312000440.
Diessel, H. 2017. Usage-Based Linguistics. Oxford Research Encyclopedia of Linguistics. https://oxfordre.com/linguistics/view/10.1093/acrefore/9780199384655.001.0001/acrefore-9780199384655-e-363 (accessed 30 May 2025). https://doi.org/10.1093/acrefore/9780199384655.013.363.
Doolittle, E. and Gingras, B. 2015. Zoomusicology. Current Biology 25(19): 819–20. https://doi.org/10.1016/j.cub.2015.06.039.
Eklund, R. and Lindström, A. 1998. How to Handle “Foreign” Sounds in Swedish Text-to-Speech Conversion: Approaching the ‘Xenophone’ Problem. Proceedings of the International Conference on Spoken Language Processing, 30 November–5 December 1998, Sydney, Australia. Paper 514, Vol. 7, 2831–34.
Fletcher, N. H. and Rossing, T. D. 2012. The Physics of Musical Instruments. Springer Science & Business Media.
Forde Thompson, W. 2009. Music, Thought, and Feeling: Understanding the Psychology of Music. New York and Oxford: Oxford University Press.
Fraknoi, A. 2012. Music Inspired by Astronomy: A Resource Guide Organized by Topic. Astronomy Education Review 11(1). https://doi.org/10.3847/AER2012043.
Green, C. 2017. Usage-based Linguistics and the Magic Number Four. Cognitive Linguistics 28(2): 209–37. https://doi.org/10.1515/cog-2015-0112.
Leighton, T. G. and Petculescu, A. 2009a. The Sound of Music and Voices in Space. Part 1: Theory. Acoustics Today 5(3): 17–26. https://doi.org/10.1121/1.3238122.
Leighton, T. G. and Petculescu, A. 2009b. The Sound of Music and Voices in Space. Part 2: Modeling and Simulation. Acoustics Today 5(3): 27–29. https://doi.org/10.1121/1.3238123.
Lerch, A. 2022. An Introduction to Audio Content Analysis: Music Information Retrieval Tasks and Applications. John Wiley & Sons. https://doi.org/10.1002/9781119890980.
London, J. 2012. Hearing in Time: Psychological Aspects of Musical Meter, 2nd edn. New York and Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199744374.001.0001.
Maconie, R. 2005. Other Planets: The Music of Karlheinz Stockhausen. Oxford: Scarecrow.
McCartney, J. 2002. Rethinking the Computer Music Language: SuperCollider. Computer Music Journal 26(4): 61–8. https://doi.org/10.1162/014892602320991383.
Mermikides, M. 2025. Hidden Music: The Composer’s Guide to Sonification. Cambridge: Cambridge University Press. https://doi.org/10.1017/9781009258555.
Miller, G. 1956. The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information. Psychological Review 63(2): 81–97. https://doi.org/10.1037/h0043158.
Misdariis, N., Özcan, E., Grassi, M., Pauletto, S., Barrass, S., Bresin, R. and Susini, P. 2022. Sound Experts’ Perspectives on Astronomy Sonification Projects. Nature Astronomy 6(11): 1249–55. https://doi.org/10.1038/s41550-022-01821-w.
Mithen, S. 2006. The Singing Neanderthal. London: Phoenix.
Nelson, S. and Polansky, L. 1993. The Music of the Voyager Interstellar Record. Journal of Applied Communication Research 21(4): 358–76. https://doi.org/10.1080/00909889309365379.
Oberhaus, D. 2024. Extraterrestrial Languages. Cambridge, MA: MIT Press.
Patel, A. D. 2008. Music, Language, and the Brain. New York: Oxford University Press.
Pöppel, E. 2004. Lost in Time: A Historical Frame, Elementary Processing Units and the 3-Second Window. Acta Neurobiologiae Experimentalis 64: 295–301. https://doi.org/10.55782/ane-2004-1514.
Pöppel, E. and Wittman, M. 1999. Time in the Mind. In Wilson, R. A. and Keil, F. (eds.) The MIT Encyclopedia of the Cognitive Sciences. Cambridge, MA: MIT Press, 841–3.
Quinton, M., McGregor, I. and Benyon, D. 2020. Sonification of an Exoplanetary Atmosphere. Proceedings of the 15th International Audio Mostly Conference, 191–8.
Sagan, C. and Drake, F. 1975. The Search for Extraterrestrial Intelligence. Scientific American 232(5): 80–89. https://doi.org/10.1038/scientificamerican0575-80.
Savage, P. E., Brown, S., Sakai, E. and Currie, T. E. 2015. Statistical Universals Reveal the Structures and Functions of Human Music. Proceedings of the National Academy of Sciences 112(29): 8987–92. https://doi.org/10.1073/pnas.1414495112.
St George, J., Kim, S. Y. and Bischof, H. P. 2018. Sonification of Simulated Black Hole Merger Data. Proceedings of the International Conference on Modeling, Simulation and Visualization Methods (MSV). The Steering Committee of The World Congress in Computer Science, Computer Engineering and Applied Computing (WorldComp), 3–9.
Stacey, G. 2024. Music of the Spheres: Could Aliens Understand Music? In Playford, R. (ed.) Exophilosophy: The Philosophical Implications of Alien Life. New York: Routledge, 82–96. https://doi.org/10.4324/9781003440130-8.
Summers, T. 2013. Star Trek and the Musical Depiction of the Alien Other. Music, Sound, and the Moving Image 7(1): 19–52. https://doi.org/10.3828/msmi.2013.2.
Tarter, J. 2001. The Search for Extraterrestrial Intelligence (SETI). Annual Review of Astronomy and Astrophysics 39(1): 511–48. https://doi.org/10.1146/annurev.astro.39.1.511.
Taylor, T. D. 2001. Strange Sounds: Music, Technology, and Culture. New York: Routledge.
Tomlinson, B. J., Winters, R. M., Latina, C., Bhat, S., Rane, M. and Walker, B. N. 2017. Solar System Sonification: Exploring Earth and its Neighbors through Sound. The 23rd International Conference on Auditory Display (ICAD 2017), 128–34. https://doi.org/10.21785/icad2017.027.
Turner, M. S. 2009. The Universe. Scientific American 301(3): 36–43. https://doi.org/10.1038/scientificamerican0909-36.
Vakoch, D. A. 2010. An Iconic Approach to Communicating Musical Concepts in Interstellar Messages. Acta Astronautica 67(11–12): 1406–09. https://doi.org/10.1016/j.actaastro.2010.01.006.
Vakoch, D. A. and Punske, J. (eds.) 2023. Xenolinguistics: Towards a Science of Extraterrestrial Language. Taylor & Francis. https://doi.org/10.4324/9781003352174.
Valle, A. and Korol, V. 2022. For Lisa. Music Composition from Gravitational Waves. Proceedings of the 19th Sound and Music Computing Conference. SMC Network, 46–53.
Van De Bogart, W. G. 2018. Orchestrations of Consciousness in the Universe: Consciousness and Electronic Music Applied to Xenolinguistics and Adnyamathanha Aboriginal Songs. Technoetic Arts: A Journal of Speculative Research 16(1): 113–31. https://doi.org/10.1386/tear.16.1.113_1.
van Veen, T. 2013. “Music is a Plane of Wisdom”—Transmissions from the Offworlds of Afrofuturism. Guest Editor’s Introduction to the Special Issue on Afrofuturism. Dancecult 5(2): 2–6. https://dj.dancecult.net/index.php/dancecult/article/view/404/389. https://doi.org/10.12801/1947-5403.2013.05.02.01.
Von Hoerner, S. 1974. Universal Music? Psychology of Music 2(2): 18–28. https://doi.org/10.1177/030573567422003.
Wallin, N. L., Merker, B. and Brown, S. (eds.) 2000. The Origins of Music. Cambridge, MA: MIT Press.
Wierzbicki, J. 2005. Louis and Bebe Barron’s Forbidden Planet: A Film Score Guide. Lanham, MD: Scarecrow Press. https://doi.org/10.5771/9781461669432.
Wilson, S., Cottle, D. and Collins, N. (eds.) 2025. The SuperCollider Book, 2nd edn. Cambridge, MA: MIT Press.
Xenakis, I. 1992. Formalized Music. Stuyvesant, NY: Pendragon Press.
Zanella, A., Harrison, C. J., Lenzi, S., Cooke, J., Damsma, P. and Fleming, S. W. 2022. Sonification and Sound Design for Astronomy Research, Education and Public Engagement. Nature Astronomy 6(11): 1241–48. https://doi.org/10.1038/s41550-022-01721-z.