A certain confusion may befall us when we praise pioneers, especially while they are still with us. This hazard was apparent to the troubadour and know-hit wonder Jonathan Coulton, when he wrote one of the great tunes of popular science, ‘Mandelbrot Set’:
Mandelbrot's in heaven
At least he will be when he's dead
Right now he's still alive and teaching math at Yale
The song was released in October 2004, giving it a nice run of six years before its lyrics were compromised by Benoît Mandelbrot's passing in 2010. Even thus betrayed to history, ‘Mandelbrot Set’ still marks the contrast between extraordinary and ordinary lives, dividing those who change the world, in ways tiny or otherwise, from those who sing about them or merely ruminate. The life of ideas, perhaps like ontogeny, works through sudden transformations and upheavals, apparent impasses punctuated by instant, lateral shift. Understanding is catastrophic. Genius finds ‘infinite complexity […] defined by simple rules’, as Coulton also sings, though any such simplicity depends crucially on the beholder. Cosmic rules may have gorgeous clarity to a mind like Mandelbrot's. For the rest of us, the complexities of the universe are more often bewildering. Nothing is more bewildering, of course, than genius.
How does one write the story of a computer system? To trace a technical history, one must first assume that there is a technical ‘object’ to trace – a system or an artefact that has changed over time. This technical artefact will constitute a series of artefacts, a lineage or a line. At a cursory level, technical ‘evolution’ seems obvious in the world around us; we can see it in the fact that such lineages exist, that technologies come in generations. Computers, for example, adapt and adopt characteristics over time, ‘one suppressing the other as it becomes obsolete’ (Guattari 1995, 40). But are we to understand this lineage from a sociological, an archaeological or a zoological perspective? And what is a technical artefact?
I need to address these questions here for two reasons. First, because it is impossible to write a technical history without defining how that history will be constructed, and second, because these questions also concerned Douglas Engelbart, one of the early pioneers whose work we investigate in this book. The relationship between human beings and their tools, and how those tools extend, augment or ‘boost’ our capacity as a species, is integral to the history of hypertext and the NLS system in particular.
Traditionally, history has ignored the material dimension of technical artefacts. Historians are interested in tracing cultural formations, personalities and institutions, and especially the social ‘constructions’ they erect around themselves. Technical artefacts don't have their own history; they are perceived as the products of culture.
Everything is deeply intertwingled. I have always loved that particular Nelsonism (and there are many to choose from); it has stuck with me for twenty years. In this book we have looked at several early, pre-Web attempts to represent intertwingularity, to represent the deep connections that criss-cross and interpenetrate human knowledge. They were built at different times and from different technical components; two of them (Memex and Xanadu) were never built at all. As a series of machines, they are a motley crew. With the exception of Storyspace, they are also obsolete – had Bruce Sterling continued his Dead Media Project beyond 2001, they would belong properly to that collection.
There are, however, hypertexts created in the '80s that are still read and edited today. George Landow's Victorian Web, originally created in Intermedia, then ported to Storyspace and the Web, is one of the oldest scholarly hypertexts. It is still used as course material in Victorian literary studies. Numerous hypertexts created in Storyspace, among them afternoon and Victory Garden, are still read, studied and argued about in critical literature. We have inherited more than just technical designs from the history of hypertext – we have inherited works of literature.
The '80s was also a period of great critical ferment for hypertext theory. Theorists began enthusiastically exploring hypertext from a literary perspective in the late '80s, claiming that the interactive nature of hypertext invites us to reconfigure our conceptions of ‘text’, ‘narrative’ and ‘author’ in a fashion more suited to the nature of the medium (Landow 1992, also Landow and Delany 1995).
Dr Douglas Engelbart is a softly spoken man. His voice is low yet persuasive, as though ‘his words have been attenuated by layers of meditation’, his friend Nilo Lindgren wrote in 1971 (cited in Rheingold 2000, 178). I struggled to hear him, being partially deaf myself, but that didn't matter; he has been describing the same vision in great detail to journalists, historians and engineers for over 60 years. The words change slightly in each interview, but the vision remains clear and sweeping, like a horizon line on a bright summer's day. Engelbart wants to improve the model of the human, to ‘boost our capacity to deal with complexity’ as a species (Engelbart 1999).
To grasp what he means by ‘boost our capacity’ as a species, we must first understand his philosophical framework. This is important for two reasons. Firstly, this framework profoundly influenced his own approach to invention in the '60s and '70s. Secondly, it represents a fascinating (and novel) theory of technical evolution, a topic we have already started to explore in this book.
Engelbart believes that human beings live within an existing technical and cultural system, an ‘augmentation’ system. We are born with a particular set of genetic capabilities, and then we build on these innate capabilities using tools, techniques, skills, language and technology. There is no ‘naked ape’; from the moment we are born we are always already augmented by language, tools and technologies.
This book would not have been possible without the cooperation of its subjects. Like technology historian Steve Shapin and sociologist Thierry Bardini, I write in the ‘confident conviction that I am less interesting than the subjects I write about’ (Shapin cited in Bardini 2000, xiv). I have had the great privilege of meeting many of the people you will read about in these pages (except for Vannevar Bush, who died in 1974); the colourful anecdotes, magical visions and prescient ideas you will find here have come directly from them or from their work. At times I felt like a media studies bowerbird, procuring brightly coloured memories, sticky notes and cryptic computer manuals from various computing science professors, writers and visionaries across the globe. In that sense, this book may be read as a simple history book. There is no need to be self-reflexive or clever when presented with such a treasure trove; it is intrinsically interesting and needs no posthistorical garnish (I'll confine that to the preface). That said, I do not claim to present you with the final word on hypertext history; this is an edited selection, a woven structure of deeply interconnected stories contributed in large part by the people who lived them.
I have spent months, years in fact, arranging this collection; it was first assembled as a PhD thesis in 1999, before I had children and consequently when I had the luxury of time. Time to do things like roam around Brown University gathering documents and stories from Andries van Dam and the Hypertext Editing System (HES) team; time to rummage through the Vannevar Bush archives at the Library of Congress looking for interesting correspondence; time to interview Doug Engelbart and feel embarrassingly starstruck; and time to travel to Keio University in Japan to meet Ted Nelson.
Michael Joyce has kept a journal for many years. Before he begins to write, he inscribes the first page with an epigram: Still flowing. As anyone who has read Joyce's fictions or critical writing will attest, his work is replete with multiple voices and narrative trajectories, a babbling stream of textual overflow interrupted at regular intervals by playful, descriptive whorls and eddies. If there is a common thread to be drawn between his hyperfictions, his academic writing and his novels, then it is this polyglot dialogue, as Robert Coover terms it, a lyrical stream of consciousness. Joyce can tell a story. And he has told many stories: four books, 40 scholarly essays, and at last count (my count), a dozen fictions. What courses through this work is a gentle concern, or even fixation, with how stories are told – with ‘how we make meaning, as if a caress’ (Joyce 2004, 45). The metaphor of water is an appropriate one. Like Ted Nelson, who had his first epiphany about the nature of ideas and the connections between them as he trailed his hand in the water under his grandfather's boat, Joyce has long been concerned with how to represent a multiplicity of ideas and their swirling interrelationships, with how stories change over time. In the essay ‘What I Really Wanted to Do I Thought’, about the early development of Storyspace, Joyce writes:
What I really wanted to do, I discovered, was not merely to move a paragraph from page 265 to page 7 but to do so almost endlessly. I wanted, quite simply, to write a novel that would change in successive readings and to make those changing versions according to the connections that I had for some time naturally discovered in the process of writing and that I wanted my readers to share. (1998, 31)
Teaching live electronic music techniques to instrumental performers presents some interesting challenges. Whilst most higher music education institutions provide opportunities for composers to explore computer-based techniques for live audio processing, it is rare for performers to receive any formal training in live electronic music as part of their study. The first experience of live electronics for many performers is during final preparation for a concert. If a performer is to give a convincing musical interpretation ‘with’ and not simply ‘into’ the electronics, significant insight and preparation are required. At Birmingham Conservatoire we explored two distinct methods for teaching live electronics to performers between 2010 and 2012: training workshops aimed at groups of professional performers, and a curriculum pilot project aimed at augmenting undergraduate instrumental lessons. In this paper we present the details of these training methods followed by the qualitative results of specific case studies and a post-training survey. We discuss the survey results in the context of tacit knowledge gained through delivery of these programmes, and finally suggest recommendations and possibilities for future research.
Although the pedagogy of music technology closely resembles that of other academic subjects, the teaching of electroacoustic composition involves a significant degree of creativity, and thus relies on different, creativity-specific parts of the brain and memory systems (Lehmann 2007). This paper reviews recent neuroscientific research that may help differentiate effective pedagogical approaches to these two subjects, whose knowledge is stored in separate, discrete and sometimes competing long-term memory locations (Cotterill 2001). It argues that, because of these differences, the learning of music technology and electroacoustic composition is best kept separate, at least in the beginning stages. These points are underscored by an example of a demonstrably failed pedagogical model for teaching electroacoustic composition contrasted with a subsequent, highly successful model employed in the same university music programme, an experience that may translate well to other learning environments.
Given the growing acceptance of information and communication technology (ICT) as integral to today's middle and secondary school classrooms, electroacoustic music would seem on the surface to be a central feature of the music curriculum. However, models that approximate actual practices of electroacoustic music in the classroom are rare, with many schools focusing squarely on ICT, either as tools to facilitate traditional musical contexts or to explore innovative uses of that technology. Also, with the exception of some notable recent developments, there are few initiatives to bring middle and secondary students, or their teachers, into contact with the practices of electroacoustic music communities. The purpose of this article is to explore this problematic gap between the education and electroacoustic music communities in an attempt to identify some of the issues that lie at the foundation of an effective curriculum. The position taken is that these foundational matters need to be addressed prior to any discussion of ‘best practices’ for middle and secondary electroacoustic music education.
The conceptual starting point for an ‘action–sound approach’ to teaching music technology is the acknowledgment of the couplings that exist in acoustic instruments between sounding objects, sound-producing actions and the resultant sounds themselves. Digital music technologies, on the other hand, are not limited to such natural couplings, but allow for arbitrary new relationships to be created between objects, actions and sounds. The endless possibilities of such virtual action–sound relationships can be exciting and creatively inspiring, but they can also lead to frustration among performers and confusion for audiences. This paper presents the theoretical foundations for an action–sound approach to electronic instrument design and discusses the ways in which this approach has shaped the undergraduate course titled ‘Interactive Music’ at the University of Oslo. In this course, students start out by exploring various types of acoustic action–sound couplings before moving on to designing, building, performing and evaluating both analogue and digital electronic instruments from an action–sound perspective.
This paper examines the conditions of reception and understanding of music, using theoretical concepts of learning (Chevallard 1985; Brousseau 1998) adapted to the teaching of these various musics (Terrien 2006). In the light of an epistemological questioning of the nature of electroacoustic music, we ask whether didactic transposition (Verret 1975; Chevallard 1985), applied to Yan Maresz's Metallics, allows us to understand the phenomena of this music (listening, intention and reception: issues of perception and interpretation) and to identify issues of language. Our contribution is a tool for reflecting on a pedagogical approach that draws on new methods for teaching this music.
At Penn State, music technology is something of a stranger in a strange land. As a programme, it began in the early twenty-first century, when the necessity of the moment was an anticipated revision to the guidelines from the National Association of Schools of Music (NASM), the North American accrediting body. Music schools were charged with ensuring that music majors were exposed to ‘relevant technologies’. It was left largely to individual institutions to interpret what this meant. At Penn State, a course was created to address this guideline, and it generated interest among students. This course then spawned a series of related courses. These courses eventually created enough of a curricular presence to warrant creating an undergraduate minor. We now expect that the minor will spawn an undergraduate major. The music technology programme's locus lies not solely within the School of Music; rather, it overlaps as an interdisciplinary area with a variety of programmes throughout the university's offerings. These overlaps are a unique feature of the programme. It is an unusual arrangement, but it is a product of its time and place. Three populations of students have coalesced, and the pedagogical challenge has been to create a curriculum that can serve all of them. The programme might be thought of as a series of concentric spheres; each is centred around the same general concept structure, but with expanding breadth for different levels of student involvement.
EarSketch is an all-in-one approach to supporting a holistic introductory course to computer music as an artistic pursuit and a research practice. Targeted to the high school and undergraduate levels, EarSketch enables students to acquire a strong foundation in electroacoustic composition, computer music research and computer science. It integrates a Python programming environment with a commercial digital audio workstation program (Cockos’ Reaper) to provide a unified environment within which students can use programmatic techniques in tandem with more traditional music production strategies to compose music. In this paper we discuss the context and goals of EarSketch, its design and implementation, and its use in a pilot summer camp for high school students.
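To make the paradigm concrete, here is a minimal sketch of EarSketch-style programmatic composition. It is illustrative only: the function names (init, setTempo, fitMedia, setEffect, finish) follow the publicly documented API of later, web-based EarSketch releases and may not match the Reaper-integrated version described above; the sound constant is a placeholder, and the script runs only inside the EarSketch environment.

from earsketch import *   # EarSketch's Python API (assumed environment)

init()
setTempo(120)

# Place a drum loop on track 1 for measures 1-8, then build a rising volume
# automation with a loop: a programmatic step used alongside DAW-style editing.
fitMedia(HOUSE_BREAKBEAT_003, 1, 1, 9)   # placeholder sound constant
for measure in range(1, 9):
    setEffect(1, VOLUME, GAIN, -20 + measure * 2.5, measure)

finish()

The point of the example is the combination the paper describes: a loop generates an automation curve programmatically, while fitMedia places material on the timeline in the way a student would in a conventional production workflow.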
Originally designed by Xenakis to free him from traditional music notation while allowing a faithful execution of his musical thought, UPIC (Unité Polyagogique Informatique du CEMAMu) was quickly diverted from its original functions. Although Xenakis advised the apprentice composers who came to study with him to ‘listen to a lot of music and write’ (Serrou 2003: 20), this machine has, since its inception, enabled a large number of people to access music composition, because it requires no preliminary theoretical training. Based on this observation, we ask how UPIC, capable of converting a drawing into sound in real time, overturned perceptions of musical pedagogy not only in Europe but also worldwide, through the many workshops and concerts offered to a wide audience. Exchanges and emulation around this invention are also discussed.
After describing the technical development of this tool and, by extension, Xenakis's pedagogical thinking, we will highlight some of the most significant encounters between the machine and the public, drawing on many unpublished sources found in the archives of the Centre Iannis Xenakis (CIX), recently deposited at the University of Rouen. We will then examine the pedagogical correlation between sound theory, gesture and image involved in the composition of a UPIC score, and will also consider other software applications that combine drawing and sound.
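To illustrate the drawing-to-sound idea at the heart of UPIC, the following minimal Python sketch (my own illustration, not Xenakis's implementation or the CEMAMu software) treats a hand-drawn time/pitch curve as a list of points, interpolates it into a continuous pitch contour and renders it as a sine-tone glissando written to a WAV file.

# Minimal, illustrative sketch of the UPIC principle: a drawn curve becomes sound.
import numpy as np
import wave

SAMPLE_RATE = 44100

# The "drawing": placeholder (time in seconds, frequency in Hz) points
# tracing an upward sweep, a plateau, then a fall.
drawing = [(0.0, 220.0), (1.0, 440.0), (2.0, 440.0), (3.0, 110.0)]

times = np.array([p[0] for p in drawing])
freqs = np.array([p[1] for p in drawing])

duration = times[-1]
t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)

# Interpolate the drawn curve into a continuous pitch contour, then integrate
# frequency to obtain phase so the glissando is smooth.
freq_contour = np.interp(t, times, freqs)
phase = 2.0 * np.pi * np.cumsum(freq_contour) / SAMPLE_RATE
signal = 0.5 * np.sin(phase)

# Write the result as a 16-bit mono WAV file.
samples = (signal * 32767).astype(np.int16)
with wave.open("upic_sketch.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(SAMPLE_RATE)
    f.writeframes(samples.tobytes())

The real system maps a page of drawn arcs to banks of oscillators in real time; this sketch reduces that to a single line and a single voice, but the correlation between gesture, image and sound that the article discusses is already visible in the mapping from curve to pitch contour.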