The Neural Code of Pitch and Harmony
Gerald D. Langner, Technische Universität, Darmstadt, Germany
Assisted by Christina Benson
Published online: 05 May 2015
Print publication: 23 April 2015
Harmony is an integral part of our auditory environment. Resonances characterised by harmonic frequency relationships are found throughout the natural world and harmonic sounds are essential elements of speech, communication and, of course, music. Providing neurophysiological data and theories that are suitable to explain the neural code of pitch and harmony, the author demonstrates that musical pitch is a temporal phenomenon and musical harmony is a mathematical necessity based on neuronal mechanisms. Moreover, he offers new evidence for the role of an auditory time constant for speech and music perception as well as for similar neuronal processing mechanisms of auditory and brain waves. Successfully relating current neurophysiological results to the ancient ideas of Pythagoras, this unique title will appeal to specialists in the fields of neurophysiology, neuroacoustics, linguistics, behavioural biology and musicology as well as to a broader audience interested in the neural basis of music perception.
Contents
pp v-x
Frontmatter
pp i-iv
11 - The neural code of harmony
pp 162-180
Summary
‘The essential basis of music is melody.’
Helmholtz, On the Sensations of Tone, 3rd edition, 1913
The pitch helix
The pitch helix is a highly relevant concept in the psychology of music. Two nineteenth-century German mathematicians, Friedrich Wilhelm Opelt (1794–1863) and Moritz Wilhelm Drobisch (1802–1896), were the first to suggest a helical model to represent the sensation of pitch, octave equivalence and recurrence (Opelt, 1852; Drobisch, 1855).
Opelt started his career as a weaver, but later became a director of the Sächsisch-Bayrische Staatseisenbahn. Among astronomers he is still famous for his maps of the moon. Drobisch was a professor of mathematics and philosophy at the University of Leipzig and still has an enduring reputation in empirical psychology and logic.
The concept of a helical organization underlying the sensation of pitch can be related to neuronal mechanisms of temporal processing in the auditory system. Moreover, as the final chapter of this book will show, a helical organization is not restricted to mechanisms of hearing. Anatomical evidence for a variety of helical structures in non-auditory brain areas supports a theory of harmonic processing of oscillatory brain signals even beyond the level of the hearing system.
But first we will have to understand the role of harmony for periodicity processing and pitch perception and the related concept of the pitch helix. It seems trivial that the pitches of two tones are perceived as very similar if their fundamental frequencies are nearly the same. Tones should sound similar, even if they do not activate the same neurons, provided they at least activate adjacent neurons in the pitch maps of the brain. However, two tones which differ in frequency by 100% (an octave) suddenly become similar again; in fact, they are often confused and may be quite difficult to distinguish if played, for example, by different instruments.
Index
pp 222-227
12 - The oscillating brain
pp 181-201
Summary
‘No sooner had the warm liquid mixed with the crumbs touched my palate than a shudder ran through me and I stopped, intent upon the extraordinary thing that was happening to me. An exquisite pleasure had invaded my senses, something isolated, detached, with no suggestion of its origin.’
Marcel Proust, In Search of Lost Time
‘Grandmother cell’ and ‘cocktail-party problem’
In 1971 Otto Creutzfeldt, the director of the Department of Neurobiology of the Max Planck Institute for Biophysical Chemistry in Göttingen, sent Christoph von der Malsburg (Fig. 12.1c) and me – the two young physicists in his group – to the Sorbonne in Paris. We were to join a neurophysiology course organized by the International Brain Research Organization. The Parisian atmosphere and the lively discussions about the functioning of the brain with a dozen young neuroscientists of various nationalities in the cafés of the Quartier Latin would inspire our research in the following years. Back in Göttingen, both Christoph and I studied the temporal aspects of neuronal feature detection; I started to investigate the coding of amplitude modulations in the midbrain of guinea fowl, while Christoph worked on neuronal nets in the visual cortex, and also on a solution for the ‘cocktail-party problem’. In other words, he tried to answer the question of how our brain manages to isolate a particular voice under the noisy conditions of a cocktail party (see Chapter 6). But in his paper entitled ‘A neural cocktail-party processor’ the acoustic case of feature analysis served only as a pertinent example for the more general ‘binding problem’ (von der Malsburg and Schneider, 1986).
It is obvious that not only the auditory but all sensory systems have to coordinate features distributed over space and time – features like size, position, direction and colour in vision or loudness, position, pitch and timbre in hearing.
4 - The pitch puzzle
pp 35-45
Summary
The telephone theory
Although a broadband harmonic sound, for example a human vowel (see Figs. 2.8, 2.9), activates many sensory cells along the basilar membrane, it is typically perceived as a whole and has just a single pitch. We neither hear the rapid amplitude fluctuations due to its periodic envelope, nor realize that it is composed of a fundamental frequency and perhaps dozens of harmonics, which all have quite different pitches when heard separately. Helmholtz was able to single out individual harmonics of tones by means of his spherical resonators (see Fig. 3.5), but he could not explain the fusion of these harmonics into single tones with just one pitch. This puzzle has remained a challenge for scientists from the nineteenth century to the present day (Stumpf, 1890; Ebeling, 2008).
To overcome this, and other drawbacks of Helmholtz's resonance theory, the Scottish physiologist William Rutherford (1839–1899) proposed an alternative idea. In 1886, in a lecture to the British Association for the Advancement of Science in Birmingham, he postulated that the ear functions like a telephone, the pioneering device that had been invented only ten years earlier by Alexander Graham Bell (1847–1922). While Helmholtz had compared the basilar membrane to a piano, Rutherford, on the basis of his own anatomical investigations, had come to the conclusion that the basilar membrane must vibrate as a whole, just like the membrane of a telephone.
Consequently, the sensory cells of the inner ear should pick up the mechanical sound vibrations, transform them into electrical currents and transmit the resulting signal via ‘cables’ (the auditory nerve fibres) to the receiver (the brain). As a result, the essential auditory information would be coded exclusively in the time domain and the nerve fibres would have to transfer merely temporal signals to the brain for further temporal processing.
A potential problem with this theory, however, is that it would require a very high transmission rate to the brain.
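The scale of that problem can be put into numbers with a back-of-envelope calculation. The figures below are illustrative textbook values, not numbers from this chapter: a single auditory nerve fibre can sustain at most a few hundred spikes per second, far below the roughly 20 kHz upper limit of human hearing.

```python
# Back-of-envelope check of the telephone theory's transmission problem.
# Illustrative values (assumptions, not taken from the text):
max_firing_rate = 500        # spikes/s, rough ceiling for a single nerve fibre
upper_hearing_limit = 20000  # Hz, upper edge of human hearing

# A strict one-spike-per-cycle 'telephone' code would need firing rates
# matching the signal frequency, so a single fibre falls short by:
shortfall = upper_hearing_limit / max_firing_rate
print(shortfall)  # → 40.0
```

Historically, gaps of this kind motivated the volley principle, in which groups of fibres share the load by firing on alternate cycles.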
7 - Periodicity coding in the brainstem
pp 88-104
Summary
Periodicity coding in the auditory nerve
Temporal coding
As we know from the previous chapter, a pure tone activates a certain place on the basilar membrane maximally, and hair cells at this place code the frequency of that signal by their position along the membrane. However, we also know that this labelled line coding, or ‘place information’, alone is not sufficient to explain the acuity of our perception and that the central auditory system obviously must make use of additional information that is supplied by the temporal firing patterns of the nerve fibres.
We also saw that the limited frequency resolution of our cochlea is actually advantageous for the temporal analysis of – for example – a harmonic sound. When its frequency range is sufficiently broad, neural responses in many frequency channels will be elicited by a waveform that is a superposition of the sound's harmonic components. As a consequence, hair cells signal to the central auditory system the presence and intensity of the harmonics to which they are tuned, but they also transmit temporal information about this superposition. Because all the components of a harmonic sound are integer multiples of its fundamental frequency, the periodicity of their superposition will correspond to that of the fundamental frequency (or a multiple thereof, if certain harmonics are missing). As a result, the fundamental of a harmonic sound is encoded by the temporal discharge patterns of auditory nerve fibres even if it is physically absent. This explains why, after a corresponding temporal analysis in the central auditory system, we are able to perceive the pitch of what Schouten called the ‘missing fundamental’.
If the fundamental can be coded in the auditory nerve, the question arises as to how the corresponding temporal information is decoded in the nervous system and how the properties of the nerve and the neurons in the brain contribute to this temporal analysis.
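The argument above can be checked numerically. The following minimal sketch (my own illustration, not from the book) builds a harmonic complex whose 200 Hz fundamental is physically absent and uses autocorrelation as a crude stand-in for the temporal interval analysis performed by the auditory system:

```python
import numpy as np

fs = 20000                      # sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)   # 100 ms of signal
f0 = 200.0                      # fundamental frequency (Hz)

# Harmonics 2-5 only: the 200 Hz fundamental itself is missing
signal = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(2, 6))

# Autocorrelation, a crude stand-in for neural interval analysis
ac = np.correlate(signal, signal, mode='full')[len(signal) - 1:]
lag = np.argmax(ac[50:]) + 50   # skip the trivial zero-lag peak
period_ms = lag / fs * 1000
print(period_ms)  # → 5.0, the period of the absent 200 Hz fundamental
```

The strongest non-zero autocorrelation peak sits at 5 ms, the period of the missing fundamental, because the superposition of harmonics 2–5 repeats at exactly that interval.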
3 - The discovery of the missing fundamental
pp 24-34
Summary
‘Wodurch kann über die Frage, was zu einem Tone gehöre, entschieden werden, als eben durch das Ohr?’
‘How else can the question of what belongs to a tone be decided, than by the ear?’
(August Seebeck, Über die Definition des Tones, 1844)
The sound of sirens
At the beginning of the nineteenth century, the field of experimental acoustics was floundering. One of the main problems at this time was a purely technical one – there was no easy method to vary the frequency of tones in a precise and reproducible way. Experimental research relied on the use of tuning forks or oscillating strings, or on simple musical instruments such as flutes and, in practice, these methods were imprecise and somewhat difficult to handle. Therefore no great progress could be made in understanding the essence and perception of tones.
A turning point for acoustical research came with the invention of the siren as a scientific instrument. Originally introduced by Charles Cagniard de la Tour (1819), the siren made it possible to generate tones of a precise and quantifiable frequency. It was this new-found accuracy and reproducibility in tone production which opened the door for acoustics to become both an independent and an exact science.
Several prominent researchers invented their own versions of the siren for acoustical experimentation – one of them, a young German physicist named August Seebeck (Fig. 3.1), constructed a siren which incorporated a number of improvements over Cagniard's original design (Fig. 3.2 shows a simple version). It allowed the creation of continuous, clear, steady tones over a large range of pitches. The tones were easily reproducible and could also be quantitatively defined by means of a mechanical counter. Moreover, since the tones were related to the geometric arrangement of the holes on a rotating disc, octaves and other harmonic intervals could be simultaneously generated with high precision.
References
pp 202-221
1 - Historical aspects of harmony
pp 1-10
Summary
‘Musica est exercitium arithmeticae occultum nescientis se numerare animi.’
‘Music is a hidden arithmetical exercise of the soul, which does not know that it is dealing with numbers.’
(Gottfried Wilhelm Leibniz, 1646–1716)
The origin of music
For thousands of years, music has played an essential role in social interactions, rituals and ceremonies, although its exact origins are shrouded in mystery. From an evolutionary viewpoint, the desire to produce musical sounds is not unique to man. We are all familiar with the sound of birds singing. Some non-human primates also sing; monkeys in the rainforests of Asia and gibbons in the jungles of Thailand produce haunting musical calls: their duets probably serve to strengthen pair bonding, but singing may also serve to alarm other group members (Chung and Geissmann, 2000; Geissmann, 2002). If musical sounds are important in primate communication, they must also have been essential in early human communication. Perhaps as suggested by Charles Darwin (Darwin, 2004), singing may even have preceded speech and might have been the primary method of human communication.
One can only speculate as to why our ancestors produced their first musical instruments; it may have been an attempt by early man to imitate natural sounds, such as the wind blowing through a hollow reed or the singing of a bird. The earliest known instruments are flutes, made from bone or mammoth ivory, dating back tens of thousands of years to the Palaeolithic Age. For example, in the Geißenklösterle cave near Blaubeuren in Germany, archaeologists uncovered the oldest known musical instrument – a primitive, but carefully constructed, swan-bone flute estimated to be at least 35 000 years old (Fig. 1.1; Hahn and Münzel, 1995; Münzel et al., 2002). Similar flutes have been found at other locations in Europe, suggesting that music was certainly a part of Stone Age life (see also Section 5.5).
But more sophisticated forms of music also have ancient roots.
Foreword
pp xiii-xiv
Summary
Human sensing abilities have been shaped and refined over long evolutionary periods. Hearing has adapted to serve us well in many different tasks and situations, helping us to orient ourselves and to survive in the world. The general properties of peripheral and central mechanisms of hearing are highly conserved across vertebrates due to very similar environmental conditions. Species-specific variations do exist, such as the use of ultrasound for orientation in bats and cetaceans, but they are usually founded on quantitative rather than qualitative differences to generally applicable principles of hearing and brain mechanisms. Basic hearing tasks for survival include detecting, localizing and identifying sound sources in cluttered or noisy environments. Another critical role of vertebrate hearing is the control and analysis of communication sounds which, in humans, led to the highly developed ability of speech production and perception. Speech, like many other sounds involving resonance phenomena, contains harmonic elements, i.e. frequency components that are integer multiples of a common ‘fundamental’ frequency. These sounds can evoke a perceptual phenomenon, periodicity or virtual pitch, that is distinct from other perceptual dimensions of sounds. A most human endeavour, the production and enjoyment of music, is fundamentally based on this perceptual phenomenon. Studies of the brain mechanisms that lead to this perception, its psychophysical manifestation and, eventually, cognitive and emotional benefits have progressed for more than a century, as is outlined in this volume, but many aspects still remain unresolved.
A helpful aspect in resolving this matter may be found in the fact that humans have surrounded themselves with an environment of their own creation. Based on our ability to use tools we have created artificial soundscapes that serve, entertain and move us. Unsurprisingly, many of those sound aspects have been, often inadvertently, chosen to match or most effectively engage our biological sound analysis system. Examples include the choice for frequency transitions in ambulance sirens to catch our attention, or the relationship of voices in polyphonic music.
2 - Sound and periodicity
pp 11-23
Summary
‘A musical sound is produced by pulses or waves which follow each other at regular intervals with sufficient rapidity of succession.’
(John Tyndall, ‘Sound’, Lecture II, Longmans & Co., 1893)
Sound is movement
The Ancient Greeks had only a limited understanding of the physical nature of sound, but they did realize that sound is produced by the movement of an object or of parts of an object. They also knew that the pitch of the sound is related to certain physical attributes of the sound-emitting object, such as its weight or its length. Archytas of Tarentum, in the fourth century BC, may have been the first to make the connection between the vibration frequency of an object and the pitch of the resulting sound. Aristotle (384–322 BC) noticed the phenomenon of resonance in strings, and even observed that the tone of a vibrating string contains the octave of its fundamental pitch. Yet for thousands of years the mechanisms behind the creation of harmonic sounds, i.e. the periodic oscillations of strings or other vibrating structures, remained a mystery.
In addition, very little was known of the means by which sound propagates – how it is transmitted through the air to reach our ears. Aristotle compared echoes with balls bouncing back from a wall and correctly concluded that sound must travel by means of waves. Almost 2000 years later, Leonardo da Vinci (1452–1519) also concluded that sound propagates just like waves in water – he compared acoustical echoes to the reflection of waves on the banks of a pond.
In the seventeenth century, the British scientist Robert Boyle (1626–1691) convincingly demonstrated for the first time that sound needs a medium, such as air, to propagate. From his experiments it became clear that the vibration of a membrane, a tube or a string elicits corresponding oscillations of adjacent molecules in the air.
9 - Theories of periodicity coding
pp 122-142
Summary
Synchronization and harmony
In early 1976 I was standing chest-deep in the water of a flooded island, not far from the mouth of the Rio Negro, whose black water could still be seen alongside the white floods of the Amazon. The reason for this (for me, at least) unusual location was that my colleagues and I, accompanied by experts and fishermen from the Instituto Nacional de Pesquisas da Amazônia in Manaus, were fishing with nets and had to remove the water plants in which our prey – various species of electric fish – was hiding.
These fish are able to orientate and communicate in this dull environment using only their weak electric signals, and we wanted to study the amazing neuronal networks that make this possible. The result was worth all the trouble. One of the things we discovered later in the laboratory was that a certain electric fish (Sternarchorhamphus), which generates electrical signals with frequencies well above 1000 Hz, was able to couple its sinusoidal signals in phase with our electronic generators (Fig. 9.1; Langner and Scheich, 1978).
Another observation was that their phase-coupling could be one to one, but could also be in other harmonic relationships. This finding was in line with the near octave (1:2) relationship of the signals from males and females of another electric fish, Sternopygus macrurus, which was observed during courtship behaviour (Hopkins, 1974). Since our own haul produced a variety of species of electric fish that use different frequencies, we were able to see that Sternarchorhynchus also phase-coupled to signals of other species. Some of these discharged signals were several hundred Hertz below their own frequency and for a short time the two frequencies would stay in a harmonic relation, e.g. 2:3, 2:5 or 1:3 (mosquitoes use – for the same purpose – their wing-beat frequency; Warren et al., 2009; Gibson et al., 2010).
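Harmonic phase-coupling of this kind is a generic property of coupled oscillators. A minimal sketch using the sine circle map – a standard textbook model of two coupled oscillators, with parameters chosen by me for illustration and not drawn from the fish experiments – shows how even weak coupling pulls a slightly detuned oscillator onto the nearby 1:2 harmonic ratio:

```python
import math

def winding_number(omega, k=1.0, n_iter=5000):
    """Average phase advance per step of the sine circle map,
    a standard toy model of two coupled oscillators."""
    theta = 0.0
    for _ in range(n_iter):
        theta += omega + (k / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return theta / n_iter

# Without coupling (k=0) the frequency ratio stays at the detuned value;
# with coupling (k=1) it locks onto the nearby harmonic ratio 1:2.
print(round(winding_number(0.51, k=0.0), 3))  # → 0.51
print(round(winding_number(0.51, k=1.0), 3))  # → 0.5
```

The same map locks onto other rational ratios (2:3, 1:3, …) for other detunings, which is the mathematical counterpart of the harmonic relationships observed between the fish signals.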
5 - The auditory time constant
pp 46-59
Summary
‘Panpipes and flutes obey the same law.’
From a Chinese fairy tale by Tung Chou Li Kuo Tse, about 1600
(Hornbostel, 1928)
A quantum effect of pitch shift
In the previous chapter the pitch shifts measured by Schouten were analysed by employing periods instead of frequencies for the signal as well as for the perceived pitches. The resulting linear approximations of the pitch shifts in the time domain revealed auditory mechanisms which are obviously able to extract integer multiples of the period of the carrier.
Moreover, it also provided evidence for perceptual time constants that are not related to the periods of the signal. Actually, these constants may be put down to just one, since they are also integer multiples, not of a signal period but of an intrinsic auditory time constant of 0.40 ms. It will be shown in the following that this value is characteristic for the hearing system and seems to function as a kind of auditory benchmark, not only in human speech and music, but also in animal communication (see also Chapter 9).
The presumed auditory time constant becomes even more plausible by another perceptual effect that may be called the ‘third’ or ‘quantal effect of pitch shift’ (Langner, 1981, 1983). When experimental subjects compared the residual pitch of an AM signal (see Fig. 4.4) with that of a pure tone, they frequently reported that, although they heard the pitch increase with the carrier frequency fc, it did so not in a continuous, but in a staircase-like manner. The pitch steps were small but of equal size when measured in the time domain – that is, as temporal intervals.
As an example, Fig. 5.1 shows results from such measurements (Langner, 1981). Two subjects compared the pitch of an AM signal with varying carrier frequency to that of pure tones.
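The staircase can be mimicked with a toy quantization rule. This is only an illustrative sketch of the idea: the 0.4 ms constant is taken from the text, but the helper function is my own invention, snapping perceived periods to the nearest integer multiple of the auditory time constant.

```python
TAU = 0.4  # ms, the intrinsic auditory time constant proposed in the text

def quantized_period(period_ms, tau=TAU):
    """Snap a candidate pitch period to the nearest multiple of tau (toy model)."""
    return round(round(period_ms / tau) * tau, 2)

# A smoothly varying stimulus period yields a staircase of perceived periods:
# 4.3 and 4.5 both map to 4.4 ms; 4.7 and 4.9 both map to 4.8 ms.
for p in [4.3, 4.5, 4.7, 4.9, 5.1]:
    print(p, '->', quantized_period(p))
```

Neighbouring stimulus periods collapse onto the same perceived step, which is exactly the staircase pattern reported by the subjects.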
6 - Pathways of hearing
pp 60-87
Summary
From the cochlea to the cortex
Acoustic signals transferred from the outer to the inner ear elicit neural signals in the cochlea that travel in the form of nerve impulses along the auditory pathways. They follow ascending fibre tracts from the brainstem to the midbrain and from there through the thalamus to the cerebral cortex (Fig. 6.1). Each of these fibre tracts contains many thousands of nerve fibres (axons) that connect the nerve cells of a series of auditory processing centres (nuclei).
The auditory nerve connects the cochlea with the entrance station to the central auditory system, the cochlear nucleus (CN). From here, acoustic information travels to the inferior colliculus via both direct and indirect routes. This midbrain nucleus is a major processing centre for all auditory information on its way to the cortex. It receives input from the CN on both sides, from the nuclei of the lateral lemniscus and also from the superior olivary complex (containing binaural information). As we will see (Chapters 8–11), the inferior colliculus also plays a central role in periodicity processing and therefore also in this book.
The next level of the auditory pathway is the medial geniculate body, located in the thalamus. This nucleus is often considered the gateway to the auditory cortex, which forms part of the cortical temporal lobe. Here fundamental acoustic features like timbre, pitch, loudness and localization, which have already been analysed in the lower processing centres, are processed further. Finally, fibres of descending (efferent) pathways course downwards from the cortex and other auditory areas to influence the lower processing centres. They provide a negative feedback, or inhibitory control, which influences the sensitivity and selectivity of these nuclei.
The ear
The receiving system
Our ear consists of three main parts: the outer, middle and the inner ear (Fig. 6.2). The pinna and ear canal of the outer ear collect sound waves and guide them to the eardrum (tympanic membrane).
Preface
pp xi-xii
Summary
Sound is a vital tool for humans and animals. We communicate with each other through speech, we convey emotion by laughing or crying, but we also purposefully create sounds using our voices or musical instruments just because we perceive them to be appealing or beautiful. The pitch, rhythm and melody of speech and music can communicate emotions like fear, pleasure and anger quite quickly and efficiently. Moreover, as humans we seem to have a powerful urge to fill the world with sounds of our own creation, with the result that these days music surrounds us virtually everywhere. The need to make music, to listen to it and to dance to it stretches back to the very beginnings of our history: for many thousands of years music has played an essential role in our social interactions, rituals and ceremonies. The sixth-century Roman philosopher and great musical theorist Boethius stated quite simply:
it appears beyond doubt that music is so naturally united with us that we cannot be free from it even if desired.
We all know that some combinations of musical tones sound particularly good when played together or in succession; we call these ‘consonant’ or ‘harmonious’, while others sound harsh or ‘dissonant’. If asked what combinations of sounds they find pleasant, or at least interesting, people from different cultural backgrounds may not completely agree. Different forms of music prevail in different regions of the world, and musical instruments and composition have become progressively more sophisticated as civilization has advanced. Nevertheless, there are certain combinations of tones that seem to have universal appeal. They are preferred everywhere and form the basis of musical systems throughout the world. Clearly, there must be some universal rules that are crucial to our perception of musical harmony.
The question of what these rules are and what might be the role of whole numbers dates back to the time of the ancient Greeks.
8 - Periodicity coding in the midbrain
pp 105-121
Summary
Coding of complex sounds
Processing of species-specific vocalizations
In the early 1970s, the customary silence of the green woods surrounding the Max Planck Institute for Biophysical Chemistry in Göttingen-Nikolausberg was disturbed by harsh animal cries. The strange grating sounds emanated from a hutch on the roof of one of the ivory towers of this renowned institute. Underneath, seemingly oblivious to the cacophony overhead, scientists in Professor Otto Creutzfeldt's neurophysiology department were busy investigating how the brain perceives and analyses olfactory, somatosensory, visual and auditory information. The hutch that was the source of these unusual cries housed helmeted guinea fowl (Fig. 8.1), the subjects of neurophysiological research for a team of four young scientists, Vreni Maier, Henning Scheich, Rainer Koch and myself, who were fascinated by their vocalizations.
We selected these long-necked relatives of pheasants for our research because we wanted to investigate the processing of species-specific vocalizations in their central auditory system. Our working hypothesis was that recognition of complex communication sounds might involve feature detectors, i.e. neurons which respond preferentially, if not exclusively, to particular combinations of acoustic parameters characteristic for these sounds.
As our investigations revealed (Maier, 1982; Scheich et al., 1983), besides formants comparable to those in human vowels, another important feature of guinea fowl communication sounds is periodic amplitude modulation (AM) (Fig. 8.2). As harmony, formant structure and more or less rapid AMs are characteristic of many communication sounds, the neuronal coding of guinea fowl communication is an excellent model for auditory coding mechanisms in general, including human speech processing.
For our experiments we selected the auditory midbrain nucleus MLD (mesencephalicus lateralis, pars dorsalis), which in birds corresponds to the mammalian inferior colliculus. Our aim was to record from single neurons in the midbrain while the birds were sitting in a soundproof booth, listening to a tape recording of species-specific vocalizations.
10 - Periodotopy
pp 143-161
Summary
Spatial representation of pitch
Mapping from time to place
Tonotopic maps are typical examples of how perceptual parameters can be represented in the nervous system. Information about acoustic frequency is represented by an orderly topographic arrangement of neurons according to their individual frequency tuning (characteristic frequency, CF). As we have seen in the previous chapters, disc-shaped cells in the inferior colliculus are usually tuned in two ways. They have a preference for a certain CF, because they receive their input from a particular location in the cochlea. In addition they act as coincidence detectors that are tuned to the periodic envelopes of harmonic signals. These neurons therefore preferentially respond to signals in a more or less narrow frequency range, especially if these signals are modulated with the ‘right’ (best) modulation frequency (BMF).
As a result of coincidence detection, temporal information is to a certain extent transferred into a rate (activation) code. On the other hand, synchronization deteriorates strongly for modulation frequencies above 300 Hz in the auditory midbrain and above 100 Hz in the cortex (Langner, 1992). It seems therefore reasonable to assume that the loss of temporal information must somehow be compensated for and that it could possibly be transformed into a spatial code. Accordingly, we were able to show that, as a result of the time analysis in the brainstem and the inferior colliculus, periodicity is represented topographically in a periodotopic map (Schreiner and Langner, 1988).
A black-box model of a subpart of the periodicity processing network (Fig. 10.1) shows how the periodicity analysis may result in a common map for frequency and periodicity. On the left side the frequency analysis of a small part of the cochlea is shown as a filter bank with many parallel channels (hair cells). The time information from these parallel channels is transferred with different delays via the dorsal and ventral parts of the cochlear nucleus (CN) to one of about 30 neuronal layers in the inferior colliculus.
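The tuning of such a coincidence unit can be sketched in a few lines. This toy model is my own simplification, not the network of Fig. 10.1: it multiplies a periodic envelope with a copy of itself delayed by a fixed neuronal delay, so the response peaks when the delay matches the modulation period, giving the unit a best modulation frequency near 1000/delay Hz.

```python
import numpy as np

def coincidence_response(mod_freq, delay_ms, fs=10000, dur=0.5):
    """Mean output of a toy coincidence detector fed a periodic envelope
    and a copy of that envelope delayed by delay_ms."""
    t = np.arange(0, dur, 1 / fs)
    env = 0.5 * (1 + np.sin(2 * np.pi * mod_freq * t))  # periodic envelope
    d = int(round(delay_ms / 1000 * fs))                # delay in samples
    return float(np.mean(env[d:] * env[:-d]))

delay = 5.0  # ms of neuronal delay, so the BMF should lie near 200 Hz
rates = {f: coincidence_response(f, delay) for f in [100, 150, 200, 250, 300]}
best = max(rates, key=rates.get)
print(best)  # → 200
```

In keeping with the book's theme, integer multiples of the BMF (here 400 Hz, 600 Hz, …) would excite such a unit just as strongly, which is one way harmonic ambiguity enters the periodicity code.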