In 1963, an article entitled ‘The Digital Computer as a Musical Instrument’ appeared in the journal Science, in which Max Mathews, the father of computer music, declared the birth of computer-generated sound. The article was the first to describe the possibility of creating sounds with computers, and explained that ‘there are no theoretical limitations to the performance of the computer as a source of musical sounds, in contrast to the performance of ordinary instruments’ (Mathews, 1963).
Sound synthesis can be defined as the production and manipulation of sound using mathematical algorithms. A useful classification of sound synthesis techniques was proposed by Julius O. Smith (Smith, 1991), who distinguishes four categories: processed recordings, abstract algorithms, spectral models and physical models. Techniques such as wavetable synthesis and granular synthesis belong, according to Smith, to the category of processed recordings; regarding them as synthesis proper is somewhat at odds with the idea that synthetic sounds are generated from scratch, since both require some initial sonic material. Abstract algorithms include techniques such as amplitude modulation, ring modulation, frequency modulation and waveshaping. Spectral models simulate sounds as they are received and perceived by the ear, and include techniques such as source-filter synthesis, additive synthesis, the phase vocoder and subtractive synthesis. Smith's final category, physical models, simulates the source of sound production itself. We shall consider all of these categories in this chapter.
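To make one of these categories concrete, additive synthesis (a spectral-model technique) builds a tone by summing sinusoidal partials. The sketch below is illustrative rather than taken from any particular implementation; the function name, the harmonic partials and the 1/k amplitude roll-off are assumptions chosen for simplicity.

```python
import math

def additive_tone(f0, duration=1.0, sample_rate=44100, n_partials=8):
    """Sum harmonic sine partials at k*f0 with 1/k amplitude roll-off.
    (Illustrative parameter choices, not a definitive recipe.)"""
    n_samples = int(duration * sample_rate)
    tone = []
    for i in range(n_samples):
        t = i / sample_rate
        # Each partial is a sinusoid at an integer multiple of f0.
        sample = sum(math.sin(2 * math.pi * k * f0 * t) / k
                     for k in range(1, n_partials + 1))
        tone.append(sample)
    # Normalise the peak amplitude to 1 so the signal fits in [-1, 1].
    peak = max(abs(x) for x in tone)
    return [x / peak for x in tone]

tone = additive_tone(220.0)  # one second of a 220 Hz harmonic tone
```

In practice the per-partial amplitudes (and their envelopes over time) are what shape the timbre; a fixed 1/k roll-off merely approximates a bright, sawtooth-like spectrum.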