Informally, we can describe prosody as the part of human communication which expresses emotion, emphasises words, reveals the speaker's attitude, breaks a sentence into phrases, governs sentence rhythm and controls the intonation, pitch or tune of the utterance. This chapter describes how to predict prosodic form from the text, while Chapter 9 goes on to describe how to synthesize the acoustics of prosodic expression from these form representations. In this chapter we first introduce the various manifestations of prosody in terms of phrasing, prominence and intonation. Next we describe how prosody is used in communication, and in particular explain why it has a much more direct effect on the final speech patterns than verbal communication does. Finally, we describe techniques for predicting what prosody should be generated from a text input.
Prosodic form
In our discussion of the verbal component of language, we saw that, while there were many difficulties in pinning down the exact nature of words and phonemes, broadly speaking they were fairly easy to find, identify and demarcate. Furthermore, people can do this readily without much specialist linguistic training – given a simple sentence, most people can say which words were spoken, and with some guidance people have little difficulty in identifying the basic sounds in that sentence.
The situation is nowhere near as clear for prosody, and it may amaze newcomers to this topic to discover that there are no widely agreed description or representation systems for any aspect of prosody, be it to do with emotion, intonation, phrasing or rhythm.
In this chapter we turn to the topic of speech analysis, which tackles the problem of deriving representations from recordings of real speech signals. This book is of course concerned with speech synthesis, and at first sight it may seem that the techniques for generating speech “bottom-up” described in Chapters 10 and 11 are sufficient for our purpose. As we shall see, however, many techniques in speech synthesis actually rely on an analysis phase, which captures key properties of real speech and then uses these to generate new speech signals. In addition, the various techniques here enable useful characterisation of real speech phenomena for purposes of visualisation or statistical analysis. Speech analysis, then, is the process of converting a speech signal into an alternative representation that in some way better represents the information we are interested in. We need to perform analysis because waveforms do not usually give us that information directly.
Nearly all speech analysis is concerned with three key problems. First, we wish to remove the influence of phase; second, we wish to perform source/filter separation, so that we can study the spectral envelope of sounds independently of the source that they are spoken with. Finally, we often wish to transform these spectral envelopes and source signals into other representations that are coded more efficiently, have certain robustness properties, or more clearly show the linguistic information we require.
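As a minimal, hedged illustration of the first of these steps (discarding phase), the following Python sketch computes the magnitude spectrum of a single windowed frame; the frame length, window choice and synthetic test signal are assumptions made purely for illustration, not the chapter's own procedure:

```python
# Sketch: discard phase by keeping only the magnitude of a frame's spectrum.
import numpy as np

def magnitude_spectrum(frame):
    """Return the magnitude spectrum of one windowed speech frame."""
    windowed = frame * np.hamming(len(frame))   # taper the frame edges
    spectrum = np.fft.rfft(windowed)            # complex spectrum (magnitude + phase)
    return np.abs(spectrum)                     # keep magnitude, discard phase

# Example on a synthetic 25 ms frame at 16 kHz (a 200 Hz sinusoid).
fs = 16000
t = np.arange(int(0.025 * fs)) / fs
frame = np.sin(2 * np.pi * 200 * t)
print(magnitude_spectrum(frame).shape)          # (201,) frequency bins
```

Source/filter separation and the further transformations mentioned above would then operate on representations of this kind rather than on the raw waveform.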
This final chapter concerns cryptography, the principle of securing information against access or tampering by third parties. Classical cryptography refers to the manipulation of classical bits for this purpose, while quantum cryptography can be viewed as doing the same with qubits. I describe these two approaches in the same chapter because, in my view, the field of cryptography should be understood as a whole and appreciated within such a broader framework, as opposed to focusing on the specific applications offered by the quantum approach. I thus begin by introducing the notions of message encryption, message decryption, and code breaking, the action of retrieving the message's information contents without knowledge of the code's secret algorithm or secret key. I then consider the basic algorithms for achieving encryption and decryption with binary numbers, which leads to the early IBM concept of the Lucifer cryptosystem, the ancestor of the first data encryption standard (DES). The principle of double-key encryption, which alleviates the problem of key exchange, is first considered as an elegant solution, but one that is unsafe against code breaking. Then the revolutionary principles of cryptography without key exchange and public-key cryptography (PKC) are considered, the latter also being known as RSA. The PKC–RSA cryptosystem is based on the extreme difficulty of factorizing large numbers, which is the reason for the description of Shor's factorization algorithm given earlier in Chapter 20.
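To make the factorization point concrete, here is a toy Python sketch of the RSA idea using deliberately tiny, made-up primes (real systems use primes hundreds of digits long); it illustrates the principle only and is not the chapter's own construction:

```python
# Toy RSA with tiny primes (illustration only; numbers are made up).
p, q = 61, 53
n = p * q                 # public modulus: 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent 2753 (modular inverse; Python 3.8+)

message = 65
ciphertext = pow(message, e, n)      # encrypt with the public key (e, n) -> 2790
recovered = pow(ciphertext, d, n)    # decrypt with the private key (d, n)
assert recovered == message
# Breaking the cipher without d amounts to factorizing n back into p and q,
# which is computationally infeasible for large n -- hence the relevance of
# Shor's factorization algorithm discussed in Chapter 20.
```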
This appendix gives a brief guide to the probability theory needed at various stages in the book. The following is too brief to serve as a first exposure to probability; rather, it is intended as a reference. Good introductory books on probability include Bishop, and Duda, Hart and Stork.
Discrete probabilities
Discrete events are the simplest to interpret. For example, what is the probability of
it raining tomorrow?
a 6 being thrown on a die?
Probability can be thought of as the chance of a particular event occurring. We limit our probability measure to the range 0 to 1 (see the worked example below), where
lower numbers indicate that the event is less likely to occur; 0 indicates that it will never occur;
higher numbers indicate that the event is more likely to occur; 1 indicates that the event will definitely occur.
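As a simple worked example (assuming a fair six-sided die), the probability of a 6 being thrown is

$$P(\text{a 6 is thrown}) = \tfrac{1}{6} \approx 0.17,$$

which lies, as required, between 0 and 1.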
We like to think that we have a good grasp of both estimating and using probability. For simple cases such as “will it rain tomorrow?” we can do reasonably well. However, as situations get more complicated, things are not always so clear. The aim of probability theory is to give us a mathematically sound way of inferring information using probabilities.
Discrete random variables
Let some event have M possible outcomes. We are interested in the probability of each of these outcomes occurring.
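Writing the outcomes as x_1, x_2, ..., x_M, the corresponding probabilities must be non-negative and sum to one:

$$P(x_i) \ge 0, \qquad \sum_{i=1}^{M} P(x_i) = 1.$$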
The previous chapter introduced the concept of coding optimality, as based on variable-length codewords. As we have learnt, an optimal code is one for which the mean codeword length closely approaches or is equal to the source entropy. There exist several families of codes that can be called optimal, as based on various types of algorithms. This chapter, and the following, will provide an overview of this rich subject, which finds many applications in communications, in particular in the domain of data compression. In this chapter, I will introduce Huffman codes, and then I will describe how they can be used to perform data compression to the limits predicted by Shannon. I will then introduce the principle of block codes, which also enable data compression.
Huffman codes
As we have learnt earlier, variable-length codes are in the general case more efficient than fixed-length ones. The most frequent source symbols are assigned the shortest codewords, and the reverse for the less frequent ones. The coding-tree method makes it possible to find some heuristic codeword assignment, according to the above rule. Despite the lack of further guidance, the result proved effective, considering that we obtained η = 96.23% with a ternary coding of the English-character source (see Fig. 8.3, Table 8.3). But we have no clue as to whether other coding trees with greater coding efficiencies may ever exist, unless we try out all the possibilities, which is impractical.
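By contrast, the Huffman procedure introduced in this chapter builds an optimal tree directly, by repeatedly merging the two least-probable entries. The Python sketch below illustrates the binary version with made-up symbol probabilities (the chapter's own examples use the English-character source):

```python
# Minimal binary Huffman coding sketch; symbol probabilities are illustrative.
import heapq

def huffman_code(probs):
    """probs: dict symbol -> probability. Returns dict symbol -> binary codeword."""
    # Heap entries: (probability, tie-breaker, {symbol: partial codeword}).
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)      # two least-probable entries
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

code = huffman_code({"e": 0.4, "t": 0.25, "a": 0.2, "q": 0.15})
print(code)  # {'e': '0', 't': '10', 'q': '110', 'a': '111'}
```

Note how the rarest symbols ('a' and 'q' here) end up with the longest codewords, in line with the rule stated above.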
This mathematically intensive chapter takes us through our first steps in the domain of quantum computation (QC) algorithms. The simplest of them is the Deutsch algorithm, which makes it possible to determine whether or not a Boolean function is constant for any input. The key result is that this QC algorithm provides the answer at once, whereas in the classical case it would take two independent calculations. I next describe the generalization of this algorithm to n qubits, referred to as the Deutsch–Jozsa algorithm. Although they have no specific or useful applications in quantum computing, both algorithms represent a most elegant means of introducing the concept of quantum computation parallelism. I then describe two of the most important QC algorithms, which nicely exploit quantum parallelism. The first is the quantum Fourier transform (QFT), for which a detailed analysis of QFT circuits and quantum-gate requirements is also provided. As will be shown in the next chapter, a key application of the QFT is Shor's famous algorithm, which makes it possible to factor numbers into primes in polynomial time. The second algorithm, no less famous than Shor's, is the Grover quantum database search, whose application is the identification of database items with a quadratic gain in speed.
Deutsch algorithm
Our exploration of quantum algorithms shall begin with the solution of a very basic problem: finding whether or not a Boolean function f(x) is a constant.
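As a hedged illustration of what answering “at once” means, the NumPy sketch below classically simulates the standard Deutsch circuit (Hadamards, a single oracle call, a final Hadamard, measurement of the first qubit); it is a state-vector simulation for intuition only, not the chapter's formal treatment:

```python
# Classical simulation of the Deutsch algorithm: decide whether a one-bit
# Boolean function f is constant or balanced with one oracle application.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)

def oracle(f):
    """4x4 unitary U_f with U_f|x>|y> = |x>|y XOR f(x)> (index = 2*x + y)."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.zeros(4)
    state[0b01] = 1                      # start in |0>|1>
    state = np.kron(H, H) @ state        # Hadamards on both qubits
    state = oracle(f) @ state            # single oracle query
    state = np.kron(H, I) @ state        # Hadamard on the first qubit
    p_first_is_0 = state[0]**2 + state[1]**2
    return "constant" if p_first_is_0 > 0.5 else "balanced"

print(deutsch(lambda x: 0))   # constant function  -> "constant"
print(deutsch(lambda x: x))   # balanced function  -> "balanced"
```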
This chapter is about coding information, which is the art of packaging and formatting information into meaningful codewords. Such codewords are meant to be recognized by computing machines for efficient processing or by human beings for practical understanding. The number of possible codes and corresponding codewords is infinite, just like the number of events to which information can be associated, in Shannon's meaning. This is the point where information theory starts revealing its elegance and power. We will learn that codes can be characterized by a certain efficiency, which implies that some codes are more efficient than others. This will lead us to a description of the first of Shannon's theorems, concerning source coding. As we shall see, coding is a rich subject with many practical consequences and applications, in particular in the way we communicate information efficiently. We will start our exploration of information coding with numbers and then with language, which conveys some background and flavor in preparation for the more formal theory leading to the abstract concept of code optimality.
Coding numbers
Consider a source made of N different events. We can label the events through a set of numbers ranging from 1 to N, which constitute a basic source code. This code represents one out of N! different possibilities. In the code, each of the numbers represents a codeword.
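As a quick worked example with three hypothetical events a, b and c:

$$N = 3: \quad N! = 3 \times 2 \times 1 = 6$$

possible codes, corresponding to the six ways of assigning the labels 1, 2 and 3 to the three events.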
The speech-production process was qualitatively described in Chapter 7. There we showed that speech is produced by a source, such as the glottis, which is subsequently modified by the vocal tract acting as a filter. In this chapter, we turn our attention to developing a more-formal quantitative model of speech production, using the techniques of signals and filters described in Chapter 10.
The acoustic theory of speech production
Such models often come under the heading of the acoustic theory of speech production, which refers both to the general field of research in mathematical speech-production models and to the book of that title by Fant. Although considerable work in this field had been done prior to its publication, this book was the first to bring together the various strands of work and describe the whole process in a unified manner. Furthermore, Fant backed up his study with extensive empirical work using X-rays and mechanical models to test and verify the speech-production models being proposed. Since then, many refinements to the model have been made as researchers have tried to improve the accuracy and practicality of these models. Here we focus on the single most widely accepted model, but conclude the chapter with a discussion of variations on it.
As with any modelling process, we have to reach a compromise between a model that accurately describes the phenomena in question and one that is simple, effective and suited to practical needs.
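As a rough, hedged sketch of the source-filter idea underlying this theory, the Python fragment below passes an impulse-train “glottal” source through an all-pole “vocal tract” filter; the pitch, pole radius and formant frequencies are invented for illustration and are not taken from the model developed in this chapter:

```python
# Source-filter sketch: impulse-train source filtered by an all-pole resonator.
import numpy as np
from scipy.signal import lfilter

fs = 16000                          # sample rate (Hz)
f0 = 120                            # fundamental frequency of the source (Hz)
source = np.zeros(int(0.5 * fs))
source[::fs // f0] = 1.0            # impulse train standing in for glottal pulses

# All-pole "vocal tract" filter: one resonance (formant) per complex pole pair.
poles = []
for formant in (500, 1500, 2500):                 # invented formant frequencies (Hz)
    angle = 2 * np.pi * formant / fs
    poles += [0.97 * np.exp(1j * angle), 0.97 * np.exp(-1j * angle)]
a = np.poly(poles).real                           # denominator coefficients
speech_like = lfilter([1.0], a, source)           # filter the source through the "tract"
```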
This chapter gives an outline of the related fields of phonetics and phonology. A good knowledge of these subjects is essential in speech synthesis because they help bridge the gap between the discrete, linguistic, word-based message and the continuous speech signal. More-traditional synthesis techniques relied heavily on phonetic and phonological knowledge, and often implemented theories and modules directly from these fields. Even in the more-modern heavily data-driven synthesis systems, we still find that phonetics and phonology have a vital role to play in determining how best to implement representations and algorithms.
Articulatory phonetics and speech production
The topic of speech production examines the processes by which humans convert linguistic messages into speech. The converse process, whereby humans determine the message from the speech, is called speech perception. Together these form the backbone of the field known as phonetics.
Regarding speech production, we have what can be described as a complete but approximate model of this process. That is, in general we know how people use their articulators to produce the various sounds of speech. We emphasise, however, that our knowledge is very approximate; no model as yet can predict with any degree of accuracy what a speech waveform from a particular speaker would look like given some pronunciation input.
This chapter is concerned with the issue of synthesising acoustic representations of prosody. The input to the algorithms described here varies, but in general takes the form of the phrasing, stress, prominence and discourse patterns which we introduced in Chapter 6. Hence the complete process of prosody synthesis can be seen as one whereby we first extract a prosodic-form representation from the text, as described in Chapter 6, and then synthesize an acoustic representation of this form, as described here.
The majority of this chapter focuses on the synthesis of intonation. The main acoustic representation of intonation is the fundamental frequency (F0), such that intonation is often defined as the manipulation of F0 for communicative or linguistic purposes. As we shall see, techniques for synthesizing F0 contours are inherently linked to the model of intonation used, so the whole topic of intonation, including theories, models and F0 synthesis, is dealt with here. In addition, we cover the topic of predicting intonation form from text, which was deferred from Chapter 6 since we first require an understanding of theories and models of intonational phenomena before explaining this.
Timing is the second important acoustic representation of prosody. It is used to indicate stress (phones are longer than normal), phrasing (phones become noticeably longer immediately prior to a phrase break) and rhythm.
Intonation overview
As a working definition, we will take intonation synthesis to be the generation of an F0 contour from higher-level linguistic information.
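As a toy illustration of this working definition, the sketch below builds an F0 contour from a minimal “higher-level” specification consisting of a declination baseline plus a bump for each accented position; the shapes and numbers are invented and do not correspond to any of the published intonation models discussed later in the chapter:

```python
# Toy F0 generation: declining baseline plus Gaussian excursions on accents.
import numpy as np

def f0_contour(n_frames, accent_frames, start=180.0, end=120.0,
               accent_height=40.0, accent_width=10.0):
    frames = np.arange(n_frames)
    contour = np.linspace(start, end, n_frames)          # declination line (Hz)
    for centre in accent_frames:                          # one bump per accent
        contour += accent_height * np.exp(-0.5 * ((frames - centre) / accent_width) ** 2)
    return contour

contour = f0_contour(n_frames=200, accent_frames=[40, 120])
print(contour[:5])   # F0 values in Hz, one per frame
```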
Speech and hearing are closely linked human abilities. It could be said that human speech is optimised toward the frequency ranges that we hear best, or perhaps that our hearing is optimised around the frequencies used for speaking. Whichever way we present the argument, it should be clear to an engineer working with speech transmission and processing systems that aspects of both speech and hearing must often be considered together in the field of vocal communications. However, both hearing and speech remain complex subjects in their own right, hearing particularly so.
In recent years it has become popular to discuss psychoacoustics in textbooks on both hearing and speech. Psychoacoustics is a term that links the words psycho and acoustics together and, although it sounds like a description of an auditory-challenged serial killer, it actually describes the way the mind processes sound. In particular, it is used to highlight the fact that humans do not always perceive sound in the straightforward ways that knowledge of the physical characteristics of the sound would suggest.
There was a time when use of this word at a conference would boast of advanced knowledge and familiarity with cutting-edge terminology, especially when it could roll off the tongue naturally. I would imagine speakers, on the night before their keynote address, standing before the mirror in their hotel rooms practising saying the word fluently. However, these days it is used far too commonly, to describe any aspect of hearing that is processed nonlinearly by the brain. It was a great temptation to use the word in the title of this book.
This chapter contains a number of final topics, which have been left until last because they span many of the topics raised in the previous chapters.
Databases
Data-driven techniques have come to dominate nearly every aspect of text-to-speech in recent years. In addition to being affected by the algorithms themselves, the overall performance of a system is increasingly dominated by the quality of the databases that are used for training. In this section, we therefore examine the issues in database design, collection, labelling and use.
All algorithms are to some extent data-driven; even hand-written rules use some “data”, either explicitly or in a mental representation wherein the developer can imagine examples and how they should be dealt with. The difference between hand-written rules and data-driven techniques lies not in whether data are used, but in how they are used. Most data-driven techniques have an automatic training algorithm, so that they can be trained on the data without the need for human intervention.
Unit-selection databases
Unit selection is arguably the most data-driven technique because little or no processing is performed on the data; rather, the data are simply analysed, cut up and recombined in different sequences. As with other database techniques, the issue of coverage is vital, but in addition we have further issues concerning the actual recordings.
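Although the passage above does not spell out how the recombination is driven, a standard formulation of unit selection searches for the unit sequence minimising a combination of target and join costs. The sketch below is a minimal dynamic-programming version of that idea, with the cost functions and data left as hypothetical arguments:

```python
# Viterbi-style unit selection sketch: one unit per target, minimum total cost.
def select_units(targets, candidates, target_cost, join_cost):
    """targets: list of target specs; candidates[i]: candidate units for target i."""
    # best[i][j] = (cheapest cost of a path ending in candidates[i][j], back-pointer)
    best = [[(target_cost(targets[0], u), None) for u in candidates[0]]]
    for i in range(1, len(targets)):
        row = []
        for u in candidates[i]:
            joins = [best[i - 1][k][0] + join_cost(candidates[i - 1][k], u)
                     for k in range(len(candidates[i - 1]))]
            k_best = min(range(len(joins)), key=joins.__getitem__)
            row.append((joins[k_best] + target_cost(targets[i], u), k_best))
        best.append(row)
    # Trace back from the cheapest final unit.
    j = min(range(len(best[-1])), key=lambda k: best[-1][k][0])
    chosen = []
    for i in range(len(targets) - 1, -1, -1):
        chosen.append(candidates[i][j])
        j = best[i][j][1]
    return list(reversed(chosen))

# Hypothetical toy data: units are numbers, target cost = |unit - target|,
# join cost = |difference| between consecutive units (smooth joins preferred).
targets = [1.0, 2.0, 3.0]
candidates = [[0.9, 1.5], [1.8, 2.6], [2.4, 3.1]]
seq = select_units(targets, candidates,
                   target_cost=lambda t, u: abs(t - u),
                   join_cost=lambda u1, u2: abs(u2 - u1))
print(seq)  # [1.5, 1.8, 2.4] for this toy data
```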
This chapter describes the principle of compression in quantum communication channels. The underlying concept is that it is possible to convey “faithfully” a quantum message with a large number of qubits, while transmitting a compressed version of this message with a reduced number of qubits through the channel. Beyond the mere notion of fidelity, which characterizes the quality of quantum message transmission, the description brings in the new concept of typicality in the space defined by all possible “quantum codewords.” Schumacher's quantum-compression theorem states that, for a qubit source with von Neumann entropy S, the message compression factor R has a lower bound of S − ε, where ε is any nonnegative parameter that can be made arbitrarily small for sufficiently long messages (hence, R ≈ S is the best possible compression factor). An original graphical and numerical illustration of the effect of Schumacher's quantum compression, and of the evolution of the typical quantum-codeword subspace with increasing message length, is provided.
Quantum data compression and fidelity
In this chapter, we have reached the stage where it is possible to start addressing the issues that are central to information theory, namely, “How efficiently can we code information in a quantum communication channel?” both in terms of economy of means – the concept of data compression – and accuracy of transmission – the concept of message integrity or minimal data error, referred to here as fidelity.
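Stated compactly, with the von Neumann entropy of the qubit source ρ defined as

$$S(\rho) = -\,\mathrm{Tr}\!\left(\rho \log_2 \rho\right),$$

the compression theorem summarized above says that faithful transmission is possible for any compression factor

$$R \ge S(\rho) - \varepsilon,$$

with ε arbitrarily small for sufficiently long messages, so that R ≈ S(ρ) is the best achievable compression.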
In effect, the concept of information is obvious to anyone in everyday life. Yet the word captures so much that we may doubt that any real definition, satisfactory to a large majority of either educated or lay people, may ever exist. Etymology may then help to give the word some skeleton. Information comes from the Latin informatio and the related verb informare, meaning: to conceive, to explain, to sketch, to make something understood or known, to get someone knowledgeable about something. Thus, informatio is the action and art of shaping or packaging a piece of knowledge into some sensible form, hopefully complete, intelligible, and unambiguous to the recipient.
With this background in mind, we can conceive of information as taking different forms: a sensory input, an identification pattern, a game or process rule, a set of facts or instructions meant to guide choices or actions, a record for future reference, a message for immediate feedback. Information is thus diversified and conceptually intractable. Let us clarify here from the outset, and quite frankly: a theory of information is unable to tell us what information actually is or may represent in terms of objective value to any of its recipients! As we shall learn through this series of chapters, however, it is possible to measure information scientifically. The information measure does not concern the value or usefulness of information, which remains the ultimate recipient's paradigm.