In this chapter we study the worst case setting. We shall present results that are already known as well as some new ones. As already mentioned in the Overview, precise information about what is known and what is new can be found in the Notes and Remarks.
Our major goal is to obtain tight complexity bounds for the approximate solution of linear continuous problems that are defined on infinite dimensional spaces. We first explain what is to be approximated and how an approximation is obtained. Thus we carefully introduce the fundamental concepts of solution operator, noisy information and algorithm. Special attention will be devoted to information, which plays the most important role in our analysis. Information is, roughly speaking, what we know about the problem to be solved. A crucial assumption is that information is noisy, i.e., it is given not exactly, but with some error.
Since information is usually partial (i.e., many elements share the same information) and noisy, it is impossible to solve the problem exactly. We have to be satisfied with only approximate solutions. They are obtained by algorithms that use information as data. In the worst case setting, the error of an algorithm is given by its worst performance over all problem elements and possible information. A sharp lower bound on the error is given by a quantity called the radius of information. We are obviously interested in algorithms with the minimal error.
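These notions can be made concrete with a small numerical sketch. In the toy example below (our own illustration, not taken from the text), the set of problem elements is F = [-1, 1], the solution operator is the identity S(f) = f, and information y = f + x is corrupted by bounded noise |x| ≤ δ; the worst case errors of two algorithms are computed by brute force, and the better one attains the radius of information.

```python
import numpy as np

# Toy worst case setting (an illustrative sketch; the numbers are ours):
# problem elements f lie in F = [-1, 1], the solution operator is
# S(f) = f, and information is y = f + x with bounded noise |x| <= delta.
delta = 1.5
F = np.linspace(-1.0, 1.0, 201)            # discretized problem elements
noise = np.linspace(-delta, delta, 201)    # discretized admissible noise

def worst_case_error(phi):
    """Worst performance over all problem elements and admissible data."""
    return max(abs(f - phi(f + x)) for f in F for x in noise)

# Algorithm 1 uses the data as it is; algorithm 2 returns the midpoint
# of the set of elements consistent with y, i.e. [y - delta, y + delta]
# intersected with F.  For this toy problem the radius of information is
# min(delta, 1), and the midpoint ('central') algorithm attains it.
identity = lambda y: y
central = lambda y: (max(y - delta, -1.0) + min(y + delta, 1.0)) / 2.0

print(worst_case_error(identity))  # 1.5, i.e. delta
print(worst_case_error(central))   # 1.0, the radius of information
```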
In the process of doing scientific computations we always rely on some information. In practice, this information is typically noisy, i.e., contaminated by error. Sources of noise include
previous computations,
inexact measurements,
transmission errors,
arithmetic limitations,
an adversary's lies.
Problems with noisy information have always attracted considerable attention from researchers in many different scientific fields, e.g., statistics, engineering, control theory, economics, and applied mathematics. There is also a vast literature, especially in statistics, where noisy information is analyzed from different perspectives.
In this monograph, noisy information is studied in the context of the computational complexity of solving mathematical problems.
Computational complexity focuses on the intrinsic difficulty of problems as measured by the minimal amount of time, memory, or elementary operations necessary to solve them. Information-based complexity (IBC) is a branch of computational complexity that deals with problems for which the available information is
partial,
noisy,
priced.
Information being partial means that the problem is not uniquely determined by the given information. Information is noisy since it may be contaminated by error. Information is priced since we must pay for getting it. These assumptions distinguish IBC from combinatorial complexity, where information is complete, exact, and free.
Since information about the problem is partial and noisy, only approximate solutions are possible. Approximations are obtained by algorithms that use this information.
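How partial information forces approximation error can be seen from a standard "fooling pair" argument, sketched below for integration (the specific nodes and functions are our own choice): information consists of n function values, and any perturbation that vanishes at all nodes is invisible to every algorithm.

```python
import math

# Fooling pair sketch: information about f is its values at n fixed
# nodes; the solution operator is S(f) = integral of f over [0, 1].
n = 5
nodes = [i / (n - 1) for i in range(n)]

def g(t):
    # Nonnegative perturbation vanishing at every node, so f and f + g
    # produce identical information for any f.
    return math.prod((t - x) ** 2 for x in nodes)

# Crude midpoint-rule value of S(g): since S(f) and S(f + g) differ by
# S(g) > 0, any algorithm errs by at least S(g)/2 on one of them.
m = 100_000
S_g = sum(g((k + 0.5) / m) for k in range(m)) / m
print(S_g)  # positive, hence exact solution is impossible
```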
This chapter deals with the average case setting. In this setting, we are interested in the average error and cost of algorithms. The structure of this chapter is similar to that of the previous chapter. That is, we first deal with optimal algorithms, then we analyze the optimal information, and finally, we present some complexity results.
To study the average error and/or cost, we have to replace the deterministic assumptions of the worst case setting by stochastic assumptions. That is, we assume some probability distribution µ on the space F of problem elements as well as some distribution of the information noise. The latter means that information is corrupted by random noise. Basically, we consider Gaussian distributions (measures), which seem to be the most natural and are most often used in modeling.
In Section 3.2, we give a general formulation of the average case setting. We also introduce the concept of the (average) radius of information which, as in the worst case, provides a sharp lower bound on the (average) error of algorithms.
Then we pass to linear problems with Gaussian measures. These are problems where the solution operator is linear, µ is a Gaussian measure, and information is linear with Gaussian noise. In Section 3.3, we recall the definition of a Gaussian measure on a Banach space, listing some important properties. In Sections 3.4 to 3.6 we study optimal algorithms.
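The flavor of these results can be previewed in one dimension (a toy model of our own, using only the standard scalar Gaussian computation): for f ∼ N(0, λ²) and information y = f + x with independent noise x ∼ N(0, σ²), the algorithm minimizing the average squared error is the conditional mean φ(y) = λ²/(λ² + σ²) · y, which shrinks the data toward the prior mean, and the minimal error is λσ/√(λ² + σ²).

```python
import math
import random

# Scalar Gaussian sketch: f ~ N(0, lam^2), information y = f + x with
# noise x ~ N(0, sig^2).  The conditional mean shrinks the data toward
# the prior mean 0 and minimizes the average squared error.
lam, sig = 1.0, 0.5
shrink = lam**2 / (lam**2 + sig**2)

def avg_error(phi, trials=200_000):
    err2 = 0.0
    for _ in range(trials):
        f = random.gauss(0.0, lam)
        y = f + random.gauss(0.0, sig)
        err2 += (f - phi(y)) ** 2
    return math.sqrt(err2 / trials)

print(avg_error(lambda y: y))           # about sig = 0.5
print(avg_error(lambda y: shrink * y))  # about lam*sig/sqrt(lam^2+sig^2), ~0.447
```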
In the modern world, the importance of information can hardly be overestimated. Information also plays a prominent role in scientific computations. The branch of computational complexity that deals with problems for which information is partial, noisy and priced is called information-based complexity.
In a number of information-based complexity books, the emphasis was on partial and exact information. In the present book, the emphasis is on noisy information. We consider deterministic and random noise. The analysis of noisy information leads to a variety of interesting new algorithms and complexity results.
The book presents a theory of the computational complexity of continuous problems with noisy information. A number of applications are also given. It is based on the results of many researchers in this area (including those of the author) as well as on new results not published elsewhere.
This work would not have been completed if I had not received support from many people. My special thanks go to H. Woźniakowski who encouraged me to write such a book and was always ready to offer his help. I appreciate the considerable help of J.F. Traub. I would also like to thank M. Kon, A. Werschulz, E. Novak, K. Ritter and other colleagues for their valuable comments on various portions of the manuscript.
I wish to express my thanks to the Institute of Applied Mathematics and Mechanics at the University of Warsaw, where the book was almost entirely written.
In Chapters 2 to 5, we fixed the set of problem elements and were interested in finding a single information and algorithm which minimize the error or cost of approximation. Depending on the deterministic or stochastic assumptions on the problem elements and information noise, we studied four different settings: the worst, average, worst-average, and average-worst case settings.
In this chapter, we study the asymptotic setting, in which a problem element f is fixed and we wish to analyze the asymptotic behavior of algorithms. The aim is to construct a sequence of information and algorithms such that the error of successive approximations vanishes as fast as possible as the number of observations increases to infinity.
The asymptotic setting is often studied in computational practice. We mention only the Romberg algorithm for computing integrals, and finite element methods (FEM) for solving partial differential equations with the mesh size tending to zero. When dealing with these and other numerical algorithms, we are interested in how fast they converge to the solution.
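To illustrate this point of view, here is a minimal Romberg scheme (a standard construction; the test integrand is our choice). It shows the asymptotic question in miniature: how fast does the error of successive approximations decay as the number of function evaluations grows?

```python
import math

def romberg(f, a, b, levels):
    """Trapezoidal sums refined by Richardson extrapolation."""
    R = [[(b - a) * (f(a) + f(b)) / 2.0]]  # coarsest trapezoidal rule
    for k in range(1, levels + 1):
        h = (b - a) / 2**k
        # Halve the step: reuse the previous sum, add the new midpoints.
        T = R[-1][0] / 2.0 + h * sum(
            f(a + (2 * i - 1) * h) for i in range(1, 2**(k - 1) + 1))
        row = [T]
        for j in range(1, k + 1):  # Richardson extrapolation sweep
            row.append(row[j - 1] + (row[j - 1] - R[-1][j - 1]) / (4**j - 1))
        R.append(row)
    return R[-1][-1]

exact = 1.0 - math.cos(1.0)  # integral of sin over [0, 1]
for L in range(1, 6):
    print(L, abs(romberg(math.sin, 0.0, 1.0, L) - exact))
```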
One might hope that it will be possible to construct a sequence φn(yn) of approximations such that for the element f the error ∥S(f) − φn(yn)∥ vanishes much faster than the error over the whole set of problem elements (or, equivalently, faster than the corresponding radius of information). It turns out, however, that in many cases any attempts to construct such algorithms would fail. We show this by establishing relations between the asymptotic and other settings.
There is a rich literature on the design of formal languages for music representation on computers. Over the last thirty years, several generations of software technology have been applied to this problem, including structured software engineering, artificial intelligence, and object-oriented (O-O) software technology. This article introduces the basic notions of O-O software technology, and investigates how these might be useful for music representation. In particular, the author's Smalltalk music object kernel (Smoke) music representation language is described and examples given that illustrate the most important of Smoke's features.
It is rare to see music and technology being used in combination in therapy and special education. This article is an account of work in a special school as part of a festival of popular music. The style of the music was dance/rave. This was made accessible using a specialised range of MIDI devices to enable students with physical and learning disabilities to participate. There are many benefits to be derived from studying popular music. In special education this can help with physical coordination and social skills. Most importantly, young people with special needs are given access to youth cultures from which, traditionally, they have tended to be excluded.
It is common to oppose formalist and referentialist approaches to music. However, in Francis Dhomont's work Points de fuite, these approaches appear complementary when we consider the relationship between sounds and sources. Adopting the analytical approach of the American theorist Leonard B. Meyer, we show how the syntactic flow of Points de fuite generates formal implications through the impact of tension and relaxation archetypes. The piece explores metaphors based upon recurrent anecdotal events – the recorded signifiers of the source. These extra-musical elements define the work's structure to such an extent that they eliminate the traditional gap between formalism and referentialism in music.
This article approaches the definition of the important term 'acousmatic' by reference to its origins in the sound studios of the French National Radio. The links from France to Québec are outlined and the Québecois acousmatic school, largely based in Montreal, is introduced. Aspects of a typical piece are discussed, and the author is able to answer the title question positively.
Within the context of discussing contemporary music, the European tendency to overvalue abstraction is questioned. The use of environmental sounds in electroacoustic music is highlighted as an example of the questionable value of abstraction. Attention is then focused on a recent Truax composition, Powers of Two (1995), as a work of electroacoustic music theatre. The historical musical and poetic references, as well as the sound sources adopted for the work, are discussed, and placed within the human framework of relationship embodied in the piece. A concluding section summarises the work as an attempt to create a contemporary myth from historical sources, and as a dramatic expression employing electroacoustic forces.
Since the mid-1980s commercial digital samplers have become widespread. The idea of musical instruments which have no sounds of their own is, however, much older, not just in the form of analogue samplers like the Mellotron, but in ancient myths and legends from China and elsewhere. This history of both digital and analogue samplers relates the latter to the early musique concrète of Pierre Schaeffer and others, and also describes a variety of one-off systems devised by composers and performers.