This paper describes TAO, a system for sound synthesis by physical modelling based on a new technique called cellular sound synthesis (CSS). The system provides a general mechanism for constructing an infinite variety of virtual instruments, and does so by providing a virtual acoustic material, elastic in nature, whose physical characteristics can be fine-tuned to produce different timbres. A wide variety of sounds such as plucked, hit, bowed and scraped sounds can be produced, all having natural physical and spatial qualities. Some of the musical and philosophical issues considered to be most important during the design and development of the system are touched upon, and the main features of the system are explained with reference to practical examples. Advantages and disadvantages of the synthesis technique and the prototype system are discussed, together with suggestions for future improvements.
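The idea of an elastic virtual material built from interacting cells can be illustrated, in a very simplified form, by a one-dimensional chain of unit masses joined by springs. The sketch below is purely illustrative and is not TAO's actual CSS algorithm; the parameter names `tension` and `damping` are hypothetical stand-ins for the material characteristics the paper describes, and the 'pluck' and 'microphone' placements are arbitrary choices.

```python
def simulate_chain(n_cells=40, n_samples=200, tension=0.4, damping=0.0005):
    """Simulate a 1-D chain of unit masses joined by springs.

    Illustrative sketch only, not TAO's CSS algorithm: `tension` is a
    hypothetical spring constant between neighbouring cells and
    `damping` a simple per-step velocity loss.
    """
    pos = [0.0] * n_cells
    vel = [0.0] * n_cells
    # 'Pluck' the material: displace it into a triangular initial shape.
    peak = n_cells // 4
    for i in range(n_cells):
        pos[i] = i / peak if i <= peak else (n_cells - 1 - i) / (n_cells - 1 - peak)
    out = []
    for _ in range(n_samples):
        for i in range(1, n_cells - 1):  # end cells are clamped
            # Each cell is accelerated by the pull of its two neighbours.
            accel = tension * (pos[i - 1] - 2 * pos[i] + pos[i + 1])
            vel[i] = (vel[i] + accel) * (1.0 - damping)
        for i in range(1, n_cells - 1):
            pos[i] += vel[i]
        out.append(pos[n_cells // 2])  # 'microphone' reading the middle cell
    return out

samples = simulate_chain()
```

Changing the spring constant, the damping, or the position of the pluck and pick-up already yields audibly different results, which hints at why a fine-tunable material is such a general mechanism for building instruments.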
'Organised sound' - the term coined by Edgard Varèse for a new definition of musical constructivism - denotes for our increasingly technologically dominated culture an urge towards the recognition of the human impulse behind the 'system'. Such is the diversity of activity in today's computer music that we need to maintain a balance between technological advances and musically creative and scholarly endeavour, at all levels of an essentially educative process. The model of 'life-long learning' makes a special kind of sense when we can explore our musical creativity in partnership with the computer, a machine now capable of sophisticated response from a humanly embedded intelligence.
We describe new applications of the theory of automata to natural language processing: the representation of very large scale dictionaries and the indexation of natural language texts. They are based on new algorithms that we introduce and describe in detail. In particular, we give pseudocodes for the determinisation of string to string transducers, the deterministic union of p-subsequential string to string transducers, and the indexation by automata. We report on several experiments illustrating the applications.
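The kind of machine at the heart of this work, a deterministic (subsequential) string-to-string transducer, can be sketched in a few lines. The toy machine below, which merely appends "ed" to a stem built from a fixed alphabet, is an assumption made for illustration; determinising real dictionary-scale transducers, as the paper does, is considerably more involved.

```python
def transduce(word, transitions, final, start=0):
    """Run a deterministic (subsequential) string-to-string transducer.

    `transitions` maps (state, symbol) -> (next_state, output_string);
    `final` maps accepting states to an output string appended at the
    end (the 'subsequential' part). Returns None if `word` is rejected.
    Toy illustration only, not the paper's determinisation algorithm.
    """
    state, out = start, []
    for ch in word:
        if (state, ch) not in transitions:
            return None
        state, emitted = transitions[(state, ch)]
        out.append(emitted)
    if state not in final:
        return None
    out.append(final[state])
    return "".join(out)

# Hypothetical example: copy each input symbol, then emit "ed" on acceptance.
TRANS = {(0, c): (0, c) for c in "walk"}
FINAL = {0: "ed"}
```

Determinism is what makes such machines attractive for very large dictionaries: each input symbol triggers exactly one table lookup, so recognition time is linear in the word length and independent of the dictionary size.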
This paper addresses the problem of distribution of words and phrases in text, a problem of great general interest and of importance for many practical applications. The existing models for word distribution present observed sequences of words in text documents as an outcome of some stochastic processes; the corresponding distributions of numbers of word occurrences in the documents are modelled as mixtures of Poisson distributions whose parameter values are fitted to the data. We pursue a linguistically motivated approach to statistical language modelling and use observable text characteristics as model parameters. Multi-word technical terms, intrinsically content entities, are chosen for experimentation. Their occurrence and the occurrence dynamics are investigated using a 100-million word data collection consisting of a variety of about 13,000 technical documents. The derivation of models describing word distribution in text is based on a linguistic interpretation of the process of text formation, with the probabilities of word occurrence being functions of observable and linguistically meaningful text characteristics. The adequacy of the proposed models for the description of actually observed distributions of words and phrases in text is confirmed experimentally. The paper has two focuses: one is modelling of the distributions of content words and phrases among different documents; and another is word occurrence dynamics within documents and estimation of corresponding probabilities. Accordingly, among the application areas for the new modelling paradigm are information retrieval and speech recognition.
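The existing approach the paper contrasts itself with, fitting a mixture of Poisson distributions to per-document occurrence counts, can be sketched with a minimal two-component EM fit. This is an illustrative sketch of that baseline under assumed initial values, not the paper's own linguistically motivated model; the sample counts are invented.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def fit_poisson_mixture(counts, lam=(0.5, 3.0), w=0.5, iters=50):
    """Fit a two-component Poisson mixture to word counts by EM.

    `counts` are per-document occurrence counts of a word; `lam` holds
    initial rates for the two components and `w` the initial weight of
    the first. Sketch of the baseline mixture models, with assumed
    starting values, not the paper's own model.
    """
    l1, l2 = lam
    for _ in range(iters):
        # E-step: responsibility of component 1 for each document's count.
        r = []
        for k in counts:
            p1 = w * poisson_pmf(k, l1)
            p2 = (1 - w) * poisson_pmf(k, l2)
            r.append(p1 / (p1 + p2))
        # M-step: re-estimate the weight and the two rates.
        n1 = sum(r)
        w = n1 / len(counts)
        l1 = sum(ri * k for ri, k in zip(r, counts)) / n1
        l2 = sum((1 - ri) * k for ri, k in zip(r, counts)) / (len(counts) - n1)
    return w, l1, l2

counts = [0, 0, 1, 0, 0, 0, 6, 5, 7, 6]  # invented per-document counts
w, l1, l2 = fit_poisson_mixture(counts)
```

On such data the fit typically separates into a low-rate component (documents where the word is incidental) and a high-rate one (documents it is about), which is exactly the behaviour of content words that motivates moving beyond purely stochastic mixtures towards observable text characteristics.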
We discuss the random generation of strings using the grammatical formalism AGFL. This formalism consists of context-free grammars extended with a parameter mechanism, where the parameters range over a finite domain. Our approach consists in static analysis of the combinations of parameter values with which derivations can be constructed. After this analysis, generation of sentences can be performed without backtracking.
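The backtracking-free generation described here can be sketched for a plain context-free grammar. AGFL's finite-domain parameter mechanism is omitted from this sketch; in the approach of the paper, a static analysis would first prune parameter combinations that admit no derivation, so that, as below, every randomly chosen production can be completed without backtracking. The tiny grammar is invented for illustration.

```python
import random

def generate(symbol, grammar, rng):
    """Randomly expand `symbol` using `grammar`.

    `grammar` maps each nonterminal to a list of right-hand sides
    (tuples of terminals and nonterminals); any symbol without an
    entry is a terminal. Plain-CFG sketch: AGFL's parameter mechanism
    and the static analysis that prunes dead parameter combinations
    are omitted.
    """
    if symbol not in grammar:           # terminal symbol
        return [symbol]
    rhs = rng.choice(grammar[symbol])   # pick a production at random
    words = []
    for s in rhs:
        words.extend(generate(s, grammar, rng))
    return words

GRAMMAR = {
    "S":  [("NP", "VP")],
    "NP": [("the", "N")],
    "VP": [("V", "NP"), ("V",)],
    "N":  [("parser",), ("grammar",)],
    "V":  [("generates",), ("accepts",)],
}

sentence = generate("S", GRAMMAR, random.Random(0))
```

In an unanalysed parameterised grammar, a randomly chosen production could commit to parameter values under which some later nonterminal has no expansion, forcing the generator to undo choices; the point of the static analysis is to rule out such dead ends in advance.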
This book has its origins in conversations I had with André Fuhrmann at the meetings of the International Congress of Logic, Methodology, and Philosophy of Science in Uppsala in 1991. These exchanges led to a joint essay (Fuhrmann and Levi, 1994) that discussed some peculiarities of conditional reasoning when induction is taken into consideration. One sin begets another and I began contemplating the writing of a long paper combining the version of Ramsey test conditionals I had already advocated with ideas on inductive inference I had discussed ever since I had published Gambling with Truth (Levi, 1967).
As a preliminary, however, it seemed desirable to spell out somewhat more elaborately than I had done before the view of Ramsey test conditionals I favor. At around the same time, John Collins and I started a reading group at Columbia that included as regular participants Markko Ahtisaari, Horacio Arló-Costa, John Danaher, Scott Shapiro, and, for a brief period when he was visiting Columbia, André Fuhrmann. John Collins had presented to the group his account of the structural differences between revision of belief as understood by Alchourrón, Gärdenfors, and Makinson (1985) and revision by imaging in a sense parasitic on the possible-worlds semantics for conditionals pioneered by D. Lewis (1973). In the course of his presentations, Collins defended the view that imaging was best suited to characterizing suppositional reasoning whereas AGM revision is suited to changing beliefs. I had already argued (Levi, 1991) that the AGM formalism was inadequate as an account of rational belief change. And I was quite convinced that imaging was ill suited to capture suppositional reasoning – especially in applications to practical deliberations.
Keeping a firm grip on the difference between fact and fiction entails much more than distinguishing between what is judged true, what is judged false, and what hangs in suspense. Any agent who has a clear “sense of reality” distinguishes between what he or she fully believes to be true and what he or she supposes to be true for the sake of the argument. But both what is fully believed and what is supposed furnish the basis for a tripartite distinction between what is judged true, what is judged false, or what hangs in suspense.
Given an agent's state of full belief at a given time, some propositions are judged possibly true because they are not ruled out by the state of full belief and others are judged impossible. In this epistemic sense of serious possibility, h is judged true relative to the agent's state of full belief K if and only if ∼h is not a serious possibility, h is judged false if and only if ∼h is judged true, and the question of the truth of h hangs in suspense if and only if both h and ∼h count as serious possibilities.
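The tripartite distinction just defined can be made concrete by representing a state of full belief through the set of possibilities the agent has not ruled out. This encoding by possible worlds is an illustrative assumption on my part (the book itself works with states of full belief, not worlds), and the `rain`/`wind` propositions are invented examples.

```python
def serious_possibility(h, worlds):
    """h is a serious possibility iff some world left open by the
    agent's state of full belief makes h true."""
    return any(h(w) for w in worlds)

def verdict(h, worlds):
    """Classify proposition h relative to a state of full belief,
    represented (as an illustrative assumption) by the list of worlds
    the agent has not ruled out: h is judged true iff ~h is not a
    serious possibility, judged false iff h is not, and hangs in
    suspense iff both h and ~h are serious possibilities."""
    not_h = lambda w: not h(w)
    if not serious_possibility(not_h, worlds):
        return "judged true"
    if not serious_possibility(h, worlds):
        return "judged false"
    return "in suspense"

# Invented example: the agent is certain it rains but open about wind.
WORLDS = [{"rain": True, "wind": True}, {"rain": True, "wind": False}]
```

Here `rain` is judged true, its negation judged false, and `wind` hangs in suspense, matching the three verdicts defined above.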
If a would-be investor is uncertain as to whether the government will propose an investment tax credit for long-term investments in U.S. firms, the uncertainty will have some relevance to the investor's conclusion as to how to make investments.