The technologies demonstrated at the InSTIL and EUROCALL 2000 conferences were very inspiring. They gave participants the sense that the technologies of their wildest imaginations are at last materializing, particularly in long-awaited advances in speech technologies. Some challenges, however, remain ahead as attempts are made to put these technologies to use in CALL. Past experience demonstrates, for example, that software designed for recognition of a proficient speaker’s language is different from that required for learner language. It is also evident that while language use may be critical for language acquisition, language use does not necessarily indicate language acquisition. These points were made by Marty, who was working with speech software for French teaching a few years before the current excitement:
...[W]e should keep in mind that the present research and development is aimed only at producing speech easily understandable by natives (e.g., English for native speakers of English) and that the potential markets are industrial (e.g., replacing visual indicators or visual alarms with audio warnings) and in home products (especially toys). Until our needs for improved FL instruction are better understood, it is not likely that those devices will have the voice quality we need. (Marty, 1981:52).
If Marty had attended the InSTIL and EUROCALL conferences in 2000, no doubt he would have been very, very impressed. Even though plenty of work remains, we do seem to have very good voice quality in speech synthesis. The question today is how we can best use these emerging technologies, and so Marty’s suggestion that we must better understand our needs in foreign language teaching remains very relevant. What are the needs for foreign language teaching in the 21st century? The papers at EUROCALL 2000 as well as other work in technology, business, and language teaching suggest that we should be prepared for change in the coming years, but what kind of change? The turn of the century seems an appropriate time to examine some of the speculation on the future of language teaching in general, as well as how technology fits into that future. This paper considers these general questions, and then suggests ways in which links might be made between work in second language acquisition (SLA) and CALL in order to put technologies to use for L2 teaching.
Every time you create an M-file, you are writing a computer program using the MATLAB programming language. You can do quite a lot in MATLAB using no more than the most basic programming techniques that we have already introduced. In particular, we discussed simple loops (using for) and a rudimentary approach to debugging in Chapter 3. In this chapter, we will cover some further programming commands and techniques that are useful for attacking more complicated problems with MATLAB. If you are already familiar with another programming language, much of this material will be quite easy for you to pick up!
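As a quick reminder of the kind of loop meant here (our own minimal example, not one taken from Chapter 3), the following lines sum the squares of the integers from 1 to 10:

total = 0;
for n = 1:10
    total = total + n^2;   % add the square of n on each pass
end
total                      % display the result, 385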
Many MATLAB commands are themselves M-files, which you can examine using type or edit (for example, enter type isprime to see the M-file for the command isprime). You can learn a lot about MATLAB programming techniques by inspecting the built-in M-files.
Branching
For many user-defined functions, you can use a function M-file that executes the same sequence of commands for each input. However, one often wants a function to perform a different sequence of commands in different cases, depending on the input. You can accomplish this with a branching command, and as in many other programming languages, branching in MATLAB is usually done with the command if, which we will discuss now. Later we will describe the other main branching command, switch.
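For instance, a short function M-file can branch on the sign of its input; the function name checksign below is our own illustrative choice, not a built-in command:

function s = checksign(x)
% CHECKSIGN  Return 1, -1, or 0 according to the sign of the input x.
if x > 0
    s = 1;
elseif x < 0
    s = -1;
else
    s = 0;
end

Typing checksign(-3) at the command prompt then returns -1, while checksign(0) returns 0.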
MATLAB is exceptionally strong in linear algebra, numerical methods, and graphical interpretation of data. It is easily programmed and relatively easy to learn to use. As such, it has proven invaluable to engineers and scientists who are working on problems that rely on scientific techniques and methods at which MATLAB excels. Very often the individuals and groups that so employ MATLAB are primarily interested in the numbers and graphs that emerge from MATLAB commands, processes, and programs. Therefore, it is enough for them to work in a MATLAB Command Window, from which they can easily print or export their desired output. At most, the production technique described in Chapter 3 involving diary files is sufficient for their presentation needs.
However, other practitioners of mathematical software find themselves with two additional requirements. First, they need a mathematical software package embedded in an interactive environment — one in which the output is not necessarily “linear”, that is, one that they can manipulate and massage without regard to chronology or geographical location. Second, they need a higher-level presentation mode, one that affords graphics integrated with text, with different formats for input and output, and that can communicate effortlessly with other software applications. Some of MATLAB's competitors have focused on such needs in designing the interfaces (or front ends) behind which their mathematical software runs. MATLAB has decided to concentrate on the software rather than the interface — and for the reasons and purposes outlined above, that is clearly a wise decision.
Improving the feedback quality of a computer-based system for pronunciation training requires rather detailed and precise knowledge about the place and the nature of actual mispronunciations in a student’s utterance. To be able to provide this kind of information, components for the automatic localisation and correction of pronunciation errors have been developed. This work was part of a project aimed at integrating state-of-the-art speech recognition technology into a pronunciation training environment for adult, intermediate-level learners. Although the technologies described here are in principle valid for any language pair, the current system focuses on Italian and German learners of English.
Intelligent feedback on learners’ full written sentence productions requires the use of Natural Language Processing (NLP) tools and, in particular, of a diagnosis system. Most syntactic parsers, on which grammar checkers are based, are designed to parse grammatical sentences and/or native speaker productions. They are therefore not necessarily suitable for language learners. In this paper, we concentrate on the transformation of a French syntactic parser into a grammar checker geared towards intermediate to advanced learners of French. Several techniques are envisaged to allow the parser to handle ill-formed input, including constraint relaxation. By the very nature of this technique, parsers can generate complete analyses for ungrammatical sentences. Proper labelling of the points at which the analysis was able to proceed thanks to a specific constraint relaxation forms the basis of the error diagnosis. Parsers with relaxed constraints tend to produce more complete, although incorrect, analyses for grammatical sentences, and several complete analyses for ungrammatical sentences. This increased number of analyses per sentence has one major drawback: it slows down the system and requires more memory. An experiment was conducted to observe the behaviour of our parser in the context of constraint relaxation. Three specific constraints, agreement in number, gender, and person, were selected and relaxed in different combinations. A learner corpus was parsed with each combination. The evolution of the number of correct diagnoses and of parsing speed, among other factors, was monitored. We then evaluated, by comparing the results, whether large-scale constraint relaxation is a viable option for transforming our syntactic parser into an efficient grammar checker for CALL.
In this chapter we describe an effective procedure for working with MATLAB, and for preparing and presenting the results of a MATLAB session. In particular we will discuss some features of the MATLAB interface and the use of script M-files, function M-files, and diary files. We also give some simple hints for debugging your M-files.
The MATLAB Interface
MATLAB 6 has a new interface called the MATLAB Desktop. Embedded inside it is the Command Window that we described in Chapter 2. If you are using MATLAB 5, then you will only see the Command Window. In that case you should skip the next subsection and proceed directly to the Menu and Tool Bars subsection below.
The Desktop
By default, the MATLAB Desktop (Figure 1-1 in Chapter 1) contains five windows: the Command Window on the right, the Launch Pad and the Workspace browser in the upper left, and the Command History window and Current Directory browser in the lower left. Note that there are tabs for alternating between the Launch Pad and the Workspace browser, or between the Command History window and Current Directory browser. Which of the five windows are currently visible can be adjusted with the View: Desktop Layout menu at the top of the Desktop. (For example, with the Simple option, you see only the Command History and Command Window, side-by-side.) The sizes of the windows can be adjusted by dragging their edges with the mouse.
It was time to put my money where my mouth was. The system was running well, more or less. Roger was busy with his summer job and then his fall classes, and thus the program wasn't destined to improve in the foreseeable future. The longer we waited, the more likely it was that some external event – maybe a hardware problem, maybe Milford changing its WWW site – would put us out of business for good. My On the Wire account was stocked with $250. It was time to put the system to the test.
A Gambler's Diary
On July 29, 1998, Maven made its first six bets for $3 each. I bided my time until late that night when I could call in to see how I did.
Your account balance is $263.50.
I was a winner! I could stop now and forever be ahead. But I was a winner, and I wanted to keep on winning.
The first few days I kept the bet amounts small as we ironed out timing problems with the autodialer. Still, risking even $20 gave me some pause.
Your account balance is $242.50.
This gave me a sinking feeling. I'd lost my winnings, and more. Was there a bug with my program? Would I ever go ahead again?
Your account balance is $264.40.
Your account balance is $261.40.
Your account balance is $258.40.
Your account balance is $272.80.
The autodialer was clearly working. The program seemed to be making money.
Uncertain reasoning and uncertain argument, as we have been concerned with them here, are reasoning and argument in which the object is to establish the credibility or acceptability of a conclusion on the basis of an argument from premises that do not entail that conclusion. Other terms for the process are inductive reasoning, scientific reasoning, nonmonotonic reasoning, and probabilistic reasoning. What we seek to characterize is that general form of argument that will lead to conclusions that are worth accepting, but that may, on the basis of new evidence, need to be withdrawn.
What is explicitly excluded from uncertain reasoning, in the sense under discussion, is reasoning from one probability statement to another. Genesereth and Nilsson [Nilsson, 1986; Genesereth & Nilsson, 1987], for example, offer as an example of their “probabilistic logic” the way in which constraints on the probability of Q can be established on the basis of probabilities for P and for P → Q. This is a matter of deduction: as we noted in Chapter Five, it is provable that any function prob satisfying the usual axioms for probability will be such that if prob(P) = r and prob(P → Q) = s then prob(Q) must lie between s + r − 1 (or 0) and s. This deductive relation, though often of interest, is not what we are concerned with here. It has been explored by Suppes and Adams [Suppes, 1966; Adams, 1966] as well as by Genesereth and Nilsson.
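A brief sketch of why these bounds hold (our own reconstruction, reading P → Q materially as ¬P ∨ Q): since Q entails ¬P ∨ Q, monotonicity gives prob(Q) ≤ prob(P → Q) = s; and since P ∧ (P → Q) entails Q, we have prob(Q) ≥ prob(P ∧ (P → Q)) ≥ prob(P) + prob(P → Q) − 1 = r + s − 1, while prob(Q) ≥ 0 holds in any case. Together these yield max(0, r + s − 1) ≤ prob(Q) ≤ s, the interval stated above.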
In this vision paper I will discuss a few questions concerning the use of generative processes in composition and automatic music creation. Why do I do it, and does it really work? I discuss the problems involved, focusing on the use of interactivity, and describe the use of interactive evolution as a way of introducing interactivity in composition. The installation MutaSynth is presented as an implementation of this idea.
In its original rendition, Degrees of Separation: “Grandchild of Tree” (1998) is performed with cactus, outboard digital effects, and CD playback, with simple lighting. The work is a metaphor which portrays subtle transformations (or transmutations) in human existence precipitated by pervasive new technology. A new version consists of a performance with a video component and all electronics fully automated in MAX/MSP on a Macintosh Powerbook. The development of this work and its subsequent versions has proven invaluable for my own approach to the music I compose, and for the understanding of my position in the contemporary world of computers, technology and art. This paper attempts to describe these discoveries through outlining the levels of symbolism and metaphor in the work as realised through source abstraction (both visually and aurally), spatialisation and (re-)contextualisation. It begins with the question: What is a cactus doing in the concert hall?
This paper describes an exploration of utilising the World Wide Web for interactive music. The origin of this investigation was the intermedia work Telemusic #1, by Randall Packer, which combined live performers with live public participation via the Web. During the event, visitors to the site navigated through a virtual interface, and while manipulating elements, projected their actions in the form of triggered sounds into the physical space. Simultaneously, the live audio performance was streamed back out to the Internet participants. Thus, anyone could take part in the collective realisation of the work and hear the musical results in real time. The underlying technology is, to our knowledge, the first standards-based implementation linking the Web with Cycling '74 MAX. Using only ECMAScript/JavaScript, Java, and the OTUDP external from UC Berkeley CNMAT, virtually any conceivable interaction with a Web page can send data to a MAX patch for processing. The code can also be readily adapted to work with Pd, jMAX and other network-enabled applications.
We introduce a system for generalised sound classification and similarity using a machine-learning framework. Applications of the system include automatic classification of environmental sounds, musical instruments, music genre and human speakers. In addition to classification, the system may also be used for computing similarity metrics between a target sound and other sounds in a database. We discuss the use of hidden Markov models for representing the temporal evolution of audio spectra and present results of testing the system on classification and retrieval tasks. The system has been incorporated into the MPEG-7 international standard for multimedia content description and is therefore publicly available in the form of a set of standardised interfaces and software reference tools for developers and researchers.
The musical use of realtime digital audio tools implies the need for simultaneous control of a large number of parameters to achieve the desired sonic results. Often it is also necessary to be able to navigate between certain parameter configurations in an easy and intuitive way, rather than to precisely define the evolution of the values for each parameter. Graphical interpolation systems (GIS) provide this level of control by allocating objects within a visual control space to sets of parameters that are to be controlled, and using a moving cursor to change the parameter values according to its current position within the control space. This paper describes Interpolator, a two-dimensional interpolation system for controlling digital signal processing (DSP) parameters in real time.
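As a rough sketch of the general idea (our own minimal illustration of a graphical interpolation system, not the actual Interpolator algorithm), one simple scheme weights each object's parameter set by the inverse of its distance to the cursor:

function p = gis_mix(cursor, objects, params)
% GIS_MIX  Inverse-distance mixture of parameter sets in a 2-D control space.
%   cursor  : 1-by-2 vector, current cursor position
%   objects : n-by-2 matrix, positions of the n objects in the control space
%   params  : n-by-m matrix, one row of DSP parameter values per object
d = sqrt(sum((objects - repmat(cursor, size(objects, 1), 1)).^2, 2));
w = 1 ./ (d + eps);      % nearer objects receive larger weights
w = w / sum(w);          % normalise so the weights sum to one
p = w' * params;         % 1-by-m vector of interpolated parameter values

Moving the cursor towards one object then pulls the output smoothly towards that object's parameter set, which is the kind of intuitive navigation between parameter configurations described above.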
This conference talk is about change. It does not pretend to be an academic paper as such; it was offered at the ‘Music without Walls? Music without Instruments?’ conference as a provocative ‘performance’. The following pages investigate twenty-six developments associated with music that one might discover in the not-too-distant future.
This issue of Organised Sound is devoted to a conference, ‘Music without Walls? Music without Instruments?’, which took place at De Montfort University, Leicester, UK in June of this year. It was hosted by the university's Music, Technology and Innovation Research Group. As the conference's title suggests, the goal of this meeting was to investigate visions and plans for the future. The three-day event included papers, musical events, installations, demonstrations and a plenary. All artists whose pieces were selected for the conference concerts were requested to present vision papers to accompany their performance.
In quantum mechanics a particle can behave like a particle or a wave. Thus, systems of particles can be likened to a superposition of waves. Since sound can be described as a superposition of frequencies, it can also be described in terms of a system of particles manifest as waves. This metaphor between ‘particle physics’ and sound synthesis is quantitatively developed here, suggested initially by some similarities between the two domains. It is applied to a few fundamental physical principles to show how these can be sonified. The author discusses the process of using a simulated ‘atom trap’ to compose a piece that does not require a physicist to appreciate it. This metaphor blurs the distinctions between science and art, where scientific experiment becomes musical composition, and exploring a musical idea involves playing with particle system dynamics. In the future, methods like these could be used with a real system of particles – the particle accelerator will become an expressive musical instrument, and the particle physicist will become the composer-scientist.
This paper addresses issues concerning the impact of music on cyberspace globalisation. It points out that Internet access is limited to the wealthiest one per cent of the human population. It also looks into the future of cyberspace music and predicts that much cyberspace activity will unleash one of humanity's many predispositions – the predisposition to steal. Cyberspace represents a perfect medium for concealing a person's identity and for masking the responsibility expected of socially acceptable human behaviour. With these aspects in mind, the paper concentrates on the Internet's distribution of pirated software and the trading of MP3 files. The paper also focuses on commercial music, whose economic impact on the development of music technology allows ‘academic musicians’ to appropriate most of the tools that would not otherwise have been developed for ‘academic’ use.