English as a Second Language (ESL) teachers in Malaysia, as in many other countries, are anxious to exploit the potential of Information and Communication Technologies (ICT) to enhance the teaching and learning process. Given the increasing pressure exerted by technological developments on language education, it is important to understand the underlying factors behind teachers’ decisions regarding ICT. Egbert et al. (2002) state that few investigations have been conducted into the development of computer-using language teachers. According to BECTA (1999), the learning potential of ICT is not being realised because many teachers are unfamiliar with ICT and do not use it in their teaching. This study investigates ESL technical school teachers’ current use of ICT in their teaching, the factors that affect their use of ICT, and their perceptions of their ICT skills. The theory that frames this study is drawn from theories of learning. The model adopted is Davis’s (1989) Technology Acceptance Model (TAM), which provides a basis for determining ICT attitudes and the factors affecting the use of ICT in teaching. Data were collected via a questionnaire survey of ESL technical school teachers in Malaysia, followed by semi-structured interviews with them. The questionnaire data were analysed using descriptive statistics and later triangulated with the interviews. The findings are presented and elaborated upon in this paper.
This case study investigated the ways pre-major and pre-minor students of Spanish interacted with a grammar application, from four perspectives. Firstly, using the computer’s tracking ability to record learners’ behaviors, the study set out to uncover the different ways learners approached the application. Secondly, the study assessed the influence of two learner variables on learning behaviors: language ability, as determined by a placement test, and personality preferences, as measured by a Jung-Myers-Briggs Typology-based approach. Thirdly, the study assessed whether the frequency of various behaviors resulted in different gains in knowledge. Finally, the study categorized the uncovered behaviors into the learning strategies covered in the Strategy Inventory for Language Learning (SILL) (Oxford, 1990). The study concludes with implications for software improvement, as well as with indications of likely directions for future research.
If some readers are surprised to receive a copy of ReCALL at this early stage in the year, let me remind you that we are now moving from two to three issues a year, for publication in January, May and September instead of May and November. The changeover has proved rather difficult, in terms of allowing time for authors to make revisions in the light of reviewers’ comments, which is why this issue is rather late.
In this issue, as well as regular papers from Dermot Campbell, Fenfang Hwu and Elke Stracke, we have included two papers from the EUROCALL 2006 conference in Granada. Melor Md Yunus discusses Malaysian ESL Teachers’ Use of ICT in Their Classrooms, whilst Petter Karlström and colleagues explore Tool mediation in focus-on-form activities. Of the 39 conference paper submissions received, 13 have so far been rejected (although some of their authors have been advised to submit a version of their paper to the EUROCALL Review), and the remainder are at various stages on the way to publication. Tony Harris, organiser of the Granada conference, will have more to say about conference submissions in the next issue of ReCALL, where the bulk of the selected papers will be published.
This paper outlines the ongoing construction of a speech corpus for use by applied linguists and advanced EFL/ESL students. In the first part, sections 1–4, the need for improvements in the teaching of listening skills and pronunciation practice for EFL/ESL students is noted. It is argued that the use of authentic native-to-native speech is imperative in the teaching/learning process so as to promote social inclusion. The arguments for authentic language learning material and the use of a speech corpus are contextualised within the literature, based mainly on the work of Swan, Brown and McCarthy. The second part, section 5, addresses features of native speech flow which cause difficulties for EFL/ESL students (Brown, Cauldwell) and establishes the need for improvements in the teaching of listening skills. Examples are given of reduced forms characteristic of relaxed native speech, and how these can be made accessible for study using the Dublin Institute of Technology’s slow-down technology, which gives students more time to study native speech features, without tonal distortion. The final part, sections 6–8, introduces a novel Speech Corpus being developed at DIT. It shows the limits of traditional corpora and outlines the general requirements of a Speech Corpus. This tool – which will satisfy the needs of teachers, learners and researchers – will link digitally recorded, natural, native-to-native speech so that each transcript segment will give access to its associated sound file. Users will be able to locate desired speech strings, play, compare and contrast them – and slow them down for more detailed study.
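As a rough sketch of how such a corpus might link transcript segments to their associated sound files, the following Python fragment uses an assumed, illustrative schema; the class, field names and search function are placeholders, not the DIT corpus design.

```python
# Illustrative sketch of a speech corpus that links transcript segments to audio.
# The schema below is an assumption for demonstration, not the DIT corpus format.
from dataclasses import dataclass

@dataclass
class Segment:
    text: str          # orthographic transcript of the segment
    audio_path: str    # associated sound file for this segment
    start_s: float     # start time within the recording (seconds)
    end_s: float       # end time within the recording (seconds)

def find_segments(corpus, phrase):
    """Locate transcript segments containing a desired speech string."""
    return [seg for seg in corpus if phrase.lower() in seg.text.lower()]

# Usage: each matching segment gives direct access to its sound file, which can
# then be played back, compared with others, or slowed down for detailed study.
```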
In this paper, a Neural-Network- (NN) based guidance methodology is proposed for the high-precision docking of autonomous vehicles/platforms. The novelty of the overall online motion-planning methodology is its applicability to cases that do not allow for the direct proximity measurement of the vehicle's pose (position and orientation). In such instances, a guidance technique that utilizes Line-of-Sight- (LOS) based task-space sensory feedback is needed to minimize the detrimental impact of accumulated systematic motion errors. Herein, the proposed NN-based guidance methodology is implemented during the final stage of the vehicle's motion (i.e., docking). Systematic motion errors, which are accumulated after a long-range motion, are reduced iteratively by executing corrective motion commands generated by the NN until the vehicle achieves its desired pose within random noise limits. The proposed guidance methodology was successfully tested via simulations for a 6-dof (degree-of-freedom) vehicle and via experiments for a 3-dof high-precision planar platform.
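The iterative correction scheme described here can be pictured as a simple feedback loop. The Python sketch below is illustrative only: the callables measure_los_error, nn_correction and execute_motion, and the tolerance, are hypothetical placeholders rather than the authors' implementation.

```python
# Sketch of an iterative NN-based docking correction loop (assumed interfaces).
import numpy as np

def dock(measure_los_error, nn_correction, execute_motion,
         tol=1e-3, max_iters=20):
    """Iteratively reduce accumulated pose error until it falls within noise limits."""
    for i in range(max_iters):
        error = measure_los_error()        # LOS-based task-space sensory feedback
        if np.linalg.norm(error) < tol:    # desired pose reached within noise limits
            return True, i
        command = nn_correction(error)     # NN maps sensed pose error to a corrective motion
        execute_motion(command)            # execute the corrective motion command
    return False, max_iters
```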
To obtain specifications for a tactile display that would be effective in virtual reality and tele-existence systems, we have developed two types of matrix-type experimental tactile displays. One is for virtual figures (display A) and the other is for virtual textures (display B). Display A's pad has a 4 × 6 array of stimulus pins, each 0.8 mm in diameter. Three pad configurations, in which the distance between any two adjacent pins (pin pitch) is 1.2, 1.9, or 2.5 mm, were developed to examine the influence of distance on a human operator's determination of virtual figures. Display B has an 8 × 8 array of stimulus pins, each 0.3 mm in diameter and with a 1- or 1.8-mm pin pitch, because presentation of virtual textures was presumed to require a higher pin density. To establish a design method for these matrix-type tactile displays, we performed a series of psychophysical experiments using displays A and B. By evaluating variations in the percentage of correct answers and in the threshold caused by different pin arrays and different pin strokes, we determined under what conditions the operator could best feel the virtual figures and textures. The results revealed that the two-point threshold should be adopted as the pin pitch in the design of the tactile display, that the pin stroke should exceed 0.25 mm, and that the method of adjustment is the most appropriate for evaluating the capabilities of tactile displays. Finally, when we compared the virtual texture with the real texture, we found that the threshold for the real texture is almost one third that for the virtual texture. This result implies that it is effective to present variations in patterns caused by rotation and variations in shearing force produced by relative motion between the finger surface and the object surface.
Most people still cook by hand in the kitchen, which is tiring and also pollutes the indoor air. With the development of numerical control technology, it has become increasingly urgent to apply such technology to automatic cooking. In this paper, the cooking techniques used for Chinese dishes are introduced and China's first cooking robot, named the AI Cooking Robot (AIC), is presented. The robot consists mainly of four parts: the wok mechanism, the stir-frying and dispersing mechanism, the feeding mechanism, and the mechanism for adding material midway through the cooking process. A fire control system for adjusting the cooking temperature is also presented. Experiments with its cooking technique suggest that the new robot marks a milestone in cooking automation.
This paper describes the development of a novel compact magneto-rheological (MR) fluid brake with high transmitted torque and a simple structure. The MR fluid brake has two shearing disks with an electromagnetic coil located between them. Such a structure enables the brake to have a small radial dimension and a large torque transmission capacity. In the design process, a Bingham viscoplastic model is used to predict the transmitted torque. Electromagnetic finite element analysis (FEA) is performed to assist the magnetic circuit design and the optimization of the structural parameters. The novel brake design is prototyped and studied. Experimental results show that a compact MR fluid brake with high transmitted torque is successfully achieved.
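For reference, the torque transmitted across one annular shear surface of a disc-type MR brake under the Bingham model is commonly written as the sum of a field-dependent yield term and a viscous term. The expression below is this standard textbook form, not necessarily the exact parametrization used by the authors.

```latex
T \;=\; \frac{2\pi\,\tau_y(B)}{3}\left(r_o^{3}-r_i^{3}\right)
   \;+\; \frac{\pi\,\eta\,\omega}{2h}\left(r_o^{4}-r_i^{4}\right)
```

Here τ_y(B) is the field-dependent yield stress of the MR fluid, η its plastic viscosity, ω the relative angular speed of the disk, h the fluid gap, and r_i, r_o the inner and outer radii of the shear surface; for several shear surfaces the result is multiplied by their number.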
We extend the work of A. Ciaffaglione and P. di Gianantonio on the mechanical verification of algorithms for exact computation on real numbers, using infinite streams of digits implemented as a co-inductive type. Four aspects are studied. The first concerns the proof that digit streams correspond to the axiomatised real numbers when these are already present in the proof system. The second revisits the definition of an addition function, looking at techniques to let the proof search engine perform the effective construction of an algorithm that is correct by construction. The third concerns the definition of a function to compute affine formulas with positive rational coefficients. This is an example where we need to combine co-recursion and recursion. Finally, the fourth aspect concerns the definition of a function to compute series, with an application to the series used to compute Euler's number e. All these experiments should be reproducible in any proof system that supports co-inductive types, co-recursion and general forms of terminating recursion; we used the Coq system (Dowek et al. 1993; Bertot and Castéran 2004; Giménez 1994).
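As a rough, simplified illustration of computing with infinite streams, the Python generators below represent reals as streams of nested rational intervals rather than the signed-digit co-inductive streams of the Coq development; the names and representation are assumptions for demonstration, not the authors' code.

```python
# Exact reals as lazy streams of nested rational intervals (simplified sketch).
from fractions import Fraction
from itertools import islice

def third():
    """1/3 as a stream of shrinking binary intervals obtained by bisection."""
    lo, hi = Fraction(0), Fraction(1)
    while True:
        mid = (lo + hi) / 2
        if Fraction(1, 3) < mid:
            hi = mid
        else:
            lo = mid
        yield (lo, hi)

def add(xs, ys):
    """Addition acts interval-wise; the output intervals still nest and shrink."""
    for (a, b), (c, d) in zip(xs, ys):
        yield (a + c, b + d)

# Usage: take the 20th refinement of 1/3 + 1/3 and check it encloses 2/3.
lo, hi = next(islice(add(third(), third()), 19, 20))
assert lo <= Fraction(2, 3) <= hi
```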
Large scale real number computation is an essential ingredient in several modern mathematical proofs. Because such lengthy computations cannot be verified by hand, some mathematicians want to use software proof assistants to verify the correctness of these proofs. This paper develops a new implementation of the constructive real numbers and elementary functions for such proofs by using the monad properties of the completion operation on metric spaces. Bishop and Bridges's notion (Bishop and Bridges 1985) of regular sequences is generalised to what I call regular functions, which form the completion of any metric space. Using the monad operations, continuous functions on length spaces (which are a common subclass of metric spaces) are created by lifting continuous functions on the original space. A prototype Haskell implementation has been created. I believe that this approach yields a real number library that is reasonably efficient for computation, and still simple enough to verify its correctness easily.
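The idea of regular functions can be caricatured in a few lines: represent a real number by a function that, given a rational tolerance eps, returns a rational approximation within eps, and lift operations by splitting the tolerance between their arguments. The Python sketch below is illustrative only; the paper's prototype is in Haskell and works over arbitrary metric spaces, and the names here are assumptions.

```python
# A real number as an approximation function: eps -> rational within eps of it.
from fractions import Fraction

def from_rational(q):
    q = Fraction(q)
    return lambda eps: q                 # exact at every tolerance

def sqrt2(eps):
    """A rational within eps of sqrt(2), found by interval bisection."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid > 2:
            hi = mid
        else:
            lo = mid
    return lo

def add(x, y):
    """Lift addition: an eps-approximation of x+y needs eps/2 of each argument."""
    return lambda eps: x(eps / 2) + y(eps / 2)

# Usage: approximate sqrt(2) + 1 to within 1/1000.
approx = add(sqrt2, from_rational(1))(Fraction(1, 1000))
```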
The Coq system is a Curry–Howard based proof assistant. Therefore, it contains a full functional, strongly typed programming language, which can be used to enhance the system with powerful automation tools through the implementation of reflexive tactics. We present the implementation of a cylindrical algebraic decomposition algorithm within the Coq system, whose certification leads to a proof producing decision procedure for the first-order theory of real numbers.
The title Constructive analysis, types and exact real numbers covers the wide field of research dealing with ‘precise’ computations on continuous structures. The adjective ‘precise’ is used here in an informal way, referring to computations where the rounding off of the output and the approximative nature of the input are explicitly taken into account in some way.
This paper is an introduction to the RealLib package for exact real number computations. The library provides certified accuracy, but tries to achieve this at a performance close to that of hardware floating point for problems that do not require higher precision. The paper gives the motivation for and features of the design of the library, and compares it with other packages for exact real arithmetic.
In this paper we will discuss various aspects of computable/constructive analysis, namely semantics, proofs and computations. We will present some of the problems and solutions of exact real arithmetic, ranging from concrete implementations, representations and algorithms to various models of real computation. We then put these models in a uniform framework using realisability, which opens the door to the use of type-theoretic and coalgebraic constructions both in computing and in reasoning about these computations. We will indicate that it is often natural to use constructive logic to reason about these computations.
We prove two results for the sequential topology on countable products of sequential topological spaces. First we show that a countable product of topological quotient maps yields a quotient map between the product spaces. Then we show that the reflection from the category of sequential spaces to its subcategory of monotone ω-convergence spaces preserves countable products. These results are motivated by applications to the modelling of computation on non-discrete spaces.
Why have the integration and interoperability of design and manufacturing knowledge proven so difficult? There is a common kind of puzzle that appears in your Sunday newspaper or The New Yorker magazine.
Triggered by expert systems technology, artificial intelligence (AI) was regarded as a silver bullet in the early 1980s. AI seemed capable of solving almost any problem that involved intellectual activity. For instance, MYCIN (Shortliffe, 1976), developed at Stanford, gave the strong impression that medical doctors would soon be supported by clever consultation systems, resulting in more accurate diagnoses. Inspired by these successes, engineering fields also developed a variety of experimental systems for fault diagnosis, planning, selection, and design, which demonstrated promising possibilities for their application.
Neural network (NN)-based constitutive models have been used increasingly to capture soil constitutive response. When combined with the self-learning simulation (SelfSim) inverse analysis framework, NN models can be used to extract soil behavior from field measurements of boundary deformations and loads. However, the data sets used to train and repeatedly retrain the NN models are large, and training times, especially when used in SelfSim, are long. A diverse set of stress–strain data is extracted from a simulated braced excavation problem to train an NN-based constitutive model. Several methods for reducing the data set size are proposed and evaluated. Each of these methods selectively removes training data so that the smallest amount of data is used to train the NN. The Gaussian point method removes data based on its position in each finite element in the model. The lattice method removes data so that all remaining points are evenly spaced in stress space. Finally, the loading path method compares the stress–strain history of each Gaussian point and removes points with similar loading histories. Each of these methods shows that a large amount of the training data (up to 94%) can be removed without adversely affecting the performance of the NN model, with the loading path method showing the best and most consistent performance. Model training times are reduced by a factor of 20. The performance of the loading path method is also demonstrated using stress–strain data extracted from a simulated laboratory triaxial compression test with frictional ends.
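As a hedged sketch of the grid-based idea behind the lattice method (keep at most one training point per cell so the retained points are roughly evenly spaced in stress space), the following Python fragment uses assumed cell sizes and stress components and is not the authors' implementation.

```python
# Grid-based ("lattice") reduction of stress-strain training data: one point per cell.
import numpy as np

def lattice_reduce(stresses, strains, cell_size):
    """Return a subset with at most one (stress, strain) pair per grid cell."""
    kept_idx, seen_cells = [], set()
    for i, s in enumerate(stresses):
        cell = tuple(np.floor(s / cell_size).astype(int))
        if cell not in seen_cells:           # keep the first point seen in each cell
            seen_cells.add(cell)
            kept_idx.append(i)
    return stresses[kept_idx], strains[kept_idx]

# Usage: reduce 10,000 synthetic 3-component stress points with an assumed 25 kPa cell size.
rng = np.random.default_rng(0)
S = rng.uniform(0, 200, size=(10000, 3))     # placeholder stress components
E = rng.uniform(0, 0.05, size=(10000, 3))    # placeholder strain components
S_red, E_red = lattice_reduce(S, E, cell_size=25.0)
```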