In this paper, we focus on the base and tool calibration of a self-calibrated parallel robot. After the self-calibration of a parallel robot using the built-in sensors in its passive joints, the kinematic transformation from the robot base to the mobile platform frame can be computed with sufficient accuracy. Base and tool calibration, hence, identifies the kinematic errors in the fixed transformations from the world frame to the robot base frame and from the mobile platform frame to the tool (end-effector) frame, in order to improve the absolute positioning accuracy of the robot. Using mathematical tools from group theory and differential geometry, a simultaneous base and tool calibration model is formulated. Since the kinematic errors in a kinematic transformation can be represented by a twist, i.e. an element of se(3), the resulting calibration model is simple, explicit and geometrically meaningful. A least-squares algorithm is employed to iteratively identify the error parameters. The simulation example shows that all the preset kinematic errors can be fully recovered within three to four iterations.
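The iterative identification described above can be illustrated with a toy example. The sketch below is hypothetical and far simpler than the paper's se(3) model: it recovers a preset planar rigid-body error (one rotation, two translations) from noise-free point measurements by Gauss-Newton least squares, converging in a few iterations.

```python
import numpy as np

def transform(params, pts):
    """Apply a planar rigid transform (theta, tx, ty) to 2-D points."""
    theta, tx, ty = params
    c, s = np.cos(theta), np.sin(theta)
    return pts @ np.array([[c, -s], [s, c]]).T + np.array([tx, ty])

def identify(pts, meas, iters=6, eps=1e-6):
    """Iteratively identify the error parameters by least squares."""
    x = np.zeros(3)
    for _ in range(iters):
        r = (transform(x, pts) - meas).ravel()      # current residual
        J = np.empty((r.size, 3))                   # numerical Jacobian dr/dx
        for k in range(3):
            d = np.zeros(3)
            d[k] = eps
            J[:, k] = ((transform(x + d, pts) - meas).ravel() - r) / eps
        x = x - np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton step
    return x

true_err = np.array([0.02, 0.5, -0.3])              # preset "kinematic errors"
pts = np.random.default_rng(0).uniform(-1.0, 1.0, (10, 2))
est = identify(pts, transform(true_err, pts))
```

With noise-free measurements the preset errors are recovered essentially exactly; real calibration data would of course leave a residual.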
In multi-robot assembly of parts, the grasped parts must be located with sufficiently small position and orientation errors for mating, and hence assembly, to succeed. This paper describes a new approach for determining the absolute three-dimensional spatial location of parts grasped by robots during assembly. Through a combination of robot pose calibration and part-sensor calibration, the robot used to grasp the part is calibrated to accurately position and orient parts at a designated mating location. First, by employing a robot pose measurement system, the 6 DOF robot pose errors relative to a reference coordinate frame are compensated. Second, with the implementation of a part pose measurement, the 6 DOF part pose errors relative to the robot tool frame are estimated in real time. An experimental verification of the proposed methodology, using a single FANUC S-110 robot manipulating an automotive sheet metal part, is described.
In this paper, the problem of tip position tracking control of a flexible-link manipulator is considered. Two neural network schemes are presented. In the first scheme, the controller is composed of a stabilizing joint PD controller and a neural network tracking controller. The objective is to simultaneously achieve hub-position tracking and control of the elastic deflections at the tip. In the second scheme, tracking control of a point along the arm is considered to avoid difficulties associated with the output feedback control of a non-minimum phase flexible manipulator. A separate neural network is employed for determining an appropriate output to be used for feedback. The controller is also composed of a neural network tracking controller and a stabilizing joint PD controller. Experimental results on a single-link flexible manipulator show that the proposed networks result in significant improvements in the system response with an increase in controller dynamic range despite changes in the desired trajectory.
In this section we continue the analysis and comparison of the computational complexity of functions. Recall that in some cases, for example, the case of two-person zero-sum games analyzed in Chapter 3, Section 3.1, Leontief's criteria give unambiguous comparisons of solution functions. The comparison of diagonal games and 3 × 3 matrix games of Chapter 3, Section 3.3 cannot be made by a direct application of the Leontief criteria. In that chapter an added restriction, the symmetrical computation restriction, is imposed on the networks that represent the computation. That restriction can be interpreted as a simplicity requirement on the structure of the network. With this restriction we are able to apply Leontief's criteria to obtain unambiguous comparisons of the solution functions for the two classes of games.
However, when the number of variables is small and no additional restrictions are imposed, the methods of Chapter 3 can distinguish the computational complexities of only a small class of functions.
In this chapter we extend the applicability of Leontief's criteria by introducing another type of restriction on the networks that represent computations. As in the preceding chapters, we do this in the setting of examples. We seek to compare the complexities of two different solutions in a class of two-agent bargaining problems. Specifically, we compare the Nash solution and the Kalai–Smorodinsky solution. Direct application of Leontief's criteria to the payoff functions for the Nash and Kalai–Smorodinsky solutions is not decisive.
Authoring systems can be a great advantage for teachers who want tools created specifically for their learners. They allow the designer to conceive modules for a specific audience. The designer, often the teacher himself, can model his work on existing CD-ROMs marketed for the general public, inserting information that is more specific to his own learners. For L2 acquisition, this solution can be satisfactory if the teacher’s main preoccupation is to have learners work on the language at their own pace and in their own time. However, there are a few oversights in this type of design which need to be resolved: (1) teacher access to the finished product, (2) student access to the different parts of the finished product, and (3) pedagogical and didactic criteria. This paper, concerned with all three oversights, notably the last, concentrates on developing the characteristics of all four learning stages described by Narcy (1997), while illustrating the different theoretical and practical possibilities of incorporating these stages into modules created by teachers.
This paper considers the progressive integration of a web resource in a language degree option centred on language varieties and linguistic variation in French. It focuses on the features introduced by the web environment and how they can enhance the learning experience of the students. It addresses the issue of the language used for teaching and learning with diverse groups of students including a high proportion of non-native speakers of English, varying levels of linguistic competence and different stages of study. The paper also examines web-based facilities supporting interaction and collaboration among members of the learning group (some teacher moderated). The evaluation is based on the results of a survey conducted with the group of students who took part in the pilot implementation in 2000–01 and on module feedback. The last section of the paper points to ways of further integrating the technology into the delivery and assessment of the module, and of increasing student support and interaction.
The paper presents the results of a comparative investigation of course developers’ and teacher trainees’ views regarding the usefulness and effectiveness of a multimedia self-tuition course designed to introduce foreign language teacher trainees to tools and methods for organising computer-assisted language learning. The paper first provides a brief description of the home-study course itself. It describes the course’s main components, its content as well as the learning and evaluation tasks the course provides in support of the learning process. Next, the paper reports on the way in which the evaluation project investigating teacher trainees’ and course developers’ views regarding the effectiveness of the course was set up. The project’s design is presented, and the way in which various procedures of data collection (written evaluations and individual interviews) were triangulated is commented on. In the third section we present the investigation’s main findings. The section focuses on points of agreement and disagreement between developers’ and trainees’ views regarding the usefulness and effectiveness of the course. Finally, we describe the changes brought about by the evaluation project, and reflect on the necessity to take account of future users’ views and requirements in the design and development process if the training of foreign language teachers is to benefit from web-based delivery.
In this work we set out to investigate the feasibility of applying measures of lexical frequency to the assessment of the writing of learners of French. A system developed for analysing the lexical knowledge of learners, according to their productive use of high and low frequency words (Laufer and Nation 1995), was adapted for French and used to analyse learners’ texts from an Open University French course. Whilst we found that this analysis could not be said to reflect the state of the learners’ vocabulary knowledge in the same way that Laufer and Nation’s study did, elements of the system’s output did correlate significantly with scores awarded by human markers for vocabulary use in these texts. This suggests that the approach could be used for self-assessment. However, the feedback that can be given to learners on the basis of the current analysis is very limited. Nevertheless, the approach has the potential for refinement and when enhanced with information derived from successive cohorts of learners performing similar writing tasks, could be a first step in the development of a viable aid for learners evaluating their own writing.
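As a rough illustration of the kind of analysis involved (not the authors' actual system), a lexical frequency profile in the spirit of Laufer and Nation (1995) buckets a learner's tokens into frequency bands. The tiny French band lists below are invented for the example; a real system would load the full 1000- and 2000-word lists.

```python
from collections import Counter

# Invented, tiny frequency bands for illustration only.
BANDS = {
    "first_1000": {"le", "la", "et", "un", "une", "est", "dans", "avoir"},
    "second_1000": {"souvent", "presque", "chemin"},
}

def frequency_profile(tokens):
    """Share of tokens falling in each frequency band (plus 'offlist')."""
    counts = Counter()
    for tok in tokens:
        for band, words in BANDS.items():
            if tok in words:
                counts[band] += 1
                break
        else:
            counts["offlist"] += 1      # word in no band
    total = len(tokens)
    return {band: counts[band] / total for band in [*BANDS, "offlist"]}

profile = frequency_profile("le chemin est souvent dans la brume".split())
```

A high proportion of tokens beyond the first band is what such a measure treats as evidence of productive use of lower-frequency vocabulary.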
‘Design’ is a term that is familiar to many language teachers and CALL practitioners. It is used regularly in relation to curriculum, syllabus, course and task in the general literature and it occurs in all these areas and more in the CALL sphere, where instructional design, website design, interface design and screen design are just some of the additional points of focus. This paper aims to look at CALL design in more detail. It places a particular emphasis on describing the discourse, products and processes of design in CALL. It looks at what we have learnt about design and points to areas that remain problematical. It also makes connections with cognate fields whenever these links prove helpful. This study is the second in a series of three complementary papers which look at research, design and evaluation in CALL (see Levy, 2000). All use the same corpus of CALL work as a database, and the research design and methodology in each is the same. In this paper the description and discussion is based on 93 articles involving design, published in books and journals in 1999. The descriptive section is followed by analysis and interpretation, with special attention given to the relationship between theory and design, and the centrality of the task and the learner in the design process.
We address the problem of clustering words (or constructing a thesaurus) based on co-occurrence data, and conducting syntactic disambiguation by using the acquired word classes. We view the clustering problem as that of estimating a class-based probability distribution specifying the joint probabilities of word pairs. We propose an efficient algorithm based on the Minimum Description Length (MDL) principle for estimating such a probability model. Our clustering method is a natural extension of that proposed in Brown, Della Pietra, deSouza, Lai and Mercer (1992). We next propose a syntactic disambiguation method which combines the use of automatically constructed word classes and that of a hand-made thesaurus. The overall disambiguation accuracy achieved by our method is 88.2%, which compares favorably against the accuracies obtained by the state-of-the-art disambiguation methods.
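The factoring this model relies on can be sketched concretely: the joint probability of a word pair is computed through the words' classes, so pairs never observed together still receive probability mass. The classes and numbers below are invented for illustration, and the clustering step itself (estimating the classes with the MDL criterion) is omitted.

```python
# Invented class assignments and parameters, for illustration only.
word_class = {"eat": "V_eat", "drink": "V_eat",
              "bread": "N_food", "water": "N_food"}
p_class_pair = {("V_eat", "N_food"): 0.5}        # P(c1, c2)
p_word_given_class = {"eat": 0.6, "drink": 0.4,  # P(w | class of w)
                      "bread": 0.7, "water": 0.3}

def joint(w1, w2):
    """Class-based joint probability P(w1, w2) = P(c1, c2) P(w1|c1) P(w2|c2)."""
    c1, c2 = word_class[w1], word_class[w2]
    return (p_class_pair.get((c1, c2), 0.0)
            * p_word_given_class[w1] * p_word_given_class[w2])
```

Even if, say, ‘drink bread’ never co-occurred in training data, it is assigned probability 0.5 × 0.4 × 0.7 through the shared classes, which is what makes class-based models useful for disambiguating unseen pairs.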
The objective of this paper is to describe a task-based project in tandem via e-mail, and to discuss the effects of motivation on task performance. In this project, a group of Irish students and a group of Spanish students are asked to carry out a series of tasks in collaboration with their tandem partners via e-mail by means of a web page especially designed for the project. Half the message is meant to be written in the student’s native language and half in the target language, and students are also encouraged to correct one another. The goal behind our research is to discuss the effects of motivation on task performance. We argue that resource-directing factors (such as reasoning demands) and resource-depleting factors (such as prior knowledge), which belong to task complexity in Robinson’s model (Robinson, 2001), are closely connected to affective variables which, as is the case with motivation, belong to task difficulty. Motivational factors such as interest in the meanings to be exchanged, involvement in the decision-making process, students’ expertise in the topic, the media and materials used, and the diffusion of outcomes, among others, have strong effects on task performance, and should therefore be considered together with complexity variables.
The TIPSTER Text Summarization Evaluation (SUMMAC) has developed several new extrinsic and intrinsic methods for evaluating summaries. It has established definitively that automatic text summarization is very effective in relevance assessment tasks on news articles. Summaries as short as 17% of full text length sped up decision-making by almost a factor of 2 with no statistically significant degradation in accuracy. Analysis of feedback forms filled in after each decision indicated that the intelligibility of present-day machine-generated summaries is high. Systems that performed most accurately in the production of indicative and informative topic-related summaries used term frequency and co-occurrence statistics, and vocabulary overlap comparisons between text passages. However, in the absence of a topic, these statistical methods do not appear to provide any additional leverage: in the case of generic summaries, the systems were indistinguishable in accuracy. The paper discusses some of the tradeoffs and challenges faced by the evaluation, and also lists some of the lessons learned, impacts, and possible future directions. The evaluation methods used in the SUMMAC evaluation are of interest both to summarization evaluation and to the evaluation of other ‘output-related’ NLP technologies, where there may be many potentially acceptable outputs with no automatic way to compare them.
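A minimal sketch of the statistical approach the best-scoring systems relied on (term frequency only; co-occurrence and overlap statistics are omitted for brevity) is an extractive scorer that ranks sentences by the average document frequency of their words and keeps the top few. This is a generic illustration, not any particular SUMMAC system.

```python
import re
from collections import Counter

def summarize(text, n_sentences=1):
    """Keep the sentences whose words are most frequent in the document."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sent):
        toks = re.findall(r"[a-z']+", sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    keep = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in keep)  # preserve order

text = "Cats purr. Cats and cats and cats meow loudly. Dogs bark."
summary = summarize(text, n_sentences=1)
```

Choosing `n_sentences` as roughly 17% of the sentence count would approximate the compression rate reported above.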
Student interest in films as a medium for ESL education is high. Interest, however, is not enough to foster understanding in a traditional classroom environment. A more hands-on, interactive approach to studying a second language via film is needed. By using online interactive exercises to study the language and culture in film, students are able to gain a better understanding of the language used in the film. This paper outlines a course that was developed using online interactive exercises and film to study language and culture. The course incorporates several modern technologies to allow students to take an active role in their learning and to increase their skills in areas that the students perceive to be of value in the future, namely listening, reading, and presentation skills. Automated feedback functions let the students, as well as the instructors, constantly monitor their progress. These technologies allow a more efficient use of classroom time and permit the students to go into the content of the film – especially the cultural aspects – much more deeply. Through this course, students are able to boost their confidence and their motivation to continue the study of language and culture via films on their own.
With innovative ways available to assess language performance through the use of computer technology, practitioners have to rethink their preferred strategies of language testing. It is necessary to take into account both the new developments in language learning and teaching research and the latest features computers have to offer to help with language assessment. In addition to best practices developed over the years in the field, provision needs to be made for authentic assessments of intercultural communication abilities. After a review of the latest language-testing literature and a discussion of the current problems identified in it, this paper explores the latest developments in computer technology and proposes areas of language testing in the light of the new findings. A practical application follows: an adaptation, in a school board in Ontario, of the latest evaluation model. The model represents unit planning as an isosceles triangle with assessed assignments stacked in horizontal bands from the base to the vertex, i.e. the top. The suggestion is offered that this approach can be enriched by changing the triangle into a pyramid with a different model on each side. Access to the four sides by rotation of the pyramid allows a broader range of activities culminating in one final assessment task at the summit.