This paper is concerned with case-matching effects under clausal ellipsis. We begin by considering available crosslinguistic data that indicate that variation in case marking on a fragment is delimited by the argument structure of the lexical head that assigns case to the fragment’s correlate in the antecedent clause. We then offer experimental evidence for a case-matching preference in Korean when a fragment and its correlate may differ in case marking. This case-matching preference corresponds to a known case of mandatory case-matching in Hungarian, but their relationship is not predicted by any of the existing syntactic accounts of case-matching effects under clausal ellipsis. We propose a novel perspective on fragments that derives case-matching effects, including optional and mandatory case matching, from the predictions of cue-based retrieval. Two further acceptability judgment studies are offered in support of our proposal.
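The cue-based retrieval idea invoked here can be illustrated with a toy sketch: memory chunks are feature bundles, retrieval cues (including a case cue contributed by the fragment) are matched against them, and retrieval probability follows from relative activation. All feature names, weights, and the softmax noise temperature below are invented for illustration; they are not values from this paper.

```python
import math

# Toy cue-based retrieval: each memory chunk is a dict of feature values;
# retrieval cues are feature:value pairs. A chunk's activation grows with
# the number of matching cues and shrinks with mismatches; retrieval
# probability is a softmax over activations. All weights are illustrative.

def activation(chunk, cues, match_weight=1.0, mismatch_penalty=1.0):
    score = 0.0
    for feature, value in cues.items():
        if chunk.get(feature) == value:
            score += match_weight
        else:
            score -= mismatch_penalty
    return score

def retrieval_probs(chunks, cues, noise_temp=0.5):
    acts = [activation(c, cues) for c in chunks]
    exps = [math.exp(a / noise_temp) for a in acts]
    z = sum(exps)
    return [e / z for e in exps]

# A case-matching fragment supplies a case cue that singles out its
# correlate in the antecedent clause over a case-mismatching distractor:
chunks = [
    {"role": "correlate", "case": "dative"},
    {"role": "distractor", "case": "nominative"},
]
probs = retrieval_probs(chunks, {"case": "dative"})
```

On this sketch, a case-matching fragment makes retrieval of the correlate more probable than retrieval of a mismatching competitor, which is the intuition behind deriving case-matching preferences from cue-based retrieval.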
The core model of sentence processing used in the book is introduced, and its empirical coverage relative to the existing reading time data is considered. Here, we also discuss the Approximate Bayesian Computation method for parameter estimation, which is used in model evaluation.
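Approximate Bayesian Computation can be sketched in its simplest (rejection) form: draw parameters from a prior, simulate data, and keep draws whose simulated summary statistic falls close to the observed one. The simulator, summary statistic, prior, and tolerance below are all placeholder choices, not the book's actual setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_reading_times(latency_factor, n=200, rng=rng):
    # Toy simulator: reading times are lognormal noise scaled by a single
    # latency parameter (a stand-in for a full retrieval-model simulation).
    return latency_factor * rng.lognormal(mean=5.5, sigma=0.3, size=n)

def abc_rejection(observed, prior_sampler, n_draws=5000, quantile=0.02):
    """Rejection ABC: keep parameter draws whose simulated summary
    statistic (here, the mean) lies closest to the observed one."""
    obs_stat = np.mean(observed)
    draws, dists = [], []
    for _ in range(n_draws):
        theta = prior_sampler()
        sim = simulate_reading_times(theta)
        draws.append(theta)
        dists.append(abs(np.mean(sim) - obs_stat))
    draws, dists = np.array(draws), np.array(dists)
    eps = np.quantile(dists, quantile)  # adaptive acceptance tolerance
    return draws[dists <= eps]          # approximate posterior sample

# "Observed" data generated with a known latency factor of 1.5:
observed = simulate_reading_times(1.5, n=500)
posterior = abc_rejection(observed, lambda: rng.uniform(0.5, 3.0))
```

The accepted draws concentrate near the generating value, which is the sense in which ABC recovers parameters without an explicit likelihood.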
This chapter discusses two extensions of the model presented in the previous chapter: the effect of prominence (through discourse prominence, etc.) and the effect of so-called multi-associative cues. The empirical coverage of the extended model is evaluated against benchmark data.
This chapter discusses three central phenomena of interest in sentence processing: reanalysis, underspecification, and capacity-based differences in sentence comprehension. The model's quantitative predictions are evaluated against two benchmark datasets that investigate reanalysis and underspecification.
This chapter presents another extension of the core model: an eye-movement control system is integrated with the parsing architecture, and this extended model is investigated using benchmark eyetracking data (the Potsdam Sentence Corpus).
This chapter investigates whether sentence comprehension difficulty in aphasia can be explained in terms of retrieval processes. By modelling individuals with aphasia (IWAs) separately, we show that different IWAs show impairments along different dimensions: slowed processing, intermittent deficiency, and resource reduction. The parameters in the cue-based retrieval model have a theoretical interpretation that allows these three theories to be implemented within the architecture. In a further investigation, we compare the relative predictive accuracy of the cue-based model with that of the direct-access model. The benchmark data here are from Caplan et al. (2015); k-fold cross-validation is used as in the preceding chapter. The cue-based retrieval model is shown to have a better predictive performance.
This chapter presents a model comparison between two competing models of retrieval processes: the cue-based retrieval model presented in this book and the direct-access model. The two models are implemented in a Bayesian framework, and then model comparison is carried out using k-fold cross-validation. The benchmark data used for evaluation are from a previously published large-sample, self-paced reading study (181 participants). The results show that the direct-access model has a better performance on this benchmark data than the cue-based retrieval model.
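The k-fold comparison described here can be sketched generically: split the data into k folds, fit each model on k−1 folds, score the held-out fold's log predictive density, and compare the summed scores. The two "models" below are deliberately simple Gaussian stand-ins, not the cue-based or direct-access models themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

def kfold_elpd(data, fit, logpdf, k=5):
    """Sum of held-out log predictive densities over k folds.
    `fit` maps training data to parameters; `logpdf` scores held-out points."""
    folds = np.array_split(rng.permutation(data), k)
    elpd = 0.0
    for i in range(k):
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        params = fit(train)
        elpd += logpdf(folds[i], params).sum()
    return elpd

# Toy "reading time" data and two competing stand-in models:
data = rng.normal(loc=350.0, scale=50.0, size=200)

fit_a = lambda train: (train.mean(), train.std())   # estimates the mean
fit_b = lambda train: (400.0, train.std())          # misspecified fixed mean

def logpdf(x, params):
    mu, sigma = params
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu) ** 2 / (2 * sigma**2)

elpd_a = kfold_elpd(data, fit_a, logpdf)
elpd_b = kfold_elpd(data, fit_b, logpdf)
```

The model with the higher summed held-out log density is preferred; this is the criterion by which the chapter's two retrieval models are compared.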