
Feature reassembly and meso-parameters versus interpretability: From inconsistent null subjects in L1 Hebrew to no null subjects in L2 English

Published online by Cambridge University Press:  31 January 2025

Noa Brandel*
Affiliation:
University of Cambridge

Abstract

This paper reports the results of a cross-sectional study investigating the acquisition of the syntactic properties associated with the null subject (meso-)parameter in English as a second language (L2) among Hebrew-speaking youngsters (18-year-olds). The two languages differ concerning these properties, with Hebrew allowing null subjects and related properties (although inconsistently) and English disallowing these properties altogether. One hundred four intermediate learners and 97 English-speaking controls provided grammaticality judgments and corrections concerning constructions involving expletive and referential null subjects, post-verbal subjects, and complementizer-trace sequences. The results reveal limited evidence for transfer from the learners’ mother tongue (first language [L1]) and indicate that learners have met the native standard concerning null and post-verbal subjects. These findings support both the meso-parametric view of cross-linguistic variation and feature reassembly on functional heads in L2 acquisition, while partially rejecting the Interpretability Hypothesis. Learners nevertheless deviate from the native standard concerning complementizer-trace sequences. This finding is unaccounted for by the meso-parametric approach, feature reassembly, or interpretability, but can instead be attributed to L1 transfer. Controls also demonstrate variability concerning complementizer-trace sequences, suggesting that the performance of all participants regarding this configuration is affected by processing difficulties, lower frequency in the input, and methodological issues with the items and/or the task.

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Figure 1. Language types according to subject omission.


Table 1. Features on tense (T) and pronouns in different language types, based on Roberts (2019), combined with the current proposal concerning inconsistent NS languages and expletive pronouns1


Table 2. Learners’ scores (rate of correct rejections of ungrammatical constructions in the L2) in the GJ task, compared to native-speaker controls: Mean accuracy levels (%) and standard deviations (in parentheses) in the item types inspecting the various NSP properties, split according to (un)grammaticality in the learners’ L1


Figure 2. Learners’ scores (rate of correct rejections of ungrammatical constructions in the L2) in the GJ task, in the item types inspecting the various NSP properties.


Figure 3. Native-speaker controls’ scores (rate of correct rejections of ungrammatical constructions) in the GJ task, in the item types inspecting the various NSP properties.


Table 3. Percentages of learners and controls making mistakes in the GJ task, in the constructions that elicited more mistakes from learners


Figure 4. Percentages of learners and controls making mistakes in the GJ task, in the constructions that elicited more mistakes from learners.


Table 4. Learners’ scores (rate of correct rejections of ungrammatical constructions) in the GJ task, compared to native-speaker controls: Mean accuracy levels (%) and standard deviations (in parentheses) in the different NSP properties, undivided by item type


Figure 5. Learners’ scores (rate of correct rejections of ungrammatical constructions) in the GJ task, compared to native-speaker controls, in the different NSP properties, undivided by item type.


Table 5. Learners’ scores (rate of correct rejections of ungrammatical constructions) in the GJ task, compared to native-speaker controls: Mean accuracy levels (%) and standard deviations (in parentheses) in the different NSP property pairs involving interpretable and uninterpretable members