
Pronunciation assessment

Published online by Cambridge University Press:  31 May 2017

Talia Isaacs
Affiliation: UCL Institute of Education, University College London, UK. talia.isaacs@ucl.ac.uk

Luke Harding
Affiliation: Lancaster University, Lancaster, UK. l.harding@lancaster.ac.uk


Type: Research Timeline

Copyright © Cambridge University Press 2017

Introduction

After an extended period of being on the periphery, numerous advancements in the field of second language (L2) pronunciation over the past decade have led to increased activity and visibility for this subfield within applied linguistics research. As Derwing (2010) underscored in her 2009 plenary at the first annual Pronunciation in Second Language Learning and Teaching (PSLLT) conference, a record number of graduate students researching L2 pronunciation and subsequently launching into academic positions at international universities assures L2 pronunciation a bright future in research and teacher training. Other indicators of momentum include the focus of a Language Teaching timeline on the topic of pronunciation (Munro & Derwing 2011), the appearance of multiple encyclopedia volumes or handbooks of pronunciation (e.g. Levis & Munro 2013; Reed & Levis 2015), and the establishment of the specialized Journal of Second Language Pronunciation in 2015, which constitutes a milestone in the professionalization of the field and ‘an essential step toward a disciplinary identity’ (Levis 2015: 1).

These positive developments notwithstanding, the vast majority of renewed applied pronunciation research activity has been undertaken by researchers in the fields of second language acquisition (SLA), language pedagogy, sociolinguistics, and psycholinguistics. The language assessment community has been slower in its uptake of interest in pronunciation, with few advocates drawing attention to its exclusion from the collective research agenda or underscoring its marginalization as an assessment criterion in L2 speaking tests until recently (e.g. Harding 2013; Purpura 2016). Pronunciation remains under-conceptualized in models of communicative competence/communicative language ability (Isaacs 2014) and typically receives minimal coverage in standard texts, such as Luoma's (2004) Assessing speaking from the Cambridge Language Assessment series. Although there is a dedicated book on assessing grammar and vocabulary in that series, there is none on assessing pronunciation or pragmatics. The treatment of pronunciation in Fulcher's Language Teaching timeline on assessing L2 speaking is indicative, in that it is singled out as the only area relevant to the L2 speaking construct that he was ‘not able to cover’ (2015: 201).

However, there are signs suggesting that pronunciation is also beginning to emerge as an important research area in language assessment. For example, whereas only two pronunciation-focused articles were published in the first 25 years of publication of the longest-standing language assessment journal, Language Testing (1984–2009), at least one such article per year has appeared in the years since (2010–). Assessment issues have recently been featured in major events on pronunciation teaching and learning (e.g. 2012 PSLLT invited roundtable on pronunciation assessment), while pronunciation has been featured in assessment-oriented discussions (e.g. 2013 Cambridge Centenary Speaking Symposium, which will feed into a special issue of Language Assessment Quarterly; Lim & Galaczi forthcoming). A general shift in attention in language assessment research toward pronunciation and fluency has followed the introduction of fully automated standardized L2 speaking tests. Finally, the growing use of English as a lingua franca (ELF) in diverse international contexts brought about by globalization and technological advancements has catapulted the issue of defining an appropriate pronunciation standard to the frontline of assessment concerns (e.g. Davies 2013; Jenkins 2006), with discussions extending to pronunciation norms in lingua franca contexts for languages other than English (Kennedy, Blanchet & Guénette 2017). New edited volumes (Isaacs & Trofimovich 2017; Kang & Ginther in press) are taking stock of these developments, fusing perspectives from research communities where there has, hitherto, been little communication.

This resurgence can be seen as part of a cycle, as there have been times in the past where pronunciation was at the forefront of language teaching, learning, and assessment (Isaacs 2014). The goal of this timeline is, therefore, to chart a clear historical trajectory of pronunciation assessment. In this, we will underscore how conceptualizations and practical implementations have evolved over time, with influences from teaching methodologies, theoretical frameworks, and seminal research that evidence (or in the case of newer pieces, have potential for) ‘historical reverberation’. Throughout, we chart how new lines of inquiry may be instigating or reinforcing change in assessment practice, establishing links where possible between work in different eras.

The starting point for this endeavour requires defining the terms ‘pronunciation’ and ‘assessment’. In the context of this review, ‘pronunciation’ is inclusive of both segmental (individual sounds) and suprasegmental (prosodic) features, although the assessment instruments cited (e.g. rating scales) have their own operational definitions that may diverge from this. Following Bachman (2004), the term ‘assessment’ refers to any systematic information gathering process used to foster an understanding of the phenomenon of interest (e.g. learners’ ability or processes). Conversely, a ‘test’ denotes a particular type of assessment in which a performance is elicited and an inference/decision is made about that performance, usually on the basis of a test score. All tests are assessments, but not all assessments are tests – although tests are the most common type of formal assessment. Because tests tend to be higher-stakes and more ubiquitous than other assessment types, they are well represented in the timeline, which includes both direct citations of assessment instruments, and the research and validation work which underpins their development and use. No timeline can be exhaustive, and English is overrepresented as the target language in the included entries.

Much of the focus of the timeline is on defining a suitable standard for assessing pronunciation (e.g. native-like accuracy vs intelligible/comprehensible speech), arriving at an adequate operational definition of pronunciation, or considering pronunciation in relation to some conception of aural-oral ability or communicative competence/communicative language ability. Although from a research perspective, the terms ‘intelligibility’ and ‘comprehensibility’ are frequently distinguished in how they are operationalized (e.g. using orthographic descriptions vs rating scales in Derwing & Munro's 2015 conception, although Smith & Nelson 1985 offer a different interpretation), these terms have not been used consistently in L2 speaking scales. The term used in the timeline is simply the one used by the author of the cited publication or assessment instrument.

Another prominent line of inquiry relates to reliability: how might pronunciation be objectively assessed? There is potential for individual differences in the characteristics of those scoring pronunciation assessments to unduly influence or bias the assessment, which raises issues of test fairness. Human raters can now be supplanted through the use of modern technology, which addresses the issue of human behavioural variability. However, machine scoring of speech is not without limitations, with automated scoring systems, as yet, only able to robustly approximate human judgments on highly controlled L2 speaking tasks that yield predictable learner output (e.g. sentence read-aloud, construction, or repetition tasks). This has raised concerns within the assessment community about the narrowing of the L2 speaking construct using automated scoring (e.g. interactional patterns not captured; tasks relatively inauthentic; Chun 2006). Although improvements in technological capabilities offer much promise into the future, it is humans (not computers) who are relevant in the context of real-world communicative transactions. Relative to this standard, to which machine scoring will continue to be compared, there will always be limitations to what machines are able to measure and simulate (Isaacs 2016).

To capture the scope of topics and sources of influence, we organized papers into one or more of a range of themes. The themes were initially devised to cover four key areas: operational assessment systems, practitioner-oriented guides, theoretical frameworks, and research studies/syntheses. However, given that peer-reviewed journal articles and other research publications constituted over two-thirds of the entries, the fourth area – research studies/syntheses – was split into three further categories: research investigating learner performance or development; research examining the role of non-linguistic factors in pronunciation assessment; and research which takes a broader view of assessment in relation to SLA or language pedagogy. The resulting themes are:

A: A language test or scoring system, including rating scales and automated assessments

B: A teaching methodology or assessment-oriented guide for language researchers and/or practitioners

C: A theoretical framework of language ability, knowledge, and/or processing

D: Research on defining or validating speech-related constructs, either as operationalized in an assessment instrument, or through investigations of human- or machine-derived linguistic measures in relation to learner performance or development

E: Research on the effects of non-linguistic variables (e.g. attitudes, accent familiarity, age) on speakers’ or listeners’ test/task performance or on listeners’ (raters’/examiners’) judgments of speech

F: Lab or classroom-based L2 research incorporating a broader notion of assessment, including studies examining the effectiveness of pedagogical interventions

Talia Isaacs is a Senior Lecturer in Applied Linguistics and TESOL at the UCL Centre for Applied Linguistics, UCL Institute of Education, University College London. Her research examines sources of variability in listeners' judgments of speech, including mapping the factors promoting/impeding efficient oral communication in rating scale descriptors. She has taught a range of applied linguistics courses, including in second language acquisition, language assessment, pedagogy and curriculum, oral communication, and research methodology. She currently serves on the executive board of the International Language Testing Association (Member-at-Large) and on the editorial boards of the Journal of Second Language Pronunciation, Language Assessment Quarterly, and Language Testing.

Luke Harding is a Senior Lecturer in the Department of Linguistics and English Language at Lancaster University. His research is mainly in the area of language testing, specifically listening assessment, pronunciation and intelligibility, and the challenges of World Englishes and ELF for language assessment. He regularly teaches on Lancaster's MA in Language Testing on courses including Issues in Language Testing, and Statistical Analyses for Language Testing. He is the test reviews editor for the journal Language Testing and is on the editorial boards of Language Assessment Quarterly and the Journal of Second Language Pronunciation.

Footnotes

Note: Authors’ names are shown in small capitals when the study referred to appears in this timeline.

1 McNamara, T. & C. Roever (2006). Language testing: The social dimension. Malden, MA: Blackwell.

2 Weir, C. J., I. Vidaković & E. D. Galaczi (2013). Measured constructs: A history of Cambridge English language examinations 1913–2012. Cambridge: Cambridge University Press.

3 Chalhoub‐Deville, M. & G. Fulcher (2003). The oral proficiency interview: A research agenda. Foreign Language Annals 36.4, 498–506.

4 Kang, O. & D. L. Rubin (2009). Reverse linguistic stereotyping: Measuring the effect of listener expectations on speech evaluation. Journal of Language and Social Psychology 28.4, 441–456.

5 Bernstein, J., A. Van Moere & J. Cheng (2010). Validating automated speaking tests. Language Testing 27.3, 355–377.

6 Winke, P., S. Gass & C. Myford (2013). Raters’ L2 background as a potential source of bias in rating oral performance. Language Testing 30.2, 231–252.

7 Ockey, G. & R. French (2016). From one to multiple accents on a test of L2 listening comprehension. Applied Linguistics 37.5, 693–715.

8 Kang, O. & L. Pickering (2014). Using acoustic and temporal analysis for assessing speaking. In A. J. Kunnan (ed.), The companion to language assessment (vol. 2). Hoboken, NJ: Wiley-Blackwell, 1047–1062.

9 Crowther, D., P. Trofimovich, T. Isaacs & K. Saito (2015). Does speaking task affect second language comprehensibility? The Modern Language Journal 99.1, 80–95.

10 Saito, K. (2012). Effects of instruction on L2 pronunciation development: A synthesis of 15 quasi-experimental intervention studies. TESOL Quarterly 46.4, 842–854.

11 Yates, L., E. Zielinski & E. Pryor (2011). The assessment of pronunciation and the new IELTS Pronunciation Scale. In J. Osborne (ed.), IELTS Research Reports 12. Melbourne: IDP IELTS Australia, 23–68.

References

Bachman, L. F. (2004). Statistical analyses for language assessment. Cambridge: Cambridge University Press.
Chun, C. W. (2006). An analysis of a language test for employment: The authenticity of the PhonePass test. Language Assessment Quarterly 3.3, 295–306.
Davies, A. (2013). Native speakers and native users: Loss and gain. Cambridge: Cambridge University Press.
Derwing, T. M. (2010). Utopian goals for pronunciation teaching. In Levis, J. & LeVelle, K. (eds.), Proceedings of the 1st Pronunciation in Second Language Learning and Teaching Conference. Ames, IA: Iowa State University, 24–37.
Derwing, T. M. & Munro, M. J. (2015). Pronunciation fundamentals: Evidence-based perspectives for L2 teaching and research. Amsterdam: John Benjamins.
Fulcher, G. (2015). Assessing second language speaking. Language Teaching 48.2, 198–216.
Harding, L. (2013). Pronunciation assessment. In Chapelle, C. A. (ed.), The encyclopedia of applied linguistics. Hoboken, NJ: Wiley-Blackwell.
Isaacs, T. (2014). Assessing pronunciation. In Kunnan, A. J. (ed.), The companion to language assessment (vol. 1). Hoboken, NJ: Wiley-Blackwell, 140–155.
Isaacs, T. (2016). Assessing speaking. In Tsagari, D. & Banerjee, J. (eds.), Handbook of second language assessment. Berlin: De Gruyter Mouton, 131–146.
Jenkins, J. (2006). The spread of EIL: A testing time for testers. ELT Journal 60.1, 42–50.
Kang, O. & Ginther, A. (eds.) (in press). Assessment in second language pronunciation. New York: Routledge.
Kennedy, S., Blanchet, J. & Guénette, D. (2017). Teacher-raters’ assessments of French lingua franca pronunciation. In Isaacs, T. & Trofimovich, P. (eds.), Second language pronunciation assessment: Interdisciplinary perspectives. Bristol: Multilingual Matters, 212–236.
Levis, J. (2015). The Journal of Second Language Pronunciation: An essential step toward a disciplinary identity. Journal of Second Language Pronunciation 1.1, 1–10.
Levis, J. & Munro, M. J. (eds.) (2013). Phonetics and phonology [volume]. In Chapelle, C. A. (ed.), The encyclopedia of applied linguistics. Hoboken, NJ: Wiley-Blackwell.
Lim, G. S. & Galaczi, E. D. (eds.) (forthcoming). Special issue on conceptualizing and operationalizing second language speaking assessment: Updating the construct for a new century. Language Assessment Quarterly.
Luoma, S. (2004). Assessing speaking. Cambridge: Cambridge University Press.
Munro, M. J. & Derwing, T. M. (2011). The foundations of accent and intelligibility in pronunciation research. Language Teaching 44.3, 316–327.
Purpura, J. E. (2016). Second and foreign language assessment. The Modern Language Journal 100.S1, 190–208.
Reed, M. & Levis, J. (eds.) (2015). The handbook of English pronunciation. Malden, MA: Wiley-Blackwell.
Smith, L. E. & Nelson, C. L. (1985). International intelligibility of English: Directions and resources. World Englishes 4.3, 333–342.