
Development and validation of an LMOOC Quality Evaluation Scale: A large-scale mixed-method approach

Published online by Cambridge University Press:  29 May 2025

Rong Luo
Affiliation:
Zhejiang University, China (luorong@zju.edu.cn)
Rou Huang
Affiliation:
Hangzhou Normal University, China (huangrouhznu@163.com)
Gaojun Shi
Affiliation:
Zhejiang University, China (gjshi@zju.edu.cn)

Abstract

Limited research has examined the quality of language MOOCs and no existing instrument has been developed to gauge learners’ evaluation of LMOOC quality. This study develops an LMOOC Quality Evaluation Scale (LQES) and validates it in the Chinese context, which has the largest number of LMOOC learners in the world. The data were collected from 2,315 LMOOC learners in China using a mixed-method approach. Development and validation of the scale involved (1) generation of an initial item pool based on a semi-structured interview and literature review, (2) refinement of scale items through consultation with LMOOC experts and a focus group interview, (3) exploration of the factor structure of the scale using exploratory factor analysis, and (4) validation and confirmation of the final scale using confirmatory factor analysis. A four-factor model, comprising Instructional Design, L2 Teachers’ Competence, Teaching Implementation, and Technical Support, emerged and was validated. The 26-item LQES provides an original and comprehensive framework for understanding the complexities of LMOOC quality. This study highlights the critical factors underpinning the evaluation of LMOOC quality and paves the way for further refinement of the instrument in future research.

Information

Type
Research Article
Creative Commons
Creative Commons licence: CC BY-NC
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial licence (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of EUROCALL, the European Association for Computer-Assisted Language Learning

1. Introduction

Originally defined as “dedicated Web-based online courses for second languages with unrestricted access and potentially unlimited participation” (Bárcena & Martín-Monje, 2014: 1), language MOOCs (or LMOOCs) have emerged as an enticing alternative to traditional language education. Over the past decade, they have witnessed exponential growth, driven by advancements in educational technology and the increasing demand for accessible language learning opportunities (Sallam, Martín-Monje & Li, 2022; Zhang & Sun, 2023). Notably, during the COVID-19 pandemic, language learning ranked among the top 10 subjects for MOOC learners globally (Shah, 2020). Given their evolving nature and expanding applications, LMOOCs have been redefined as “dedicated Web-based online courses which deal with various aspects related to second languages, such as language learning, language teaching and learning methodology, linguistic expression of cultures or language teacher education. They have unrestricted access and potentially unlimited participation” (Martín-Monje & Borthwick, 2024: 176). This redefinition reflects the adaptability of LMOOCs to encompass diverse educational contexts, making them a critical component of modern language education. With the proliferation of LMOOCs in the past decade, much scholarly attention has been drawn to the potential and opportunities provided in this field. For example, early studies have discussed the prospect of integrating MOOCs into second language education (Perifanou & Economides, 2014b; Qian & Bax, 2017).
Subsequent studies have explored various aspects of LMOOC learning, such as learners’ attention, engagement and autonomy (Ding & Shen, 2022; Jiang & Peng, 2025; Zeng, Zhang, Gao, Xu & Zhang, 2022), learner interaction and communication (Appel & Pujolà, 2021; Martín-Monje, Castrillo & Mañana-Rodríguez, 2018), as well as learner satisfaction and achievement (Bartalesi-Graf, Agonács, Matos & O’Steen, 2024; Wright & Furneaux, 2021). While extensive research has highlighted the potential of LMOOCs in language education, studies have identified significant challenges, such as inappropriate course design, inefficient learner support, and unfit assessment (Bárcena, Martín-Monje & Read, 2015; Hsu, 2023; Luo & Ye, 2021; Sokolik, 2014). Furthermore, a critical review of the design features of LMOOCs confirmed their underlying problems, including limited personalization, interactivity, and flexibility (Chong, Khan & Reinders, 2024). These quality concerns have drawn scholarly attention, leading to initial explorations and discussions. Prior studies have discussed issues of “what constitutes an effective LMOOC” or “how to design a framework for language learning in an LMOOC” (Perifanou, 2014a; Read & Bárcena, 2020; Sokolik, 2014). These quality-related discussions are generally grounded in personal experience or insights from researchers’ perspectives, leaving substantial room for evidence-based investigations into the quality of LMOOCs.

The quality of MOOCs, understood as a multidimensional concept reflecting teaching and learning processes from various perspectives, plays a crucial role in determining the success of MOOCs (Albelbisi, 2020; Stracke & Trisolini, 2021). Building on this notion, this study defines the quality of LMOOCs as the comprehensive quality of various course components, including instructional design, course content, L2 instruction, learning activities, assessment, and the technological environment. Since LMOOCs have gained significant momentum worldwide, there is an urgent need to establish a robust evaluation framework for assessing LMOOC quality. To date, limited empirical research has addressed the issue, and no existing instrument has been developed to gauge learners’ evaluation of LMOOC quality.

The present study introduces an LMOOC Quality Evaluation Scale (LQES) for assessing the quality of LMOOCs from learners’ perspectives. Adopting a mixed-method approach, the study develops theoretically and empirically grounded rubrics that encompass dimensions of LMOOC design, teaching, and implementation. The study then validates the scale in the Chinese context, which has the largest number of LMOOC learners in the world. To our knowledge, the present study is the first of its kind and has answered Zhang and Sun’s (2023) call for developing standards for quality assurance of LMOOCs based on empirical evidence. The LQES provides an original and comprehensive framework for understanding the complexities of LMOOC quality. The study is guided by the following research questions:

  • RQ1: What are the key dimensions underpinning learners’ evaluation of LMOOC quality?

  • RQ2: To what extent is the LQES developed in the current study reliable and valid?

2. Literature review

2.1 Quality standards for MOOCs

Regarding the success of MOOCs, it is widely acknowledged that high enrollment and completion rates alone are insufficient indicators (Khalil & Ebner, 2014). Understanding the success of a MOOC as process-defined rather than outcomes-defined, Stephen Downes offered four key success factors for a MOOC: autonomy, diversity, openness, and interactivity (Creelman, Ehlers & Ossiannilsson, 2014). Other scholars argue that the success of MOOCs should be viewed from perspectives such as learners’ background, engagement, and motivation (Rõõm, Luik & Lepp, 2023), as well as course quality, system quality, information quality, and service quality (Albelbisi, 2020). Among all variables, a consensus has emerged that comprehensive MOOC quality is a critical indicator of learner success and significantly influences learners’ satisfaction and continuance intention (Gu, Xu & Sun, 2021; Stracke & Trisolini, 2021).

Quality criteria for MOOCs differ from those for traditional courses, given unique features such as the asynchronous and self-paced learning mode of MOOCs (Creelman et al., 2014). While many MOOCs are developed by reputable institutions, their quality and rigor vary widely, often failing to meet the needs of a massive number of learners. Additionally, due to the large-scale nature of MOOCs, challenges such as inadequate learner support, unfit assessment, inappropriate course design, and poor pedagogical methods remain within the field (Conole, 2016; Gamage, Fernando & Perera, 2015; Margaryan, Bianco & Littlejohn, 2015). These issues have captured researchers’ attention and provoked widespread discussion on the quality assurance of MOOCs. There is therefore a broad consensus on the urgent need to establish benchmarks for evaluating the quality of MOOCs (Gamage et al., 2015; Hood & Littlejohn, 2016; Xiao, Qiu & Cheng, 2019).

Among existing research concerning MOOC quality, quality dimensions and frameworks for MOOCs have been proposed from different perspectives. The pedagogical dimension is considered the most important dimension influencing the quality of MOOCs (Stracke & Trisolini, 2021). First, instructional design is a critical indicator of MOOC quality in promoting learner engagement and learning effectiveness (Jung, Kim, Yoon, Park & Oakley, 2019). Margaryan et al. (2015), employing first principles of instruction, assessed the instructional design quality of 76 MOOCs and revealed the coexistence of well-packaged content and poor instructional design. Conole (2016) employed her 7Cs framework – Conceptualize, Capture, Communicate, Collaborate, Consider, Combine, and Consolidate – to refine the pedagogical design and implementation of MOOCs.

Second, the significant role of interaction and evaluation design in MOOC learning has been highlighted in recent studies. Four types of interaction (learner–facilitator, learner–resource, learner–learner, and group–group interaction) act as crucial factors impacting learners’ perceived quality of MOOCs (Stracke & Tan, 2018), and peer assessment design is a key predictor of learners’ satisfaction and completion in MOOCs (Gamage et al., 2021; Yousef, Chatti, Schroeder & Wosnitza, 2014). Besides the pedagogical dimension, the technical dimension is also acknowledged to be closely related to the quality assurance of MOOCs (Cross et al., 2019; Stracke & Trisolini, 2021). In this dimension, MOOC platforms and delivery systems play fundamental roles in ensuring online learning success (Fernández-Díaz, Rodríguez-Hoyos & Calvo Salvador, 2017). User interface and video features have proved to be indispensable factors for optimizing accessibility, navigation, and scaffolding (Deng & Gao, 2023; Maloshonok & Terentev, 2016). Combining the above two dimensions, Yousef et al.’s (2014) widely cited study categorized MOOC quality standards into two levels: the pedagogical level and the technological level.

Considering the multifaceted nature of MOOC quality, recent research has integrated multiple dimensions into frameworks for assessing MOOC quality. Among other major quality factors, course content serves as an indispensable element influencing MOOC quality (Ucha, 2023), encompassing teaching videos, supplementary resources, and related materials (Huang, Zhang & Liu, 2017; Ma, 2018). The teaching group and teachers’ competence are also closely linked to learners’ perceived effectiveness of MOOCs (Ferreira, Arias & Vidal, 2022; Huang et al., 2017). Recognizing the necessity of incorporating these relevant elements in the quality evaluation of MOOCs, extensive studies have established MOOC quality evaluation frameworks consisting of indicators such as course content, instructional design, teaching group, technology, teaching implementation, and learner support (Dyomin, Mozhaeva, Babanskaya & Zakharova, 2017; Poce, Amenduni, Re & De Medio, 2019; Stracke & Tan, 2018; Yang, Zhou, Zhou, Hao & Dong, 2020).

Although the aforementioned studies have significantly advanced the understanding of MOOC quality, they have certain limitations. First, from a research design perspective, quality evaluation standards are mostly proposed by MOOC designers and researchers. As the recipients and target audience of MOOCs, learners are an inseparable component of MOOC learning (Creelman, Ehlers & Ossiannilsson, 2014), and their perceptions of courses are indicative of the effectiveness and ultimate success of MOOCs. Second, regarding data analysis methods, most MOOC quality evaluation frameworks derive from literature reviews or theoretical reflections, relying more on qualitative than quantitative methods (Ma, 2018; Yang et al., 2020). The current situation therefore necessitates a more comprehensive evaluation method and calls for more empirical evidence on LMOOC quality from learners’ perspectives.

2.2 Quality evaluation of LMOOCs

Following the practice of xMOOCs, LMOOCs are mostly institutionalized and rigorously structured online language courses (Jitpaisarnwattana, Reinders & Darasawang, 2019). Since the proliferation of LMOOCs in recent years, much scholarly attention has been devoted to learner experience and behaviors (Appel & Pujolà, 2021; Bartalesi-Graf et al., 2024; Martín-Monje et al., 2018), learner motivation, autonomy and emotions (Beaven, Codreanu & Creuzé, 2014; Ding & Shen, 2022; Luo & Wang, 2023), as well as learning outcomes in LMOOC learning (Wright & Furneaux, 2021). LMOOC quality was not a primary focus in the initial stages of LMOOC research, and only a handful of studies have delved into this topic.

Perifanou and Economides (2014b) made a first step in evaluating the quality of LMOOC environments and proposed a six-dimension standard for constructing an adaptive and personalized LMOOC learning environment, including content, pedagogy, assessment, community, technical infrastructure, and financial issues. Focusing on the “human dimension,” Bárcena, Martín-Monje and Read (2015) discussed problems influencing LMOOC quality and highlighted the significance of online social interactions, feedback, and learner support in online language learning. Ding and Shen (2020) conducted a content analysis of learner reviews on 41 English LMOOCs, indicating that content design, video presentation, and MOOC instructors were of most importance to the learners. Based on grounded theory, Luo and Ye (2021) identified key quality criteria of LMOOCs from learners’ perspectives, consisting of instructor, teaching content, pedagogy, technology, and teaching management dimensions. Hsu (2023) used DeLone and McLean’s (1992) Information System Success Model to define the success of LMOOCs, revealing that system quality acts as a significant factor influencing learners’ intention and satisfaction toward LMOOCs. To conclude, existing research on LMOOC quality primarily consists of qualitative studies based on researchers’ reflections or learners’ interviews and comments. Few empirical studies have thoroughly investigated the quality of LMOOCs, and no instrument currently exists to assess learners’ evaluation of LMOOC quality.

With the specific purpose of assessing the quality of MOOCs for L2 learning, the current study introduces the LQES for measuring L2 learners’ evaluation of LMOOC quality. This study integrates perspectives from experts and learners to design and develop a questionnaire for assessing the quality of LMOOCs. This discipline-specific instrument better captures the features of the development, delivery, and implementation of LMOOCs. Combining quantitative and qualitative methods, the current study aims to explore the key dimensions influencing L2 learners’ assessment of LMOOC quality.

3. Developing and validating the LMOOC Quality Evaluation Scale (LQES)

3.1 Methods

The present study adopted a large-scale mixed-method approach. We collected data from multiple sources: a review of previous literature, interviews with a focus group and with LMOOC experts, and questionnaires on learners’ evaluation of LMOOC quality. Both quantitative and qualitative methods were employed in the scale’s development and validation.

3.2 Participants and settings

To investigate different types of language MOOCs, the present study selected 15 representative LMOOCs from three major MOOC platforms in China: icourse.org, XuetangX, and UMOOCs. All these courses have been open to the public for at least eight sessions, with enrollment numbers ranging from 50,000 to 200,000. Among the 15 LMOOCs, seven courses are recognized as national-level courses and eight as provincial-level courses. Our survey was conducted from March to September 2023. In collaboration with LMOOC providers, an online survey was distributed to learners who had completed their courses to understand their perceptions of LMOOC quality. A total of 2,549 questionnaire responses were collected, of which 2,327 were valid and complete. The respondents were from various places in mainland China, including Beijing, Zhejiang, Jiangsu, Jiangxi, Jilin, Gansu, Xinjiang, Guangxi, and Guangdong. With their consent, 2,315 questionnaires were retained for further analysis.

According to the new taxonomy of LMOOCs proposed by Díez-Arcón and Agonács (2024), the 15 LMOOCs in our study cover four modalities: general language learning LMOOCs (e.g. Practical College English Grammar & English Translation Techniques), LMOOCs for academic purposes (e.g. Academic English Listening and Speaking & Academic English Writing), LMOOCs for professional purposes (e.g. Maritime English and Conversations & Advanced Medical English Vocabulary), and cultural-oriented LMOOCs (e.g. Impressions of British & American Culture & An Overview of Chinese Culture). All the courses are delivered in English, and their teaching materials are also written in English. During the learning process, students were free to pace their LMOOC learning according to their own speed and habits. At the end of the semester, they were asked to complete an online survey reporting their evaluation of the quality of the LMOOC in which they had participated. Students’ permission was obtained before data collection. Table 1 presents the demographic information of our respondents.

Table 1. Demographic information of participants

3.3 Scale development procedures

The scale development and validation in this study were carried out in four distinct phases: (1) generation of an initial item pool, (2) refinement of the scale items, (3) exploration of the scale’s factor structure, and (4) validation and confirmation of the final scale. In the first two stages, involving the generation and refinement of scale items, 158 learners (66 male, 92 female) participated in a semi-structured interview, 10 learners (5 male, 5 female) took part in a focus group interview, and 270 learners (156 male, 114 female) participated in a pilot test. In the third stage, to identify the underlying factor structure of the scale, an exploratory factor analysis (EFA) was conducted with 1,295 students. In the fourth stage, construct validation was performed on all 2,315 students through confirmatory factor analysis (CFA). Figure 1 illustrates the four stages involved in the scale development process.

Figure 1. Scale development procedures.

3.3.1 Generation of the item pool

To generate items for the LQES, we employed a combination of inductive and deductive methods, as recommended by previous studies on scale development (Cheng & Clark, 2017; Morgado, Meireles, Neves, Amaral & Ferreira, 2017). The inductive approach to scale item generation involves deriving items from observations, reflections, or empirical data that reflect the perspectives of the target audience. The deductive approach involves an extensive literature review reflecting the constructs or dimensions of existing theories or conceptual frameworks. First, since the LQES was developed from learners’ perspectives, we adopted the inductive approach by using a semi-structured interview to capture L2 learners’ perceptions of LMOOC quality. The interview was conducted among 158 students from two universities located in southern China, who were asked to recall their recent LMOOC learning experience. Drawing on Li, Li and Zhao’s (2020) qualitative study on the quality of MOOCs, our semi-structured interview focused on three questions: (1) What is the quality of an LMOOC? (2) What characteristics and features are demonstrated by a good-quality LMOOC? (3) What are the core dimensions or key issues in constructing a good-quality LMOOC? The content analysis of participants’ answers yielded 455 pieces of useful information. Themes related to learners’ perceptions of LMOOC quality were extracted as indicators for scale development. LMOOC quality indicators and descriptions emerging from learners’ perceptions are presented in Table 2.

Table 2. Indicators and descriptions in the semi-structured interview

Second, this study utilized the deductive approach through a comprehensive review of existing research concerning the quality of MOOCs and LMOOCs. The literature review aimed to obtain key indicators and descriptions in previous research that echoed the findings of the semi-structured interview. Relevant studies revealed different categories involving descriptions similar to LMOOC learners’ perceptions, including course objectives, instructional design, course content, learning process, teachers’ competence, learner support, and technical environment. In analyzing the descriptions under key indicators, we found that the classification of some items, such as “The course provides clear introduction to teaching objectives and guidelines for learning this course,” was overlapping, since such items concern both teaching objectives and learner support. We thus only extracted relevant item descriptions without a clear classification into specific quality indicators. To ensure full comprehension of the scale, the items were translated into Chinese, the native language of our participants. Based on the item descriptions generated from the semi-structured interview and literature review, an initial pool of 43 items was generated. A 5-point Likert scale was adopted as the response format, ranging from 1 (strongly disagree) to 5 (strongly agree). Similar designs using an odd-number 5-point response scale to investigate learners’ experience of MOOCs were adopted in the development of a MOOC success scale (Albelbisi, 2020), the establishment of a MOOC evaluation system (Tzeng, Lee, Huang, Huang & Lai, 2022), and the exploration of MOOC learners’ motivation (Chen, Gao, Yuan & Tang, 2020).

3.3.2 Refinement of scale items

To enhance item readability, eliminate redundant items, and generate new items for the initial pool, a focus group interview and an LMOOC expert review were conducted. Ten LMOOC learners from this research group’s university volunteered to join the focus group interview. Participants evaluated the intelligibility of the 43 items, with an additional option, “I don’t understand this statement,” included to evaluate face validity. Any new item or keyword not previously identified in the initial pool was retained for further review. The focus group interview results suggested that most of the items could be easily understood by Chinese LMOOC learners. Among the 43 items, 7 items were reworded for clarity and 4 new items reflecting the features of LMOOC learning in the Chinese context were added to the initial item pool. After the focus group interview, the total number of items selected for further analysis rose to 47 (see Appendix 1 in the supplementary material).

The second step of item refinement involved LMOOC experts’ review. The study invited eight LMOOC experts who are either LMOOC designers or researchers and have at least five years of experience developing or researching LMOOCs. An email containing a link to an online survey was distributed to all experts. They checked the scale by judging (1) whether the items effectively measure the target constructs, (2) whether the statements are adequate for assessing the quality of LMOOCs, and (3) how many quality indicators can be identified in the initial item pool. They were also invited to rate their level of agreement with the 47 items and offer feedback on each item, the structure of the scale, and its overall design. The experts concurred that the items adequately addressed the necessary quality issues of LMOOCs. Five items were further adjusted and reworded according to the experts’ suggestions. Afterward, by comparing and combining the main indicators, seven quality indicators of LMOOCs were finally defined: course objectives, course content, assessment, L2 teachers’ competence, teaching implementation, learner support, and technical environment. Meanwhile, items from different frameworks that shared similar meanings were combined. Following this step, a preliminary set of 41 quality items was retained for further analysis. Finally, the three authors of this paper cross-checked all the indicators and items to resolve any disagreements before further application of the scale.

A pilot survey was conducted to reassess the items’ face validity, with 270 LMOOC learners from three universities participating. Learners were asked to reflect on a recent LMOOC they had taken and give their responses to the 41 statements on a 5-point Likert scale. At the end of the questionnaire, students responded to a yes/no question to determine whether they experienced any difficulties in understanding the items. If they answered “yes,” they were prompted to specify which item(s) they found unclear or problematic. The participants indicated that all the statements were understandable, although further refinements were necessary. Finally, the 41 items were revised and classified into seven dimensions: course objectives (6 items), course content (10 items), assessment (3 items), L2 teachers’ competence (8 items), teaching implementation (4 items), learner support (4 items), and technical environment (6 items). The items retained after the pilot survey were subsequently subjected to a purification process (see Appendix 2).

3.3.3 Exploratory factor analysis (EFA)

In the item purification stage, we employed EFA to illuminate the underlying factor structure of the 41-item scale. EFA is commonly used in the early stage of scale development to refine and optimize items, laying the groundwork for subsequent confirmatory analysis. In the present study, EFA was conducted with 1,295 LMOOC learners from eight universities. We used SPSS Version 26.0 to analyze the internal consistency and reliability of the draft questionnaire. We first conducted Bartlett’s test of sphericity and the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy. The results (KMO = 0.985, above the 0.60 threshold; Bartlett’s test: p < 0.001) showed that the data were suitable for subsequent factor analysis. EFA with principal axis factoring and promax rotation was then performed on the preliminary set of 41 scale items. Fifteen items with loading values below 0.500, or items that did not load onto any factor, were removed. The remaining 26 items loaded onto factors with eigenvalues greater than 1. As shown in Table 3, the EFA yielded a four-factor solution for the LQES, and the item loadings within their respective factors ranged from 0.501 to 0.741, all above the 0.500 criterion.
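For readers who wish to check sampling adequacy outside SPSS, the overall KMO statistic can be computed directly from an item correlation matrix. The sketch below is an illustrative reimplementation using NumPy, not the analysis code used in this study; the example correlation matrix is hypothetical.

```python
import numpy as np

def overall_kmo(corr: np.ndarray) -> float:
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy.

    KMO = sum(r_ij^2) / (sum(r_ij^2) + sum(p_ij^2)) over off-diagonal
    elements, where p_ij are partial correlations derived from the
    inverse of the correlation matrix.
    """
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d                 # anti-image partial correlations
    np.fill_diagonal(partial, 0.0)
    r = corr.copy()
    np.fill_diagonal(r, 0.0)
    r2, p2 = (r ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)

# Hypothetical 4-item correlation matrix with uniform inter-item r = .80
R = np.full((4, 4), 0.8)
np.fill_diagonal(R, 1.0)
print(round(overall_kmo(R), 3))  # → 0.871, well above the 0.60 threshold
```

Values approaching 1.0 (such as the 0.985 reported here) indicate that the correlation pattern is compact and factor analysis is appropriate.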

Table 3. Factor loadings of the items for the LMOOC Quality Evaluation Scale (LQES)

Exploring the factor structure presented in Appendix 3, eight items from the original factors of course objectives, course content, and assessment loaded onto a new factor called Instructional Design. Six items loaded onto their original factor of L2 Teachers’ Competence. Six items from the original factors of assessment, teaching implementation, and learner support loaded onto the factor of Teaching Implementation. Six items from the original factors of learner support and technical environment loaded onto a new factor called Technical Support. The variances explained by the four factors were 42.28%, 13.27%, 8.62%, and 5.74% respectively, accounting for 69.91% in total, higher than the cut-off value of 50%. The Chinese version of the questionnaire is presented in Appendix 4.
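The cumulative variance figure reported above follows directly from the per-factor percentages. A minimal arithmetic sketch (the percentages are those reported in this study; the code itself is only illustrative):

```python
# Percentage of variance explained by each retained factor (from the EFA above)
factor_variance = {
    "Instructional Design": 42.28,
    "L2 Teachers' Competence": 13.27,
    "Teaching Implementation": 8.62,
    "Technical Support": 5.74,
}

total = sum(factor_variance.values())
print(f"{total:.2f}% of total variance explained")  # → 69.91%
assert total > 50.0, "should exceed the conventional 50% cut-off"
```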

3.3.4 Confirmatory factor analysis (CFA)

In accordance with standard procedures for scale development and validation (e.g. DeVellis, 2003; Hair, Sarstedt, Ringle & Mena, 2012), CFA was performed to evaluate the validity and reliability of the four-factor scale. To confirm the assumed four-factor structure, CFA was conducted with the maximum likelihood method on the remaining 26 items using AMOS with the full sample of 2,315 LMOOC learners. Fit indices recommended by Shi, Lee and Maydeu-Olivares (2019) were utilized to assess the model fit. These evaluation metrics considered the ratio of chi-square to degrees of freedom (χ2/df), root-mean-square error of approximation (RMSEA), standardized root-mean-square residual (SRMR), Tucker–Lewis index (TLI), goodness-of-fit index (GFI), normed fit index (NFI), and comparative fit index (CFI). According to Perry, Nicholls, Clough and Crust (2015), a model is considered to have a reasonable fit if it meets the following criteria: (1) χ2/df ranges from 2 to 3, with smaller values preferred; (2) GFI is above 0.90; (3) RMSEA ranges from 0.05 to 0.08, with RMSEA below 0.05 considered indicative of good model fit (with the lower limit of the 90% confidence interval below 0.05 and the upper limit under 0.10); (4) SRMR is below 0.10; (5) NFI surpasses 0.90; (6) TLI is greater than 0.90; and (7) CFI exceeds 0.90.
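Of these indices, χ2/df and RMSEA are simple closed-form functions of the chi-square statistic, its degrees of freedom, and the sample size. The sketch below is a hedged illustration, not the AMOS output of this study; the chi-square and df values in the example are hypothetical.

```python
import math

def chi2_df_ratio(chi2: float, df: int) -> float:
    """Normed chi-square; values around 2-3 or lower suggest reasonable fit."""
    return chi2 / df

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root-mean-square error of approximation for a single-group model:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical example values: chi2 = 501.3, df = 293, N = 2,315 respondents
print(round(chi2_df_ratio(501.3, 293), 3))  # → 1.711
print(round(rmsea(501.3, 293, 2315), 3))    # → 0.018, below the 0.05 cut-off
```

Note how RMSEA rewards parsimony: for a fixed χ2/df ratio, a larger sample size pushes RMSEA toward zero, which is why large-sample studies report RMSEA alongside the chi-square ratio rather than relying on the chi-square test alone.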

Based on multiple criteria (refer to Table 4), the initial CFA results suggested that the model demonstrated a strong fit to the data, with χ2/df = 1.711, GFI = 0.96, RMSEA = 0.02, SRMR = 0.01, NFI = 0.92, TLI = 0.97, and CFI = 0.96. The findings indicated that the four-factor model aligned well with the sample data. For the four dimensions – Instructional Design, L2 Teachers’ Competence, Teaching Implementation, and Technical Support – the indices (χ2/df, GFI, RMSEA, SRMR, and CFI) demonstrated satisfactory values.

Table 4. Evaluation of the goodness of fit of the scale

Note. GFI = goodness-of-fit index; RMSEA = root-mean-square error of approximation; SRMR = standardized root-mean-square residual; CFI = comparative fit index; CR = composite reliability; AVE = average variance extracted.

Additionally, composite reliability (CR) was used to assess reliability, reflecting the proportion of true variance relative to the total variance of the scores. According to Ab Hamid, Sami and Mohmad Sidek (2017), CR values above 0.70 indicate good reliability, while values between 0.60 and 0.70 are acceptable. Convergent validity was assessed using the average variance extracted (AVE), which quantifies the extent to which the variance in observed variables is accounted for by their corresponding latent constructs. According to Cheung, Cooper-Thomas, Lau and Wang (2024), AVE is a stricter criterion than CR, so researchers may still conclude adequate convergent validity when AVE falls slightly below 0.50, provided reliability is satisfactory. In this study, CR values exceeding 0.80 confirmed high reliability, as the recommended threshold was 0.70, and AVE values above 0.50 demonstrated good convergent validity.
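
The standard formulas behind these two statistics, CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = (Σλ²) / n for standardized loadings λ, can be sketched in Python as follows; the loadings shown are hypothetical values chosen within the range reported for the Technical Support factor:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where the error variance of each standardized indicator is 1 - loading^2."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for the six Technical Support items,
# chosen within the 0.754-0.835 range reported in this study.
loadings = [0.754, 0.78, 0.80, 0.81, 0.82, 0.835]
cr = composite_reliability(loadings)        # about 0.91, above the 0.70 threshold
ave = average_variance_extracted(loadings)  # about 0.64, above the 0.50 threshold
```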

Figure 2 presents the standardized path diagram for the four-factor structure of the LQES. The finalized LQES included 26 items: 8 items for Instructional Design, 6 for L2 Teachers’ Competence, 6 for Teaching Implementation, and 6 for Technical Support. The figure shows the structure coefficients for the LQES model, that is, the standardized regression weights across the four identified dimensions. All structure coefficients exceeded the threshold value of 0.70, indicating acceptable values for Instructional Design (0.712 to 0.804), L2 Teachers’ Competence (0.709 to 0.854), Teaching Implementation (0.708 to 0.816), and Technical Support (0.754 to 0.835). Furthermore, this study explored a higher-order model in which Instructional Design and L2 Teachers’ Competence were combined as a second-order factor, as well as a bifactor model and a general factor model. Among the tested models, the hypothesized four-factor structure demonstrated the best overall model fit and the most favorable reliability estimates.

Figure 2. Structure coefficients for the LMOOC Quality Evaluation Scale (LQES) Model.

4. Discussion and conclusion

In the present study, we developed an LMOOC Quality Evaluation Scale (LQES) for measuring L2 learners’ perceptions of LMOOC quality. The 26-item LQES consisted of four components: Instructional Design (8 items), L2 Teachers’ Competence (6 items), Teaching Implementation (6 items), and Technical Support (6 items).

Specifically, the quality indicator of Instructional Design included items related to LMOOC objectives, content, and assessment, reflecting how LMOOCs are designed and developed, as well as how learners’ academic performance is measured. Its inclusion of three indicators – objectives, content, and assessment – aligns with Stracke and Trisolini’s (2021) findings that instructional design serves as the main component of the pedagogy of online courses, consisting of learning objectives, design approaches, course content, and assessment. The EFA results revealed that the Instructional Design factor accounted for the largest proportion of the total variance of the LQES, echoing the view that instructional design is “a key component of the overall quality and pedagogic effectiveness of a learning experience” (Margaryan et al., 2015: 78). Compared with other factors, LMOOC learners pay more attention to the clarity of teaching objectives, the richness of teaching content (videos and supplementary materials), and the diversity of assessment forms. For example, in the semi-structured interview, student Yu described an LMOOC she had taken as “a well-designed course with an introduction video for each chapter just like the trailer for a movie.” Student Li said jokingly, “I do not care how long an LMOOC video lasts and I just hope to learn some interesting and useful content different from my textbooks.”

Although previous studies have confirmed the critical role of the teacher in determining the quality of MOOCs (Huang et al., 2017; Ma, 2018; Qiu & Ou, 2015), few empirical studies have investigated L2 teachers’ competence in evaluating the quality of LMOOCs. The quality indicator of L2 Teachers’ Competence in this study included items about L2 teachers’ oral English proficiency, expertise, and familiarity with the course, as well as teaching manners and methods. This dimension is the second most influential factor in forming learners’ perceptions of LMOOC quality. Student Xi mentioned her favorite LMOOC in which “the teacher speaks fluent and clear English with an appropriate speed, which helps us to fully understand the complicated academic writing skills.” However, student Zhang described his disappointment with an LMOOC in which the teacher spent 30 minutes reading PowerPoint pages about a translation technology without any facial expressions.

In addition, the quality indicator of Teaching Implementation involves maintaining effective communication, ensuring timely feedback, supporting academic integrity, and fostering a harmonious online learning atmosphere in LMOOC teaching. This dimension is the third most influential factor in shaping learners’ perceptions of LMOOC quality and plays a pivotal role in the quality evaluation framework of MOOCs (Yang et al., 2020). Learners attached great importance to the feedback they receive from peer learners and teachers in LMOOC learning. Student Wu said she was excited about a storytelling task in the discussion forum of a cultural LMOOC, in which learners were asked to share a story about cultural heritage from their hometown in English. Teachers then gave likes and comments on learners’ stories, which made the latter feel encouraged and rewarded. However, many students expressed their disappointment with the silent teacher–student and student–student interactions in the discussion forums of LMOOCs. These results corroborate previous studies suggesting that the lack of personalized guidance and interaction when facing a heterogeneous target group remains an intrinsic problem in the implementation of MOOCs (Bárcena et al., 2015; Castellanos-Reyes, 2021; Stracke & Tan, 2018).

Besides the above three quality indicators, the Technical Support factor is the fourth influential factor in forming students’ evaluation of LMOOC quality. This dimension included items related to the accessibility of LMOOC resources and the stability and multi-functionality of LMOOC platforms. Learners’ evaluation of technical support reflects their relationship with the technology that supports LMOOCs, such as how the platforms help learners navigate, search, study, download materials, and keep track of their learning process. Many learners in this study described today’s LMOOC platforms as “user friendly” and felt it was easy to proceed with the help of the navigation page. Nevertheless, some problems remain concerning the accessibility of LMOOCs, as student Liu complained that “some LMOOCs are not attainable after the term is over.” Together, the four quality indicators of the LQES reveal the key components of LMOOC quality and reflect L2 learners’ real expectations of LMOOCs.

The overall results from the reliability and validity analyses indicate that the 26-item LQES exhibits good psychometric properties. The scale proves to be a reliable and valid tool for measuring learners’ evaluation of LMOOC quality. Further CFA results demonstrate that this four-factor measure has a good fit, with each factor showing strong convergent and discriminant validity. These results suggest that the LQES, which adopts a large-scale mixed-method approach, is a promising instrument for measuring the quality and effectiveness of LMOOC learning.

The quantitative and qualitative findings in developing the LQES have theoretical and practical implications. Theoretically, the identification of a four-factor structure of the LQES provides a better understanding of the complexities of LMOOC learning. This study addresses the notable lack of multidimensional instruments for assessing the quality of LMOOCs. Through large-scale surveys, semi-structured interviews, focus group discussions, and expert reviews, we investigated learners’ perceptions of LMOOC quality, contributing to the theoretical advancement of MOOC evaluation by integrating language education–specific dimensions into the assessment model.

Pedagogically, the research findings provide implications for LMOOC practitioners. First, the LQES provides scientifically rigorous standards for measuring LMOOC quality, serving as a diagnostic tool to provide feedback to LMOOC educators. The data-driven insights inform instructional designers and teachers of learners’ evaluation of the key factors that contribute to high-quality LMOOC experiences. Therefore, they can adjust pedagogical approaches and resources to enhance learner engagement, motivation, and satisfaction, leading to improved learning outcomes. In the near future, generative AI could be used to create diverse teaching materials, including quizzes, interactive exercises, and simulations for learners. Meanwhile, AI chatbots and virtual assistants could answer learners’ questions in real time, provide guidance, and facilitate discussions. The integration of AI technologies can deliver more interactive, personalized, adaptive, and customized learning experiences to improve the pedagogical and technological quality of LMOOCs. Second, the LQES fosters professional development among language educators. Through understanding the criteria for high-quality MOOCs, LMOOC educators can adopt innovative instructional methods and regularly refine their courses to ensure continuous improvement. By identifying successful LMOOC teaching practices and case studies through this evaluation scale, educators can share valuable insights with peers, fostering collaboration within the language education community. Finally, since the main MOOC platforms worldwide provide xMOOCs, this study, conducted in the Chinese context, offers implications for other EFL contexts. How Chinese L2 learners perceive and evaluate the quality of LMOOCs may serve as a useful reminder for LMOOC learners and instructors in other countries and regions.

We acknowledge the following limitations of this study. First, although 2,315 learners from 15 LMOOCs cover a wide range of participants and LMOOC themes, the participants in this study cannot be considered representative of all LMOOC learners. Second, since this study was conducted in a Chinese EFL context, translated versions of the LQES in other languages require additional validation. Lastly, a longitudinal study could be conducted in the future to reveal the dynamic changes in learners’ perceptions and evaluations of LMOOC quality. To conclude, the present study develops an LMOOC Quality Evaluation Scale (LQES) for the first time and elucidates the key quality indicators underpinning L2 learners’ evaluation of LMOOC quality. The LQES paves the way for developing instruments for measuring the quality and effectiveness of other technology-enhanced language learning practices.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/S0958344025000072

Data availability statement

Data available on request from the authors.

Authorship contribution statement

Rong Luo: Conceptualization, Project administration, Methodology, Writing – original draft; Rou Huang: Data collection, Writing – review & editing; Gaojun Shi: Methodology, Data analysis, Writing – review & editing.

Funding disclosure statement

This research is supported by the Fundamental Research Funds for the Central Universities (Qingfeng Project of Zhejiang University) and Zhejiang Provincial Program for Educational Science Research (2025SCG 229).

Competing interests statement

The authors declare no competing interests.

Ethical statement

Ethical review and approval were not required for the study in accordance with the local legislation and institutional requirements. Informed consent was gathered from all participants.

GenAI use disclosure statement

The authors declare no use of generative AI.

About the authors

Rong Luo is an associate professor at the School of International Studies, Zhejiang University. Her research interests include language MOOCs (LMOOCs), technology-enhanced second language acquisition, and media education.

Rou Huang is a postgraduate student at the School of International Studies, Hangzhou Normal University. Her research interests are LMOOCs and learner autonomy in technology-enhanced language learning.

Gaojun Shi is a PhD student at the College of Education, Zhejiang University. His research interests include online learning and educational technology.

Footnotes

1 Based on a survey conducted by the authors of this study, the total number of language MOOCs on the five biggest national MOOC platforms in China in 2024 is more than 600, not including provincial and other local platforms. The number of learners of an LMOOC ranges from 500 to 30,000 each semester, depending on the popularity of the course and its platform.

References

Ab Hamid, M. R., Sami, W. & Mohmad Sidek, M. H. (2017) Discriminant validity assessment: Use of Fornell & Larcker criterion versus HTMT criterion. Journal of Physics: Conference Series, 890: Article 012163. https://doi.org/10.1088/1742-6596/890/1/012163
Albelbisi, N. A. (2020) Development and validation of the MOOC success scale (MOOC-SS). Education and Information Technologies, 25(5): 4535–4555. https://doi.org/10.1007/s10639-020-10186-4
Appel, C. & Pujolà, J.-T. (2021) Designing speaking interaction in LMOOCs: An eTandem approach. ReCALL, 33(2): 161–176. https://doi.org/10.1017/S0958344021000045
Bárcena, E. & Martín-Monje, E. (2014) Introduction. Language MOOCs: An emerging field. In Martín-Monje, E. & Bárcena, E. (eds.), Language MOOCs: Providing learning, transcending boundaries. Warsaw: De Gruyter Open, 1–15. https://doi.org/10.2478/9783110420067.1
Bárcena, E., Martín-Monje, E. & Read, T. (2015) Potentiating the human dimension in language MOOCs. Proceedings of the European MOOC Stakeholder Summit 2015, 7: 46–54.
Bartalesi-Graf, D., Agonács, N., Matos, J. F. & O’Steen, D. (2024) Insight into learners’ experience in LMOOCs. Computer Assisted Language Learning, 37(7): 1478–1503. https://doi.org/10.1080/09588221.2022.2082484
Beaven, T., Codreanu, T. & Creuzé, A. (2014) Motivation in a language MOOC: Issues for course designers. In Martín-Monje, E. & Bárcena Madera, E. (eds.), Language MOOCs: Providing learning, transcending boundaries. Warsaw: De Gruyter Open, 48–66. https://doi.org/10.2478/9783110420067.4
Castellanos-Reyes, D. (2021) The dynamics of a MOOC’s learner-learner interaction over time: A longitudinal network analysis. Computers in Human Behavior, 123: Article 106880. https://doi.org/10.1016/j.chb.2021.106880
Chen, Y., Gao, Q., Yuan, Q. & Tang, Y. (2020) Discovering MOOC learner motivation and its moderating role. Behaviour & Information Technology, 39(12): 1257–1275. https://doi.org/10.1080/0144929X.2019.1661520
Cheng, K. K. F. & Clark, A. M. (2017) Qualitative methods and patient-reported outcomes: Measures development and adaptation. International Journal of Qualitative Methods, 16(1): 1–3. https://doi.org/10.1177/1609406917702983
Cheung, G. W., Cooper-Thomas, H. D., Lau, R. S. & Wang, L. C. (2024) Reporting reliability, convergent and discriminant validity with structural equation modeling: A review and best-practice recommendations. Asia Pacific Journal of Management, 41(2): 745–783. https://doi.org/10.1007/s10490-023-09871-y
Chong, S. W., Khan, M. A. & Reinders, H. (2024) A critical review of design features of LMOOCs. Computer Assisted Language Learning, 37(3): 389–409. https://doi.org/10.1080/09588221.2022.2038632
Conole, G. (2016) MOOCs as disruptive technologies: Strategies for enhancing the learner experience and quality of MOOCs. Revista de Educación a Distancia (RED), 50: Article 2. https://doi.org/10.6018/red/50/2
Creelman, A., Ehlers, U.-D. & Ossiannilsson, E. (2014) Perspectives on MOOC quality – An account of the EFQUEL MOOC Quality Project. The International Journal for Innovation and Quality in Learning, 2(3): 78–87.
Cross, J. S., Keerativoranan, N., Carlon, M. K. J., Tan, Y. H., Rakhimberdina, Z. & Mori, H. (2019) Improving MOOC quality using learning analytics and tools. In Proceedings of 2019 IEEE Learning With MOOCS (LWMOOCS). Piscataway: IEEE, 174–179. https://doi.org/10.1109/LWMOOCS47620.2019.8939617
DeLone, W. H. & McLean, E. R. (1992) Information systems success: The quest for the dependent variable. Information Systems Research, 3(1): 60–95. https://doi.org/10.1287/isre.3.1.60
Deng, R. & Gao, Y. (2023) Using learner reviews to inform instructional video design in MOOCs. Behavioral Sciences, 13(4): Article 330. https://doi.org/10.3390/bs13040330
DeVellis, R. F. (2003) Scale development: Theory and application (2nd ed.). Thousand Oaks: SAGE Publications.
Díez-Arcón, P. & Agonács, N. (2024) Conceptualising language MOOC diversity: The creation of a defined taxonomy through the analysis of course indicators. ReCALL, 36(3): 324–342. https://doi.org/10.1017/S0958344024000132
Ding, Y. & Shen, H. (2022) Delving into learner autonomy in an EFL MOOC in China: A case study. Computer Assisted Language Learning, 35(3): 247–269. https://doi.org/10.1080/09588221.2019.1681464
Ding, Y. & Shen, H. Z. (2020) English language MOOCs in China: Learners’ perspective. The EuroCALL Review, 28(2): 13–22. https://doi.org/10.4995/eurocall.2020.13090
Dyomin, V., Mozhaeva, G., Babanskaya, O. & Zakharova, U. (2017) MOOC quality evaluation system: Tomsk State University experience. In Delgado Kloos, C., Jermann, P., Pérez-Sanagustín, M., Seaton, D. T. & White, S. (eds.), Digital education: Out to the world and back to the campus. Springer International Publishing, 197–202. https://doi.org/10.1007/978-3-319-59044-8_23
Fernández-Díaz, E., Rodríguez-Hoyos, C. & Calvo Salvador, A. (2017) The pedagogic architecture of MOOC: A research project on educational courses in Spanish. The International Review of Research in Open and Distributed Learning, 18(6): 18–35. https://doi.org/10.19173/irrodl.v18i6.2964
Ferreira, C., Arias, A. R. & Vidal, J. (2022) Quality criteria in MOOC: Comparative and proposed indicators. PLOS ONE, 17(12): Article e0278519. https://doi.org/10.1371/journal.pone.0278519
Gamage, D., Fernando, S. & Perera, I. (2015) Quality of MOOCs: A review of literature on effectiveness and quality aspects. In Proceedings of the 2015 8th International Conference on Ubi-Media Computing (UMEDIA). Piscataway: IEEE, 224–229. https://doi.org/10.1109/UMEDIA.2015.7297459
Gamage, D., Staubitz, T. & Whiting, M. (2021) Peer assessment in MOOCs: Systematic literature review. Distance Education, 42(2): 268–289. https://doi.org/10.1080/01587919.2021.1911626
Gu, W., Xu, Y. & Sun, Z.-J. (2021) Does MOOC quality affect users’ continuance intention? Based on an integrated model. Sustainability, 13(22): Article 12536. https://doi.org/10.3390/su132212536
Hair, J. F., Sarstedt, M., Ringle, C. M. & Mena, J. A. (2012) An assessment of the use of partial least squares structural equation modeling in marketing research. Journal of the Academy of Marketing Science, 40(3): 414–433. https://doi.org/10.1007/s11747-011-0261-6
Hood, N. & Littlejohn, A. (2016) MOOC quality: The need for new measures. Journal of Learning for Development, 3(3): 28–42. https://doi.org/10.56059/jl4d.v3i3.165
Hsu, L. (2023) What makes good LMOOCs for EFL learners? Learners’ personal characteristics and information system success model. Computer Assisted Language Learning, 36(1–2): 1–25. https://doi.org/10.1080/09588221.2021.1899243
Huang, L., Zhang, J. & Liu, Y. (2017) Antecedents of student MOOC revisit intention: Moderation effect of course difficulty. International Journal of Information Management, 37(2): 84–91. https://doi.org/10.1016/j.ijinfomgt.2016.12.002
Jiang, Y. & Peng, J.-E. (2025) Exploring the relationships between learners’ engagement, autonomy, and academic performance in an English language MOOC. Computer Assisted Language Learning, 38(1–2): 71–96. https://doi.org/10.1080/09588221.2022.2164777
Jitpaisarnwattana, N., Reinders, H. & Darasawang, P. (2019) Language MOOCs: An expanding field. Technology in Language Teaching & Learning, 1(1): 21–32. https://doi.org/10.29140/tltl.v1n1.142
Jung, E., Kim, D., Yoon, M., Park, S. & Oakley, B. (2019) The influence of instructional design on learner control, sense of achievement, and perceived effectiveness in a supersize MOOC course. Computers & Education, 128: 377–388. https://doi.org/10.1016/j.compedu.2018.10.001
Khalil, H. & Ebner, M. (2014) MOOCs completion rates and possible methods to improve retention – A literature review. In Viteli, J. & Leikomaa, M. (eds.), Proceedings of EdMedia 2014 – World Conference on Educational Media and Technology. Tampere: Association for the Advancement of Computing in Education, 1305–1313. https://www.learntechlib.org/primary/p/147656/
Li, S., Li, M. & Zhao, H. (2020) A qualitative study of online course quality and quality factors: Insights from researchers, practitioners and learners. Distance Education in China, 3: 42–50, 81. https://doi.org/10.13541/j.cnki.chinade.2020.03.006
Luo, R. & Wang, Y. (2023) Enjoyment, boredom, and perceived effectiveness of learners in language MOOCs: The mediating effect of self-regulated learning. Frontiers in Psychology, 14: Article 1145773. https://doi.org/10.3389/fpsyg.2023.1145773
Luo, R. & Ye, Z. (2021) What makes a good-quality language MOOC? An empirical study of criteria to evaluate the quality of online language courses from learners’ perspectives. ReCALL, 33(2): 177–192. https://doi.org/10.1017/S0958344021000082
Ma, R. (2018) Research on the construction of evaluation index system for teaching effectiveness of xMOOCs. China Information Technology Education, 2018(8): 60–66. https://doi.org/10.3969/j.issn.1674-2117.2018.08.021
Maloshonok, N. & Terentev, E. (2016) The impact of visual design and response formats on data quality in a web survey of MOOC students. Computers in Human Behavior, 62: 506–515. https://doi.org/10.1016/j.chb.2016.04.025
Margaryan, A., Bianco, M. & Littlejohn, A. (2015) Instructional quality of massive open online courses (MOOCs). Computers & Education, 80: 77–83. https://doi.org/10.1016/j.compedu.2014.08.005
Martín-Monje, E. & Borthwick, K. (2024) Transitions in language MOOCs: International perspectives to redefine the field. In Wang, Y. & Cárdenos Claros, M. (eds.), Proceedings of the XXIInd International CALL Research Conference. Melbourne: Castledown, 173–176. https://doi.org/10.29140/9780648184485-27
Martín-Monje, E., Castrillo, M. D. & Mañana-Rodríguez, J. (2018) Understanding online interaction in language MOOCs through learning analytics. Computer Assisted Language Learning, 31(3): 251–272. https://doi.org/10.1080/09588221.2017.1378237
Morgado, F. F. R., Meireles, J. F. F., Neves, C. M., Amaral, A. C. S. & Ferreira, M. E. C. (2017) Scale development: Ten main limitations and recommendations to improve future research practices. Psicologia: Reflexão e Crítica, 30: Article 3. https://doi.org/10.1186/s41155-016-0057-1
Perifanou, M. A. (2014a) PLEs & MOOCs in language learning context: A challenging connection. PLE Conference 2014. Tallinn, Estonia, 16–18 July.
Perifanou, M. & Economides, A. (2014b) MOOCs for foreign language learning: An effort to explore and evaluate the first practices. In Gómez Chova, L., López Martinez, A. & Candel Torres, I. (eds.), INTED2014 Proceedings: 8th International Technology, Education and Development Conference. Valencia: IATED Academy, 3561–3570.
Perry, J. L., Nicholls, A. R., Clough, P. J. & Crust, L. (2015) Assessing model fit: Caveats and recommendations for confirmatory factor analysis and exploratory structural equation modeling. Measurement in Physical Education and Exercise Science, 19(1): 12–21. https://doi.org/10.1080/1091367X.2014.952370
Poce, A., Amenduni, F., Re, M. R. & De Medio, C. (2019) Establishing a MOOC quality assurance framework – A case study. Open Praxis, 11(4): 451–460. https://doi.org/10.5944/openpraxis.11.4.1019
Qian, K. & Bax, S. (eds.) (2017) Beyond the language classroom: Researching MOOCs and other innovations. Dublin: Research-publishing.net.
Qiu, J. & Ou, Y. (2015) A comparative analysis on higher education quality standards of America, UK and Australia. Chongqing University Journal of Social Sciences, 21(2): 162–166. https://doi.org/10.11835/j.issn.1008-5831.2015.02.023
Read, T. & Bárcena, E. (2020) Toward a framework for language MOOCs and mobile-assisted language learning. Propósitos y Representaciones, 8(1): Article e470. https://doi.org/10.20511/pyr2020.v8n1.470
Rõõm, M., Luik, P. & Lepp, M. (2023) Learner success and the factors influencing it in computer programming MOOC. Education and Information Technologies, 28(7): 8645–8663. https://doi.org/10.1007/s10639-022-11535-1
Sallam, M. H., Martín-Monje, E. & Li, Y. (2022) Research trends in language MOOC studies: A systematic review of the published literature (2012–2018). Computer Assisted Language Learning, 35(4): 764–791. https://doi.org/10.1080/09588221.2020.1744668
Shah, D. (2020) The second year of the MOOC: A review of MOOC stats and trends in 2020. https://www.classcentral.com/report/the-second-year-of-the-mooc
Shi, D., Lee, T. & Maydeu-Olivares, A. (2019) Understanding the model size effect on SEM fit indices. Educational and Psychological Measurement, 79(2): 310–334. https://doi.org/10.1177/0013164418783530
Sokolik, M. (2014) What constitutes an effective language MOOC? In Martín-Monje, E. & Bárcena Madera, E. (eds.), Language MOOCs: Providing learning, transcending boundaries. Warsaw: De Gruyter Open, 16–32. https://doi.org/10.2478/9783110420067.2
Stracke, C. M. & Tan, E. (2018) The quality of open online learning and education: Towards a quality reference framework for MOOCs. In Kay, J. & Luckin, R. (eds.), Rethinking learning in the digital age. Making the learning sciences count: 13th International Conference of the Learning Sciences (ICLS) 2018, Volume 3. London: International Society of the Learning Sciences, 1029–1032.
Stracke, C. M. & Trisolini, G. (2021) A systematic literature review on the quality of MOOCs. Sustainability, 13(11): Article 5817. https://doi.org/10.3390/su13115817
Tzeng, J.-W., Lee, C.-A., Huang, N.-F., Huang, H.-H. & Lai, C.-F. (2022) MOOC evaluation system based on deep learning. The International Review of Research in Open and Distributed Learning, 23(1): 21–40. https://doi.org/10.19173/irrodl.v22i4.5417
Ucha, C. R. (2023) Role of course relevance and course content quality in MOOCs acceptance and use. Computers and Education Open, 5: Article 100147. https://doi.org/10.1016/j.caeo.2023.100147
Wright, C. & Furneaux, C. (2021) ‘I am proud of myself’: Student satisfaction and achievement on an academic English writing MOOC. International Journal of Computer-Assisted Language Learning and Teaching (IJCALLT), 11(1): 21–37. https://doi.org/10.4018/IJCALLT.2021010102
Xiao, C., Qiu, H. & Cheng, S. M. (2019) Challenges and opportunities for effective assessments within a quality assurance framework for MOOCs. Journal of Hospitality, Leisure, Sport & Tourism Education, 24: 1–16. https://doi.org/10.1016/j.jhlste.2018.10.005
Yang, X., Zhou, H., Zhou, X., Hao, Z. & Dong, X. (2020) Research on the construction of the criteria for identifying teaching quality of online open courses in universities. Distance Education and Online Learning, 2020(2): 67–74.
Yousef, A. M. F., Chatti, M. A., Schroeder, U. & Wosnitza, M. (2014) What drives a successful MOOC? An empirical examination of criteria to assure design quality of MOOCs. In Proceedings: IEEE 14th International Conference on Advanced Learning Technologies. Piscataway: IEEE, 44–48. https://doi.org/10.1109/ICALT.2014.23
Zeng, S., Zhang, J., Gao, M., Xu, K. M. & Zhang, J. (2022) Using learning analytics to understand collective attention in language MOOCs. Computer Assisted Language Learning, 35(7): 1594–1619. https://doi.org/10.1080/09588221.2020.1825094
Zhang, Y. & Sun, R. (2023) LMOOC research 2014 to 2021: What have we done and where are we going next? ReCALL, 35(3): 356–371. https://doi.org/10.1017/S0958344022000246
