
Virtual reality in the EAP classroom: Creating immersive, interactive, and accessible experiences for international students

Published online by Cambridge University Press:  28 September 2022

Katie Coleman*
Affiliation:
University of Michigan, Ann Arbor, Michigan, USA
Brian Derry
Affiliation:
University of Michigan, Ann Arbor, Michigan, USA
*Corresponding author. Email: kcolema@umich.edu

Type
Research in Progress
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

1. Introduction

1.1 Motivation

In English for Academic Purposes (EAP) speaking courses, tasks are designed to give students opportunities to practice academic communication skills and meet specific learning objectives. In-class activities are often designed to mimic the authentic interactive contexts in which students find themselves throughout their academic careers, such as study groups, networking events, and office hours.

Although EAP instructors can replicate these ‘real-world’ scenarios to an extent in activities such as role plays, there will always remain a gap between what can be practiced in the classroom and what is actually expected of students. In recent years, this gap has been exacerbated for many international students affected by pandemic-related travel restrictions, social distancing, and remote learning.

Fortunately, virtual reality (VR) has the potential to help close this gap. While VR has not yet been widely explored in EAP contexts, it has clear educational potential, affording access to authentic language and the ability to experience it in an immersive and interactive way. Now that VR technology has developed to the point where non-programmers are able to create content uniquely suited to the needs of different audiences, teachers and curriculum developers can create VR experiences that more closely simulate real life, requiring less of an imaginative leap for students than traditional roleplays.

1.2 Why VR?

Studies show several advantages to using VR in the language classroom. VR tasks have been shown to increase student motivation and engagement (Han, 2019; Parmaxi, 2020), which can in turn positively impact higher-order thinking (Sun et al., 2021). VR tasks can be designed to provide safe learning environments for students, which can lower their affective filters and thus reduce anxiety and improve performance (Lan, 2020).

VR tasks in the language classroom can cultivate learner autonomy, self-efficacy, and communicative and linguistic competence (Parmaxi, 2020; Wang et al., 2020) and allow for meaningful, place-based learning while providing students with current, contextually relevant cultural information (Han, 2019). With VR, this rich learning is now accessible to those who are not physically present, creating opportunities for more learners (Han, 2019; Lan, 2020; Parmaxi, 2020), especially in this time of social distancing and remote learning.

Accessibility continues to improve as VR headsets become more affordable and low-cost options become widely available (e.g., Google Cardboard). In fact, VR headsets are no longer required for participation, as users are often able to use PCs to control the 3D environments (Lan, 2020). VR task development is more accessible as well: 360° cameras are becoming more affordable and widely adopted (Sun et al., 2021), allowing non-expert users to create immersive experiences in authentic contexts that simulate real-world environments (Lan, 2020; Wang et al., 2020).

1.3 Project goals

In this project, we aim to take advantage of the affordances of VR for language learning by creating immersive, interactive, and accessible tasks for international university students. These VR tasks will give students opportunities to access authentic language and practice academic language skills in realistic and non-threatening spaces, thus promoting full and successful participation in their academic and social communities.

Students are matriculated graduate and undergraduate international students enrolled in interactive speaking courses at the University of Michigan's English Language Institute (U-M ELI). Students come from a variety of linguistic and cultural backgrounds and typically enroll in these courses to hone their speaking and listening skills to prepare them for the communicative demands of their academic programs.

2. Methods

Our project comprises three phases: developing the initial VR task, piloting the activity and gathering feedback, and expanding VR offerings.

2.1 Developing VR tasks

The first step in developing VR tasks for language learning is to define learning goals and objectives. The overall goal of this activity is for students to be able to successfully navigate an office hours appointment with a professor. Specific learning objectives include being able to ask questions, elicit information, initiate and sustain interactions, and listen effectively in interactive contexts. The task itself involves a student navigating a campus building to find their professor for an office hours appointment. Along the way, they interact with multiple individuals, asking for directions, providing relevant information, and engaging in small talk. When meeting with the professor, the student has the option of asking for a letter of recommendation, an extension on an assignment, or a summer internship.

Once goals, objectives, and tasks are defined, a detailed script is written. From this script, a storyboard is created to help visualize scenes before filming. Appropriate filming locations are then scouted, permissions obtained, and lighting and audio needs determined. Finally, actors are recruited, scheduled, and coached.

Materials used to capture the video and audio needed for the VR activity include a 360° camera (Insta360 One X2), a smartphone, a tripod, and lavalier microphones. In keeping with best practices for capturing images and video with a 360° camera, each scene is shot multiple times, with attention to what can be seen when students ‘look around’ in VR. Scenes are filmed from the student's point of view, so ‘personal space’ is important to consider: all actors and objects should be positioned about 3–7 feet from the camera. Still 360° images are taken along the ‘path’ from one scene to the next so the participant can ‘walk’ around in a manner similar to Google Street View.

After filming is complete, the audio, video, and image files are downloaded from the camera onto our personal laptops and uploaded to the VR editing software Uptale (https://www.uptale.io/). Within this platform, footage is edited and interactive learning elements added, such as text overlay, voiceovers, voice analysis, and tags triggering new images, prompts, and scenes. Voice analysis can be added throughout the activity as a way of increasing the feeling of interaction with interlocutors. For example, at one point in the activity, the student asks someone for directions to the elevators, and when the software hears the pre-programmed key word ‘elevator,’ the interlocutor's ‘response’ is automatically played. The task can be repeated as many times as desired.
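The keyword-triggered interaction described above can be sketched in a few lines of Python. This is an illustrative sketch only, not Uptale's actual implementation; the keyword table, clip paths, and function name are hypothetical:

```python
# Illustrative sketch of keyword-triggered responses: when the
# learner's transcribed speech contains a pre-programmed keyword
# (e.g., "elevator"), the matching interlocutor response clip is
# selected for playback. Names and paths here are hypothetical.
from typing import Optional

KEYWORD_RESPONSES = {
    "elevator": "clips/directions_to_elevator.mp4",
    "office": "clips/professor_office_location.mp4",
}

def select_response(transcript: str) -> Optional[str]:
    """Return the response clip for the first keyword found in the
    learner's utterance, or None if nothing matched (in which case
    the activity could re-prompt the learner instead of stalling)."""
    utterance = transcript.lower()
    for keyword, clip in KEYWORD_RESPONSES.items():
        if keyword in utterance:
            return clip
    return None

print(select_response("Excuse me, where are the elevators?"))
# -> clips/directions_to_elevator.mp4
```

A fallback branch for unmatched speech, as in the `None` case above, is one way to avoid the situation some students later reported where unrecognized speech left them unable to proceed.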

User testing is conducted throughout the development process, allowing us to iterate on the design and edit the task as necessary before implementation.

2.2 Piloting the activity and gathering feedback

When development of the pilot VR task was complete, the activity was introduced to students in two EAP interactive speaking courses, who were invited to participate using either a VR headset or a web browser (all students chose the latter). A survey was then administered to participants in which they were asked to provide feedback on the task, indicating on a Likert scale the degree to which they agreed or disagreed with statements such as ‘I gained confidence speaking English as a result of this activity.’ Open-ended questions about their reactions to the task were also included in the survey.

2.3 Expanding VR offerings

Our goal is to use this feedback to continue iterating on our task design and to create additional VR experiences addressing new learning objectives in a wider variety of academic and social settings.

3. Results

Survey data showed overwhelmingly positive responses from students. Over 75% of all students (n = 13) either ‘agreed’ or ‘strongly agreed’ with the following statements:

  1. I learned something new about campus culture.

  2. I gained confidence speaking English as a result of this activity.

  3. I was able to practice English speaking skills.

  4. I was able to practice English listening skills.

  5. I felt comfortable speaking when asked to.

  6. The language I heard was realistic, or similar to how people really talk.
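The agreement figure reported above can be illustrated with a short sketch. The response counts below are hypothetical, for illustration only; the actual survey data are not reproduced here:

```python
# Sketch of computing the share of students who 'agreed' or
# 'strongly agreed' with a Likert item, as reported for n = 13.
# The response counts below are hypothetical.

def agreement_rate(responses):
    """Fraction of responses that are 'agree' or 'strongly agree'."""
    positive = {"agree", "strongly agree"}
    return sum(r in positive for r in responses) / len(responses)

# Hypothetical responses for one item (n = 13):
item = (["strongly agree"] * 6 + ["agree"] * 4
        + ["neutral"] * 2 + ["disagree"])
print(f"{agreement_rate(item):.0%}")  # 10/13, i.e., over 75%
```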

In open-ended responses, many students stated that the activity was interesting and realistic, that the instructions were clear, and that it was overall a worthwhile learning experience.

However, some students noted their frustrations with certain technological issues, such as computer microphones not recording properly, or longer video scenes being slow to load. One student noted that the software did not recognize their speech, so they ‘could not escape’ and had to restart the activity.

4. Discussion and conclusion

As indicated by student feedback, the design and implementation of this VR activity was largely successful. Our plans to expand the project will continue, guided by student feedback, best practices, and our takeaways, as outlined below.

For our project, it will be necessary to develop additional methods of assessing learning outcomes to provide quality feedback to students (e.g., pre- or post-tests, in-activity questions, self-reflection). A peer assessment strategy described by Chien et al. (2020) looks promising, as it was shown to be effective in enhancing student motivation, self-reflection, and critical thinking, and in decreasing anxiety.

When developing future VR tasks, it is also essential to consider diversity and equity in task design, including who (and whose ‘English’) is represented in the scenes. And when implementing these tasks in the classroom, it is important to recognize that physical or emotional challenges, such as dizziness or anxiety, may arise that require accommodations (Lan, 2020).

Lastly, we found that VR activity design and production requires significant resources and can be time consuming, which may discourage some teachers or programs from pursuing this educational technology. As we go forward with our project, efforts will be made to streamline tasks, find additional support where possible, and explore pre-made VR tasks that would also suit our needs.

Future research would be beneficial on the effect of VR tasks on student success in academic speaking contexts, as well as on how the language produced by students in VR tasks compares with that of traditional role plays and equivalent ‘real-world’ speech events.

Acknowledgements

This project was funded and supported by the University of Michigan Center for Academic Innovation's 2021 XR Innovation Fund.

Katie Coleman is a lecturer at the University of Michigan's English Language Institute, where she teaches EAP courses to international graduate and undergraduate students. Areas of interest include EAP/ESP, technology-enhanced language learning, task-based language learning, and data-driven learning.

Brian Derry is a senior at the University of Michigan. He is majoring in Film, Television, and Media and minoring in Spanish Language, Literature, and Culture. His expertise is in digital media production, with focuses on screenwriting, producing, and editing.

Footnotes

1 A reproduction of the poster discussed is available in the supplementary material published alongside this article on Cambridge Core.

References

Chien, S. Y., Hwang, G. J., & Jong, M. S. Y. (2020). Effects of peer assessment within the context of spherical video-based virtual reality on EFL students’ English-speaking performance and learning perceptions. Computers & Education, 146(1), 103751. https://doi.org/10.1016/j.compedu.2019.103751
Han, Y. (2019). Exploring multimedia, mobile learning, and place-based learning in linguacultural education. Language Learning & Technology, 23(3), 29–38. http://hdl.handle.net/10125/44692
Lan, Y. J. (2020). Immersion, interaction and experience-oriented learning: Bringing virtual reality into FL learning. Language Learning & Technology, 24(1), 1–15. http://hdl.handle.net/10125/44704
Parmaxi, A. (2020). Virtual reality in language learning: A systematic review and implications for research and practice. Interactive Learning Environments, 1–13. https://doi.org/10.1080/10494820.2020.1765392
Sun, F., Pan, L., Wan, R., Li, H., & Wu, S. (2021). Detecting the effect of student engagement in an SVVR school-based course on higher level competence development in elementary schools by SEM. Interactive Learning Environments, 29(1), 3–16. https://doi.org/10.1080/10494820.2018.1558258
Wang, C.-P., Lan, Y.-J., Tseng, W.-T., Lin, Y.-T. R., & Gupta, K. C.-L. (2020). On the effects of 3D virtual worlds in language learning: A meta-analysis. Computer Assisted Language Learning, 33(8), 891–915. https://doi.org/10.1080/09588221.2019.1598444