1 Contextualising Online Harms in the Metaverse
1.1 Aims
Heraclitus, writing around 500 BCE, famously declared πάντα ῥεῖ (‘everything flows’), capturing the essence of a world in constant motion. He further illustrated this idea with the observation that ‘No man ever steps in the same river twice, for it is not the same river and he is not the same man’. This philosophy of perpetual change serves as a guiding principle for this Element, which explores the dynamic and evolving nature of cyberspace, particularly that of the metaverse, revealing how transformation shapes our understanding and experiences in virtual worlds. Over the past three decades, the convergence of communication and computing technologies has rapidly advanced, reshaping how individuals interact and engage with the world. The internet, the World Wide Web (WWW), and mobile communication have become embedded in modern society, influencing nearly every aspect of daily life (Castells et al., 2009). This digital transformation has created a new reality fundamentally different from how humans have lived for millennia (Levin et al., 2021).
As digital spaces become ever more pervasive, the emergence of the metaverse marks a shift towards a deeply immersive and interactive virtual environment, amplifying both positive and negative experiences (Patel, 2025). Its heightened sense of presence can enrich learning (Damaševičius et al., 2024), foster social connections (Hennig-Thurau et al., 2023), and create new opportunities for engagement (Rane et al., 2023). However, this same intensity also has the potential to magnify the impact of harmful encounters (Effing et al., 2024), possibly leading to loneliness (Oh et al., 2023), and long-lasting psychological and emotional distress (Zakaria, 2025). Despite the fast expansion of these virtual spaces, the broader implications, both for individuals and for society, remain largely unexplored. This uncertainty presents critical challenges in safeguarding children, as risks and vulnerabilities may emerge in novel and, at times, unpredictable ways.
The metaverse is currently underrepresented in the cybercrime literature, and it is the intention of this Element to critically explore this emerging topic. This Element aims to address these concerns by exploring the nature of harm experienced by children in virtual reality environments. Drawing on recent research (Davidson et al., 2024), it examines these risks through a criminological lens, assessing how immersive digital spaces have the potential to reshape victimisation, offending behaviour, and safeguarding strategies. By doing so, it aims to contribute to a more nuanced understanding of child safety in the metaverse and inform the development of effective protective measures.
1.2 Rationale
Virtual reality technology has been intentionally designed to blur the boundaries between digital and physical experiences, with some researchers suggesting that the mind and body often struggle to distinguish between the two (Sabry, 2022). As the metaverse evolves, particularly with the integration of haptic devices that provide sensory feedback, it is increasingly positioned to become a central part of everyday life. However, this immersive digital frontier also presents new risks, including the emergence of sexual violence in virtual environments (McGlynn et al., 2025). Research shows that individuals can experience psychological and physiological responses to virtual events as though they occurred in the physical world (Cheng et al., 2022; Patel, 2021), raising urgent questions about harm, consent, and protection in immersive spaces. Drawing from data gathered through the Virtual Reality Risks Against Children Project (VIRRAC), funded by the National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online (REPHRAIN), this Element presents different perspectives from industry and practice experts as well as children and young people themselves.
This Element seeks to provide a comprehensive examination of child safety in the metaverse, offering a multidisciplinary perspective that integrates criminological and psychological theory, technology, and policy. As virtual and immersive environments become increasingly embedded in the everyday lives of children and young people, it is important to understand the opportunities and the risks they present.
Another key objective is to address the existing knowledge gaps and resource limitations among professionals and practitioners responsible for protecting children from online harms, including child sexual exploitation and abuse (CSEA) in the metaverse. The Element identifies areas where safeguarding measures, legal frameworks, and technological solutions remain inadequate, and it offers practical recommendations for improving child protection efforts.
Crucially, this Element centres the voices of children, young people, and professionals (see Section 5) by engaging with them directly to understand their lived experiences, concerns, and expectations regarding online safety in the metaverse. Rather than solely relying on adult perspectives, it seeks to incorporate the insights of young users to inform more effective and child-centred policy, safeguarding practice and strategies.
This Element seeks to address critical gaps in knowledge, support evidence-based interventions, and inform future digital safety strategies by bringing together research, policy discussions, and the voices of young people. It is intended for academics, policymakers, child protection professionals, law enforcement, and technology developers committed to creating safer digital environments for children and young people.
1.3 Defining the Metaverse and Its Relevance to Child Safeguarding
The metaverse, despite its popularity, lacks a universally agreed-upon definition. Its multidimensional nature has resulted in a diverse range of interpretations. Academics have described it as a three-dimensional or spatial internet, an immersive virtual society with its own functioning economy, and a shared world where avatars participate in political, economic, social, and cultural life. It has also been characterised as an interoperable network, a medium for simulation and immersion, and a hypothetical synthetic environment intricately linked to the physical world (Dolata & Schwabe, 2023; Dwivedi et al., 2022; Lee et al., 2021; Park & Kim, 2022). Dolata and Schwabe (2023) found that the notion of the metaverse is subject to interpretative flexibility, as various actors shape its meaning according to their interests, affecting how it is presented and what components or technologies are connected to it, leaving it unclear whether it is already in existence, currently emerging, or only possible in the future.
For the purposes of this Element, we adopt a pragmatic approach: the metaverse is used as an umbrella term for immersive digital environments enabled by Extended Reality (XR) technologies, including virtual reality (VR), augmented reality (AR), and mixed reality (MR) (Rauschnabel et al., 2022). These environments are typically accessed through headsets or AR-enabled devices, and allow users to interact with others via avatars in shared, persistent spaces.
In this Element, we use XR technologies to refer to the range of tools that generate artificial or enhanced environments, and XR environments to describe any digitally modified setting. VR is particularly central to our analysis, as it fully replaces the user’s physical surroundings with a 3D digital world that is often experienced as immersive, interactive, and lifelike, sometimes including sensory feedback through haptic technologies. These affordances raise distinct safeguarding challenges, especially for children.
Unlike traditional social platforms such as Facebook or Instagram, the metaverse offers offenders enhanced opportunities to manipulate, coerce, or groom others in highly immersive, anonymous settings. The embodied nature of VR interactions (including voice, gesture, eye contact, and even touch via haptics) can intensify the sense of presence and emotional realism. This immersive quality, while a technical achievement, also complicates a child’s ability to distinguish safe from unsafe encounters and may increase the psychological impact of abusive or inappropriate behaviour (Fry et al., 2023).
While some scholars have proposed terms such as Extended Verse (XV) to account for the evolving and hybridised nature of these environments (Jagatheesaperumal et al., 2024), we have opted to use metaverse throughout this Element for clarity and accessibility. The term is now broadly recognised, and captures the technologies, environments, and user experiences most relevant to this Element. We have purposely avoided extensive definitional debates in order to focus on the practical implications of immersive technologies for child safety.
The metaverse, as a concept, emerged in the early 1990s, initially coined by author Neal Stephenson in his science fiction novel Snow Crash (1992). The term has since gained considerable traction, both in academic circles and in the popular media. It represents a collective virtual shared space facilitated by converging virtually enhanced physical reality and persistent digital environments. Within the metaverse, users engage through avatars and interact with one another in a simulated environment that is both immersive and persistent. While the term ‘metaverse’ remains widely accepted, it is important to acknowledge that terminology in this field is not fixed, and alternative expressions such as ‘Extended Verse’ (XV) have emerged. The term ‘Extended Verse’ (or ‘XV’) is a more recent innovation and could be seen as an expansion or modification of the traditional metaverse concept. ‘Extended Verse’ suggests an even broader scope, incorporating virtual worlds and enhanced interactions with the physical world. Such a term could include immersive hybrid environments that merge the virtual and real, offering users a more dynamic, cross-medium experience (Jagatheesaperumal et al., 2024). Therefore, the ‘XV’ label could evoke a sense of evolution beyond the metaverse, incorporating various technologies such as AR, holography, and photorealism, which augment the real world rather than replace it altogether.
Though ‘Extended Verse’ (XV) presents an exciting possibility, the decision to adopt ‘metaverse’ in this Element stems from several considerations. First and foremost, the metaverse has become a widely accepted and easily recognisable term within both academic and public discussions. It encapsulates the digital spaces most people envision when considering virtual environments, encompassing VR worlds, digital avatars, and the various platforms facilitating these experiences. The term ‘Extended Verse’ is, in contrast, less familiar. While it offers an expanded conceptualisation, it may confuse audiences who are more familiar with the established notion of the metaverse. Given that this Element aims to reach a broad audience, including those not necessarily immersed in the cutting-edge developments of digital technology, using the term ‘metaverse’ allows for easier understanding and greater engagement.
Moreover, the term ‘metaverse’ allows for flexibility within its application. As the metaverse continues to evolve and expand, the term itself has become a catch-all for a range of technologies and digital experiences. This broadness allows for future advancements, such as the incorporation of AR, AI, and other innovations, to fall under the metaverse umbrella (Purdy, 2022). For these reasons, the decision to maintain the use of ‘metaverse’ in this Element is one grounded in both accessibility and relevance, ensuring that readers can connect to the material while recognising the broad scope and potential of digital worlds.
At the same time, it is acknowledged that the field of virtual and augmented realities is fluid and rapidly changing. As new technologies and innovations continue to emerge, the term ‘metaverse’ may one day give way to other expressions such as ‘extended verse’ (Anderson et al., 2022). However, for the purposes of this Element, the choice to prioritise the familiar term ensures that the content remains engaging and relevant to a wide array of readers.
1.3.1 How the Metaverse Is Reshaping Our World
The metaverse is increasingly shaping various aspects of society, driving transformations in education (Bokyung et al., 2021), industry, entertainment, and digital economies (Dwivedi et al., 2022). Virtual reality (VR) and augmented reality (AR) technologies are revolutionising medical education by providing realistic, risk-free simulations for surgical training, allowing future professionals to refine their skills before performing procedures on real patients (Shahrezaei et al., 2024).
Beyond medical training, the metaverse is redefining education across a variety of disciplines, offering immersive and interactive learning environments that enhance engagement and knowledge retention (Lindgren & Johnson-Glenberg, 2013). In corporate and industrial training, VR-based simulations are increasingly used in fields such as aerospace, automotive, and energy, providing employees with risk-free, experiential learning opportunities that improve skill acquisition and workplace safety (Freina & Ott, 2015). Similarly, these practices are growing within policing, although they are still in the early stages; nevertheless, they present great potential for transforming law enforcement training and operations. Unlike traditional methods, which rely on classroom instruction and physical training exercises, the metaverse can enable interactive, dynamic environments where officers can engage in realistic simulations without real-world risks (Podoletz et al., 2024). For example, police forces across the UK are already adopting VR technology to provide situation-based learning in which officers can experience realistic simulations of scenarios such as domestic abuse or knife crime (Kleygrewe et al., 2024), in which response officers attend, talk to victims and witnesses, integrate and work with colleagues from specialist units, and are observed and supported by trainers. They can also build knowledge and skills, learning about the forensic qualities of different materials and objects as they encounter them (Kye et al., 2021). Team debriefings can be held, and the scenario could be carried through the whole investigation cycle, ending with the presentation of evidence in court (Hull, n.d.). Other potential uses include enabling police officers and staff to practise workplace conversations that are relatively infrequent but carry very high stakes when they do occur, such as delivering a death message, talking to a victim of domestic abuse, or conducting a disclosure briefing to a defence solicitor in a custody suite. Learners could also explore psychology and criminology as they walk through crime case studies. Constructivist and experiential learning theories suggest that students learn more effectively when actively engaged in their environment. Metaverse-based learning platforms align with these pedagogical models by enabling hands-on experiences that promote deeper understanding (Kolb, 1984).
When focusing on children and the associated risks, it is important to examine the gaming industry, which remains at the forefront of metaverse innovation. VR headsets are enabling unprecedented levels of immersion, allowing players to step into fully interactive digital environments and creating deeper engagement with both the game world and other participants (Cummings & Bailenson, 2016). While platforms like Second Life, Minecraft, Roblox, and Fortnite are not yet fully realised 3D metaverses, they exemplify the potential of persistent virtual worlds where users, through a digital avatar, can connect, socialise, explore, and experience virtual spaces with others who are not physically present (University of Cambridge, 2022, December 14).
The future of the metaverse remains uncertain, but Kristensson (2022) argues that it holds potential for creating new opportunities for individuals marginalised by disability. He is investigating how the metaverse could provide life-changing benefits to people of all ages experiencing impairments, including those related to vision, hearing, mobility, dexterity, and mental health. The metaverse also provides increased opportunities for socialisation for those with neurodiversity diagnoses such as autism spectrum disorder, as social interactions using headsets can minimise the difficulties often faced with eye-to-eye contact and verbal communication impairments (Lee et al., 2023).
As the metaverse continues to evolve, its societal impact extends beyond these domains, influencing education, governance, and human interaction in a variety of ways. In part, this Element aims to engage with the implications of the widespread adoption of the metaverse, particularly regarding the safety of children and young people.
1.4 Children and the Metaverse: Overview of Online Harms
This Element explores the types of harm children may encounter in the metaverse, examining how curiosity, peer influence, and social engagement can draw them into virtual spaces that are not intended or designed for audiences of their age group. Despite platforms setting a minimum age of thirteen, weak age verification measures often allow underage access to adult-oriented environments (Patel, 2025). As a result, children may be exposed to various risks, including data privacy violations, inappropriate content, grooming, and cyberbullying. Engaging in gaming and social interactions within the metaverse has become increasingly appealing to children, facilitated by ‘social VR’ applications that offer spatial and relational experiences akin to everyday interactions, irrespective of physical location. The growing popularity of online social gaming attracts children and young people globally, with an estimated 2.7 billion gamers identified as a key consumer demographic (UNICEF, 2023). Platforms such as VRChat, Meta Horizon Worlds, and Rec Room enable users to connect with friends and strangers. Notably, Roblox, a user-generated gaming platform, is considered one of the most frequented metaverses, boasting over thirty-two million daily active users under thirteen as of the fourth quarter of 2024 (Statista, 2024). This demographic represents a substantial portion of its user base, highlighting the platform’s extensive engagement among younger audiences.
Children are not only participants but also creators within these platforms, developing virtual worlds, designing products, and offering services such as tutorials and entertainment content. This is especially popular within Roblox and has led to concerning reports of financial exploitation, with accusations of child labour levelled against such platforms (The Guardian, 2022). Some content creators and game developers have achieved notable financial success and global recognition. A particularly lucrative aspect of the metaverse experience is avatar creation. Avatars, digital representations of users or virtual bodies (Han et al., 2023), allow children to explore self-expression and identity formation. The synchronisation of avatars’ movements and gestures with users’ real-time actions through VR headsets and controllers enables immersive, first-person experiences. Research indicates that approximately half of young individuals perceive their virtual identity as an extension of their physical selves, with 41 per cent finding it easier to express themselves online than in real life (McKinsey & Company, 2022). However, their experiences, such as the sense of presence or enjoyment, can depend on the amount of time spent in the metaverse and the realism of their avatars, with studies suggesting that uniform avatars may provide greater enjoyment than customised self-avatars (Han et al., 2023).
Positive opportunities for children within the metaverse have been identified, including accessible education, fitness and creativity enhancement, social interaction, workplace readiness training, and the promotion of participatory and digital rights (UNICEF, 2023). Through immersive technologies, children are introduced to interactive and engaging educational environments, often leading to enhanced learning outcomes in formal and non-formal educational settings (Dede, 2019). However, the metaverse presents a range of potential harms to children, including exposure to explicit content, data privacy concerns, and harmful interactions. A study by the Centre for Countering Digital Hate (CCDH) revealed that users, including minors, encounter abusive behaviour approximately every seven minutes in virtual environments (CCDH, 2023). These harmful interactions include harassment based on race, sexuality, and gender; exposure to graphic sexual content; bullying; and threats of violence. The immersive nature of these platforms has the potential to intensify the impact of such negative experiences, with inadequate monitoring and moderation exacerbating the issue. The foundational work of Thomas Holt (e.g., Holt et al., 2013; Holt et al., 2015) laid the groundwork for understanding the dynamics of cybercrime and online child exploitation, and this Element builds upon that legacy by examining how immersive affordances, such as embodied presence, spatial audio, and avatar realism, create new and intensified vectors for harm in the metaverse.
The metaverse’s immersive environments also facilitate grooming processes, allowing offenders to exploit technology to coerce and abuse children (Gómez-Quintero et al., 2024). In 2023, UK police investigated at least eight instances where virtual reality (VR) devices were used to store and view child sexual abuse material (CSAM). A BBC investigation revealed alarming instances of online harm within virtual reality environments. A researcher, posing as a thirteen-year-old girl, encountered grooming behaviours, exposure to sexual content, racist abuse, and even a rape threat while navigating VRChat, a widely accessible online virtual world (BBC News, 2022, February 22).
Bullying within metaverse platforms is another major concern. A study conducted in the United States (Hinduja et al., 2024) revealed that minors engage in abusive behaviours, such as following, insulting, and shouting at others. The absence of capable guardians and adequate safety mechanisms can escalate such behaviour.
The metaverse presents significant risks related to online radicalisation. Its immersive nature, combined with the ability for users to interact in real-time through lifelike avatars, creates an environment that extremist groups can exploit for recruitment, training, and the dissemination of harmful ideologies (UK Parliamentary Office of Science and Technology, 2023). Security agencies have raised concerns that these groups may use online gaming and virtual reality platforms to target vulnerable minors, expose them to propaganda, and integrate them into extremist networks (Radicalisation Awareness Network, n.d.). The deeply engaging and interactive nature of these spaces can reinforce in-group beliefs, facilitate indoctrination, and even serve as a training ground for violent behaviour (Bhatt et al., 2023).
Data privacy is another critical issue in the metaverse, potentially infringing upon children’s rights and allowing for the extraction of sensitive information about their habits and emotional states (UNICEF Innocenti, 2023). This data could be misused for malicious intent or targeted advertising (Information Commissioner’s Office, n.d.).
This Element will address all these challenges and present a critical evaluation as to why a comprehensive approach is required, including robust regulatory frameworks, technological safeguards, and educational initiatives to protect young users in these evolving digital spaces.
1.5 Structure
This Element explores how contemporary criminological theories can enhance our understanding of children’s online experiences as victims (see Section 2), addressing a significant gap in the literature. It also highlights the critical knowledge gaps and resource limitations faced by professionals and practitioners responsible for safeguarding children at risk of abuse and exploitation within metaverse environments. Furthermore, it investigates the complexities encountered by professionals in detecting, protecting, and disrupting CSEA activities within the metaverse platforms, shedding light on the multifaceted nature of professional practice in this domain (see Section 5). It amplifies the voices of children, providing an understanding of the need for enhanced support and online safety measures. Through a blend of research findings and practical insights, this Element contributes to a deeper understanding of the challenges and opportunities inherent in safeguarding children within metaverse platforms, providing a foundation for informed policy decisions and targeted interventions aimed at protecting the most vulnerable members of society.
Section 2 applies criminological theories to digital offending and victimisation in the metaverse. It explores frameworks such as anomie theory, routine activity theory, and social learning theory. These theories help explain how offenders operate in virtual environments and the impact on victims. The section highlights the intersection of digital exploitation and broader online abuse patterns, contributing to an evidence-based approach to understanding criminal behaviour in the metaverse.
Section 3 discusses research challenges and methodologies for studying CSEA in the metaverse. Due to the anonymity and fluidity of virtual spaces, traditional research methods may be inadequate. The Element draws from the VIRRAC project, which employs interdisciplinary approaches, including criminology, psychology, and sociology, to examine the dynamics of online harms. A mixed-methods approach, combining VR interactions, qualitative focus groups with children and young people, thematic data analysis, stakeholder roundtables, and primary survey data, aims to offer a comprehensive understanding of risks in these immersive spaces.
Children’s perspectives are at the heart of Section 4, which examines their experiences within metaverse platforms. A youth panel (ages 13–18) provides insights into the use of 2D platforms for gaming and socialising, while younger children (ages 8–12) from different countries reflect on their experiences in VR spaces. While many enjoy online interactions, they also report challenges such as exposure to racism, harassment, and grooming. Notably, children express greater concern about grooming and doxing than online hate crimes, emphasising the need for tailored safety measures. Trust in online individuals is also a key theme, with most preferring connections with people they know offline.
Section 5 examines the practical challenges of safeguarding children in the metaverse, drawing on findings from this and other research. It also explores broader online safety policy issues in the context of child protection and VR environments, incorporating insights from focus groups with expert stakeholders in education and industry (Stage 6 of the VIRRAC Study). Furthermore, this section considers the implications of these findings for policing challenges. Respondents highlighted the metaverse’s potential benefits for children and young people, such as increased inclusivity and therapeutic applications; however, they also expressed concerns about risks, including cyberbullying, harassment, online hate, and grooming.
The Element concludes with Section 6 calling for a recalibration of regulatory, theoretical, and ethical approaches. It argues that traditional criminological theories must evolve to account for digital embodiment, interactivity, and immersive spatial-temporal dynamics. Similarly, policy responses need to move beyond fragmented, reactive strategies towards child-centred, proactive design frameworks. Children’s voices, central throughout the Element, are positioned not only as sources of insight but also as essential partners in developing safer, more empowering digital futures.
The Element ends with a powerful ethical call: protecting children in the metaverse requires dynamic, adaptive, and child-informed strategies. Safeguarding is no longer just about mitigating harm; it is about reimagining digital life in ways that centre children’s rights, agency, and well-being. It is a challenge not only of regulation or technology, but of collective responsibility in an ever-evolving digital world.
This Element serves as a critical resource for understanding CSEA in digital spaces and shaping policies to protect children in the evolving metaverse landscape.
2 Offending and Victimisation: An Application of Criminological Theory to a Virtual Reality Paradigm
2.1 Introduction
The emergence of Virtual Reality (VR) technology and the proliferation of metaverse platforms have created new avenues for investigating the dynamics of offending and victimisation, particularly in the context of cyber offences against children, through a criminological lens (Van Gelder, 2023). Instances of interpersonal violence targeting users and children’s experiences of hate crimes and harassment within these virtual realms underscore the need for social scientists to critically examine this unexplored terrain where legal boundaries, moral ethics, and cognitive vulnerability intersect. The potential impact of this research is significant (Miah, 2022). Primarily, it can provide pivotal insight into ways of detecting, disrupting, and criminalising those who offend and inform measures of support for their victims.
In recent years, there has been significant momentum in the study of cyber offending behaviours and victimisation, leading to a reassessment of criminological frameworks (Bradbury et al., 2024; Goldsmith & Brewer, 2015; Phillips et al., 2022). However, the application of traditional social science theories in this virtual environment requires further adaptation, given the unique and exciting characteristics of the digital sphere. This novel application makes this Element unique, as it will explore offending and victimisation through various theoretical lenses in the context of the VIRRAC research.
Focusing on three primary actors within virtual reality platforms: offenders exploiting children, children engaging in risky online behaviours, and the virtual reality environment itself, this section begins by examining the uniqueness of the virtual reality environment as a facilitator of online behaviour before moving on to consider how such environments can lead to the proliferation of deviancy and victimisation.
Thomas Holt’s body of work provides a valuable theoretical and empirical foundation for understanding criminal behaviour within metaverses, even though he has not focused on these virtual environments directly. His research on cybercrime, digital deviance, and offender decision-making offers frameworks that are particularly adaptable to immersive and decentralised platforms (Holt et al., 2013; Holt et al., 2015).
Central to Holt’s criminological approach is the application of social learning theory, which posits that individuals acquire deviant behaviour through interactions and shared norms within online communities. This theory finds relevance in metaverses, where socialisation among avatars facilitates the transmission of criminal techniques, rationalisations, and affiliations. Additionally, his incorporation of routine activity theory illuminates how virtual spaces inadvertently foster opportunities for crime. In metaverses, the convergence of motivated offenders, suitable targets, and absent or ineffective guardianship mirrors the conditions Holt identifies in more traditional online settings. Building upon the work of Holt, several theories have been deemed integral to this discussion, ranging from functionalist, positivist, and cultural perspectives to the Chicago School of thought.
While numerous theoretical frameworks could be applied to online virtual reality behaviours to enhance our understanding of offending and victimisation, this section focuses on a select few that are most relevant to the topic: Durkheim’s Anomie (1893), Shaw and McKay’s approach to Social Disorganisation Theory (1942), Suler’s (2004) Online Disinhibition Theory, Lyng’s (1990) approach to Edgework Theory, Hirschi’s (2004) Self-Control Theory, and finally Sykes and Matza’s (1957) Neutralisation Theory. The critical examination of these theories enables a deeper understanding of virtual reality behaviours and the various factors that shape individual decision-making and actions within virtual environments.
2.2 Virtual Reality Environments, Offending, and Victimisation
The VIRRAC Study and other research suggest that offending in VR spans a broad spectrum of behaviours, from verbal harassment, hate crime and trolling to more complex actions like virtual sexual assault, stalking, theft of virtual property, grooming, and exploitation (Davidson et al., 2024; Porta et al., 2024). The immersive nature of VR, in contrast to web2 platforms, has been argued to intensify the psychological and emotional impact of offences, rendering users’ experiences more immediate and personal than comparable behaviours in non-immersive online environments (George, 2024). Moreover, the anonymity afforded by VR and the malleability of virtual identities often embolden individuals to act in ways they would not in the physical world (XinYing et al., 2024). This disinhibition, combined with the perception that actions in VR are ‘not real’, poses unique challenges for understanding and addressing deviance in these spaces.
For instance, theories like Routine Activity Theory (Cohen & Felson, 1979) can explain how offenders exploit the convergence of targets and a lack of capable guardianship in poorly moderated VR spaces. Similarly, Neutralisation Theory (Sykes & Matza, 1957) sheds light on how offenders justify their actions by minimising harm or denying responsibility, often framing VR offences as harmless entertainment. Other frameworks, such as Edgework Theory (Lyng, 1990) and Social Disorganisation Theory (Shaw & McKay, 1942), provide insight into the social and psychological dynamics that foster deviant behaviour in VR, including the thrill of pushing boundaries and the absence of cohesive societal norms.
2.3 Societal Norms and Social Disorganisation
Émile Durkheim’s theory of anomie (1893) refers to a state of normlessness or a breakdown in the social norms and values that guide individual behaviour within a society. This concept can be applied to understanding online offending in virtual reality environments from a social constructionist perspective, as the nature of online environments is a complex paradigm of diverse users with their own moral and ethical belief systems and ambiguous jurisdictional boundary lines (Mincewicz, 2023; Thierer & Crews, 2003). However, the online world is arguably more fractured, as there is no singular societal norm governing the behaviour of others, but rather a space in which multiple societal ideologies and norms exist in the same space and time. Manuel Castells argues that digital environments are not monolithic but rather comprise multiple, overlapping societies and groups, each shaped by distinct cultural, ideological, and normative frameworks (Castells, 2000).
It can be argued that the exponential growth of the internet and digital technologies has outpaced the development of ethical guidelines, legal frameworks, and social norms governing online behaviour (Schoentgen & Wilkinson, 2021). The web3 environment is, in essence, a new world, not built on the customs and values of any singular society that governs action. This can leave individuals in a moral vacuum, where they may engage in online offences without fully recognising or respecting the consequences (Hu & Zhou, 2013). The disconnected and transient nature of online spaces does not require the establishment of social bonds or norms for people to operate within the environment. It could be argued that people use online spaces to escape such bonds and social controls that heavily restrict their behaviours, attitudes, and self-representation in offline spaces. However, without such social bonds or controls, criminological theories have argued that this can potentially increase deviancy as there is no social pressure to conform (Hirschi, 1969).
Durkheim (1893) and Hirschi (1969) emphasised the importance of social bonds in preventing deviance. They believed an environment lacking clear guidelines on behaviour resulted in an increased sense of disconnection and a weakened pressure to conform.
Much in alignment with earlier discussions around virtual reality behaviour from the perspective of Durkheim, social disorganisation theory, developed by Shaw and McKay in 1942, offers valuable insights. While initially applied to physical neighbourhoods, its principles can readily be applied to cyberspace, including the metaverse, in elucidating why individuals may engage in risky or deviant behaviours. Research has indicated that social disorganisation in cyberspace, characterised by a lack of structure, moderation, and supervision, catalyses risk-taking and deviant behaviours (Suarez, 2015). Furthermore, in conjunction with factors such as anonymity, absence of rules and regulations, and inadequate content moderation, the diverse user populations contribute to a digital environment where social norms are ambiguous and regulatory mechanisms are less effective. One of the challenges in regulating harmful behaviour within the metaverse lies in the content moderation process and the complexities in detecting audio-overwritten abusive language. The decentralised nature of many VR platforms limits the ability of moderators or administrators to establish effective control, paralleling the lack of law enforcement in socially disorganised neighbourhoods.
In virtual spaces, relationships can be superficial, lacking the emotional depth found in real-world interactions. This weakens social attachments, reducing the internalised pressure to conform to accepted behaviours. Cyberbullying, harassment, and even digital fraud thrive in such environments where empathy is diminished. The metaverse provides an alternate reality where individuals can construct different identities and engage in behaviours that might be deemed unacceptable in physical society. Some users, detached from real-world consequences, may engage in hacking, virtual theft (e.g., stealing digital assets or NFTs), or other unethical activities without fear of legal repercussions. The immersive nature of the metaverse can lead individuals to spend excessive time in virtual worlds, decreasing their participation in conventional activities such as work, education, or family life. This disconnection can foster antisocial behaviour, including online extremism or illegal activities conducted in digital spaces. Without real-world regulatory frameworks, moral values in the metaverse become fluid. Some users may develop alternative moral codes that justify deviant behaviours like virtual violence, exploitation, or illicit trade (e.g., digital black markets). If the virtual community normalises such behaviours, individuals may feel less compelled to adhere to conventional ethical standards. It can also lead to lapses in self-control, as the environment, perceptions of impact, and consequences can dissociate individuals from attributing their actions to real-world harms.
Virtual reality (VR) creates an immersive digital environment where users can interact in ways that may differ from real-world behaviour or even from behaviours that occur on 2D social or gaming platforms. VR can lower real-world consequences and increase impulsivity; it can amplify the relationship between low self-control (Kumar et al., 2023) and deviant behaviour, offering immediate sensory feedback and gratification and reinforcing impulsive behaviours. In physical reality, actions often have social, legal, or moral repercussions. In VR, the perceived consequences are weaker, making it easier for users with low self-control to engage in deviant behaviour without fear of punishment. There is a risk that VR environments can reinforce impulsive behaviours, making them habitual; such behaviours are amplified by the absence of real-world consequences, as interactions are not face-to-face. Instead, there is the provision of a veil of anonymity by which any user can hide behind an avatar and distance themselves from their actions and feelings of accountability, especially when engaging in harmful or deviant behaviours (Suler, 2004).
Avatars in 3D Virtual Reality (VR) spaces offer a fundamentally different and more immersive experience than those in 2D environments due to the depth, embodiment, and interactivity they enable. Avatars are not just visual representations; they become extensions of the user’s body, often tracked in real time through headsets, hand controllers, and even full-body sensors. This allows users to move naturally, gesture, and interact with virtual objects and other avatars in ways that mimic real-world behaviour. The result is a heightened sense of presence, the psychological feeling of ‘being there’, which is far more difficult to achieve or replicate in 2D settings such as Facebook and Roblox.
By contrast, 2D environments typically rely on flat, static, or minimally animated avatars, often viewed from a fixed perspective. These avatars may convey identity or emotion through text or limited visuals, but they lack spatial depth and embodied interaction. Users in 2D spaces are observers; in 3D VR, they are participants. Studies show that users often experience a phenomenon called the Proteus Effect (Yee & Bailenson, 2007), where their behaviour changes to align with the characteristics of their avatar. This effect is amplified in VR due to the stronger sense of embodiment, presence, and interconnectivity.
Whilst cyberspace can enhance interconnectivity and mitigate loneliness and social isolation, online spaces, particularly immersive spaces such as the metaverse, can paradoxically foster fragmented interactions, weakening meaningful social integration (Yasuda, 2024). Similar to how individuals seek out online communities for mental health, lifestyle or well-being support (Kim et al., 2023), those who feel disconnected from traditional social structures may turn to virtual communities that provide a sense of belonging and validation; this has been evidenced in the context of youth cybercrime (Aiken et al., 2016). However, as Copes and Williams (2007) note, certain online spaces, such as incel groups, can promote ideological radicalisation, often supporting misogynistic discourse (Powell et al., 2017) and, in some cases, extremist beliefs or violent ideation.
2.4 Deviant Behaviour in VR Environments
In any new technological environment, there will always be those who wish to engage in harmful acts towards others, whether intentional or unintentional. The online world, especially in its developing state of regulation, can often provide opportunities for unregulated behaviours. Research has shown that individuals intent on using online spaces to commit acts of deviancy will take advantage of technological and legal weaknesses (Curtis & Oxburgh, 2023). Criminological theory has somewhat neglected this area but has an important contribution to make to online safety as, through applying criminological and sociological theory, it is possible to develop a greater understanding of the manifestation of these behaviours and how they are unique to VR spaces. Whilst many theoretical constructs can be used, those that relate to anonymity, neutralisation, and self-control will be examined here.
The role of online anonymity as a determinant of online behaviour is integral to understanding human behaviour in online spaces. It can amplify harmful conduct by reducing the perception of social control and accountability (Nitschinsk et al., 2023). This aligns with Durkheim’s idea of anomie, as individuals may feel less restrained by societal norms, resulting in behaviours like trolling, spreading misinformation, or exploiting others, as it separates individuals further from their actions and consequences.
The augmented reality of metaverse domains blurs the lines between the virtual and the ‘real world’ more so than web2 platforms. Web3 environments give users a sense of reality confusion, which can impact a person’s willingness to engage in harmful behaviours. Such behaviours can become augmented and disinhibited, involving harmful and abusive acts that users may not act out so easily in the ‘real world’ (Suler, 2004). Online metaverse offenders may view their online predatory behaviours as not being harmful, as the acts take place in virtual spaces.
Suler’s online disinhibition effect (ODE) describes how individuals tend to behave with less restraint in online environments due to anonymity, invisibility, and the lack of immediate real-world consequences. Applying this theory to offending behaviours in VR reveals a significant relationship between VR’s immersive characteristics and the disinhibition that can lead to deviant or harmful actions. Suler identified six areas of influence on human behaviour online:
2.4.1 Dissociative Anonymity
In VR, users often interact through avatars, which can mask their real-world identity. This sense of anonymity can encourage behaviours that individuals would not exhibit in real life, such as harassment, verbal abuse, or sexually inappropriate actions (Kim & Ellithorpe, 2023). Offenders may dissociate their actions from their real-world selves, justifying their behaviour as unrelated to their true character. This can be further understood through the impact of the Proteus Effect (Yee & Bailenson, 2007), whereby individuals adopt the characteristics of their avatars, and their self-perception and actions change to reflect the persona of the avatar, creating a greater degree of dissociation.
2.4.2 Invisibility
Although VR creates a sense of presence, the offender is not physically visible in the traditional sense. Only the VR user’s avatar is visible. This characterisation can change an indefinite number of times, and this degree of invisibility can embolden individuals to engage in actions they might avoid in face-to-face interactions. This phenomenon is amplified in VR spaces with limited consequences or enforcement mechanisms for deviant behaviour. The sense of presence is likely to change as advancements in photorealistic VR are introduced.
2.4.3 Asynchronicity
While many VR interactions occur in real-time, the perception of delayed or minimal consequences remains. Offenders may feel they have more time to ‘get away’ with harmful behaviours before being held accountable, reducing their inhibitions. Communications with other users, such as forms of harassment, can take place over an extended period and can be paused and resumed as users re-engage.
2.4.4 Solipsistic Introjection
In VR, communication is often filtered through avatars and virtual environments, making interactions feel less ‘real’. Offenders might project their fantasies or assumptions onto others, leading to a lack of empathy or understanding of the harm they cause. It is also possible that users can misinterpret the behaviours of others and even create impressions of others that are far removed from who they are in the real world. This tactic can be used by individuals intent on sexually abusing and exploiting children (Odudu, 2024).
2.4.5 Dissociative Imagination
VR environments are often perceived as separate from the real world, a ‘game-like’ space where social norms and rules do not apply. This can encourage individuals to behave as if their actions in VR have no real consequences, fostering harmful behaviour such as theft, assault, or harassment within the virtual space.
2.4.6 Minimisation of Authority
VR environments are often decentralised or lack clear authority figures, making offenders feel less restrained by social or institutional controls. This diminishes the perceived risk of punishment and increases disinhibition. The level of power or affluence offline is immaterial in many online environments.
Suler’s (2004) online disinhibition effect outlines some of the potential mechanisms behind online deviance in VR spaces, as it explains how the online environment reduces social constraints and inhibitions, leading individuals to engage in behaviour they might avoid in face-to-face interactions. This can be benign disinhibition (e.g., self-disclosure, openness) or toxic disinhibition (e.g., cyberbullying, trolling, hate speech). Building upon this, it is important to consider how individuals use cognitive justifications to neutralise feelings of guilt when engaging in deviant behaviour. This allows them to break social norms while maintaining a self-image of morality. Whilst it has been argued that individuals can experience a heightened sense of reality confusion whilst using VR, to date no research has determined whether this sense of realism has an impact on disinhibition. It is possible that the extended sense of presence, and an individual’s evaluation of the costs and rewards of actions, could minimise an individual’s willingness to take risks in a way not too dissimilar to offline behaviours. However, the reverse could also occur, whereby the thrill of breaking societal constructs could amplify the desire to engage in deviant behaviours.
Once individuals engage in deviant online behaviours due to disinhibition, they may use neutralisation techniques (Sykes & Matza, 1957) or denial (Marshall, 1994) to justify their actions. VR users may justify deviant behaviour using techniques that neutralise guilt or responsibility by downplaying the significance of their actions, the harm caused, or their culpability, as the environment they are in is fantasy based.
The process of neutralisation centres around the denial of the harm caused, the legitimacy of victimisation, and the placing of accountability on others. This process was identified as involving five areas of neutralisation which can be applied to VR:
2.4.7 Denial of Responsibility
Offenders claim they have no control over their actions due to the nature of VR environments. This can include directing blame towards the technology being used, such as system glitches, or attributing the behaviour to their avatar (as previously discussed in relation to online disinhibition) rather than to themselves. Research has shown this to occur when adults engage with children sexually on online platforms, whereby the adults blame the features of the platform, not their actions (Bradbury et al., 2024). This disassociation is amplified in VR because users interact through avatars and virtual personas, creating a psychological distance between the individual and their actions. Offenders could also blame the behaviours of others as a form of diffusion of responsibility regarding their actions, as they are only doing what everyone else is doing.
2.4.8 Denial of Injury
For this form of neutralisation, individuals argue that their actions caused no real harm because the VR space is not ‘real’; it is only a fantasy or just a game in which there are ample opportunities for others to leave the environment should they wish. This justification minimises the emotional or psychological harm experienced by victims, especially in cases like sexual harassment, verbal abuse, or griefing (deliberately ruining another user’s experience).
2.4.9 Denial of the Victim
Offenders suggest that the victim deserved the behaviour or that they are not truly victims. This could be where one individual feels aggrieved by another and seeks retribution or retaliation, which they view as justified, or where an individual believes that behaviours towards others online have less harmful impacts on victims. Whilst the denial of a person’s status as a victim has been extensively discussed and debated over the last century, in a VR context it becomes even more ambiguous: sexual interactions, for example, raise particular challenges because no physical contact occurs without the use of haptic suits.
2.4.10 Condemnation of the Condemners
This can occur when offenders shift blame onto those who criticise or enforce rules. For online platforms, the condemners are content moderators responsible for identifying harmful behaviours through observation, user reports, or proactive algorithmic detection. Research conducted by Bradbury et al. (2024) has revealed that when adults engage with children sexually online, they neutralise their behaviours by blaming the features of the sites as facilitators of the interaction. Similarly, content moderators have been blamed for being overzealous in their approach to interactions which users regard as harmless. This deflection undermines the legitimacy of rules and discourages accountability.
2.4.11 Appeal to Higher Loyalties
Offenders justify their actions by claiming loyalty to a group or cause overrides the norms of the VR environment. This can occur in their defence that their actions were for the good of others, such as teammates, in winning a game. This technique is especially prevalent in group dynamics where collective behaviour supports deviance, such as in trolling communities or hacker groups.
As discussed with Suler’s Solipsistic Introjection, many users may perceive VR spaces as ‘not real’, which facilitates neutralisation techniques like denial of injury or responsibility. Changing cultural attitudes about the impact of VR interactions is crucial to reducing offending.
Although he was not the first academic to explore the notion of edgework within sociology and criminology, the sociologist Stephen Lyng developed Edgework Theory in the 1990s as a perspective that examines the appeal of risky and deviant behaviour. The theory suggests that individuals are drawn to deviant behaviours for the excitement and adrenaline rush they provide and for the unique experience of navigating the boundaries between risk and safety, control and danger. Edgework Theory is an important contribution to understanding adults’ and children’s online behaviours, especially in metaverses whose design aims to stimulate and thrill through augmented environments. These sensations can directly correlate with the desire to push boundaries and take risks to intensify them, which can contribute to deviant behaviours in conjunction with a sense of dissociation and disorganisation (Shaw & McKay, Reference Shaw and McKay1942; Suler, Reference Suler2004).
Lyng’s Edgework Theory explores why individuals engage in risky or deviant behaviours for the thrill of pushing boundaries and experiencing control in chaotic or uncertain situations. When applied to offending in virtual reality (VR), the theory highlights how VR environments provide a unique space for users to test boundaries, take risks, and engage in deviant behaviours that might not be possible or permissible in real life. As discussed in Section 1, VR offers a space where individuals can take risks without experiencing the immediate real-world consequences, making it a fertile ground for edgework behaviours. Offenders might see their actions as a test of skill, such as outsmarting security systems, manipulating environments, or navigating social boundaries within VR. The rules and norms in virtual environments are often less clear or enforced than in the physical world, providing a liminal space for individuals to explore deviant behaviours. In other words, VR environments provide users with greater opportunities to engage in behaviours they might avoid in real life, such as theft, violence, or identity manipulation, as a means of exploring the boundaries of acceptable conduct.
This perception encourages edgework that might involve offending others, like stalking, harassment, or sabotaging virtual assets. Again, as emphasised by Suler’s work, using avatars makes offenders feel distanced from their actions, making it easier to justify boundary-pushing behaviours. The sense of thrill and excitement may be drawn from an increased sense of power and mastery by bending or breaking the rules, particularly in unmoderated or highly customisable environments. The unique environment may also magnify the appeal of edgework as offenders seek peer recognition and approval.
Edgework theory can be directly supported by self-control theory, proposed by Gottfredson and Hirschi (Reference Gottfredson and Hirschi1990), which suggests that individuals with low self-control are more likely to engage in deviant behaviour because they seek immediate gratification, lack foresight, and struggle with impulse regulation. This theory argues that deviant acts occur when an opportunity presents itself, and the individual lacks the self-discipline to resist.
It is reasonable to suggest that VR environments enhance sensory experiences, which can directly diminish an individual’s capacity for self-control (Kumar et al., Reference Kumar, Malhotra and Kathuria2025; Zhang, Reference Zhang2022). Furthermore, research indicates that heightened states of arousal are strongly correlated with increased risk-taking behaviours (Icenogle & Cauffman, Reference Icenogle and Cauffman2021), which may not only contribute to greater deviant conduct but also increase vulnerability to victimisation.
2.5 Victimisation in Virtual Reality Environments
Victimisation in the metaverse is an emerging issue that challenges traditional criminological theories. The development of the metaverse creates new opportunities for victimisation that parallel but also diverge from real-world experiences. Theories of victimisation, including Routine Activity Theory (Cohen & Felson, Reference Cohen and Felson1979), Lifestyle Theory (Walters, Reference Walters1990), and Victim Precipitation Theory (Wolfgang, Reference Wolfgang2002), provide robust frameworks for understanding how individuals become victims in this digital space.
Routine Activity Theory (Cohen & Felson, Reference Cohen and Felson1979) suggests that victimisation occurs when three elements converge: a motivated offender, a suitable target, and the absence of capable guardians. In the metaverse, all three components are frequently present. Motivated offenders include cybercriminals, hackers, and digital predators who exploit virtual spaces’ anonymity and unregulated nature. Suitable targets range from new or unsuspecting users to children or individuals who possess valuable digital assets, such as non-fungible tokens (NFTs), cryptocurrency, or the ability to share images. The absence of capable guardians is particularly significant in the metaverse, as many platforms lack effective moderation, legal enforcement, or technical protections against cybercrimes such as hacking, harassment, and virtual sexual assault. Capable guardians can also extend beyond parents or carers to a wide range of societal members who have a safeguarding responsibility over children, such as doctors, nurses, teachers, and community workers. It is vital that conversations with children about online safety are not limited to preventative and interventive measures, but extend to post-abuse care and support. Research has shown that after engaging with adults sexually online, children often seek advice and support online to determine whether they are at fault and at risk of criminalisation for their actions (Bradbury, Reference Bradbury2025). The risk in this approach lies in the quality of the responses they receive and the potential for further harm caused through mis/disinformation. To fully support children, there need to be spaces where they can access reliable, informative resources and therapeutic support without the risk of criminalisation or further exploitation.
This combination creates an environment where victimisation is both easy to commit and difficult to prevent. Whilst metaverse users may not define themselves as vulnerable, all users are vulnerable when navigating technological spaces. It can be argued that VR metaverses amplify that vulnerability: the maximisation of sensory experience means that harmful interactions with others can seem more real and feel more threatening, and hearing other users’ voices in acts of sexual harassment, misogyny, and hate speech increases experiences of intimidation, anxiety, and fear (Freeman et al., Reference Freeman, Zamanifard, Maloney and Acena2022).
While the theory traditionally emphasises situational factors over offender motivation, targeted offender interventions can nonetheless play a critical role in disrupting this convergence and mitigating associated risks.
Interventions aimed at offenders can reduce the likelihood of criminal events by altering either their motivation or their capacity to act. Cognitive-behavioural programs, for instance, seek to reshape decision-making processes and challenge distorted thinking patterns, thereby reducing the propensity to offend. These programs can be particularly effective in addressing impulsiveness, entitlement, or antisocial attitudes that contribute to criminal behaviour.
Moreover, interventions that enhance social bonds, such as employment support, educational opportunities, and community reintegration, can reduce exposure to criminogenic environments and diminish the appeal of suitable targets. By stabilising offenders’ routines and embedding them in prosocial networks, these measures indirectly increase guardianship and reduce the spatial-temporal conditions conducive to crime (Nolet et al., Reference Nolet, Charette and Mignon2022).
In digital or metaverse environments, offender interventions might include behavioural nudges, real-time moderation, or algorithmic deterrents that reduce anonymity and increase accountability. Such measures can recalibrate the offender’s perception of risk and opportunity, even in decentralised or avatar-based spaces, and as a result can disrupt the behaviour before it develops into an established lifestyle choice.
Lifestyle theory posits that individuals’ behaviours and choices influence their risk of victimisation. In the metaverse, various behaviours increase the risk to users, such as frequenting unmoderated virtual spaces, sharing personal information, or engaging in financial transactions. This may be partly driven by the immersive nature of the metaverse, which enhances users’ sense of freedom and security: the heightened sense of presence and embodiment in virtual spaces can lead users to lower their guard, mistakenly perceiving the environment as safe and controlled while unknowingly exposing themselves to potential risks (Slater & Sanchez-Vives, Reference Slater and Sanchez-Vives2016). This openness to risk can ultimately result in victims of online exploitation and abuse being blamed for their active involvement in risk-taking behaviours.
Victim precipitation theory (Wolfgang, Reference Wolfgang2002) examines the role of the victim in their own victimisation, either through active participation or by unknowingly provoking an offender. This theory has been viewed as controversial because it places blame on victims (Dhanani et al., Reference Dhanani, Main and Pueschel2020). It is particularly relevant in the metaverse in contexts of online harassment, cyberbullying, and digital conflict. Some users may unintentionally provoke hostility by expressing controversial opinions, challenging online hierarchies, or engaging in competitive gaming environments, which Suler (Reference Suler2004) defines as toxic disinhibition. Others may actively engage in risky behaviour, such as clicking on suspicious links, participating in unregulated digital economies, or sharing sensitive data, inadvertently enabling their victimisation. As discussed earlier, the anonymity of metaverse users and the presence of children in apps designed for adults could increase neutralisation responses that draw parallels with victim precipitation theory. Adults who engage with children in a sexual manner could argue that the child should not be present in such apps and that, therefore, the blame lies with the child, not with the adult (Bradbury et al., Reference Bradbury, Bleakley and Martellozzo2024).
The concept of secondary victimisation is also significant in the metaverse. When victims report abuse, harassment, or fraud, they may face disbelief, victim-blaming, or inadequate support from platform moderators or law enforcement. Given the evolving nature of virtual crimes, legal frameworks often do not address the unique forms of victimisation in digital spaces, and policing such behaviour is problematic (see Section 5). Whilst there has been a considerable uptake in the use of VR to train the police in how to respond to crime (Al Ali & Laib, Reference Al Ali and Laib2024), such as domestic abuse and public order, there is a significant lack of training for law enforcement agencies around the nature of offending and victimisation that occurs specifically within the metaverse (Haber, Reference Haber2024). Without such training, the lack of awareness will directly impact police investigations into reported cases of abuse, which can discourage victims from seeking help and may even embolden offenders who realise there are few repercussions for their actions (McIntosh & Allen, Reference McIntosh and Allen2024).
Another key consideration is the psychological impact of metaverse victimisation. Unlike traditional cybercrimes that occur on social media or through messaging platforms, the immersive nature of the metaverse heightens emotional and psychological effects. For instance, virtual sexual harassment, stalking, and simulated assault may not result in physical harm, but they can have severe emotional consequences due to the embodied experience of the metaverse. Victims may suffer from anxiety, PTSD (post-traumatic stress disorder), and social withdrawal, similar to those who experience real-world victimisation (Zakaria, Reference Zakaria2025). This reinforces the need for metaverse platforms to implement stronger safeguarding measures, such as AI-driven moderation, verified identities, and stricter enforcement of virtual conduct policies.
2.6 Conclusion
This section has examined the complexities of offending and victimisation within virtual reality (VR) environments, applying established criminological theories to an evolving digital landscape that defies traditional social structures. The rise of the metaverse has introduced immersive spaces that fundamentally alter how individuals perceive and engage in deviant or harmful behaviours. As technology has advanced, so too have the opportunities for new forms of criminality, moral disengagement, and vulnerability. In these immersive and often anonymised environments, VR not only facilitates deviance but also challenges existing frameworks of accountability, control, and justice.
Through an interdisciplinary application of theoretical perspectives, ranging from Durkheim’s Anomie (1893) and Hirschi’s Self-Control Theory (Reference Hirschi2004) to Suler’s online disinhibition effect (2004) and Lyng’s Edgework Theory (Reference Lyng1990), this section has demonstrated that offending and victimisation in VR are not only technologically mediated phenomena but are deeply embedded in the psychosocial dynamics of the user. Individuals in VR environments are often liberated from real-world constraints: their identities are masked, their actions are less visible, and their behaviours are frequently removed from traditional consequences. These conditions foster a unique psychological space where transgressive actions can feel acceptable, trivial, or even justified.
Criminological theories help to illuminate the mechanisms by which these behaviours are enacted and normalised. Sykes and Matza’s Neutralisation Theory (Reference Sykes and Matza1957) reveals how individuals rationalise their harmful conduct in VR, often claiming that their actions are not real, not harmful, or not their responsibility. Edgework Theory (1990) further highlights the attraction of virtual risk-taking, where thrill-seeking becomes entangled with digital disassociation (Suler, Reference Suler2004). This intersection of arousal motivation (Murray, Reference Murray1938) and anonymity reflects a profound shift in how agency and accountability are experienced online. Moreover, VR enables users to explore boundaries in ways that are often inaccessible in the physical world, creating new thresholds of risk and deviance that must be understood within both psychological and sociological contexts.
Victimisation within VR is equally complex. Immersive environments heighten users’ emotional vulnerability, creating more visceral experiences of harm even in the absence of physical contact. Theories of victimisation – particularly Routine Activity Theory (Cohen & Felson, Reference Cohen and Felson1979) and Lifestyle Theory (Walters, Reference Walters1990) – reveal how metaverse users can become targets through a convergence of risk factors, such as unmoderated spaces, inadequate digital guardianship, and the deceptive nature of avatars. The ambiguous status of avatars in VR – neither fully real nor entirely fictional – adds an additional layer of complexity to identifying and supporting victims. These challenges are further exacerbated by the current limitations in platform governance, moderation systems, and legal frameworks, which remain ill-equipped to address the nuanced forms of harm emerging in these digital territories.
Children, in particular, represent a demographic that is disproportionately at risk. As digital natives (Kincl & Štrach, Reference Kincl and Štrach2021), they are drawn to the gamified and social features of metaverse platforms, often without a full understanding of the potential dangers. Yet the systems designed to protect them, such as age verification, moderation protocols, or parental controls, are often inadequate or easily bypassed. As highlighted throughout this section, there is an urgent need for research, policy, and design approaches that treat children not only as vulnerable users but as active participants in shaping the norms of virtual societies. Without such a shift, the metaverse risks becoming a space where the exploitation of children is not only possible but also largely untraceable.
The role of law enforcement and regulatory agencies in these environments remains unclear and underdeveloped. As virtual crimes become increasingly sophisticated, reactive approaches will fall short. Instead, preventative frameworks, grounded in digital ethics, user-centred design, and informed by criminological insight, must guide platform development. This includes the implementation of AI-driven moderation tools, robust identity verification systems, and embedded educational interventions that reinforce digital empathy and ethical engagement.
As criminology begins to adapt to technological shifts in social interactions and offending, it must continue to integrate theoretical frameworks with empirical research to respond effectively to the realities of virtual harm. Offenders’ behaviours, victims’ vulnerabilities, and the limitations of existing governance structures all converge in the metaverse in ways that mirror, distort, and reimagine traditional social interactions.
Having established the theoretical framework that underpins this Element, the next section explores the methods utilised throughout this project, the ethics and challenges that have been encountered, and how risk was minimised.
3 Methods, Ethics, and Challenges
3.1 Introduction
This section of the Element explores the methodological approaches utilised during the VIRRAC project, which was conducted over a twelve-month period from February 2023 to February 2024. VIRRAC was funded by the National Research Centre on Privacy, Harm Reduction, and Adversarial Influence Online (REPHRAIN) and UKRI, adopting a mixed-methods approach (Creswell, Reference Creswell2018) that combined qualitative focus groups with stakeholder engagement, observations, a snapshot survey, and desktop research. This section discusses the participatory approach adopted throughout the conception, design, and delivery phases of the VIRRAC project. In addition, it reviews and reflects on the research techniques employed to produce tangible project outputs collaboratively and considers some of the key challenges faced by the researchers in navigating this novel and interdisciplinary study. Crucially, this section evidences the importance of placing the voices of children at the heart of any methodological approach that seeks to safeguard children better (Schelbe et al., Reference Schelbe, Chanmugam and Moses2015; UNICEF Innocenti, 2025), ensuring that children’s perceptions of risk and personal safety are gathered, heard, and included in the subsequent shaping of online safety.
3.2 Background and Context of Study
At present, little is known about the potential repercussions for children and young people participating in the immersive metaverse ecosystem. This gap leaves several important questions unanswered, namely: (1) what are children experiencing across metaverse platforms; (2) what impact might these experiences have; and (3) how do children feel they can be better empowered and protected in these spaces?
To answer these questions, children’s voices needed to be at the heart of the methodological design, underpinning the foundations of all project outputs, tools, and evidence-led recommendations. Therefore, conducting innovative primary research with children and young people to explore experiences and perceptions of the metaverse was a vital strand of the VIRRAC approach, as was learning from various multidisciplinary stakeholders with first-hand experience and expertise.
VIRRAC adopted a holistic and multidisciplinary approach, drawing not from a single discipline, theory, or perspective but from a selection, as discussed in Sections 1 and 2. By doing so, it is possible to develop a greater understanding of the idiosyncratic nature of cyberspace and the metaverse. Approaching safety for children in these online environments more effectively is a complex and multifaceted undertaking.
3.3 Ethics and Integrity of VIRRAC
All aspects of VIRRAC received full ethical approval from the University Ethics Committees and the REPHRAIN Research Centre Ethics Committee, adhering strictly to the ethical standards for research involving human participants as set out in the 2006 British Society of Criminology Code of Ethics. Ensuring transparency in the project outputs and the safety and well-being of all VIRRAC participants was crucial. Rigorous data handling and storage procedures ensured participants’ anonymity and confidentiality. Informed consent was obtained from all participants, with a clear explanation of the study’s objectives, the voluntary nature of participation, and the right to withdraw. Data protection measures complied with UK GDPR, including the secure storage of data on encrypted university servers, access limited to authorised research personnel, and the removal of all identifiable information during data analysis and reporting. Due to the complex nature of conducting qualitative research and navigating the nuanced dynamics of participatory research with children and young people (Morrow, Reference Morrow2008), the VIRRAC project followed multiple layers of ethical reflection (Stapleton & Mayock, Reference Stapleton and Mayock2022). Three separate ethical approval applications were granted. The project stakeholder board reviewed all research tools, contributing its interdisciplinary advice and expertise.
For research with children and young people, consent was sought from the children, their parents or guardians, and stakeholder partners, who served as gatekeepers. The practical considerations for recruiting young people and liaising with parents were challenging. The decision was made to work closely with gatekeeper organisations to access participants and ensure questions were answered beforehand. This was especially important for the teenage panel, where participants were invited to use VR and visit the metaverse as part of the participatory data collection experience, which was observed by the research team and formed part of the subsequent focus group conversations. Through internal team conversations and stakeholder board input, the decision was made to incorporate an additional level of parental consent and guidance for the teenage panel, specifically tailored to the virtual reality experience. This additional guidance included a health questionnaire to ensure parents and participants were informed before the intervention and to ensure each participant’s safety during the experience. Furthermore, the VIRRAC team conducted all conversations with children and young people in the presence of trusted and experienced gatekeeper organisations or parents, who were invited to sit in during the online focus group sessions with children.
All participants were briefed and provided with a hard copy of information regarding the study and the means of contacting the research team should they have any questions or wish to withdraw themselves, or their child, from the study. The children were reminded of this prior to and after data collection. Participants and stakeholders reflected on and actively reviewed the VIRRAC project outputs during the final phases of the project. This was crucial for the participatory ethos (Schelbe et al., Reference Schelbe, Chanmugam and Moses2015) of the VIRRAC project, to ensure that VIRRAC findings were thoroughly tested and that participants’ voices were effectively shared.
3.4 The VIRRAC Project Design: Incorporating the Voices of Children and Young People
Designing and implementing research safely is complex in all settings, but can be more complicated when working with particularly vulnerable groups, such as children and young people (Farrell, Reference Farrell2005; Save the Children, 2004). Establishing trust and rapport with the children was essential (UNICEF Innocenti, 2025). In reflection of this, the VIRRAC research team took steps to build trust between researchers and research participants.
Building rapport enhances the quality and depth of data and ensures ethical integrity, trust, and child-centred methodologies that respect the agency and voice of young participants. Christensen and James (Reference Christensen, James, Christensen and James2008) emphasise that children are likelier to share genuine thoughts, emotions, and experiences when they feel safe, respected, and understood. Rapport reduces power imbalances between adult researchers and child participants (Schelbe et al., Reference Schelbe, Chanmugam and Moses2015), creating a more equitable space for dialogue. As also experienced in this research, when children feel comfortable, they are less likely to offer socially desirable answers and more likely to engage openly. Rapport-building supports adherence to ethical guidelines, as these principles foster communication, ensure children understand their role in the research, and make space for assent and dissent throughout the process (Alderson & Morrow, Reference Alderson and Morrow2004). Rapport contributes to longer, deeper, and more reflective conversations, particularly in methods such as interviews, focus groups, or ethnographic observations. Punch (Reference Punch2002) notes that children are highly attuned to adults’ attitudes and can detect when researchers are disinterested or overly formal, which may lead to guarded or performative responses.
In reflection of this, VIRRAC included several steps: using trusted gatekeepers and, during in-person focus groups, child-friendly interviewing techniques (e.g., sitting alongside and at the same level as participants, conducting focus groups in circles, being aware of power dynamics, using participatory techniques, offering breaks). Robust ethical safeguarding protocols were in place, incorporating advice from two ethics committees as well as the project stakeholder board. The research approach ensured that the experiences and perspectives of children and young people were considered equal to those of the professional stakeholders. This is reflected in the study’s findings, recommendations, and child-accessible project outputs, which included solutions that might successfully mitigate harm for metaverse users. Consent was sought from the children, the parents/guardians, and the stakeholder partners who supported the study. Focus group design drew upon the literature review, which included an analysis of knowledge gaps, as well as insights from the VIRRAC team and stakeholder board’s collective expertise. This formulation was guided by the aims and conceptual framework of VIRRAC, and sought to investigate both the positive and negative aspects of children’s metaverse experience, along with understandings of safety measures and suggestions for mitigating harm in virtual environments.
3.4.1 The Stakeholder Advisory Board
VIRRAC encompassed several investigation strands, all guided by a Project Stakeholder Board comprising experts from online safety education, industry, and practice. An expert stakeholder board enhances participatory research, especially when the project aims to improve policy and practice (Hoke et al., Reference Hoke, Rosen, Pileggi, Molinari and Sekhar2023). The board works as a source of rigour, knowledge, comparative benchmark, and as a critical friend. The establishment of the VIRRAC stakeholder board for the shared goal of improved safety in the metaverse was guided by broad principles of co-production (Gerlak et al., Reference Gerlak, Guido and Owen2023) such as shared responsibility and knowledge exchange in an active, continuous, and reciprocal relationship throughout all phases of VIRRAC. Board members enhanced the team’s understanding of metaverse harms by providing real-world examples and explanations.
3.4.2 Applying the Double Diamond Model to VIRRAC
The methodological approach of VIRRAC and the iterative evolution in output development are illustrated in Figure 1.

Figure 1 The double diamond approach
Figure 1Long description
This diagram portrays the Double Diamond approach: a method which is used across sectors as a process to move from a problem to a solution. The diagram itself represents the 4 main phases of the Double Diamond process as adapted by the research team. The diagram is designed to be interpreted by the reader horizontally, from left to right. The diagram includes 2 levels, the top being 4 headings, and the second level, below these headings, is made up of shapes. The very top of the diagram includes the 4 phases of the process horizontally, and these are headed as follows. 1. Discover; 2. Develop; 3. Design; and 4. Test and Deliver. Below these headings, there are two large circular shapes, made up of 2 semi-circle rounded arrows indicating movement from the start to the end of the diagram. The first circular shape falls directly below the first two phases of the Double Diamond. The second circular shape sits alongside the first, directly below the 3rd and 4th phases of the Double Diamond process. Within these 2 circles, there are 4 small triangles which make up 2 horizontal diamonds. The first horizontal diamond includes text which says scoping review and primary data. The second diamond, within the second circle, includes text which says primary data and output review. These triangles reflect the tasks carried out by the research team within the 4 phases of the process. The circular shapes and diamonds within them are situated between 2 anchoring black dots, one on the far left (the start) and the other on the far right (the end) of the diagram. The first dot represents the starting point of the process, before any of the phases have begun, and is titled problem. The second black dot represents the end of the process, and is titled solution.
The double diamond model, pioneered by the Design Council (2005), is used today across a range of contexts as a methodology which orchestrates a process of ‘Design Thinking’ (Caulliraux et al., Reference Caulliraux, Bastos and Araujo2020; The Design Council, 2005). The method consists of four parts: Discover, Define, Develop, and Deliver (the final step was originally termed the ‘implementation’ or ‘launching’ phase). The first part of the double diamond, ‘discover’, refers to exploring an issue, understanding communal needs, and gathering insights. In the case of VIRRAC, reviewing literature and gathering insights from stakeholders and young people are included in this step. The second part of the diamond process is named ‘define’. This is about providing a framework or a series of practical solutions to resolve a problem. The third step, ‘develop’, refers to actively exploring solutions. This was reflected across all engagements with stakeholders and young people within VIRRAC. The final step, ‘deliver’, adapted within VIRRAC as ‘test and deliver’, includes reflection and evaluation.
The application of the VIRRAC double diamond allowed for a consistently reflective and iterative exploration of the study objectives, whereby potential solutions emerged from a tested process (Wang et al., Reference Wang, Huang, Xu, Li and Qin2023). The findings, guidance, policy, and practice recommendations were all presented and explored by both the stakeholder board and the youth panels to ensure outputs were an accurate representation of the conversations, and that they were fit for purpose.
3.5 The VIRRAC Logic Model: Introducing the Six Core Phases of VIRRAC
The overarching goal of VIRRAC was to conduct empirical and multidisciplinary research to better understand and respond to harms that children and young people may face in the metaverse.
At conception, the VIRRAC project was designed to include five core phases, as illustrated in Figure 2. Towards the end of project delivery, VIRRAC was awarded further funding by REPHRAIN. This additional funding sought to explore the impact and implications of VIRRAC through guided discussions with experts during additional industry and education-focused roundtables. These were incorporated into the project logic model (Figure 2) illustrated as the final and sixth phase of the VIRRAC project (see Section 5 for details).

Figure 2 The core phases of VIRRAC
Figure 2Long description
This diagram portrays the six core steps of the research project. The diagram has six black circles that sit alongside each other in a straight line, numbered 1 to 6, from left to right. An arrow navigates around the shapes, starting from circle one to circle six, indicating movement from start to end. Each of the circles has text either above or below it to explain that step in the process. Beneath the first circle, there is a box with wording in it that reads Set Up: Scoping Review and Establishment of the Stakeholder Board. The second circle represents the second overarching step of the project, which was the Exploratory Expert Roundtable, conducted with wider experts in the field. The third circle represents the Focus Groups with Children and Young People. The arrow continues to lead to the fourth circle, which represents Outputs developed and reviewed. The fifth circle represents Strategic and Targeted Dissemination, which includes publications and presence at academic and public events. The final circle is just an outline and looks a little different to the prior five circles, and this change in appearance is to highlight that the sixth box was not initially planned and was an additional step in the logic model, made possible by being awarded further funding. The sixth and final circle represents the Education and Industry Roundtables. Below the circles, there is a text box which reads Double diamond engagement with stakeholders and young people throughout. This text box has two little arrows on either side of it pointing outwards.
3.5.1 Phase 1: Setup: Conducting the Scoping Review and Establishing the Stakeholder Board
The VIRRAC team conducted a non-exhaustive multidisciplinary scoping review. The reason for this was twofold: firstly, to gain familiarity with the literature to ensure VIRRAC would establish new learning and avoid duplication or repetition of previous studies; and, secondly, to ensure the empirical research design was justified and reflected best practice throughout.
Interdisciplinary research databases, such as Scopus, Google Scholar, and PubMed, were used to conduct the VIRRAC scoping review in early 2024. A Boolean search strategy (Aliyu, Reference Aliyu2017; Bramer et al., Reference Bramer, De Jonge, Rethlefsen, Mast and Kleijnen2018) was used, followed by the capture of relevant peripheral and grey literature online or through existing networks. As discussed in Section 1 of this Element, a universally accepted definition of the metaverse does not currently exist, which proved challenging. Some examples of terms identified include: metaverse, meta* platforms, virtual reality, immersive reality, augmented reality, and metasphere. The scoping review unveiled concerns and risks to be embedded in the design of the methodology and data collection and provided clarity about the glaring gaps in knowledge.
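To illustrate how a Boolean strategy of this kind might be operationalised, the sketch below assembles a Scopus-style query from grouped search terms. It is a minimal illustration only: the population and harm term groups, and the TITLE-ABS-KEY field code, are assumptions for demonstration rather than the search strings actually used in VIRRAC.

```python
# Minimal sketch of composing a Boolean search string for a scoping review.
# The term groups below are illustrative assumptions, not the VIRRAC queries.

population_terms = ['"child*"', '"young people"', '"adolescen*"']
platform_terms = ['"metaverse"', '"meta* platform*"', '"virtual reality"',
                  '"immersive reality"', '"augmented reality"', '"metasphere"']
harm_terms = ['"harm*"', '"risk*"', '"victimisation"', '"online safety"']


def or_block(terms):
    """Join synonyms into a single parenthesised OR block."""
    return "(" + " OR ".join(terms) + ")"


# Combine the blocks with AND, in the style of a Scopus TITLE-ABS-KEY search.
query = "TITLE-ABS-KEY(" + " AND ".join(
    or_block(group) for group in (population_terms, platform_terms, harm_terms)
) + ")"

print(query)
```

Equivalent queries for Google Scholar or PubMed would drop the field code and adapt the wildcard and phrase syntax, since each database parses Boolean operators slightly differently.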
The review also unveiled what the research team categorised as three overarching harm categories for metaverse users, namely ‘Physical Impacts’, ‘Inter-personal Harms’, and ‘Exposure to Harmful Content’. These broad categories are depicted in the ‘Risks and Harms of the Metaverse’ Venn diagram (Figure 3).

Figure 3 Risks and harms of the metaverse
While it was evident that academic interest in the metaverse is undoubtedly increasing worldwide, this scoping phase of VIRRAC did highlight the lack of primary research available on the subject. A particularly notable gap emerged when considering empirical research conducted with metaverse users. The literature was primarily theory-driven, descriptive, or stemmed from hypothetical professional opinions. Primary data are crucial for the development of evidence-based policy and practice, including the ability to make recommendations that improve safety in immersive spaces.
Phase 1 of VIRRAC also included recruitment of the project stakeholder and advisory board. Establishing a multidisciplinary expert Stakeholder Board was a key element of the VIRRAC double diamond approach, with representatives from industry, education, safeguarding, policy, and practice. Incorporating the voices and advice of cross-sector professionals was key to ensuring that a broad spectrum of expertise was available to challenge and explore the findings throughout the research, from start to finish.
The stakeholder board was mobilised in the initial stages of VIRRAC. Recruitment took place early to ensure that the team could draw on the invaluable expertise of board members from the start. Board members were identified and recruited based on their discipline and skill set, utilising existing networks, desktop research, and recommendations from wider associates.
3.5.2 Phase 2: Exploratory Expert Roundtable
The expert roundtable was the first data collection phase within the VIRRAC project. All stakeholders on the project board were invited to participate, along with a wider recruitment process based on desktop research and VIRRAC networks. Within this roundtable, perceptions of existing, near-future, and future risks for children were explored. The aim was to develop an understanding of the knowledge gap and resource needs of stakeholders working with children at risk of abuse and exploitation in the metaverse.
One key challenge faced during this phase was identifying and recruiting experts in the field of technology, but this was overcome through networks within the stakeholder board. The expert roundtable included twelve professionals in total, from differing backgrounds and with a range of expertise in the field, including educational and practice professionals, an industry expert, a child psychologist, and academics.
Designing the roundtable guide was a delicate process as the participants were so diverse in expertise, and the subject matter was novel and underexplored. Based on this, discussions were participant-led where possible, broadly surrounding the following themes (reflective of the core aims and objectives of VIRRAC):
Risks and benefits of the metaverse (for children)
Mental health and well-being in the metaverse
Strategies to empower and include children’s perspectives
Knowledge gaps and resources for protecting children in the metaverse
The development of the roundtable guide was a fluid process. The guide’s structure underwent a series of reviews prior to data collection. The roundtable was conducted online with additional members of the team present to support, facilitate, take notes, and time-keep, to ensure that the roundtable was comprehensive and met the study aims.
3.6 Phase 3: The Teenage Youth Panel Design
The teenage participants were all members of a pre-existing youth panel, led by a gatekeeper organisation with which VIRRAC worked closely. The youth panel included young people from diverse backgrounds, based in a number of geographical locations across the UK, diverse in terms of ethnicity and gender, and aged between fourteen and eighteen. There are notable complexities in conducting research with younger populations, including avenues of recruitment, ethical considerations, and effective participation (Brennan, Reference Brennan and McElvaney2020). Ultimately, after exploring various avenues, the decision was made to conduct focus groups. The exploratory focus group design was informed by the expert roundtable and the literature.
Focus groups offer an advantage over other forms of qualitative data collection, especially when discussing new phenomena or concepts that are not yet publicly conceptualised in mainstream society (Morgan, Reference Morgan1997). The communal nature of a focus group can inspire a participatory discussion, allowing participants to explore opinions among peers. Considering the ambiguity surrounding the concept of metaverse platforms and the novelty of this area, focus group methodology proved crucial. The team also recognised the potential constraints of collecting meaningful data within a short project timescale, so it was hoped that collecting data through focus groups would allow for a more representative study with more voices included.
As discussed earlier, there are notable complexities with conducting research with young participants. It is crucial to build trust and rapport (Schelbe et al., Reference Schelbe, Chanmugam and Moses2015). Based on this, data collection with the teenage youth panel consisted of three integral elements: a snapshot survey, an intervention experience involving a visit to the metaverse, and a final focus group session. The young people who participated were all members of pre-existing panels led by VIRRAC stakeholder partners. The partner organisations coordinated the recruitment and communication processes. The pre-existing panel consisted of a diverse group of young people, representing various ethnicities and genders. While VIRRAC researchers led ethical approval processes and all aspects of study design, stakeholders were responsible for direct communications with children and caregivers prior to the focus groups, obtaining permission from the children and their legal guardians beforehand.
Two focus groups were scheduled to accommodate a maximum of seven participants per group. The focus groups took place simultaneously in person, and two members of the VIRRAC team were present throughout both sessions, which were co-facilitated by the gatekeeper organisation given its existing relationship with the participants.
While a number of the teenage youth panel had varying experience of the metaverse and VR, some participants had no experience and had never used a VR headset. This range of experience allowed researchers to capture novel and unfiltered perceptions of the metaverse, facilitating a more varied conversation. The intervention element provided an opportunity for participants to enter the metaverse and engage with a fun programme, allowing them to explore the virtual world independently or interact with other group members. The immersive experience was not only about building trust and engaging with the virtual environment; it was also overseen by experienced VR professionals, who played a crucial role in facilitating the children’s journey through the metaverse, tailored to the interests and safety of the children.
Throughout the intervention session, comprehensive care and safeguarding measures were observed. This dual-layered safety protocol was designed to protect the children in both the virtual and physical realms. In the virtual world, precautions were taken to expose children only to age-appropriate content and interactions, while in the physical world, measures were in place to monitor well-being.
A pre- and post-intervention survey and two focus groups were conducted. The first survey sought to understand the nature of children’s experiences using 2D and 3D platforms. This aspect of the research provides a snapshot, and although the findings are in no way representative, they did provide an interesting insight into the children’s use of the metaverse.
The focus groups followed VIRRAC’s conceptual framework, focusing on metaverse experiences, perspectives on metaverse harms, and potential solutions.
3.6.1 Phase 3 (Continued): Online Focus Groups with Children (8–12 Years)
A core stakeholder partner helped to design, recruit, and conduct a series of virtual focus groups with children online, via Microsoft Teams. While the focus groups were initially intended to be conducted in person, the schedule was adapted in response to the recruitment process. The study was moved online to include a more diverse participant base of children from five countries worldwide who had indicated an interest in participating in the research. By doing so, the study benefitted by incorporating the experiences, insights, and cultural nuances of a more representative, global cohort of young metaverse users. Consequently, children from Australia, Canada, the UK, Singapore, and South Africa participated.
The scheduling process proved difficult due to the multiple time zones, extracurricular activities, and commitments held by the participants. Based on this, two of the focus groups were scheduled during the weekend, based on the availability of participants. Three focus groups were conducted in total, and twelve children took part, not counting the parents and legal guardians who also participated. Due to the nature of conducting research with children, the focus group design and conduct reflected the needs and abilities of children in terms of duration and approach. Parents and guardians were invited to join the sessions. Especially in the case of the youngest participants, the focus groups benefitted from the addition of familiar adults who were able to encourage the children to share experiences and build confidence throughout the discussions.
The stakeholder who co-facilitated was familiar to the children and their respective adults, therefore holding a pre-existing level of trust and rapport. Conducting the focus groups online allowed children to participate from the comfort of their own homes. This also meant, however, that the research was dependent on external factors such as a reliable Wi-Fi connection, working cameras, and functioning software. These technical considerations were a concern, mitigated by having secondary facilitators ready to take over if needed.
The children who took part in the focus groups were avid virtual reality users with extensive experience of engaging in a variety of metaverse platforms and were able to speak to experiences and ramifications of immersive play.
These focus groups followed the same structure as the Teenage Youth Panel; however, questions were shorter and reflective of the younger respondents taking part. The questions posed to the children were reviewed by the stakeholders (qualified therapeutic practitioners) to ensure language level and content were deemed appropriate for those participating. All elements of this research were reviewed extensively by expert partner organisations who were supporting the team in carrying out this complex element of conducting research with children and young people.
One of the challenges anticipated during the focus groups with children was navigating the dynamics between parents and children, ensuring that the young participants were free to share their experiences, and that the child’s view, rather than that of their parents, was being shared. This potential issue was recognised and carefully considered during planning stages. Parents were briefed on the goals and objectives of the focus groups beforehand to avoid any miscommunication. Focus groups were carefully facilitated by trained researchers alongside trusted gatekeeper organisations.
3.6.2 Phase 4: Output Development and Review
This study included datasets from three cohorts: experts, teenagers, and young children. The data from these streams were analysed in different ways. Thematic analysis (Braun & Clarke, Reference Braun and Clarke2006) was conducted at different time points and shared in a fluid process within the team, and, in some cases, insights were identified through close reading of transcripts, enabling the researcher to focus on nuanced, participant-driven themes. This broad approach to analysis was chosen to capture the richness of individual responses (Fereday & Cochrane, Reference Fereday and Cochrane2006), allowing patterns and themes to emerge organically. Given the exploratory and participatory nature of the research, this analysis approach facilitated a flexible, respondent-centred interpretation of all strands of VIRRAC data, aligning with the study’s objectives.
Insights gathered informed the development of the VIRRAC Toolkit (Davidson et al., Reference Davidson, Martellozzo, Farr, Bradbury and Meggyesfalvi2024), integrating data analysis from the focus groups with findings from the literature review to guide subsequent stages and project outputs. A selection of evidence-led tools for young people, parents, and caregivers were produced.
The lived experiences gathered within the children’s online focus groups were used to curate the VIRRAC educational film resource, an important output of the project. This resource was co-designed and co-produced (Vargas et al., Reference Vargas, Whelan and Allender2022) in an iterative process with the stakeholder organisation that had co-facilitated the focus groups. By following broad principles of co-production (Gerlak et al., Reference Gerlak, Guido and Owen2023), including the sharing of power, responsibility, and skill, and by fostering an active, continuous, and reciprocal relationship with the stakeholders as well as young people, the film was able to address the issues identified within the research in a format and narrative tested to be accessible to children and true to the voices of VIRRAC participants. The educational film resource consists of a series of five short films exploring risks associated with the metaverse and offering safety guidance. Each film draws directly on quotes from children who participated in the focus groups, ensuring that participants’ voices and lived experiences are at the heart of the content.
3.6.3 Phase 5: Targeted Dissemination
The outputs, including the Toolkit Report and the educational film resource, were shared through a targeted dissemination strategy. This was a carefully considered phase: establishing a wide reach for these outputs among professional organisations, academics, parents, caregivers, and young people was a crucial step towards improving the safety of young people across metaverse platforms. The team ensured that outputs included child-accessible formats. VIRRAC tools and films were launched on Safer Internet Day 2024 via a bespoke website tailored for parents, professionals, and practitioners (available at www.virrac.com).
3.6.4 Phase 6: Conducting the Education and Industry Roundtables
VIRRAC was awarded further funding to conduct two additional roundtables to explore the implications of findings and the feasibility of project recommendations (see Section 5). Roundtable discussions were conducted with experienced professionals in two fields: education and online safety. The focus of these final roundtables echoed the discussions with stakeholders and young participants who took part in the study.
The roundtables were intentionally small, to allow for in-depth discussion into the findings and priorities of VIRRAC. They provided practical perspectives on embedding children’s safeguarding needs within the evolving landscape of digital education, and discussed the challenges involved, offering a detailed record of insights and recommendations for safeguarding young children in virtual environments.
3.7 Conclusion
This section has reviewed the methods adopted within the VIRRAC project and considered the key challenges faced in conducting this research. As generally expected, a range of challenges were encountered, from conception to completion, during the complex qualitative research. Challenges included theoretical and methodological conundrums as well as logistical obstacles that the team needed to first recognise and then tackle appropriately, such as navigating the collection of good quality data across different age populations, different mediums, and different time zones. The application of a double diamond approach aided the team in responding and reviewing methods across all phases of the VIRRAC project.
Limitations included restricted access to diverse populations of young people and a consequent reliance upon pre-existing panels of children and young people who were already engaging with youth organisations and therefore may not be representative. However, it should be noted that participants were diverse in terms of gender and ethnicity, spanning five countries globally. It should also be noted that VIRRAC findings are based on research with small sample sizes. However, bearing in mind that this was an exploratory qualitative research design, this level of participation was expected and acceptable.
This research engaged with frontline education practitioners in a critical review of VIRRAC findings, focusing on children’s direct experiences with metaverse platforms. While the study initially concentrated on the teenage age group, a significant aspect of the early discussions and recommendations highlighted safeguarding considerations for younger children, who are increasingly interacting with digital environments. Therefore, VIRRAC contributes knowledge that can shape practice and policy development across both primary and secondary school settings, enabling frontline practitioners to introduce a comprehensive approach to safeguarding in virtual reality.
1. Key challenges during setup and project design phases
I) Navigating conducting research in a new and under-researched area
II) The ambiguity of the definition of the metaverse and multiple iterations of the term
III) Ensuring the project stakeholder board was drawing on all relevant fields of expertise
IV) Ensuring outputs were reflective of children and young people’s voices and were accessible to younger audiences
2. Key challenges encountered during data collection phases
I) Multiple complex ethical approval processes
II) Navigating consent and data collection across different age groups and time zones
III) Overcoming barriers to recruitment and participation (both online and in-person)
IV) Navigating the role of parents and guardians during focus groups
V) Ensuring best practice in a participatory research study with young people
3. Key challenges tackled during dissemination phases
I) Ensuring young people’s voices were at the forefront of project outputs
II) Logistical considerations with dissemination formats
III) Ensuring dissemination across a range of environments for a wide, cross-sector reach
The application of the double diamond method meant that study outputs were solution-focused and reviewed by experts and young people. The review conducted in phase 1 and the focus groups evidenced the potentially detrimental psychological and physical impacts of immersive technologies on frequent users but further highlighted the need for additional empirical research. The approach taken within VIRRAC reiterated the need for future research that includes the application of academic theory to aid current understandings of youth safety in the metaverse, as well as tangible solutions to mitigate harm. Crucially, the methods adopted within VIRRAC were chosen with a view to centring the voices of children, young people, and professionals by engaging with them directly to understand their lived experiences, concerns, and expectations regarding online safety in the metaverse. The next two sections turn to the voices of our key stakeholders: children and professionals. Together, they present how risks and protections are experienced and understood in immersive digital environments.
4 Perceptions of Positivity and Risk in Metaverse Platforms: The Voices of Children
4.1 Introduction
The rise of immersive digital environments, often described under the umbrella of the ‘metaverse’, marks a significant evolution in how children experience play, learning, social interaction, and identity formation. As virtual and augmented reality technologies become increasingly accessible and embedded in children’s everyday lives, understanding how young users perceive and engage with these platforms is no longer a peripheral concern – it is central to the future of online safety, digital well-being, and ethical technology design.
In 2021, findings from the American Community Survey (ACS) indicated that approximately 97 per cent of children aged 3–18 had access to home internet (ACS, 2021). Hence, it would be an oversight to disregard children’s involvement in scientific enquiry regarding innovative technologies such as the metaverse, whether in methodological tool development, research discussions, or the valorisation of outcomes.
Estimating the exact percentage of children engaging in metaverse games is challenging due to varying definitions of the ‘metaverse’ and differences in data collection methods. However, several studies provide insight into children’s participation in virtual gaming environments. A 2021 survey across key video game markets found that 38 per cent of gamers aged 10–20 had played a metaverse game in the past six months (Statista, 2024). For 2D metaverse games, Roblox alone reported over 164 million monthly active users in 2020, including more than half of all American children under sixteen. Additionally, UNICEF’s 2023 report noted that tens of millions of children and young people are actively engaged in virtual environments and game spaces, highlighting the significant level of youth participation.
These findings indicate substantial involvement of children in virtual gaming platforms, many of which exhibit metaverse-like characteristics. However, precise percentages may vary based on regional factors and the evolving nature of these digital environments.
In this section, children are recognised as non-academic stakeholders invited to engage in the research process and contribute their perspectives on matters directly affecting them. This Element celebrates children’s inherent capacities for reflection, analysis, curiosity, discovery, and creativity, acknowledging that while these capacities may differ in developmental stage from those of adults, they remain equally relevant for engagement in an inclusive and participatory scientific research process (Darbellay & Moody, Reference Darbellay, Moody and Lawrence2023). As stated in Section 3, having children’s voices at the heart of research and evaluation is imperative to ensuring that their concerns are heard, and that the recommendations made do not reflect the views of adults alone.
Through this collaborative research approach, the section aims to explore children’s experiences, understand their perceptions of risks, and identify their safety needs. This will begin with a contextual discussion around childhood in a digital age, reflecting on the importance of technology in shaping children’s lives and experiences.
This section sheds light on the predominant use of 2D metaverse platforms for gaming and socialising among children from various countries, including Australia, Canada, the UK, Singapore, and South Africa, employing not only face-to-face but also virtual methods for data collection. This will be achieved through an initial examination of digital childhoods and children’s relationships with technology, prior to discussions around the findings from the VIRRAC Project.
4.2 Digital Childhoods
Research underscores the internet’s significant role in children’s lives, offering opportunities and challenges that impact their development, education, and socialisation. Digital natives is a term that has been used to describe children’s familiarity and relationship with technology, as they know no time without the internet being ubiquitous in their lives (Lee & Gu, Reference Lee and Gu2022). The internet is a vital educational tool, providing children access to various information and learning resources. It facilitates research, supports homework completion, and enables engagement with interactive educational platforms, fostering self-directed learning and critical thinking skills (Edwards et al., Reference Edwards, Nolan and Henderson2018). However, internet access disparities, often called the digital divide, can hinder educational opportunities for children in underprivileged communities, exacerbating existing inequalities (Helsper, Reference Helsper2021). Engaging with the internet from an early age helps children develop essential digital literacy skills (Arsalani et al., Reference Arsalani, Sakhaei, Zamani and Zibaei2022). These competencies are increasingly important in a technology-driven world, preparing children for future academic and professional endeavours. In addition to this, online platforms enable children to connect with peers, share experiences, and participate in communities that align with their interests. For LGBTQ+ youth, in particular, the internet offers a supportive environment where they can explore their identities and find acceptance (Bates et al., Reference Bates, Hobman and Bell2020), often feeling safer online than in physical spaces. This digital interaction can be crucial for developing social skills and a sense of belonging. This is also true for neurodivergent or disabled children, as the metaverse holds potential for inclusivity, allowing children with disabilities to participate in activities that might be challenging in the physical world (Hutson, Reference Hutson2022). For example, VR can simulate otherwise inaccessible environments, providing all children with equal opportunities to explore, learn, and interact. This aspect of the metaverse can contribute to a more inclusive approach to education and socialisation.
The metaverse provides various platforms for entertainment, creativity, and self-expression. Children can engage in activities such as gaming, content creation, and exploring diverse media, enhancing creativity and cognitive development (Blumberg et al., Reference Blumberg, Deater-Deckard and Calvert2019). However, striking a balance between screen time and other activities is essential to promote overall well-being (Belton et al., Reference Belton, Issartel, Behan, Goss and Peers2021). There are mixed discussions surrounding the topic of well-being, technology, and its impact on children (Hollis et al., Reference Hollis, Livingstone and Sonuga-Barke2020). Whilst some consider technology responsible for poor mental health outcomes among children (Mustafaoğlu et al., Reference Mustafaoğlu, Zirek, Yasacı and Özdinçler2018), there is conflicting evidence to suggest that, for children facing challenges such as mental health issues or social isolation, cyberspace and environments such as the metaverse could serve as a valuable resource for support by providing therapeutic services (Ullah et al., Reference Ullah, Manickam, Obaidat, Laghari and Uddin2023).
4.3 Risks Children Face Online
As discussed in Sections 1 and 2, it is essential to acknowledge that while the metaverse offers numerous benefits, it also presents significant risks to children, including safeguarding challenges such as ensuring age-appropriate content, protecting privacy, and preventing exposure to potential online harms. As children navigate these virtual spaces, parental guidance and robust safety measures are essential to provide a secure and enriching experience, which can only be developed through the examination of lived experiences and the application of those experiences to our understanding of offending and victimisation.
Concerning the risks, whilst the metaverse offers numerous inclusive and accessible benefits to a broad spectrum of children, it is crucial to address potential risks, including exposure to inappropriate content, cyberbullying, online grooming, privacy breaches, and the potential for addiction (Hinz, Reference Hinz2023). Children may also inadvertently encounter links to explicit material, such as violent pornography, which can lead to psychological trauma (Gewirtz-Meydan et al., Reference Gewirtz-Meydan, Walsh, Wolak and Finkelhor2018). Whilst such content may be accessible through both 2D and 3D metaverses, the VR element of the experience is still in its infancy regarding the sophistication of ‘realism’. This is supported by Ramirez and LaBarge (Reference Ramirez and LaBarge2018), who argue that there is a limit to the extent to which current forms of the metaverse can authentically generate real-world representations, as the representation of selves is a characterisation rather than an accurate depiction. In one sense, the characterisation of online users and environments could minimise the impact of violent or sexual content; on the other hand, the lack of realism could distance a person from their actions, enabling them to dissociate (Suler, Reference Suler2004) and deny that any real hurt or injury was caused (Sykes & Matza, Reference Sykes and Matza1957). Regardless, there remains the opportunity for children to be exposed to sexual interactions, harassment, and exploitation, and VRChat provides a pertinent example of such an environment.
VRChat is accessible using VR headsets and poses significant risks to children due to its poor age verification systems that are easy for children to bypass (Khanta, Reference Khanta2024). While it offers an immersive and engaging platform for social interaction, it also exposes young users to inappropriate content, adult interactions, and potentially harmful behaviours (Ortiz, Reference Ortiz2022). In VRChat, users can enter public worlds and communicate in real-time using voice and full-body avatars, which create a heightened sense of presence and realism. However, this same immersion increases the likelihood of encountering explicit language, sexual content, and virtual misconduct that would be inappropriate for children. As mentioned, the platform has limited age verification processes, allowing children to easily access adult-oriented spaces by simply stating they meet the age requirement. For example, in 2022, a BBC News investigation revealed that children were able to access and be exposed to virtual strip clubs and other inappropriate content within metaverse platforms.
Children are also at risk of psychological harm in apps such as VRChat due to the intensity of experiences in VR. VRChat mimics real-life interactions so closely that inappropriate encounters can feel deeply personal and traumatic (Ortiz, Reference Ortiz2022), even though they occur in a virtual space. Additionally, the use of avatars allows individuals to manipulate their identity, making it harder for children to discern who they are genuinely interacting with. This can lead to confusion, emotional distress, or exploitation.
The digital realm facilitates harmful behaviours beyond physical spaces, allowing perpetrators to harass victims anonymously and persistently. Cyberbullying can lead to anxiety, depression, and, in extreme cases, suicidal thoughts among children (Martínez-Monteagudo et al., Reference Martínez-Monteagudo, Delgado, Díaz-Herrero and García-Fernández2020; Maurya et al., Reference Maurya, Muhammad, Dhillon and Maurya2022); and predators can exploit children through the weaknesses they identify in online platform moderation to build trust with minors, often leading to coercion, manipulation, and exploitation that is not only sexual but also financial. The anonymity of the internet allows these individuals to create fake profiles as peers or trustworthy figures, making it challenging for children to discern malicious intent (Ashcroft et al., Reference Ashcroft, Kaati and Meyer2015). Specific to the metaverse, there are no barriers to prevent adults from setting up profiles identifying themselves as children, and creating avatars and identities that reflect that. Unfortunately, there is little to no research explicitly addressing this issue to determine the extent to which it occurs or ways to mitigate such a risk.
The dynamic nature of the metaverse complicates the enforcement of age restrictions and content moderation. Platforms like Horizon Worlds have faced criticism for allowing underage users despite policies intended to restrict access to adults. This gap between policy and practice highlights the challenges in protecting young users (see Section 5). Platforms can enhance content moderation by consulting with children to increase their understanding of what they perceive as harmful and a risk to their online safety.
4.4 Existing Research on Children’s Metaverse Experiences: Findings from the VIRRAC Project
To build upon existing research, the VIRRAC Project focused on identifying not only children’s experiences of using the 3D metaverse via VR headsets but also their experiences of 2D metaverse games, in order to encapsulate a broader array of experiences and determine whether there is a comparative difference between the two models. As discussed in Section 3, the methodology of this study was centred around collating data that would contribute to a better understanding of how children use and perceive the 2D and 3D metaverse, to determine their perceptions of the risks and vulnerabilities they face and how the immersive nature of virtual reality will shape those risks. This was achieved by exploring two age groups: firstly, children aged between thirteen and eighteen who were part of a teenage youth panel, and secondly, children aged between eight and twelve years. It is important to note that the extent of children’s experience of using metaverse headsets was not measured, as usage ranged from a single occurrence to daily use.
4.5 Perceptions of the Metaverse Based on 2D Experiences
The survey, conducted with the teenage youth panel, revealed that the children used the metaverse extensively in their everyday lives. The majority played Minecraft, followed by Roblox. Three of the children reported using two other online 2D virtual world games: BeatSaber (a music challenge game) and FIFA (a football game). The children most enjoyed using these apps to facilitate play with family and friends and to socialise recreationally. Interestingly, the results showed that the children had no interest in these games as a way of meeting and engaging with new people, but rather as a means of maintaining existing relationships. There were many things that the children enjoyed about these online spaces, but chiefly it was the excitement and challenge of the game itself and the boundless environment, which enabled them to move without restriction. Children commented that their other favourite things about playing online virtual games were that they allow them to engage in teamwork and that they offer a means of ‘escape’ from everyday life. This sentiment can be tied to the theory of edgework (Lyng, Reference Lyng1990), which postulates that people put themselves into exciting, riskier environments to escape the mundane, restrictive, rule-bound societies in which we live our everyday lives. This is amplified for children, who are bound by significantly more restrictions than adults, having fewer rights and powers as citizens (Wringe, Reference Wringe2020).
The teenagers enjoyed the behaviour of other players the least, revealing that these views were based on concerning experiences of racism, homophobia, and sexual harassment that had been personally experienced or witnessed in the online games. Others commented that there was often a degree of toxic behaviour, where other (unknown) players would deliberately sabotage games in play. The toxic behaviour referred to here is what Suler (Reference Suler2004) defined as Toxic Disinhibition, whereby the anonymity and subsequent online disinhibition experienced by online players enable a degree of freedom to act, to some extent, with impunity and to engage in behaviours that they would not exhibit in the offline world. Appropriate behaviours are determined by learnt social norms (Huesmann, Reference Huesmann1988) and social control (Hirschi, Reference Hirschi2015). Online interpersonal aggression is common without these offline social behavioural controls (Runions & Bak, Reference Runions and Bak2015; Zimmerman & Ybarra, Reference Zimmerman and Ybarra2016).
Acts of cyber aggression have subsequently been extensively explored and even scaled (DeMarsico et al., Reference DeMarsico, Bounoua, Miglin and Sadeh2022; Hilvert-Bruce & Neil, Reference Hilvert-Bruce and Neill2020; Runions, Bak & Shaw, 2016), in response to the rapidly rising number of online hate crimes reported in the UK (Hubbard, Reference Hubbard2020), as well as to studies showing that adolescents are more likely than adults to become targets of online hate crime, but also more likely to report it, which could be due to the increased time adolescents spend online (Davidson & Farr, Reference Davidson and Farr2026). Children demonstrated an understanding of the risks that strangers posed to them, such as grooming and doxing. Doxing is the malicious act of revealing a person’s identity and personal information, such as their address, date of birth, phone number, or workplace, with the intent of causing harm or harassment to that person. Doxing is a form of cyberbullying and harassment that can constitute a criminal offence under several UK laws, should the behaviour be determined to amount to malicious communications, harassment, or a computer misuse offence under the Computer Misuse Act 1990. It could also be charged as assault occasioning actual bodily harm where it can be proven that psychological harm and distress have been caused.
Interestingly, while the teenagers raised concerns regarding online hate crime, they did not report it as a significant personal risk. This suggests that whilst participants viewed the behaviour as a negative occurrence within online metaverse environments, they did not perceive themselves to be at risk of it compared with grooming or doxing. The teenage participants were confident in their abilities to identify whom they could and could not trust online, such as in a game or chat room. The children were confident that it is impossible to trust someone you do not know online and that knowing someone offline was an important factor in establishing trust. However, children considered the frequency of communication as an enabler in establishing trust with others online whom they did not know offline, which is also intrinsic to the modus operandi of grooming, whereby the perpetrator will engage with their target over a varied period to gain the trust of the child (Kloess et al., Reference Kloess, Hamilton-Giachritsis and Beech2019). Studies have shown that children often have a general lack of knowledge or even misinterpretation of the risks and the methodology of groomers online (Calvete et al., Reference Calvete, Fernández-González, González-Cabrera, Machimbarrena and Orue2022; Wood & Wheatcroft, Reference Wood and Wheatcroft2020), which would explain why the adolescent participants viewed communication as indicative of trust.
Despite the teenagers evidencing their understanding of stranger-related risks in online metaverse game environments, further investigation of their behaviours revealed contradictory findings, suggesting that what children know and what they do conflict. The majority of the teenagers openly admitted to frequently playing and communicating with strangers online, with a third of the children stating that they communicated with strangers using off-game chat apps, such as Discord, during play, indicating that they take those online interactions into more private and encrypted spaces using their personal devices. This mirrors a tactic used by online sexual predators, who steer conversations with children from open online game spaces into private chat rooms or apps to facilitate their process of isolation (Chebouni et al., Reference Chebouni, Seghir and Lounis2022; Lykousas, Reference Lykousas2022; Plachkinova, Reference Plachkinova2023). Two-thirds of the children also revealed that they had received private messages in games from people they did not know asking them to chat off-game. In addition, two-thirds had been asked to send an image of themselves to a stranger, another methodology used by online sexual predators. The teenagers were confident in their ability to report other users and had done so on a regular basis. It is important to note that one of the most significant challenges identified in reporting was the language used in reporting functions. This difficulty varied depending on the ages of the children, but the majority struggled to understand and accurately define terminology such as ‘exploitation’, indicating that the adultification of language posed a significant barrier to children seeking help.
The propensity to engage and play with strangers online was, comparatively, not something shared by the younger children who participated in the research, who preferred interacting with known users and were less likely to engage with people they did not know online, partly due to the activities engaged in across metaverse platforms:
‘When I am in VR I am only with [a friend they know] and it isn’t a public space so there is nothing to not like, because I’m not with anyone that is unsafe.’ CHILD 3
The focus groups with the children did, however, shed light on the prevalence of potentially risky experiences in the metaverse, as the majority of the participants had experienced situations which had impacted them negatively while engaging with other users online in the metaverse previously. These experiences ranged from witnessing or being direct victims of harassment or hate speech in the metaverse, to pressure to share personal information, to exposure to a range of potentially upsetting or harmful adult content. Overall, despite their young age, the children’s comprehension of the potential dangers of engaging with strangers in the metaverse was palpable:
‘When you are online, there are other people online too that you don’t know … you gotta be safe. They might ask you strange questions, like where do you live, and what’s your name? … you have to stay safe out there.’ CHILD 1
Some of the harms discussed by the children included exposure to foul language from older users that is not suitable for young and impressionable age groups. Despite their young age, the children tended to take steps to engage in spaces with only people they already knew and felt safe with, such as school friends and other young family members, and even shared tips with one another during the focus groups about how to avoid needing to interact with strangers in the metaverse.
For the younger children, the metaverse was a conduit for exploring the self through avatars, safe competition amongst peers, and imaginative play, through which they could keep in touch with school friends and cousins living far away, both easily and from the comfort of their own homes; their desires and interactions with VR were therefore somewhat different to those of the teenage youth panel.
4.6 Perceptions of the Metaverse Based on 3D Experiences
Children reflected on how ‘real’ immersive experiences can feel and, in turn, how this can ultimately intensify a player’s feelings and emotions more than 2D gaming is able to do. This highlights the emerging differences between children’s previous experiences of computer gaming and their experiences of VR across metaverse platforms, and the potential benefits as well as concerns arising from this:
‘I felt like I was in a different world, then when I looked at my hand, I remembered “Oh yeah it is a VR headset”.’ CHILD 4
Many of the children spoke of visiting new places they had not seen before, describing climbing mountains, seeing dinosaurs, or walking on planets. The energy and excitement about the range of activities available in the metaverse shared by the younger children highlighted the enticement of virtual reality, and the benefits the immersive metaverse experience can offer, especially for younger generations:
‘There’s like 2 art places called Gravity Sketch and Multi Brush, which I spent a lot of time on with someone and we were like building like architectural things, it is really good.’ CHILD 3
Children also recognised the vastness of metaverse platforms. As the children noted, there are hundreds of online games and social interaction apps to choose from, allowing them to engage with different apps depending on their mood. Across the focus groups, the children were very aware of this and stated that it influenced their choice of platforms. There was an awareness that, in some cases, they preferred creative or skill-developing experiences over more playful and social activities.
When asked about their experiences of VR metaverses using headsets, the teenage participants also appreciated the additional features that the technology provided. However, the teenage panel acknowledged that such VR tools increased their experiences of reality confusion, which we defined as their inability to remain psychologically detached from their experiences, resulting in emotional and physiological responses to the environment and situations they were in. This was echoed in the focus groups with the younger children.
While the games generated increased excitement, several stated that their experiences with VR left them feeling overwhelmed.
Their sense of safety primarily depended upon the presence of others with whom they were familiar:
‘I felt safe mostly because the people in the game I had already met in real life. If I was playing with strangers I would have felt differently and most likely quite anxious and scared.’ TEENAGE PANEL, CHILD 4
Being around others whom they knew made them feel less isolated and more confident in the environment they were in. They felt that engaging in the metaverse directly contributed to the emotions they were experiencing, suggesting that feelings of safety would be more unstable in the presence of strangers and that the emotions they experienced could be less positive and more harmful. Children felt that knowing the true identities of other unknown users was important, yet less so in the 3D metaverse than in the 2D metaverse. This was an interesting discovery, as the psychological effects of social interaction are more amplified in the 3D metaverse than in 2D metaverses. Whilst the children felt happy and excited to be in the metaverse, two-thirds did not view the 3D metaverse as a safe space for children, because the VR environment will make children want to take risks in an environment that feels more REAL than the 2D games they play. This leads to the discussions in the academic literature regarding the positives and negatives of these REAL, immersive experiences. On the one hand, augmented realities present extensive opportunities to learn that go beyond the boundaries of the classroom and into more decentralised environments in which humans can interact and move amongst historical landmarks, such as Machu Picchu, or geographical climate regions, such as Antarctica, in ways that existing models of education provision cannot offer (Inceoglu & Ciloglugil, Reference Inceoglu and Ciloglugil2022; Lin et al., Reference Lin, Wan, Gan, Chen and Chao2022). Immersive technology has also proven to be an effective method of delivering treatment for trauma and poor mental health, such as PTSD (Lopez-Ojeda & Harley, Reference Lopez-Ojeda and Harley2022). At the same time, and as the reverse of such positive uses of immersive technologies, there is the potential for more significant psychological harm to be caused, whereby the experience of being a victim of harassment, including bullying, hate crime, or sexual misconduct, can also feel REAL to those experiencing it (Upadhyay et al., Reference Upadhyay, Agrawal and Dwivedi2023). This was acknowledged by one of the children with experience of using a VR headset, who stated:
‘The fact that you could meet anyone and feel as if you were in the same room as them, regardless of whether you have met them before is a huge risk. Especially as VR is extremely uncensored and players are able to be misogynistic, racist and homophobic without any repercussions.’ TEENAGE PANEL, CHILD 11
Despite their concerns regarding these risks, almost all the teenage participants wanted to spend more time inside the metaverse and to purchase a headset, as the 3D metaverse was more exciting than the 2D metaverse games they currently engage with, indicating that their perceived concerns or uncertainty were mitigated by the sensations of experiencing the virtual environment. When asked what they viewed as the most significant risks in the 3D metaverse, the responses were much broader and less concise than prior to their experience of using the VR headsets. For example, grooming and doxing were barely discussed. The teenagers were more concerned about engaging with people they did not know and experiencing reality confusion. The teenagers were also concerned that there was a risk of becoming desensitised to violent content and that this environment could perpetuate feelings of online disinhibition (Suler, Reference Suler2004), whereby they remove a degree of responsibility from their online actions. Such views indicate a high level of sensitivity and awareness of the risk, but also a high level of concern about what they, themselves, and other children are likely to become victims of.
To increase safety in the metaverse, the young participants across both age groups unanimously felt that greater levels of moderation needed to be provided by the tech companies responsible for the platforms that children engage with. To facilitate this, the participants wanted improved reporting mechanisms and restrictions on communication, which would require a more rigorous process of age verification to prevent adults from communicating directly with children on platforms.
In addition to this, the children also wanted some form of behaviour modelling guidance. Ideally, this would be a VR situation-based learning tool which has proven, through research, to be highly effective (Zhang, Reference Zhang2023), enabling them to understand how to behave in any environment.
‘Guidance throughout the experience, doesn’t have to be a physical person, could be an automated, preprogramed voice.’ TEENAGE PANEL, CHILD 2
The children believed that it would enable them to understand what behaviour was expected and not expected in VR spaces:
‘… guidance on what is the right thing to be doing in the space.’ TEENAGE PANEL, CHILD 10
As discussed in Section 2, environments without stabilised social constructs of behavioural norms provide opportunities for deviance and victimisation. A singular set of social norms is unlikely ever to become fully established in such a transient environment. Content moderation, clear guidelines, new-user training, and in-feature bots could support the child’s initial engagement with an app in such spaces. These features would also serve as a form of moderation support that could enable the child to feel less isolated and vulnerable during their experience. This connects to the children’s comments regarding the elements that made them feel safe during their virtual experience, which were intrinsically tied to their awareness of familiar others being present. Finally, some of the children emphasised the importance of restrictions on the time they can spend in these virtual reality spaces. Concerns about the Proteus Effect have been raised for children whose time in VR is unregulated (Ratan et al., Reference Ratan, Beyea, Li and Graciano2020), whereby extended immersive experience as an avatar leads to the online user adopting the behaviours and characteristics of their avatar. This is of specific concern for sexualised characteristics and body image. Research into extended avatar use and the Proteus Effect has indicated a correlation between the two and feelings of adolescent body dysmorphia and anorexia (Hinz, Reference Hinz2023), heightened emotional states (Stavropoulos et al., Reference Stavropoulos, Pontes, Gomez, Schivinski and Griffiths2020), and aggression (Hawkins et al., Reference Hawkins, Saleem, Gibson and Bushman2021). How long-term VR use, and the embodiment that develops between a child and their avatar, affects a child’s identity and well-being therefore remains unknown.
4.7 Conclusion
This section has explored the nuanced and often paradoxical relationship that children and young people have with the metaverse, highlighting both the positive potential and the profound risks they navigate within these immersive digital environments. The children’s insights, drawn from their experiences in 2D and 3D platforms, reveal that while these spaces offer creativity, social connection, and escapism, they are equally fraught with safety concerns that cannot be overlooked.
The findings from the VIRRAC Project reinforce the importance of involving children in discussions and decisions about the technologies they use. Children demonstrated a clear awareness of key risks such as grooming, doxing, and cyberbullying, and their ability to articulate the emotional and psychological impact of these experiences challenges the perception that children are passive consumers of technology. Instead, they are active participants – reflective, critical, and, in many cases, concerned about the environments they inhabit.
Notably, while children expressed confidence in their ability to identify and manage risk, their actual behaviours often contradicted this confidence. This suggests a complex interplay between knowledge, perceived control, and behavioural choices, particularly in the context of immersive platforms that blur the lines between digital and physical realities. The children’s willingness to engage in private conversations with strangers, despite recognising the dangers of grooming and exploitation, points to a need for ongoing education and more intuitive, supportive safety frameworks that align with how children actually use these platforms.
The shift from 2D to 3D metaverse environments appears to deepen children’s emotional and cognitive engagement, while also amplifying the risks. The children’s descriptions of reality confusion, increased disinhibition, and a heightened desire to engage with immersive content – even when they recognised the risks – demonstrate the powerful psychological pull of these spaces. The fact that many still wanted to spend more time in VR after recognising the dangers suggests that the sensory and emotional gratification offered by the metaverse may override their internal risk assessments. This highlights the critical need for pre-emptive, rather than reactive, safety design and regulation.
Children’s desire for stricter moderation, improved age verification, and embedded guidance within metaverse environments reflects a sophisticated understanding of the structural changes needed to make these platforms safer. Their recommendation for VR-based situational learning tools, in particular, suggests that children not only want to be protected – they want to be educated and empowered to protect themselves and others. This echoes broader calls in child-centred research for co-produced solutions that treat children as agents in their own safety, rather than simply as subjects of adult intervention.
Importantly, this section draws attention to a critical research gap: the lack of empirical evidence on how easily adults can pose as children within metaverse platforms. This issue, highlighted by the children’s comments and supported by prior literature on grooming, avatar disinhibition, and identity manipulation, demands urgent scholarly attention. Without clearer data on the prevalence and mechanisms of such deception, efforts to safeguard children remain incomplete.
In conclusion, children’s perspectives offer a vital lens through which to evaluate the real-world implications of safety risks in the metaverse. Their voices underscore the inadequacies of current systems and the urgent need for a collaborative, child-inclusive approach to technology development, regulation, and education. As this section has shown, understanding children’s perceptions of risk is not only about identifying what they fear – it is about listening to what they need, how they think, and what they are already doing to navigate a world that often evolves faster than the protections built to safeguard them. Their insights must guide the design of safer, more inclusive, and more ethically responsible metaverse platforms moving forward.
5 Detection, Protection, and Disruption: Challenges in Policy and Professional Practice
5.1 Introduction
This section examines the practical challenges associated with safeguarding children in the metaverse, drawing upon the findings from this study and related research. It further explores the broader implications of online safety policy in the context of child protection within VR environments, drawing upon findings from focus groups with expert stakeholders in the education and industry sectors (Stage 6 of the VIRRAC Study). Furthermore, the discussion considers the implications of these findings for law enforcement, highlighting key policing challenges in addressing child safeguarding within immersive digital spaces.
Digital spaces, such as the metaverse, make many positive contributions to the educational, social, and psychological development of children and young people (Muppalla et al., Reference Muppalla, Vuppalapati, Pulliahgaru and Sreenivasulu2023), providing opportunities for increased inclusivity and therapeutic support (Israni et al., Reference Israni, Matheny, Matlow and Whicher2020). However, professionals and practitioners who work on the frontline of safeguarding and child protection expressed concerns regarding potential risks such as cyberbullying, harassment, online hate, and grooming. These concerns highlight the need for child-friendly, effective monitoring and reporting protocols to enhance safety within virtual environments, sentiments that were echoed by children, as discussed in Section 4. These findings align with recent research, which has identified similar challenges and emphasised the importance of robust safeguarding measures (see, for example, Hinduja & Patchin, Reference Hinduja and Patchin2024).
In the VIRRAC study, stakeholder respondents discussed the potential impact of the metaverse on childhood development, noting the need for further research into cognitive and neuroplastic effects. Concerns were also raised regarding physical risks associated with VR headset use, including eye strain and the risk of injury due to the user’s limited awareness of their physical surroundings. These concerns align with those raised by organisations such as the NSPCC (2024), which has warned of the health and safety risks associated with prolonged VR use amongst children.
The initial VIRRAC findings indicated that children in the metaverse may encounter harm, experience pressure to share personal information, and be exposed to inappropriate content. These concerns were particularly pronounced among younger children who participated in the study. Both expert stakeholders and young participants in the research unanimously acknowledged the need for clearer guidance, increased awareness, adequate support, and robust online safety features to enhance the protection of young users (Davidson et al., Reference Davidson, Martellozzo, Farr, Bradbury and Meggyesfalvi2024). This finding is consistent with recent research funded by the NSPCC and conducted by Allen et al. (Reference Allen, McIntosh, Henderson and Hughes2023), which highlights the various risks children face in the metaverse, including online grooming. The authors emphasise the urgency of robust safeguarding measures, stating that ‘without adequate safeguarding, immersive technologies pose a very real and present threat to children, especially from offenders using VR technology’ (p. 33).
Findings, drawn from the original study, were presented to expert participants and grouped into three key areas:
1. Environment and control: The findings indicated that children want and need clear boundaries and improved reporting mechanisms. They should ideally use devices in common areas rather than hiding them away, should not use devices late at night, and need restrictions on the length of time devices are used.
2. Behaviour modelling: The findings indicated that children want behaviour modelling information to understand the warnings around app behaviours, to learn about what positive and negative behaviours look like, and to understand how online behaviour affects others and themselves.
3. Guidance: The findings indicated that children want guidance on the risks and how to stay safe, the safety features, how long to use the metaverse headset, and how data is being extracted.
5.2 Methods and Limitations
As the final phase of the VIRRAC project, two online interactive virtual expert focus groups were conducted with professionals from the education and industry sectors (see Section 3 for a comprehensive methodological overview).
The focus groups comprised a range of practitioners, including schoolteachers, educational charity representatives, national educational safeguarding advisors, industry professionals, law enforcement, technology sector innovators, and people working for relevant non-governmental organisations and in academia. Experts were asked to comment on the key findings in the context of their professional practice and to consider their broader implications.
A key limitation of this phase was the relatively small sample size, which, while diverse, may not fully capture the breadth of perspectives within the education and industry sectors. Furthermore, as the discussions were conducted online, some nuances of in-person interactions may have been lost, potentially affecting the depth of engagement and spontaneous dialogue.
5.3 Findings
5.3.1 Impact upon Children
This section focuses on the impact of virtual reality (VR) on children, emphasising the need for greater attention to how VR environments affect their behaviour, mental health, and overall safety. When exploring children’s behaviour in virtual reality, themes around citizenship, risk-taking, and responsible behaviour were discussed. According to experts, children assume responsibility for their safety in the metaverse, whereas there needs to be more emphasis on industry taking responsibility for the moderation of its platforms and implementing safety-by-design tailored to the specific nature of immersive environments. Participants reflected upon the extent to which the very nature of the 3D environment could be integral to minimising the feeling of disinhibition and the reduction of empathy in these spaces, a point that has been raised in other research (Corkum et al., Reference Corkum and Shead2023). The immersive nature of the 3D metaverse can create more intense experiences than those previously encountered in ‘traditional’ 2D online spaces, potentially resulting in lasting psychological impact (Gorichanaz et al., Reference Gorichanaz, Lavdas, Mehaffy and Salingaros2023; Kim & Kim, Reference Kim and Kim2023). Practitioners in the research expressed concerns about the impact of VR upon children’s mental health, particularly those with pre-existing vulnerabilities.
The key findings from the focus groups are summarised in Boxes 1 and 2.
Box 1
I. Up-to-date statutory guidance – UK Keeping Children Safe in Education (KCSIE): Statutory guidance needs to include more references to developing technologies involving virtual reality and AI.
II. Practitioner training: All designated safeguarding leads and online safety leads should be trained on advancements in technology used by children.
III. Curriculum – PSHE: These lessons could be expanded to include topics relating to online behaviour such as behaviour towards others, citizenship, well-being, reporting, and behaviour modelling.
IV. School awareness raising – Assemblies & Safer Internet Week: Whole-school approaches to online safety guidance should be included on these occasions as an opportunity to reinforce ‘sticky’ learning from PSHE.
V. Engaging with parents: An opportunity to educate parents to make them aware of what metaverses are and existing games that their children are using, and to advise on communication, safety features, and safeguarding tips.
Box 2
I. Child empowerment and safety by design: Allow for emergency exit for users; create user-friendly reporting functions; increase visual messaging regarding behaviour, reporting, and victimisation.
II. Clear and immediate reporting: Implement clear, adaptive, optionally anonymous, and immediate reporting functions; display explanations of the reporting process; provide additional support and feedback on the reporting actions taken (a minimal illustrative sketch of such a flow follows this box).
III. Harm mitigation: Regularly assess risks to mental health and victimisation, as well as offender networks’ development, and different forms of tech-enabled abuse. Implement safety solutions for ongoing abuse.
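To make the reporting principles above more concrete, the short sketch below illustrates, in Python, one possible shape for a child-friendly reporting flow: a report can be submitted anonymously or with an identifier, an immediate plain-language acknowledgement is returned, and the reporter later receives feedback on the action taken. This is a minimal sketch under stated assumptions; all field names, categories, and messages are hypothetical and do not describe any existing platform’s system.

```python
# Illustrative sketch only: a hypothetical data model for a child-friendly
# reporting flow (optional anonymity, immediate acknowledgement, feedback).
# Nothing here is drawn from a real platform's API.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional
import uuid


@dataclass
class HarmReport:
    category: str                       # e.g. "bullying", "unwanted contact"
    description: str                    # written in the child's own words
    reporter_id: Optional[str] = None   # None means the report is anonymous
    report_id: str = ""
    created_at: Optional[datetime] = None

    def __post_init__(self) -> None:
        # Assign an identifier and timestamp so feedback can be linked later.
        self.report_id = self.report_id or str(uuid.uuid4())
        self.created_at = self.created_at or datetime.now(timezone.utc)


def acknowledge(report: HarmReport) -> str:
    """Immediate, plain-language acknowledgement shown to the reporter."""
    who = ("You reported this anonymously."
           if report.reporter_id is None
           else "Only the safety team will see that this report came from you.")
    return (f"Thanks for telling us about '{report.category}'. {who} "
            f"A trained moderator will look at it and we will tell you what happens next. "
            f"Your reference is {report.report_id[:8]}.")


def feedback(report: HarmReport, action_taken: str) -> str:
    """Follow-up message that closes the loop once an action has been taken."""
    return (f"Update on report {report.report_id[:8]}: {action_taken}. "
            f"If you still feel unsafe, you can talk to a trusted adult or report again.")


if __name__ == "__main__":
    report = HarmReport(category="unwanted contact",
                        description="a stranger kept following my avatar")
    print(acknowledge(report))
    print(feedback(report, "the other user has been removed from this space"))
```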
The next section considers the implications of the findings from this and other recent research, together with the challenges of providing effective legislation and enforcing current legislation in the UK context.
5.3.2 Prevention: Implications for Educators
Educators identified education as the focus area where online harms in the metaverse could be effectively addressed. Participants emphasised that educational resources designed to help those working with children who are both typically and atypically developed should include some form of situational learning, training, improvement to policy, and improved means of communication with parents. This was viewed as being particularly important for children with disabilities, such as autism spectrum disorder (ASD), as they are more reliant on following a specific order of steps in order to process and embed that learning and the actions they need to take (Nemeth et al., Reference Nemeth, Janacsek, Balogh and Zsuz2021). Starting online safety education on risks and harms in virtual reality was seen as needed from as early as primary school age, as it could assist with developing positive online behaviours at a young age. When it comes to developing resources for staff training, participants recommended that topics such as potential risks and harms in the metaverse, and mitigation strategies, could be delivered as a part of the mandatory safeguarding training sessions that all staff are required to do every two years in the UK.
Respondents identified the often-overlooked issue of the generational divide that exists in educational settings, not only between students and staff but also between newer, younger members of staff and more experienced colleagues. Participants also believed that the scale of the necessary implementation and response would pose a challenge in terms of educating children. Whilst it could be possible to deliver effective educational sessions to several hundred students through assemblies, the challenge would arise if sessions needed to be run with smaller groups of students. For example, if children were to try out the headsets to achieve greater learning outcomes from sessions on online safety in the metaverse, this would be challenging in terms of human resources and would also pose logistical and funding challenges for schools. As one participant stated:
‘The scale is the challenge: you can deliver via an assembly to a hall full of 300 students, but that’s got limited impact for the kind of safeguarding you want to do. That would require children to try it out. Trying to do anything on a scale where you’ve got a school of so many students is tricky.’ Participant 3
There is some research evidence to suggest that children, even young children, would welcome the integration of VR in the curriculum. Research conducted by Rapti et al. (Reference Rapti, Sapounidis and Tselegkaridis2025) indicates that primary school children favour virtual reality–enhanced teaching methods over more traditional teaching methods. However, educators feared that they lacked the technical skills to deliver VR teaching and were also concerned that VR might have a negative impact on classroom cooperation. Concerns were also raised about the digital literacy of parents and carers in the VIRRAC Study. It was noted that in cities or towns with significant ethnic and cultural diversity, where English is often a second language, there can be challenges in effectively sharing information on safeguarding issues and access to digital literacy may be problematic amongst some vulnerable groups. In addition, experts shared the view that it was important to include parents with additional learning needs in conversations about digital safety, a point raised in other research (Haywood and Sembiante, Reference Haywood and Sembiante2023).
Children with communication impairments, including those with ASD, are increasingly engaging with the metaverse. The research highlights the importance of considering the needs of children with cognitive disorders, particularly in virtual spaces, an issue discussed in detail in Section 4. Neurodiversity emerged as a central theme in the focus group discussions, with participants highlighting it as a significant challenge in relation to communication, monitoring, caregiving, additional learning needs, and heightened risk awareness. A child’s ability to communicate effectively was viewed as a consideration commonly overlooked by safeguarding policies, protocols, and responses to online harms. Impairments intrinsic to autism include difficulties in socialisation, making eye contact, and communication. In virtual worlds, these impairments can be mitigated by the very nature of their design; for example, in many instances, children who struggle to make eye contact offline do not have to make eye contact online. Furthermore, verbal communication is not mandatory in many metaverse platforms, making them an appealing social outlet for non-verbal children or those with significant communication difficulties.
Recent research has also highlighted the benefits of VR environments for neurodivergent students in educational settings, affording more deliberate control over what information they experience and what information they do not (Dahlstrom-Hakki et al., Reference Dahlstrom-Hakki, Alstad and Asbell-Clarke2024). Other recent research has demonstrated how metaverse technologies can transform educational practices, making them more accessible and effective for neuro-diverse students (Salem et al., Reference Salem, Alfandi and Al-kfairy2024). Respondents in the VIRRAC study also acknowledged that the immersive online world can provide empowerment for children with special needs, such as playing alongside other gamers without having to experience the discomfort of verbal challenges or the sensations of rejection. While some of these experiences can be tremendously positive, they can also be used to manipulate and exploit. As a participant pointed out:
‘Socially, interacting with other people, strangers, those of ill intent, children on the autism spectrum can be more at risk because they haven’t got those social interaction practices. In everyday life, they might avoid it, but then they’re suddenly exposed to this massive world where they can socialise with everybody, and they haven’t rehearsed that previously.’ Participant 2
5.3.3 Safety Awareness amongst Children
In the original VIRRAC research, children demonstrated an understanding of online safety features, including how to report other users, and were aware of some of the associated risks.
Although educational experts questioned the extent to which children would act on the knowledge in the moment, they expressed a degree of relief in hearing the level of children’s awareness regarding online safety, as this is an issue viewed as a challenge for schools (Davidson et al., Reference Davidson, Martellozzo, Farr, Bradbury and Meggyesfalvi2024). This is an important challenge, as early research evaluating the impact of educational awareness-raising programmes has suggested that whilst young people, particularly adolescents, may understand key safety messages, they may not translate this understanding into practice (Davidson et al., Reference Davidson, Martellozzo and Lorenz2009).
Participants viewed access to digital literacy as a significant challenge, especially regarding neurodiversity, socio-economic status, environment, parent and/or carer ability, the volume of pastoral needs, and opportunities for learning. The need for continuous and up-to-date staff training was also emphasised, exacerbated by challenges experienced due to staff turnover in schools.
5.3.4 Protection: Implications for Policymakers
Relevant legislation should be reviewed on a regular basis to ensure that it remains appropriate in the context of new developments in technology. While the UK Online Safety Act 2023 (OSA) aims to address online harms and offences, there is debate about its effectiveness in fully addressing these issues (Brawley, Reference Brawley2024). Close reading of the legislation indicates that it does allow for the inclusion of online grooming of children. Section 11(6)(f) of the OSA states that the design of platforms, particularly the extent to which functionality affects the risk of adults contacting children online, must be considered by industry, but the Code produced by the regulator, Ofcom, focuses only upon harmful content. Given the potential for grooming in the metaverse, the OSA, if interpreted correctly, could address this issue, but it is currently absent from the UK regulator’s Code (Ofcom, n.d.).
Participants in the research agreed on the vital need for widespread awareness about existing legislation regarding child safety in the metaverse as well as the clear identification of potential gaps. They agreed that it would be beneficial to conduct a thorough review of existing safety features implemented by metaverse platforms, which should include identifying best practices that are particularly helpful for small and medium-sized enterprises (SMEs) and start-ups. Such insights would inform future policy, industry guidance, and codes of practice introduced following the Online Safety Act, as well as support the enforcement of existing ones. It was also argued that policymakers should ensure that policy adequately reflects children’s experiences. It is crucial that child online safeguarding is prioritised as a key concern. Policymakers must develop a thorough understanding of immersive technologies and the ways in which these may impact children.
The NSPCC (2024) has suggested that Ofcom collaborate closely with other UK regulatory bodies to develop clear guidance on how immersive technology platforms should assess and respond to associated risks. Furthermore, they advocate for the creation of a dedicated code of practice specifically for immersive environments and third-party VR applications. The NSPCC emphasises the need for clear guidance on how platforms should effectively address cross-platform risks.
The findings from this research suggest a pressing need for revisiting and potentially revising current policies and practices to ensure they adequately cover the unique risks posed by metaverse environments. The main concern raised by experts in this study regarding online child safeguarding in the metaverse lies in the implementation of the recommendations made by the VIRRAC project, specifically relating to the ones developed for practitioners. Exploring different ways of effective implementation of the original research’s findings, the experts highlighted the role and potential of existing policies, content moderation and safety-by-design, training, student numbers, age parameters, timeframes, and funding as factors that can make it challenging to effectively respond to existing and emerging online risks and harms in the metaverse.
As noted in Section 5.3.2, participants identified education policy as the principal area in which online harms in the metaverse could be addressed effectively, emphasising situational learning, staff training, policy improvement, and better communication with parents. These measures were viewed as particularly important for children with disabilities such as ASD, who are more reliant on following a specific order of steps to process and embed learning, and participants reiterated that online safety education on risks and harms in virtual reality should begin as early as primary school to help develop positive online behaviours at a young age.
5.3.5 Prevention: Implications for Industry
‘I feel like it’s the Internet industry, they have the power to really sort this out.’ Participant 3
Although current UK legislation mandates that platforms adopt a safety-by-design approach and moderate content accessible to children, experts emphasised that industry must take greater responsibility for preventing and disrupting harmful and illegal online behaviour, particularly given the evolving threat landscape (Bleakley et al., Reference Bleakley, Martellozzo, Spence and DeMarco2024). The growing trend of offenders using encrypted messaging platforms to transition conversations with children from virtual reality spaces further underscores the need for proactive industry action (Australian eSafety Commissioner, 2023). Participants stressed the importance of improving safety features, such as robust age verification, and ensuring accountability at both the platform design stage and throughout ongoing operations. Research by Allen and McIntosh (Reference Allen and McIntosh2022) highlights that unsupervised children of varying ages, some as young as six, are present in openly accessible VR spaces, meaning young children are inevitably interacting with adults in the metaverse, further reinforcing the urgency of effective age verification measures.
Research from the Centre for Countering Digital Hate (CCDH, 2021) indicates that VRChat is ‘rife with abuse, harassment, racism and pornographic content’. The study found that users, including children, are on average exposed to abusive behaviour every seven minutes. No moderation system can be effective if human moderators are not trained to understand and identify online harms. Moderation is challenging on metaverse platforms, given that interactions take place in real time; without training, important nuances of human behaviour could be missed, and ongoing training for staff would minimise this risk. Currently, safety features in immersive environments place the onus on the victim to respond to abuse, through blocking and vote-to-remove features. This may present a challenge for children who are unfamiliar with the platform’s safety features and who may feel unable to respond in real time. Children may also lack awareness of the abuse until later in the process, if they are being groomed, for example (McIntosh & Allen, Reference McIntosh and Allen2024).
The role of AI-driven content moderation in safeguarding children in virtual environments will expand. It is important to note that while AI tools are essential in detecting harmful content, their effectiveness remains highly variable, requiring structured evaluation frameworks to assess their reliability and ethical implications (Shneiderman, Reference Shneiderman2020). Proactive AI moderation tools can be highly useful; however, commentators have highlighted the challenges of AI-based moderation, particularly false positives, contextual misinterpretation, and the circumvention of detection systems by offenders (Palos, Reference Palos2025). AI moderation systems, though increasingly sophisticated, may struggle to identify nuanced risks, such as grooming tactics disguised through coded language or psychological manipulation within immersive virtual environments (RoX818, 2025).
The findings from this research echo these concerns, underscoring the need for metaverse platforms to adopt transparent AI auditing mechanisms. Systematic evaluation of AI moderation tools is crucial to ensure that content moderation strategies are both effective and fair. AI alone cannot replace human oversight; instead, hybrid approaches combining AI detection with trained human moderators should be prioritised. Robust evaluation frameworks could minimise the risk of AI moderation both over-policing children’s online interactions, for example by removing legitimate content, and failing to detect more sophisticated, context-dependent harms. Future research could focus on developing adaptive AI models capable of learning from real-time moderation failures while also integrating insights from criminology, child psychology, and digital safety research.
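To make this hybrid approach concrete, the short sketch below illustrates, in Python, how an AI classifier’s risk score might be used to triage flagged interactions: high-confidence cases are acted on automatically, borderline cases are escalated to trained human moderators, and every decision is logged for audit. This is a minimal illustrative sketch only; the thresholds, function names, and keyword-based scoring are hypothetical and are not drawn from the VIRRAC study or from any existing platform’s moderation pipeline.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List

# Hypothetical thresholds: scores above AUTO_ACTION_THRESHOLD are acted on
# automatically; scores in the grey zone are escalated to human moderators.
AUTO_ACTION_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

@dataclass
class Interaction:
    session_id: str
    transcript: str        # e.g. speech-to-text of an in-world exchange
    involves_minor: bool

@dataclass
class ModerationDecision:
    session_id: str
    risk_score: float
    route: str             # "auto_action", "human_review", or "no_action"
    timestamp: str

def classify_risk(interaction: Interaction) -> float:
    """Placeholder for an AI risk classifier.

    A real system would use a trained model; this toy version scores a few
    grooming-style cues by keyword matching, purely for illustration.
    """
    cues = ("meet me", "our secret", "don't tell")
    hits = sum(cue in interaction.transcript.lower() for cue in cues)
    score = min(1.0, 0.3 * hits)
    # Interactions involving minors are weighted more cautiously.
    return min(1.0, score + (0.2 if interaction.involves_minor else 0.0))

def triage(interaction: Interaction, audit_log: List[ModerationDecision]) -> str:
    """Route an interaction using AI scoring plus human escalation."""
    score = classify_risk(interaction)
    if score >= AUTO_ACTION_THRESHOLD:
        route = "auto_action"     # e.g. immediate removal and safeguarding referral
    elif score >= HUMAN_REVIEW_THRESHOLD:
        route = "human_review"    # escalated to a trained human moderator
    else:
        route = "no_action"
    # Every decision is logged so the pipeline can be audited and evaluated.
    audit_log.append(ModerationDecision(
        session_id=interaction.session_id,
        risk_score=round(score, 2),
        route=route,
        timestamp=datetime.now(timezone.utc).isoformat(),
    ))
    return route

if __name__ == "__main__":
    log: List[ModerationDecision] = []
    example = Interaction("sess-001", "This is our secret, don't tell anyone", involves_minor=True)
    print(triage(example, log))   # -> "human_review"
    print(log[-1])
```

In practice, the audit log in such a design would feed the kind of systematic evaluation framework discussed above, allowing over-blocking and missed harms to be measured rather than assumed.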
Ultimately, industry must take the lead in risk mitigation and foster a culture of safety-by-design within the metaverse. This includes implementing comprehensive safety features, robust reporting systems, and proactive content moderation. Essential safety-by-design features should include the following (see the illustrative sketch after this list):
Age assurance and verification
Age-appropriate content and spaces
Human moderation so that the onus is not on victims or witnesses to report after an offence has taken place
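As a purely illustrative sketch of what ‘safety by default’ might look like in practice, the snippet below expresses such protections for an under-18 account as a configuration object whose safeguards are switched on from the outset rather than offered as opt-in settings. The field names and defaults are hypothetical and do not describe the settings of any real platform.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChildSafetyDefaults:
    """Hypothetical default-on protections for an under-18 account."""
    age_assurance_required: bool = True       # social spaces inaccessible until age is assured
    age_appropriate_spaces_only: bool = True   # matchmaking restricted to age-banded rooms
    personal_boundary_enabled: bool = True     # other avatars kept at a minimum distance
    voice_masking_offered: bool = True         # reduces identifiability of a child's voice
    proactive_human_moderation: bool = True    # flagged sessions monitored by moderators,
                                               # not left to victim or witness reports

DEFAULTS = ChildSafetyDefaults()
assert DEFAULTS.proactive_human_moderation     # protections hold unless a guardian opts out
```

Encoding defaults in this way makes them explicit, auditable, and testable as part of the design process rather than an afterthought.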
Industry engagement with children, parents, and victims is also crucial in informing safety-by-design features during the design phase. Allen et al. (Reference Allen, McIntosh, Henderson and Hughes2023) suggest that industry data collection and sharing practices must protect children’s data and adhere to the ICO’s age-appropriate design code, and that, importantly, reporting methods should be proactive and child-friendly, using accessible approaches such as tutorials or interactive demonstrations underpinned by a ‘positive culture of reporting (and) easily accessible routes for users to make complaints, flag harmful behaviour, and seek immediate recourse’ (Allen et al., Reference Allen, McIntosh, Henderson and Hughes2023: 34). They also suggest that ‘funding from major VR platforms for programmes that educate parents and caregivers about the risks and how to mitigate them would make a significant positive impact’ (Allen et al., Reference Allen, McIntosh, Henderson and Hughes2023: 35).
Finally, there needs to be greater transparency regarding the considerable amount of data held by the industry. Increased access to data is essential in allowing civil society and academics to capture emerging harms and their scale appropriately. At a minimum, companies should provide greater data transparency on the platform’s child user base, the CSEA activity detected on the platform, and CSEA reports (Allen et al., Reference Allen, McIntosh, Henderson and Hughes2023). Europol have commented that ‘the platforms providing these services (metaverse) will have to provide a safe environment for children and provide safeguards against these experiences, by moderating content and behaviour that goes against their terms of use’ (Europol, 2022).
5.3.6 Disruption: Implications for Policing
The metaverse presents many challenges for policing, particularly in the context of online grooming, as it provides sex offenders with opportunities to engage with children in real time and to escalate behaviour without having to leave the environment (Europol, 2022; Martellozzo, Reference Martellozzo2013). It is very difficult for children to distinguish adults from other children on VR platforms. Haptics and advances in tactile technologies add a sensory dimension to user interactions; this presents a further safeguarding challenge, as offenders could be virtually present in a child’s room ‘with the ability to physically sexually abuse them through the child’s haptic devices – without the offenders even having to leave their homes’ (Europol, 2022: 3). The metaverse may also allow users to produce virtual child sexual abuse material (CSAM), and with the development of photorealistic avatars these images could represent real children. Appropriate regulation of such situations, together with measures for police detection, removal, and prevention, will be important. Furthermore, the technical means to investigate such platforms and adequate preventative measures are currently lacking, making it difficult for law enforcement to act immediately in real time. Another policing challenge concerns the difficulty of monitoring and logging evidence, both in real time and after the abuse. Responsibility will fall on the organisations providing the platforms to monitor and moderate what happens on them, to give law enforcement timely information, and, potentially, to provide the tools to enable effective investigation (Allen & McIntosh, Reference Allen and McIntosh2022). A further problem arises from the difficulty of conducting investigations across jurisdictions, particularly given that some countries will not legislate against crimes committed in the metaverse unless they can be prosecuted under existing criminal legislation (Europol, 2022).
Interoperability has been identified by Allen and McIntosh (Reference Allen and McIntosh2022) as another policing challenge. Interoperability of metaverse systems means that
‘assets, contacts, ‘friends’ or avatars could transfer from one metaverse platform to another. It could also mean that if an asset has economic value on one platform, its value could transfer through to another: a virtual pair of designer trainers, for instance, could be purchased in one metaverse world, virtually worn to a VR nightclub on a different metaverse platform, and then later sold in a completely different VR world’ (Allen & McIntosh, Reference Allen and McIntosh2022: 10).
Interoperability has implications for child safety in that offenders wishing to groom a child could operate on different platforms beginning with a more mainstream VR platform and then moving the child to a less moderated platform. This behaviour would be particularly difficult to police and would present challenges in the context of evidence gathering as the abuse would occur in real time on multiple platforms.
The number of metaverse platforms will continue to increase, making them almost impossible to police with limited resources, just as it is impossible to patrol every street with the resources available. The key challenge for law enforcement will be monitoring behaviour that is both ephemeral and highly context-dependent: interactions in these worlds may leave no trace, making it hard to gather evidence and to establish what actually occurred (Europol, 2022).
5.4 Conclusion
This section has explored the implications of the VIRRAC Study and other key research findings in the context of child safety within the metaverse. It is undeniable that children and young people can benefit enormously from their experience on VR platforms, but the risks they face, including harassment, grooming, hate, and other forms of harm, are considerable. Understanding these challenges is crucial to responding effectively and ensuring the protection of children in these environments.
The real-time nature of the environment presents unique safeguarding challenges for industry, for law enforcement, and for children as users. It is clear from this and other research that a whole-of-society approach is needed for effective prevention. All key stakeholders must play an active role in maintaining child safety in virtual spaces. Industry, in particular, must lead the way and assume primary responsibility for ensuring that child safety, however technically challenging, is enshrined in VR platforms’ safety-by-design policy and practice. The OSA in the UK (England and Wales) requires that industry undertake regular risk assessments in respect of child safeguarding; however, industry must go beyond this and work proactively with law enforcement and other key stakeholders to build truly safe VR experiences. Other stakeholders, such as educators, must also remain up to date with technological advances and how these affect child online safeguarding, and need to work proactively with parents in this space. Regulation must be robust and constantly reviewed to ensure relevance, and regulators must engage proactively with the full range of key stakeholders to ensure correct interpretation and effective implementation of legislation.
Theory has an important contribution to make in furthering our understanding of motivations and behaviours on 2D and 3D platforms. Theoretical constructs from criminology and psychology (as discussed in Section 2) have much to offer in understanding offending, victimisation, and young people’s behaviour in the metaverse, in the context of anonymity, neutralisation, and self-control, for example, but a more eclectic approach may help to broaden that understanding: what, for example, can be drawn from anthropology and neuroscience, what does the study of small-scale societies add to the discussion of how networks and groups develop, and what impact does VR have on brain function and development in children? Drawing on this wider range of disciplines to construct a conceptual framework may enrich our understanding.
Research in this area is critical to ensure that findings remain relevant, as the fast-paced development of technology continually reshapes the online landscape. This presents a unique challenge for criminology and other academic fields, which must adapt to the rapid emergence of new technologies. Traditional theories may still hold value. Their relevance depends on their application to the increasingly complex online world, particularly in spaces like the metaverse (see Section 2). Understanding these evolving dynamics is essential for developing effective responses and ensuring the safety of children in these rapidly changing digital environments.
6 Conclusions: Rethinking Safeguarding in a World That Flows
6.1 Introduction
The ancient concept of Panta Rhei, that everything flows, resonates with our contemporary digital landscape, where the metaverse emerges not as a fixed terrain but as a constantly shifting, immersive ecosystem. In Section 1, this Element argued that Heraclitus’ philosophy is metaphorically fitting, illuminating the metaverse as a space where identities, behaviours, risks, and relationships are in continuous transformation. For children, this fluidity offers both opportunity and risk. As digital natives, they demonstrate remarkable adaptability, insight, and agency. However, they also confront a technological landscape that often outpaces protective policy and practice structures, ethical guidelines, and adult understanding. Within this fluid environment, safeguarding children becomes both urgent and complex.
6.2 The VIRRAC Study
The VIRRAC study offers one of the first empirical windows into the lived experiences of children within virtual reality platforms, revealing both the attraction and the risk of these spaces. Children’s accounts, presented in Section 4, paint a picture of vibrant digital lives shaped by creativity and connection, yet troubled by disinhibition, exploitation, victimisation, and emotional dissonance. Their ability to articulate risks, such as grooming, doxing, and the emotional toll of anonymity, challenges the long-standing perception of children as passive digital users. Nevertheless, the disjuncture between what children know and how they behave online reveals the inadequacy of information alone as a protective strategy in immersive environments.
Critically, this Element contends that current safeguarding approaches remain trapped in paradigms designed for earlier forms of digital interaction: static websites, 2D platforms, and transactional communication models. In contrast, the metaverse demands a recalibration of how we conceptualise safety, regulation, and child agency. The immersive and embodied nature of VR environments introduces new ethical, psychological, and developmental questions that outpace existing legal frameworks and challenge traditional criminological models.
As explained in Section 2, theoretically, the application of criminological frameworks, from Anomie to Edgework, reveals both their continued relevance and their limitations when transplanted into virtual settings. While they provide useful lenses for interpreting deviance and victimisation, they often fail to account for the non-linearity, interactivity, and altered perception of space and time that characterise the metaverse; many of these theoretical frameworks are rooted in real-world understanding and have limited long-term relevance in the context of this fluid digital landscape. Future theoretical developments must consider this ontological shift: crime, victimhood, and protection are no longer tied to physicality but redefined by digital embodiment. This Element has critically demonstrated that criminological theory, policy, and practice must evolve alongside technological development. Traditional frameworks help us understand motivations and behaviours within the metaverse, but they require reinterpretation to remain relevant in a context where identity is fluid, space is boundless, and time is compressed. Theories like disinhibition, edgework, and social disorganisation must now account for avatars, algorithms, and the immersive pull of virtual experience.
At the policy level, findings presented in Section 5 point to an urgent need for clearer regulation, industry accountability, and more intuitive safety-by-design approaches. The voices of children consistently emphasise a desire for practical guidance, strong boundaries, and adult presence, not to limit their experiences, but to ensure those experiences are empowering rather than exploitative. What becomes clear across this Element is that safeguarding in the metaverse is not merely a technical challenge; it is an epistemological one. It forces us to reconsider what we mean by ‘harm’, ‘safety’, ‘identity’, and even ‘reality’. The findings presented by both the children and stakeholders suggest that co-produced safety mechanisms, where children are treated as equal experts and are actively involved in the design, implementation, and evaluation of protection tools, are not optional but essential. Children not only identify risks; they also imagine solutions, and their accounts should be included in any future developments. Their calls for embedded safety guidance, behavioural modelling, and intuitive, age-appropriate moderation reflect a nuanced understanding of what is needed to navigate these new frontiers.
6.3 Final Remarks
Despite children’s insights and the increasing concerns of professionals, the current policy response to risks remains fragmented and reactive. There is a blatant lack of coordinated regulation, a shortage of empirical data on adult deception in VR, and a critical lag in law enforcement capacity.
Meanwhile, some commercial platforms continue to prioritise user engagement and data extraction over ethical responsibility. The prevailing business model of many technology companies relies on maximising time-on-platform and harvesting vast quantities of user data to feed algorithmic recommendation systems and monetise attention through targeted advertising (Zuboff, Reference Zuboff, Calhoun, Gerteis and Hughes2023). In the metaverse, this logic becomes even more potent. Immersive platforms are designed to captivate users through multisensory experiences, gamified interactions, and real-time feedback loops that not only deepen engagement but also increase the volume of data collected, ranging from eye movements and voice patterns to behavioural responses and biometric cues (Parsons, Reference Parsons2021).
Children, while often savvy and self-protective in digital spaces, are developmentally ill-equipped to critically assess immersive XR environments. The metaverse amplifies risks through unique affordances: embodied presence, spatialised voice, persistent avatars, and haptic feedback combine to simulate intimacy and realism in ways that far exceed the text-based or image-driven interactions of platforms like Facebook or Instagram. These features intensify emotional engagement, lower perceived risk, and create new vectors for grooming, manipulation, and exploitation. Yet, this amplified risk architecture operates within opaque systems where accountability is minimal, and terms of service are often inscrutable. Children’s data privacy rights, enshrined in instruments like the UNCRC (1989) and General Comment No. 25 (2021), are routinely neglected or overridden by platform design choices, as scholars have consistently shown (Livingstone & Third, Reference Livingstone, Davidson and Bryce2017; Livingstone & Sylwander, Reference Livingstone and Sylwander2025). As Livingstone and Sylwander (Reference Livingstone and Sylwander2025) note, digital governance frameworks have consistently failed to centre children’s rights, often treating them as an afterthought in regulatory and design processes.
The absence of ethical responsibility is further compounded by a lack of meaningful accountability. Regulatory attempts, such as the UK’s Age Appropriate Design Code (ICO, 2020) or the EU’s Digital Services Act (2022), represent important steps towards child-centric protections, but enforcement remains patchy, and technological innovation continues to outpace policy adaptation. Moreover, these efforts tend to rely heavily on industry self-regulation, which research has shown to be insufficient in mitigating risks, particularly when commercial incentives are misaligned with child safety goals (Byron, Reference Byron2008).
Without a systemic shift that foregrounds children’s rights and voices, immersive technologies risk becoming spaces where vulnerability is not only possible but structurally embedded. The architecture of the metaverse is highly immersive, emotionally stimulating, and socially dynamic; this can intensify risks such as grooming, exploitation, and harmful content exposure, particularly in the absence of consistent moderation and robust safety design (Davidson et al., Reference Davidson, Martellozzo, Farr, Bradbury and Meggyesfalvi2024). Moreover, many safety features remain reactive rather than preventative, placing the burden of navigation and protection on children themselves.
A rights-based, preventative approach is needed, one that treats children not as passive users but as active digital citizens whose voices, needs, and lived experiences must inform platform development. This means co-designing safety mechanisms with children and embedding these within platform infrastructures from the outset (Third et al., Reference Third, Collin, Walsh and Black2019). It also requires a reframing of platform success: shifting metrics away from engagement and monetisation towards trust, safety, inclusion, and well-being. However, this shift must also be informed by rigorous, transparent research. Future scholarship would benefit from stronger methodological clarity and deeper engagement with theory, particularly theories of embodiment, power, digital governance, and childhood vulnerability. Conceptual tools from criminology, surveillance studies, and childhood studies could help unpack how XR technologies alter the dynamics of risk and control in ways that are not immediately visible.
A whole-of-society approach is essential, one that combines safety-by-design, legal safeguards, platform accountability, and education initiatives for children, parents, and educators. There is also a growing Safety Tech sector, particularly in the UK and the United States, developing promising real-time moderation and abuse detection tools (DCMS, 2021). But these technological responses must be complemented by social and policy measures that centre children’s dignity and autonomy.
Until such a paradigm shift occurs, the metaverse risks reproducing and amplifying the same structural inequalities and exploitative dynamics that have plagued earlier iterations of the digital world. In this context, safeguarding is not a peripheral concern but a central requirement for ethical technological futures. Only through sustained pressure on policymakers, platform developers, and researchers to centre children’s rights can immersive environments evolve into spaces of empowerment rather than exploitation.
In closing, this Element has sought to frame the metaverse not simply as a technological innovation but as a philosophical and social rupture that demands new ways of thinking, new models of protection, and new forms of collaboration.
Safeguarding children in the metaverse, as elsewhere, is critical. If we accept Heraclitus’ premise that nothing stays still, then our approaches to protection must be equally dynamic, adaptive, and child-informed. The metaverse, in all its promise and peril, calls on us to imagine not only what children need to be safe, but who we need to become to safeguard them in a world that is constantly being rebuilt, in real time and in virtual space.
6.4 Recommendations for Future Research
The VIRRAC Study, though small in scale, yielded key findings on the risks children face in immersive digital environments, particularly within the metaverse.
A strategic research agenda is essential; one that anticipates emerging risks and supports timely, evidence-based interventions. Several priority areas stand out. First, future research must explore how immersive features like presence, proximity, and embodied avatar interactions affect children’s emotional experiences, personal boundaries, and perceptions of risk. Unlike traditional media, XR technologies evoke a strong sense of physical and psychological realism that may change how children respond to harmful encounters.
Second, there is a pressing need for longitudinal and mixed-methods research to understand how harm is experienced and internalised over time. XR-related harms may not be immediately visible and can manifest later as emotional distress, desensitisation, or social withdrawal. Tracking these experiences will help identify patterns of resilience and vulnerability.
Third, developing age-appropriate frameworks to identify and measure harm in immersive environments is vital. With the rise of advanced technologies such as haptics, AI-driven avatars, and photorealistic VR, existing definitions of harm are increasingly inadequate. These hybrid digital-physical experiences challenge conventional safeguarding concepts.
Co-creation must become a foundational approach in future studies. Involving children, technologists, educators, and safeguarding professionals in shaping research questions and methods will ensure studies are grounded in lived experiences and remain relevant. Criminological research also plays a critical role, particularly in examining behaviours around grooming, coercion, anonymity, and consent – concepts that may need redefinition in immersive contexts.
At the platform level, future research should evaluate the effectiveness of moderation systems and explore XR-specific safety-by-design models. As metaverse platforms transition from social hubs to commercial spaces, safety must be embedded from the outset, not retrofitted after harm occurs.
The convergence of virtual and physical environments introduces new safeguarding challenges, including spatial tracking, real-time behavioural manipulation, and data privacy. As the distinction between online and offline becomes less meaningful, safeguarding efforts must address these blurred boundaries.
Children’s rights and digital agency must remain central to the research agenda. Studies should address how XR environments impact children’s autonomy, dignity, and psychological well-being, with attention to ethical concerns like surveillance and consent.
Legal and regulatory frameworks also require evaluation to determine whether they are fit for purpose in an XR-driven future. Research should address jurisdictional complexities in governing virtual environments.
Finally, frontline professionals such as educators, healthcare workers, and social workers, who are often the first to detect harm, need support through targeted research and training.
Abbreviations
The following list gives the acronyms and abbreviations used throughout this Element:
- AI: Artificial Intelligence
- ASD: Autism Spectrum Disorder
- CCDH: Centre for Countering Digital Hate
- CSAM: Child Sexual Abuse Material
- CSEA: Child Sexual Exploitation and Abuse
- DCMS: Department for Digital, Culture, Media and Sport (UK)
- EU: European Union
- EUROPOL: European Union Agency for Law Enforcement Cooperation
- FIFA: Fédération Internationale de Football Association (video game)
- GDPR: General Data Protection Regulation
- ICO: Information Commissioner’s Office (UK)
- KCSIE: Keeping Children Safe in Education
- LGBTQ+: Lesbian, Gay, Bisexual, Transgender, Queer/Questioning, and others
- NSPCC: National Society for the Prevention of Cruelty to Children
- OFCOM: Office of Communications (UK)
- OSA: Online Safety Act (2023)
- PTSD: Post-Traumatic Stress Disorder
- REPHRAIN: National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online
- SCOPUS: Scopus, Elsevier’s abstract and citation database
- SID: Safer Internet Day
- SMEs: Small and Medium-Sized Enterprises
- UKRI: UK Research and Innovation
- UNCRC: United Nations Convention on the Rights of the Child
- UNICEF: United Nations International Children’s Emergency Fund
- VIRRAC: Virtual Reality Risks Against Children (Project)
- VR: Virtual Reality
- VRChat: Virtual Reality Chat
- WWW: World Wide Web
Acknowledgements
The authors would like to thank REPHRAIN (National Research Centre on Privacy, Harm Reduction and Adversarial Influence Online) for generously funding this project and for recognising the need to deepen the understanding of immersive technologies. We would also like to extend our thanks to our dedicated research assistant, Boglarka Meggyesfalvi, whose support was essential throughout the course of the project and to Middlesex University and the University of East London.
Our appreciation goes to the children and expert participants who engaged in our study. Their openness and willingness to share their experiences made this work possible. We also wish to acknowledge the contributions of our key stakeholders, especially ChildNet and Nina Jane Patel, whose expertise and guidance have been vital in shaping the direction of this research.
Lastly, we are grateful to all those who have provided insight, feedback, and encouragement throughout this journey.
Funding Statement
This research was funded by REPHRAIN, which is funded by UK Research and Innovation (UKRI). The views expressed in this publication are those of the authors and do not necessarily reflect those of REPHRAIN or UKRI.
Advisory Board
Professor Catrien Bijleveld, VU University Amsterdam
Professor Francis Cullen, University of Cincinnati
Professor Manuel Eisner, Cambridge University
Professor Elizabeth Groff, Temple University
Professor Cynthia Lum, George Mason University
Professor Lorraine Mazerolle, University of Queensland
Professor Daniel Nagin, Carnegie Mellon University
Professor Ojmarrh Mitchell, University of California, Irvine
Professor Alex Piquero, University of Miami
Professor Richard Rosenfeld, University of Missouri
About the Series
Elements in Criminology seeks to identify key contributions in theory and empirical research that help to identify, enable, and stake out advances in contemporary criminology. The series focuses on radical new ways of understanding and framing criminology, whether of place, communities, persons, or situations. The relevance of criminology for preventing and controlling crime is also a key focus of this series.