
5 - Cyber Disinformation Risk Perception and Control

Integration of the Extended Theory of Planned Behavior and a Structural Equation Model

from Part I - Challenges to Democratic Institutions

Published online by Cambridge University Press

Scott J. Shackelford
Affiliation:
Indiana University, Bloomington
Frédérick Douzet
Affiliation:
Paris 8 University
Christopher Ankersen
Affiliation:
New York University

Summary

Cyber disinformation is a global and highly sophisticated phenomenon, capable of producing negative consequences for democratic values and institutions. This chapter argues that the individual behavior of users plays a key role in controlling the phenomenon and aims to identify factors that affect users’ behavioral intentions and cyber hygiene behavior. The chapter integrates the Extended Theory of Planned Behavior and a Structural Equation Model, realized through Partial Least Squares–Structural Equation Modeling, applied to the cyber disinformation phenomenon. The research data were collected using a questionnaire administered in Poland and Romania and analyzed using the Structural Equation Model. The model’s parameters were processed using the SmartPLS software. The reliability of the variables was assessed using Cronbach’s Alpha and Composite Reliability. The research confirmed the applicability of the Theory of Planned Behavior model and found that Moral Norms and Perceived Behavioral Control have an impact on Behavioral Intention and Cyber Hygiene Behavior. The findings can provide stakeholders with important insights that can lead to improved responses to the phenomenon.

Information

Type: Chapter
Book: Securing Democracies: Defending Against Cyber Attacks and Disinformation in the Digital Age, pp. 93–116
Publisher: Cambridge University Press
Print publication year: 2026
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (CC BY-NC-ND 4.0) https://creativecommons.org/cclicenses/

5 Cyber Disinformation Risk Perception and Control Integration of the Extended Theory of Planned Behavior and a Structural Equation Model

The extensive use of digital platforms and services has greatly expanded the range of potential attacks and targets, rendering individuals, as well as democratic values and institutions, vulnerable to a substantial number of cyber-enabled threats. These threats can be sophisticated, conducted on a large scale, and capable of producing significant, viral consequences. Among them, cyber disinformation is regarded as a major threat. The phenomenon is widespread and complex and, in certain cases, forms part of hybrid warfare, involving various cyberattacks by nefarious actors who deceptively distribute fake or incomplete materials with a view to influencing people’s opinions or behavior.

Disinformation can involve numerous vectors and take several forms. The goal of disinformation campaigns is to promote or sustain certain economic or political interests, discrimination, phobia, or hate speech, or to harass individuals (European Parliament, 2022). Instances of alleged disinformation can be encountered with respect to a large variety of subjects, such as food (Diekman, Ryan, & Oliver, 2023); migrants (Culloty et al., 2022); fossil fuels (Footnote 1); sexual preferences (Carratalá, 2023); health hazards (Footnote 2); politics (Footnote 3); and so on.

Successful disinformation campaigns can negatively affect fundamental freedoms, undermine trust, subvert attention, change attitudes, sow confusion, exacerbate divides, or interfere with decision-making processes. Consequently, such campaigns can rightly be considered attacks on knowledge integrity (Pérez-Escolar, Lilleker, & Tapia-Frade, 2023, p. 77). The potential consequences can be disquieting, negatively affecting democratic values and institutions (Jungherr & Schroeder, 2021; Schünemann, 2022). The concerns over cyber disinformation are notable worldwide and have received significant attention from researchers (Buchanan & Benson, 2019; Nenadić, 2019; Olan et al., 2022; Pierri, Artoni, & Ceri, 2020; Tenove & Tworek, 2019; Ternovski, Kalla, & Aronow, 2022; Vaccari & Chadwick, 2020; Weikmann & Lecheler, 2022).

While there are laws that address the phenomenon (e.g., 18 U.S. Code § 35, the German Network Enforcement Act, the French Law on the fight against the manipulation of information), strengthened codes of practice (e.g., the European Commission’s Strengthened Code of Practice on Disinformation 2022), assignment of anti-disinformation attributions to governmental agencies (e.g., the U.S. Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency), awareness campaigns, and implementation of disinformation detection and blocking algorithms or filters, the control of the phenomenon still poses significant challenges.

The control of the cyber disinformation phenomenon plays a significant role in the protection of democratic values and systems. This chapter argues that an essential role in the control of the phenomenon is played by the individual behavior of users, and it aims to identify factors that impact Behavioral Intentions (BIs) and Cyber Hygiene Behavior (CHB) in the context of cyber disinformation. The chapter integrates the Extended Theory of Planned Behavior (ETPB) and a Structural Equation Model. The research data were collected using a questionnaire, and the model’s parameters were processed using the SmartPLS software.

The rest of this chapter is organized as follows. The next section outlines the phenomenon’s main attributes and explains how cyber-enabled means can threaten democratic values and institutions. The third section discusses aspects regarding structural equation modeling (SEM), applied to disinformation. The fourth section presents the conceptual model and the proposed hypotheses. Finally, the fifth section presents the model evaluation. The chapter concludes with implications of findings.

Cyber Disinformation Attributes

“Disinformation” is a term that is difficult to define because the phenomenon is complex (Ó Fathaigh, Helberger, & Appelman, 2021) and covers many forms, such as “fabrications, fakeness, falsity, lies, deception, misinformation, disinformation, propaganda, conspiracy theory, satire or just anything with which one disagrees” (Andersen & Søe, 2020, p. 6). Wardle and Derakhshan (2017), for instance, contrast “disinformation,” referred to as intentionally false or deceptive communication, with “misinformation,” understood as communications that may contain false claims but are not intended to cause or inflict harm. The European Commission (2020, p. 18) clearly distinguishes between misinformation, information influence operations, foreign interference in the information space, and disinformation, defining the latter as “false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm.”

Disinformation can be orchestrated by individuals or by organized groups (state or non-state) and involve various sources, such as regular citizens, political leaders or officials, attention-seeking trolls, profiteers, or propagandistic media (Watson, 2021). Several factors that favor the phenomenon have been identified, such as the tendency to believe unreliable statements (European Parliamentary Research Service, 2020), people’s difficulties in identifying disinformation (Machete & Turpin, 2020), identity-confirming problems, and deficiencies in platform filtering (Krafft & Donovan, 2020).

According to Bontcheva et al. (2020, pp. 22–23), disinformation can take various formats, such as false claims or textual narratives; altered, fake, or decontextualized audio and/or video; and fake websites and manipulated datasets. Cyber disinformation campaigns can involve, for example, deceptive advertising, propaganda, or the dissemination of forged materials, such as videos, photographs, audio recordings, and documents (including, for instance, fake web pages or maps), created through image altering, airbrushing, or cover-up, or by audio camouflage.

One of the characteristics of disinformation campaigns is the existence of deceptive goals. According to Fallis (2014, pp. 142–146), the deceptive goals can be classified as regarding the accuracy of the content, the source believing the content, the content source identity, and the content accuracy implication. Disinformation can cause significant harm, as it has the real potential to confuse or manipulate people, suppress the truth or critical voices, generate distrust in democratic institutions or norms, and even disrupt democratic processes (Bontcheva et al., 2020).

Social media is often regarded as a highly effective vector for promoting political goals via disinformation campaigns (Aïmeur, Amri, & Brassard, 2023). Twitter, now called “X,” for example, was used to spread misleading election memes and graphics that went viral and were designed to demoralize opponents’ voters and even deter them from exercising their right to vote (Footnote 4). As an illustration of the importance attached to X as a disinformation vector, Statista research found that the number of disinformation and pro-Russian posts on X in Poland amounted to 25,910 in January 2022, increasing to 358,945 over the course of the year (Statista, 2023).

In practice, disinformation campaigns employ an impressive array of tactics, including the impersonation of organizations or real people; creation of fake or misleading online personas or websites; creation of deepfakes or synthetic media; devising or amplification of certain theories; astroturfing and flooding; exploitation of gaps in information; manipulation of unsuspecting people; and spread of targeted content (Cybersecurity and Infrastructure Security Agency, 2022). Of particular concern, given their massive disinformation potential, are deepfakes. In video and/or audio form, deepfakes are nowadays very realistic, enabling morphing attacks, the creation of unreal faces or voices, and the delivery of personalized messages to individuals. Deepfakes can negatively affect the credibility of individuals, disrupt markets, facilitate fraud, manipulate public opinion, incite people to various forms of violence, and support extremist narratives, social unrest, or political polarization (Mattioli et al., 2023; Trend Micro, 2020). Moreover, deepfakes undermine conversations about reality and can disrupt democratic politics (Chesney & Citron, 2019).

Personalization algorithms can be employed to facilitate the spreading of disinformation, potentially making it thrive on digital platforms (Borges & Gambarato, 2019). The techniques or means employed in disinformation campaigns may also include, for instance, bots. These are increasingly difficult to distinguish from humans and can be effectively used to produce disinformation content, targeted at predetermined or general users (Edwards et al., 2016). For instance, bots are used to disseminate election disinformation (Knight Foundation, 2018) or disinformation regarding health issues (Benson, 2020).

Artificial neural networks (ANNs) and deep learning methods can also be used in disinformation campaigns, with unlawful or nefarious potential (Rana et al., 2022). Amplifiers, such as influential people or artificial intelligence tools, can be used, for instance via cross-platform coordination or the manipulation of engagement metrics, to maximize engagement or the spread of disinformation through networks, to retweet or follow X accounts, or to share posts on Facebook, whose parent company is now called Meta (European Commission, 2023; Michaelis, Jafarian, & Biswas, 2022). Clickbait is another disinformation method, used to attract online users to click on disinformation links (Collins et al., 2021).

Structural Equation Modeling of Disinformation

The original approach to SEM assumed that the use of technical systems could be explained and predicted by user motivation, which is directly influenced by external factors (i.e., the functionalities and capabilities of those technical systems) (Chuttur, 2009; Davis, 1985). The Technology Acceptance Model (TAM) was proposed to explore behavior and user acceptance of Information and Communication Technology (ICT) from a social psychology perspective. The TAM assumes that two factors determine users’ acceptance of a technology: (1) perceived usefulness and (2) perceived ease of use. The first refers to the degree to which users believe that using the technology enhances their job performance, productivity, or overall effectiveness. The second represents the user’s perception of how easy the technology is to use. In general, the acceptance of technology is a critical element and a necessary condition for the implementation of ICT in everyday life. An extensive literature survey conducted using the Scopus database yielded 18,639 papers on TAM published between 1964 and 2023.

Over the decades, several theoretical models have been developed to understand the acceptance and use of ICT, and researchers have often struggled to select the most appropriate theoretical model for evaluating ICT acceptance and usage. Recognizing ICT needs and the acceptance of ICT by individuals in business organizations is usually the beginning stage of any business activity, and this understanding can help chart the way toward future implementation of ICT.

In general, TAM models are estimated through SEM, which is an approach for testing hypotheses about relations among observable and latent variables (Sabi et al., 2016). SEM is a statistical method applied in various fields of the social sciences to estimate relationships among specified variables and to verify whether those relations are statistically dependable and valid. In this study, the SEM is realized through Partial Least Squares–Structural Equation Modeling (PLS-SEM), which is a composite-based SEM method (Hair Jr. et al., 2017). Partial Least Squares (PLS) is a statistical method for estimating relationships between independent and dependent variables.
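To give a flavor of the PLS idea described above, the short sketch below fits a PLS regression relating a block of observed indicators to an outcome. It uses scikit-learn's PLSRegression on synthetic data and only illustrates the underlying partial least squares principle, not the composite-based PLS-SEM path estimation that SmartPLS performs on the survey data.

```python
# A minimal sketch of partial least squares (PLS) regression on synthetic
# data; it illustrates the PLS principle only, not the composite-based
# PLS-SEM estimation performed in SmartPLS.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)

# X: 200 respondents answering 4 hypothetical indicator items (1-7 scale).
X = rng.integers(1, 8, size=(200, 4)).astype(float)
# y: a hypothetical outcome score driven by the indicators plus noise.
y = X @ np.array([0.4, 0.3, 0.2, 0.1]) + rng.normal(0, 1, size=200)

pls = PLSRegression(n_components=2)   # extract two latent components
pls.fit(X, y)

print("X weights (how items form the components):\n", pls.x_weights_)
print("Regression coefficients linking X to y:\n", pls.coef_)
```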

For the past thirty years, the research community has been strongly involved in identifying the factors that have an impact on technology acceptance. The Theory of Reasoned Action (TRA) and the Theory of Planned Behavior (TPB) were predecessors of the TAM (Marikyan & Papagiannidis, 2021; Park & Park, 2020). The TRA explains and predicts human behaviors considering people’s attitudes and Subjective Norms (SN); it assumes that individuals make rational decisions based on their attitudes and social norms.

The TPB is an extension of the TRA, also developed by Ajzen (2005). The TPB explains and predicts individual behavior based on human intentions, which depend on three factors: attitude, identified with personal beliefs; SN, referring to social pressure; and Perceived Behavioral Control (PBC), encompassing self-efficacy, perceived obstacles, and facilitators. The TAM underwent further modifications (i.e., TAM2 and TAM3); however, researchers have also utilized the Unified Theory of Acceptance and Use of Technology (UTAUT) model, which suggests that the actual use of technology is determined by BI. BI, in turn, depends on four key constructs: performance expectancy, effort expectancy, social influence, and facilitating conditions. The effect of these variables is moderated by age, gender, experience, and voluntariness of use (Venkatesh et al., 2003). Further, researchers have noted the importance of factors reflecting the costs and benefits of behavior, as well as the context of use.

Venkatesh, Thong, and Xu (2012) proposed the UTAUT2 model, which extends the UTAUT to examine technology use in consumer settings. The authors of the UTAUT2 argue that the use of technology by individuals is determined by the following constructs: performance expectancy, effort expectancy, social influence, facilitating conditions, hedonic motivation, price value, and habit, moderated by age, gender, and experience. In both the UTAUT and UTAUT2 models, BI has an impact on use behavior.

This study aims to identify factors having an impact on Behavioral Intentions (BIs) and Cyber Hygiene Behavior (CHB) in circumstances of wide dissemination of disinformation in cyberspace. BIs comprise an individual’s predispositions and willingness to behave in a specific way; the concept is included in various theories of human behavior to analyze and predict human actions. CHB covers all practices that help individuals avoid the risk of becoming victims of cyberattacks and that reduce cyber vulnerabilities.

This study considered state-of-the-art research on disinformation attitudes. The Scopus literature survey revealed 4,526 publications for the keyword “disinformation,” published between 1974 and 2023. However, there are only nineteen publications for the query “disinformation” AND “structural equation modelling,” all published between 2020 and 2023. Certain countries (i.e., China, Russia, and Turkey) have professionalized online operations to support social media campaigns, create an alternative informational space, and effectively disseminate persuasive messages through symbols and emotions. Therefore, the monitoring of their actions, as well as online trolling, are subjects of research (Alieva, Moffitt, & Carley, 2022; Uyheng, Moffitt, & Carley, 2022).

Several researchers examine the phenomenon of disinformation as a threat in the sphere of cybersecurity (Caramancion et al., 2022; Carrapico & Farrand, 2021). Hence, cybersecurity issues have been included as observable variables (i.e., Security (SEC) items) in the proposed survey. Arayankalam and Krishnan (2021) formulated, and positively verified, several hypotheses concerning disinformation, as follows:

  • Disinformation through social media is positively associated with domestic online media fractionalization.

  • Online media fractionalization is positively associated with social media-induced offline violence.

  • Government control over cyberspace negatively moderates the impact of disinformation.

In social psychology, the TPB is one of the most influential behavioral models. The TPB links beliefs to behavior and assumes that a user’s behavior is determined by their intentions to perform that behavior. In the conceptual model proposed by Jia et al. (2022), for example, several factors have an impact on Behavioral Attitudes (BA), Subjective Norms (SN), and Perceived Behavioral Control (PBC); these three variables in turn determine BI. Moreover, Shirahada and Zhang (2022) argue that the TPB is used to predict and explain human intentions in a particular context. Intentions are influenced by SN, attitudes, and PBC: SN concern the expectations of other people and social pressures regarding desirable behavior, attitude refers to evaluations of the behavior, and PBC refers to the ease of performing the behavior.

Maheri, Rezapour, and Didarloo (2022) argue that Perceived Risk (PR) refers to subjective assessments of the risk of disinformation and its potential consequences. SN refer to respondents’ beliefs that significant others think they should or should not engage in a particular behavior. PBC concerns participants’ perceptions of their ability to do verification.

Romero-Galisteo et al. (2022) consider that the TPB explains the degree of correlation between variables such as entrepreneurial intention, perceived feasibility, and perceived desirability. Beyond that, Savari, Mombeni, and Izadi (2022) develop an Extended Theory of Planned Behavior (ETPB) that includes the variables Descriptive Norms (DN), Moral Norms (MN), Habits (HA), and Justification (JU). In their theoretical framework, Attitude, SN, DN, and PBC have an impact on Intention, while PBC, MN, and variables such as HA and JU influence Behavior.

MN define a sense of inherent moral commitment, according to a value system. The concept of DN captures a person’s perception of how much other people exhibit a certain behavior. These norms are introduced because people learn not only from their own experiences but also from analyzing the behavior of others. A comparable extension of the TPB was provided by Khani Jeihooni et al. (2022), who included attitude, PBC, SN, and BIs, among other constructs, in their survey.

According to Ababneh, Ahmed, and Dedousis (2022), the TPB proposes four major predictors of human behavior (i.e., attitude toward the behavior, SN, BI, and PBC). Ajzen (1991, p. 183) argues that attitude, norms, and control determine intentions and behavior. Cabeza-Ramirez et al. (2022) note that the literature has rarely considered the possible perception of risk associated with desirable behavior. Similarly, risk is included in the TPB model proposed by Zhang et al. (2022). Security is considered in the TPB model developed by Al-Shanfari et al. (2022), who, using the SEM method, revealed factors having an impact on information security behavior adoption and employees’ training.

The Conceptual Model and Hypotheses

Considering the literature survey on latent variables included in the TPB models, this study noticed that there is no standardized approach: The models are formulated according to the preferences of researchers and some extensions are possible. Therefore, this study focuses on the application of the ETPB model; however, additional constructs are added, which are expected to present the context of behavior.

This study defines the BA as the degree to which a person believes that they can properly recognize disinformation. The BA influences the decision on whether to accept or reject the information. The BA reveals the extent to which a person believes that the use of information is needed and not harmful. BA refers to personal predispositions to act in a specific way, regarding a particular object, person, situation, concept, or technology.

Beyond that, this study proposes to include three types of norms: MN, SN, and DN. MN result from personal internal beliefs not to tolerate disinformation; in general, MN are principles or rules that govern human behavior and establish what is right or wrong in a social community. SN concern a person’s perception of social pressure or influence to perform or not perform a particular action. SN result from personal motivation, normative beliefs, individuals’ knowledge, and the influence and experiences of third parties who may affect the respondent. DN concern the perceptions that individuals have about the behaviors exhibited by others in a community.

In this study, DN reveal the degree to which respondents see themselves as people who know how to avoid disinformation. PBC means personal beliefs that an individual has the capabilities, that is, the competencies and resources, to control factors that may influence their behavior. In this study, PBC refers to the degree to which respondents believe in their ability to exercise self-control and avoid disinformation. Therefore, this study proposes the following hypotheses:

H1: Behavioral Attitude (BA) has an impact on Behavioral Intention (BI).

H2: Moral Norms (MN) have a positive impact on Behavioral Intention (BI).

H3: Subjective Norms (SN) have a positive impact on Behavioral Intention (BI).

H4: Descriptive Norms (DN) have a positive impact on Behavioral Intention (BI).

H5: Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI).

H6: Perceived Behavioral Control (PBC) has a positive impact on Cyber Hygiene Behavior (CHB).

H7: Behavioral Intention (BI) has a positive impact on Cyber Hygiene Behavior (CHB).

Beyond the variables considered in the TPB model, this study added other variables. Two of them, HA and JU, were introduced into the ETPB model by Savari, Mombeni, and Izadi (2022). HA are repetitive actions that are performed regularly or automatically in human lives. They can be positive (i.e., good habits, e.g., teeth cleaning) or negative (i.e., unhealthy habits, e.g., avoiding physical activity). In this chapter, HA includes the individual practices and routines applied by respondents, particularly avoiding internet news. JU refers to collecting and revealing the reasons for a particular action, decision, or belief. JU means a personal explanation of regulations, policies, and administrative practices to avoid disinformation.

This study also considered the impact of variables related to security, Anxiety (AN), and risk. Hence, the conceptual model covers the impact of three additional factors that may influence CHB, which encompasses practices and habits for maintaining an elevated level of cybersecurity and protecting digital assets; it may also include preventive practices for maintaining mental health and avoiding unreliable, untested, unchecked, and malicious information. Cyber AN is the degree to which a person hesitates to use internet information because of its potential harmfulness. PR is defined as the degree of risk recognition by an individual. SEC means the level of knowledge of Information Technology (IT) tools for protection in case of an attack by a human agent or software. Hence, the next hypotheses are as follows:

H8: Justification (JU) has an impact on Cyber Hygiene Behavior (CHB).

H9: Habits (HA) have an impact on Cyber Hygiene Behavior (CHB).

H10: Perceived Risks (PRs) have an impact on Cyber Hygiene Behavior (CHB).

H11: Security (SEC) has an impact on Cyber Hygiene Behavior (CHB).

H12: Anxiety (AN) has an impact on Cyber Hygiene Behavior (CHB).

Figure 5.1 includes the conceptual model of variables having an impact on CHB. In this theoretical framework, relationships among constructs, that is, latent variables, as well as between constructs and their assigned indicators, that is, items or observable variables, are shown with arrows.


Figure 5.1 Conceptual model.
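As a way of making the inner (structural) part of Figure 5.1 concrete, the sketch below encodes the hypothesized paths H1–H12 as a simple data structure. The construct abbreviations follow the chapter; the representation itself is only an illustrative assumption, not part of the SmartPLS workflow.

```python
# A sketch of the inner (structural) model implied by hypotheses H1-H12.
# Keys are endogenous constructs; values list the constructs hypothesized
# to influence them. This mapping is illustrative only.
structural_model = {
    "BI":  ["BA", "MN", "SN", "DN", "PBC"],               # H1-H5
    "CHB": ["PBC", "BI", "JU", "HA", "PR", "SEC", "AN"],   # H6-H12
}

for endogenous, predictors in structural_model.items():
    for predictor in predictors:
        print(f"{predictor} -> {endogenous}")
```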

Observable Indicators for Cyber Hygiene Behavior Model

For the past thirty years, ICT in general, and the internet in particular, have played a significant role in communication among people in all sectors of life (i.e., education, administration, business, health care, and agriculture). The benefits of ICT outweigh the risks and waste caused by disinformation. To evaluate young people’s behavior and recognize factors having an impact on their BIs and actions, the TPB model has been specified and estimated. The literature survey on the TAM, UTAUT, and TPB models led to the observation that researchers focus on the identification of latent variables; however, the specification of observable items, such as indicators, should also be discussed.

Considering the items identified in the literature and proposed by other researchers, the items used in this study are included in Table 5.1.

Table 5.1 Items included in the survey

Latent variable | Item | Mean RO | Mean PL
Anxiety (AN) | AN1: I feel apprehensive about finding fake news on the internet | 4.115 | 3.245
Anxiety (AN) | AN2: I hesitate to use social media for fear of finding fake news | 2.285 | 1.685
Anxiety (AN) | AN3: Fake news are threats to democratic values and democratic institutions | 5.394 | 4.925
Perceived Risk (PR) | PR1: Buying products promoted by an unreliable source adds to the uncertainty about the results | 5.782 | 5.780
Perceived Risk (PR) | PR2: Disinformation destroys a positive image and reputation | 5.842 | 5.890
Perceived Risk (PR) | PR3: I accept the risk to enable learning from uncertain sources | 3.194 | 5.080
Perceived Risk (PR) | PR4: I think there is no risk in using social media to meet new people | 2.291 | 2.900
Security (SEC) | SEC1: Anti-spamming software allows me to avoid fake news | 3.982 | 4.070
Security (SEC) | SEC2: Internet service provider warns me about fake news | 2.327 | 3.500
Security (SEC) | SEC3: I pay consideration to website artifacts, i.e., Padlock or https | 4.024 | 4.755
Moral Norms (MN) | MN1: Avoiding fake news dissemination is a matter of conscience for me | 5.600 | 4.040
Moral Norms (MN) | MN2: I feel compelled by my conscience to punish fake news providers | 4.279 | 5.040
Moral Norms (MN) | MN3: I feel uncomfortable when I observe that other people tolerate fake news dissemination | 5.497 | 4.660
Moral Norms (MN) | MN4: I feel responsible for the true information inserted by me on the internet | 6.115 | 5.110
Behavioral Attitude (BA) | BA1: I like to be engaged in the activity for fake news recognition | 4.121 | 2.830
Behavioral Attitude (BA) | BA2: I believe that constant monitoring of COVID-19 news has a positive impact on my mental health | 2.521 | 3.410
Behavioral Attitude (BA) | BA3: I have enough responsibility not to read fake news | 5.370 | 4.995
Behavioral Attitude (BA) | BA4: I think it is better to verify the information provenance | 6.467 | 5.825
Behavioral Attitude (BA) | BA5: I think that unreliable source of data may provide fake news | 5.697 | 4.660
Behavioral Attitude (BA) | BA6: I think that losers and crazy people provide fake news on the internet | 3.618 | 3.940
Behavioral Attitude (BA) | BA7: I think fake news is like a joke | 2.455 | 3.685
Subjective Norms (SN) | SN1: Some of my colleagues have been deceived by fake news | 5.091 | 4.620
Subjective Norms (SN) | SN2: Public opinion will affect my choice of the internet news | 3.393 | 4.300
Subjective Norms (SN) | SN3: People whom I work with help each other to recognize fake news | 4.327 | 4.360
Subjective Norms (SN) | SN4: People whom I trust warn me and explain to me the fake news | 5.164 | 4.930
Descriptive Norms (DN) | DN1: I think most of my friends know how to avoid fake news | 4.709 | 4.875
Descriptive Norms (DN) | DN2: I am sure that people around me do not read unreliable news | 3.273 | 3.895
Descriptive Norms (DN) | DN3: I believe that most of my family thinks that reading unreliable news is unreasonable and wrong | 4.685 | 5.040
Descriptive Norms (DN) | DN4: Reading fake news is disgusting to the people around me | 4.006 | 4.120
Perceived Behavioral Control (PBC) | PBC1: My technical ability is sufficient to avoid disinformation | 5.079 | 5.270
Perceived Behavioral Control (PBC) | PBC2: I purposefully avoid nonverified information | 5.364 | 5.005
Perceived Behavioral Control (PBC) | PBC3: I know how to avoid fake news | 5.267 | 5.335
Perceived Behavioral Control (PBC) | PBC4: I think I have good self-control | 5.818 | 5.175
Behavioral Intention (BI) | BI1: I would like to know more about the possibilities of verifying internet information | 6.236 | 4.945
Behavioral Intention (BI) | BI2: I will recommend my friends or relatives to verify information from uncertain or unknown sources | 6.073 | 4.890
Behavioral Intention (BI) | BI3: Post COVID-19, I carefully check information on it | 5.442 | 4.435
Behavioral Intention (BI) | BI4: I will take good care of myself, particularly when I am browsing unsafe portals | 5.933 | 5.525
Behavioral Intention (BI) | BI5: I am still looking for news that allows me to verify the information received earlier | 5.418 | 4.655
Habits (HA) | HA1: I do not think about the fake news on the internet because I do not read internet news | 2.539 | 3.450
Habits (HA) | HA2: I habitually always pay attention to reliability of news and always check the source of information | 5.273 | 4.910
Habits (HA) | HA3: I always read reliable information on the internet because it has become a habit for me | 4.836 | 4.570
Justification (JU) | JU1: Due to the fake news dissemination, people do not trust each other and the internet is not a reliable source of information | 4.685 | 4.855
Justification (JU) | JU2: Governmental activities to punish and reduce fake news are small and hard to notice | 5.685 | 5.050
Justification (JU) | JU3: The habit of reducing fake news on the internet is usually forgotten when people need to receive important information, for example on COVID-19 risks | 5.333 | 4.550
Justification (JU) | JU4: Increasing the punishment for fake news is often overlooked because there is so much everyday news and people do not remember nor recognize what is false or true | 5.497 | 4.985
Cyber Hygiene Behavior (CHB) | CHB1: I avoid constantly studying the news on gossip portals | 5.358 | 5.135
Cyber Hygiene Behavior (CHB) | CHB2: I will not encourage others to study the gossip portal news | 5.430 | 5.545
Cyber Hygiene Behavior (CHB) | CHB3: I immediately remove emails from unknown senders | 4.873 | 4.610
Cyber Hygiene Behavior (CHB) | CHB4: I do not click on links or attachments from uncollected emails or texts | 6.515 | 5.970

The research data were collected using a questionnaire and analyzed using SEM. The survey respondents were students at the University of Economics in Katowice (Poland) and the Babeş-Bolyai University (Romania). The questionnaires were distributed to bachelor, master, and doctoral-level students. The responses to the questionnaire were voluntary and anonymized. This research collected 200 questionnaires from the University of Economics in Katowice and 165 questionnaires from the Babeş-Bolyai University.

The students were asked to express their degree of agreement or disagreement with the statements in Table 5.1 by marking their answers on a seven-point Likert scale with the following meanings: 1 – absolutely disagree; 2 – disagree; 3 – rather disagree; 4 – irrelevant; 5 – rather agree; 6 – agree; and 7 – definitely agree.

Table 5.1 contains the items included in the survey, presenting the list of questions with acronyms and the set of latent variables. The last two columns in Table 5.1 include the average (mean) values of these research indicators for the two samples. The Pearson correlation between the two last columns in Table 5.1 is 0.7854; hence, the authors conclude that the responses of the two populations under research are highly comparable.
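As an illustration of how that comparability check can be computed, the sketch below correlates the two mean columns of Table 5.1 for a handful of items. The chapter's reported value of 0.7854 is based on all items; the subset here is only for demonstration.

```python
# A sketch of the comparability check between the two samples: Pearson
# correlation of per-item mean scores. Only a few items from Table 5.1
# are used; the chapter reports r = 0.7854 over the full item set.
import numpy as np
from scipy.stats import pearsonr

# Per-item means for a subset of items (values taken from Table 5.1, AN1-AN3 and PR1-PR4).
mean_ro = np.array([4.115, 2.285, 5.394, 5.782, 5.842, 3.194, 2.291])
mean_pl = np.array([3.245, 1.685, 4.925, 5.780, 5.890, 5.080, 2.900])

r, p_value = pearsonr(mean_ro, mean_pl)
print(f"Pearson r = {r:.4f}, p = {p_value:.4f}")
```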

The TPB Model Evaluation

The presented conceptual model (see Figure 5.1) consists of items connected to the latent variables. SmartPLS 3 was used to calculate the model (Ringle, Hair, & Sarstedt, 2014). In the first run, the model was calculated with the PLS algorithm; the number of iterations was set to 1,000 and the stop criterion to 10⁻⁷. Then the model was calculated with the Bootstrapping algorithm, in which the number of samples was set to 5,000, using the complete bootstrapping option with bias-corrected and accelerated (BCa) confidence intervals and a two-tailed test. The significance level was set to 0.05.
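The bootstrapping step described above can be illustrated with the short sketch below, which resamples respondents, re-estimates a single path by ordinary least squares, and derives a t-statistic and two-tailed p-value from the bootstrap standard deviation. It is a simplified stand-in for SmartPLS's procedure, which bootstraps the entire PLS path model with BCa intervals, and all variable names and data are illustrative assumptions.

```python
# A simplified sketch of bootstrapping a single structural path
# (e.g., PBC -> BI), assuming latent variable scores are already
# available as columns of a pandas DataFrame named `scores`.
import numpy as np
import pandas as pd
from scipy import stats

def path_coefficient(df: pd.DataFrame, predictor: str, outcome: str) -> float:
    """Standardized OLS slope of the outcome on a single predictor."""
    x = (df[predictor] - df[predictor].mean()) / df[predictor].std()
    y = (df[outcome] - df[outcome].mean()) / df[outcome].std()
    return float(np.polyfit(x, y, 1)[0])

def bootstrap_path(df, predictor, outcome, n_boot=5000, seed=0):
    rng = np.random.default_rng(seed)
    original = path_coefficient(df, predictor, outcome)
    boots = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), size=len(df))   # resample respondents
        boots.append(path_coefficient(df.iloc[idx], predictor, outcome))
    boots = np.asarray(boots)
    se = boots.std(ddof=1)
    t_stat = original / se
    p_value = 2 * stats.norm.sf(abs(t_stat))           # two-tailed
    return original, boots.mean(), se, t_stat, p_value

# Example on synthetic scores (n_boot reduced here for speed; SmartPLS uses 5,000).
rng = np.random.default_rng(1)
scores = pd.DataFrame({"PBC": rng.normal(size=200)})
scores["BI"] = 0.4 * scores["PBC"] + rng.normal(scale=0.9, size=200)
print(bootstrap_path(scores, "PBC", "BI", n_boot=500))
```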

The conceptual model (Figure 5.1) was estimated twice: first on the data gathered in Poland and then on the data gathered in Romania. The reliability of the variables was evaluated using Cronbach’s Alpha and Composite Reliability (CR), and the results for reliability and validity are presented for the overall sample. Cronbach’s Alpha assesses reliability by comparing the amount of shared variance, or covariance, among the items in a psychological test or questionnaire (Collins, 2007). CR is an “indicator of the shared variance among the observed variables used as an indicator of a latent construct” (Fornell & Larcker, 1981). Cronbach’s Alpha and CR values are recommended to be higher than 0.600: Cronbach’s Alpha values of 0.60 to 0.70 are acceptable in exploratory research, while values between 0.70 and 0.90 are regarded as satisfactory (Nunnally & Bernstein, 1994). The Average Variance Extracted (AVE) and CR values should be higher than or close to 0.500 and 0.700, respectively, which corroborates convergent validity; Fornell and Larcker (1981) note that if AVE is less than 0.5 but CR is higher than 0.6, the validity of the construct is still adequate. The results for reliability and validity for the overall sample of 200 records from Poland are included in Tables 5.2 and 5.3. Unfortunately, in the first estimation, the chosen observed variables did not explain the latent variables well; therefore, model fitting was necessary. The unreliable preliminary conceptual model was revised, and Figure 5.2 presents the secondary estimated model, which covers the following hypotheses:

Table 5.2 Construct reliability and validity – preliminary model estimated (sample size: 200 records from Poland)

Construct | Cronbach’s Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE)
AN | 0.454 | 0.553 | 0.696 | 0.447
BA | 0.373 | 0.485 | 0.618 | 0.234
BI | 0.678 | 0.704 | 0.795 | 0.442
CHB | 0.609 | 0.617 | 0.775 | 0.468
DN | 0.592 | 0.543 | 0.673 | 0.389
HA | 0.272 | 0.521 | 0.508 | 0.404
JU | 0.647 | 0.678 | 0.779 | 0.473
MN | 0.599 | 0.697 | 0.752 | 0.443
PBC | 0.685 | 0.700 | 0.806 | 0.512
PR | −0.115 | 0.267 | 0.005 | 0.321
SEC | 0.325 | 0.226 | 0.533 | 0.356
SN | 0.410 | 0.717 | 0.638 | 0.377
Table 5.3 Construct reliability and validity – secondary model estimated (sample size: 200 records from Poland)

Construct | Cronbach’s Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE)
BI | 0.678 | 0.703 | 0.795 | 0.442
CHB | 0.609 | 0.626 | 0.774 | 0.467
DN | 0.592 | 0.540 | 0.665 | 0.386
JU | 0.647 | 0.677 | 0.779 | 0.473
MN | 0.599 | 0.695 | 0.751 | 0.443
PBC | 0.685 | 0.701 | 0.806 | 0.512
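The reliability and validity measures reported in Tables 5.2 and 5.3 (and later in Tables 5.6 and 5.7) can be approximated from raw item scores and outer loadings along the lines of the sketch below. The formulas for Cronbach's Alpha, CR, and AVE are standard; the item data and loadings in the example are placeholders, not the chapter's estimates.

```python
# A sketch of the reliability/validity measures used in Tables 5.2-5.7,
# using standard formulas. `items` holds raw scores for one construct's
# indicators; `loadings` are that construct's outer loadings (placeholders).
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def composite_reliability(loadings: np.ndarray) -> float:
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    return float((loadings ** 2).mean())

# Hypothetical example: four Likert items measuring one construct.
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
items = pd.DataFrame({f"PBC{i}": latent + rng.normal(scale=1.0, size=200) for i in range(1, 5)})
loadings = np.array([0.72, 0.68, 0.75, 0.70])   # placeholder outer loadings

print(round(cronbach_alpha(items), 3),
      round(composite_reliability(loadings), 3),
      round(average_variance_extracted(loadings), 3))
```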

Figure 5.2 The final model with estimated coefficients (sample size: 200 records from Poland).

H1: Behavioral Intention (BI) has a positive impact on Cyber Hygiene Behavior (CHB).

H2: Descriptive Norms (DN) have a positive impact on Behavioral Intention (BI).

H3: Justification (JU) has an impact on Cyber Hygiene Behavior (CHB).

H4: Moral Norms (MN) have a positive impact on Behavioral Intention (BI).

H5: Perceived Behavioral Control has a positive impact on Behavioral Intention (BI).

H6: Perceived Behavioral Control has a positive impact on Cyber Hygiene Behavior (CHB).

Path coefficients and R2 values for the constructs are included in Table 5.4.

Table 5.4 PLS algorithm R2 and path coefficients (sample size: 200 records from Poland)

R2: BI = 0.350; CHB = 0.246
Path coefficients: DN → BI = 0.142; MN → BI = 0.290; PBC → BI = 0.376; BI → CHB = 0.169; JU → CHB = 0.304; PBC → CHB = 0.150

The goodness of the model is estimated by the strength of each structural path, determined by the R2 value for the dependent variables (Jankelová, Joniaková, & Skorková, 2021). Generally, R2 is a statistical measure of the goodness of the fit of a regression model. For the dependent variables, the R2 value should be equal to or over 0.125 (Falk & Miller, 1992). The results in Table 5.4 show that all R2 values are over 0.1.

R2 ranges from 0 to 1, with higher values indicating stronger explanatory power. As a general guideline, R2 values of 0.75, 0.50, and 0.25 can be considered substantial, moderate, and weak, respectively, in many social science disciplines (Hair Jr. et al., 2021). However, acceptable R2 values depend on the research context: in some disciplines an R2 value as low as 0.10 is considered satisfactory, although in large-sample research such a value may be statistically significant yet substantively meaningless (Falk & Miller, 1992).
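For completeness, the sketch below shows how an R2 of this kind can be computed for one endogenous construct regressed on its predictors from latent variable scores; the scores and coefficients are synthetic placeholders, not the chapter's estimates.

```python
# A sketch of computing R-squared for one endogenous construct (e.g., BI)
# regressed on its predictors (e.g., DN, MN, PBC), using synthetic scores.
import numpy as np

rng = np.random.default_rng(3)
n = 200
X = rng.normal(size=(n, 3))                                            # predictor scores
y = X @ np.array([0.2, 0.3, 0.4]) + rng.normal(scale=0.8, size=n)      # endogenous construct

X1 = np.column_stack([np.ones(n), X])              # add intercept
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
residuals = y - X1 @ beta
r_squared = 1 - residuals.var() / y.var()
print(f"R^2 = {r_squared:.3f}")
```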

Table 5.5 covers the Bootstrapping Path Coefficients values for the final model as well as the decisions on the proposed hypotheses’ acceptance or rejection.

Table 5.5 Bootstrapping path coefficients for the final model (sample size: 200 records from Poland)

Hypothesis | Path (impact direction) | Original sample | Sample mean | Standard deviation | T-statistics | P values | Decision
H1 | BI → CHB | 0.169 | 0.174 | 0.078 | 2.175 | 0.030 | Accepted
H2 | DN → BI | 0.142 | 0.156 | 0.085 | 1.673 | 0.094 | Rejected
H3 | JU → CHB | 0.304 | 0.316 | 0.068 | 4.453 | 0.000 | Accepted
H4 | MN → BI | 0.290 | 0.293 | 0.065 | 4.476 | 0.000 | Accepted
H5 | PBC → BI | 0.376 | 0.378 | 0.063 | 6.019 | 0.000 | Accepted
H6 | PBC → CHB | 0.150 | 0.150 | 0.075 | 1.994 | 0.046 | Accepted

The results of the tests indicate that the proposed constructs (i.e., JU, MN, PBC) have an impact, albeit a weak one, on the intention and behavior of students (expressed as BI and CHB) to avoid disinformation. If a P value is below a certain threshold, the corresponding hypothesis is assumed to be supported; the threshold is usually 0.05 (Kock, 2014). Therefore, in this research, hypotheses H1, H3, H4, H5, and H6 are supported, while hypothesis H2 is rejected. This means that: (1) Behavioral Intention (BI) has an impact on Cyber Hygiene Behavior (CHB); (2) Justification (JU) has a positive impact on Cyber Hygiene Behavior (CHB); (3) Moral Norms (MN) have a weak impact on Behavioral Intention (BI); (4) Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI); and (5) Perceived Behavioral Control (PBC) has an impact on Cyber Hygiene Behavior (CHB).

Next, the study estimated the conceptual model using the data from Romania. For these data, too, the reliability and validity measures have low values (Table 5.6), and the authors eliminated some variables from the model.

Table 5.6 Construct reliability and validity – preliminary model estimated (sample size: 165 records from Romania)

Construct | Cronbach’s Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE)
AN | 0.453 | 0.461 | 0.733 | 0.480
BA | 0.294 | 0.553 | 0.416 | 0.244
BI | 0.707 | 0.721 | 0.808 | 0.459
CHB | 0.487 | 0.505 | 0.703 | 0.381
DN | 0.612 | 0.624 | 0.774 | 0.463
HA | 0.045 | 0.711 | 0.508 | 0.593
JU | 0.640 | 0.642 | 0.786 | 0.480
MN | 0.705 | 0.726 | 0.819 | 0.535
PBC | 0.674 | 1.104 | 0.711 | 0.400
PR | 0.019 | 0.383 | 0.091 | 0.304
SEC | 0.459 | −0.254 | 0.177 | 0.309
SN | 0.470 | 0.537 | 0.674 | 0.391

Since the chosen observed variables did not explain the latent variables well, model fitting was again necessary. The unreliable preliminary conceptual model was revised, and Figure 5.3 presents the secondary estimated model, which covers the following hypotheses:


Figure 5.3 The final model with estimated coefficients (sample size: 165 records from Romania).

H1: Behavioral Intention (BI) has a positive impact on Cyber Hygiene Behavior (CHB).

H2: Descriptive Norms (DN) have a positive impact on Behavioral Intention (BI).

H3: Justification (JU) has an impact on Cyber Hygiene Behavior (CHB).

H4: Moral Norms (MN) have a positive impact on Behavioral Intention (BI).

H5: Perceived Behavioral Control (PBC) has a positive impact on Behavioral Intention (BI).

H6: Perceived Behavioral Control (PBC) has a positive impact on Cyber Hygiene Behavior (CHB).

The same reliability and validity verification was done for the Romania model (Table 5.7).

Table 5.7 Construct reliability and validity – secondary model estimated (sample size: 165 records from Romania)

Construct | Cronbach’s Alpha | rho_A | Composite Reliability | Average Variance Extracted (AVE)
BI | 0.707 | 0.726 | 0.808 | 0.458
CHB | 0.543 | 0.559 | 0.812 | 0.685
DN | 0.612 | 0.622 | 0.774 | 0.463
JU | 0.640 | 0.677 | 0.783 | 0.476
MN | 0.705 | 0.726 | 0.819 | 0.535
PBC | 0.674 | 1.239 | 0.688 | 0.382

Path coefficients and R2 values for the constructs are included in Table 5.8.

Table 5.8 PLS algorithm R2 and path coefficients (sample size: 165 records from Romania)

R2: BI = 0.376; CHB = 0.333
Path coefficients: DN → BI = −0.043; MN → BI = 0.509; PBC → BI = 0.233; BI → CHB = 0.462; JU → CHB = 0.050; PBC → CHB = 0.185

Table 5.9 covers the Bootstrapping Path Coefficients values for the final model as well as the decisions on the proposed hypotheses’ acceptance or rejection.

Table 5.9 Bootstrapping path coefficients for the final model (sample size: 165 records from Romania)

Hypothesis | Path (impact direction) | Original sample | Sample mean | Standard deviation | T-statistics | P values | Decision
H1 | BI → CHB | 0.462 | 0.450 | 0.101 | 4.578 | 0.000 | Accepted
H2 | DN → BI | −0.043 | −0.007 | 0.098 | 0.434 | 0.665 | Rejected
H3 | JU → CHB | 0.050 | 0.076 | 0.085 | 0.589 | 0.556 | Rejected
H4 | MN → BI | 0.509 | 0.519 | 0.071 | 7.131 | 0.000 | Accepted
H5 | PBC → BI | 0.233 | 0.232 | 0.090 | 2.591 | 0.010 | Accepted
H6 | PBC → CHB | 0.185 | 0.191 | 0.077 | 2.413 | 0.016 | Accepted

The results of the tests indicate that the proposed constructs (i.e., MN, PBC) have a weak or moderate impact on the intention and behavior of students (expressed as BI and CHB) to avoid disinformation. In this analysis, the threshold of the P value is also 0.05. Therefore, hypotheses H1, H4, H5, and H6 are supported, while hypotheses H2 and H3 are rejected. This means that: (1) Behavioral Intention (BI) has an impact on Cyber Hygiene Behavior (CHB); (2) Moral Norms (MN) have a moderate impact on Behavioral Intention (BI); (3) Perceived Behavioral Control (PBC) has a weak positive impact on Behavioral Intention (BI); and (4) Perceived Behavioral Control (PBC) has a weak positive impact on Cyber Hygiene Behavior (CHB).

Conclusion

Cyber disinformation is a complex and concerning phenomenon. Successful disinformation campaigns can have a significant negative effect on democratic values and institutions. Defending democracy in the digital age requires a complex approach. The individual behavior of users can influence the spread and effects of the phenomenon.

This chapter argued that users’ behavior plays an essential role in this phenomenon and aimed to identify factors that impact users’ BIs and CHB. The chapter integrated the ETPB and a Structural Equation Model, realized through PLS-SEM, applied to the cyber disinformation phenomenon. The analysis of the self-assessment survey on disinformation risk perception and control revealed that the responses of the two samples are highly comparable, with a correlation of 0.7854. The research confirmed the applicability of the TPB model and found that MN and PBC have an impact on BI and CHB.

The findings of this chapter provide valuable insights that can be used to improve the overall responses to the phenomenon, such as policies, programs, and clinics, and to elaborate educational materials. To effectively address the phenomenon’s relevant vectors, tactics, and methods, there is a clear need for a complex strategy with multiple components, including research to better understand the phenomenon’s attributes and the behavior of users; frequent risk assessments; increased empowerment of people to detect and report disinformation; improved fact-checking procedures; enhanced international anti-disinformation enforcement and cooperation; technical assistance programs; better-defined responsibility for secondary liability; and awareness-raising and education programs aimed at improving people’s critical thinking abilities.

Footnotes

1 District of Columbia v. Exxon Mobil Corp., Civil Action No. 20-1932 (TJK) (D.C. 2022).

2 RJ Reynolds Tobacco Co. v. Rouse, 307 So.3d 89 (Fla. Dist. Ct. App. 2020).

3 Nat’l Coal. on Black Civic Participation v. Wohl, 498 F. Supp. 3d 457 (S.D.N.Y. 2020).

4 United States v. Mackey, No. 21-CR-80 (AMD)(SB) (E.D.N.Y. Oct. 17, 2023).

References

Ababneh, K. I., Ahmed, K., & Dedousis, E. (2022). Predictors of cheating in online exams among business students during the Covid pandemic: Testing the theory of planned behavior. International Journal of Management Education, 20(3), 1–15. https://doi.org/10.1016/j.ijme.2022.100713
Aïmeur, E., Amri, S., & Brassard, G. (2023). Fake news, disinformation and misinformation in social media: A review. Social Network Analysis and Mining, 13(1), 30. https://doi.org/10.1007/s13278-023-01028-5
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211. https://doi.org/10.1016/0749-5978(91)90020-T
Ajzen, I. (2005). Attitudes, personality, and behavior. New York: Open University Press.
Al-Shanfari, I., Yassin, W., Tabook, N., Ismail, R., & Ismail, A. (2022). Determinants of information security awareness and behaviour strategies in public sector organizations among employees. International Journal of Advanced Computer Science and Applications, 13(8), 479–490.
Alieva, I., Moffitt, J. D., & Carley, K. M. (2022). How disinformation operations against Russian opposition leader Alexei Navalny influence the international audience on Twitter. Social Network Analysis and Mining, 12(1), 1–13. https://doi.org/10.1007/s13278-022-00908-6
Andersen, J., & Søe, S. O. (2020). Communicative actions we live by: The problem with fact-checking, tagging or flagging fake news – The case of Facebook. European Journal of Communication, 35(2), 126–139. https://doi.org/10.1177/0267323119894489
Arayankalam, J., & Krishnan, S. (2021). Relating foreign disinformation through social media, domestic online media fractionalization, government’s control over cyberspace, and social media-induced offline violence: Insights from the agenda-building theoretical perspective. Technological Forecasting and Social Change, 166, 1–14. https://doi.org/10.1016/j.techfore.2021.120661
Benson, T. (2020, July 29). Twitter bots are spreading massive amounts of COVID-19 misinformation. IEEE Spectrum. https://spectrum.ieee.org/twitter-bots-are-spreading-massive-amounts-of-covid-19-misinformation
Bontcheva, K., Posetti, J., Teyssou, D., Meyer, T., Gregory, S., Hanot, C., & Maynard, D. (2020). Balancing act: Countering digital disinformation while respecting freedom of expression. United Nations Educational, Scientific and Cultural Organization (UNESCO). https://unesco.org/en/articles/balancing-act-countering-digital-disinformation-while-respecting-freedom-expression
Borges, P. M., & Gambarato, R. R. (2019). The role of beliefs and behavior on Facebook: A semiotic approach to algorithms, fake news, and transmedia journalism. International Journal of Communication, 13, 603–618. https://ijoc.org/index.php/ijoc/article/view/10304/2550
Buchanan, T., & Benson, V. (2019). Spreading disinformation on Facebook: Do trust in message source, risk propensity, or personality affect the organic reach of “fake news”? Social Media + Society, 5(4), 1–9. https://doi.org/10.1177/2056305119888654
Cabeza-Ramirez, L. J., Sanchez-Canizares, S. M., Santos-Roldan, L. M., & Fuentes-Garcia, F. J. (2022). Impact of the perceived risk in influencers’ product recommendations on their followers’ purchase attitudes and intention. Technological Forecasting and Social Change, 184, 1–16. https://doi.org/10.1016/j.techfore.2022.121997
Caramancion, K. M., Li, Y., Dubois, E., & Jung, E. S. (2022). The missing case of disinformation from the cybersecurity risk continuum: A comparative assessment of disinformation with other cyber threats. Data, 7(4), 1–18. https://doi.org/10.3390/data7040049
Carrapico, H., & Farrand, B. (2021). When trust fades, Facebook is no longer a friend: Shifting privatisation dynamics in the context of cybersecurity as a result of disinformation, populism and political uncertainty. JCMS: Journal of Common Market Studies, 59(5), 1160–1176. https://doi.org/10.1111/jcms.13175
Carratalá, A. (2023). Disinformation and sexual and gender diversity in Spain: Twitter users’ response, and the perception of LGBTQI+ organisations. Social Sciences, 12(4), 1–19. https://doi.org/10.3390/socsci12040206
Chesney, B., & Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. California Law Review, 107, 1753–1820.
Chuttur, M. (2009). Overview of the technology acceptance model: Origins, developments and future directions. Sprouts: Working Papers on Information Systems, 9(37), 1–22.
Collins, B., Hoang, D. T., Nguyen, N. T., & Hwang, D. (2021). Trends in combating fake news on social media – A survey. Journal of Information and Telecommunication, 5(2), 247–266. https://doi.org/10.1080/24751839.2020.1847379
Collins, L. M. (2007). Research design and methods. In Encyclopedia of gerontology (pp. 433–442). Elsevier. https://doi.org/10.1016/B0-12-370870-2/00162-1
Culloty, E., Suiter, J., Viriri, I., & Creta, S. (2022). Disinformation about migration: An age-old issue with new tech dimensions. In World migration report (pp. 1–23). International Organization for Migration.
Cybersecurity and Infrastructure Security Agency (CISA). (2022). Tactics of disinformation. CISA. https://cisa.gov/sites/default/files/publications/tactics-of-disinformation_508.pdf
Davis, F. D. (1985). A technology acceptance model for empirically testing new end-user information systems: Theory and results. Doctoral dissertation, Massachusetts Institute of Technology. https://researchgate.net/publication/35465050
Diekman, C., Ryan, C. D., & Oliver, T. L. (2023). Misinformation and disinformation in food science and nutrition: Impact on practice. Journal of Nutrition, 153(1), 3–9. https://doi.org/10.1016/j.tjnut.2022.10.001
Edwards, C., Beattie, A. J., Edwards, A., & Spence, P. R. (2016). Differences in perceptions of communication quality between a Twitterbot and human agent for information seeking and learning. Computers in Human Behavior, 65, 666–671. https://doi.org/10.1016/j.chb.2016.07.003
European Commission. (2020). On the European Democracy Action Plan, COM(2020) 790. EU Monitor.
European Commission. (2023). Digital services act: Application of the risk management framework to Russian disinformation campaigns. Publications Office of the European Union.
European Parliament. (2022). Resolution of 9 March 2022 on foreign interference in all democratic processes in the European Union, including disinformation (2020/2268(INI)) (2022/C 347/07). European Parliament. https://europarl.europa.eu/doceo/document/TA-9-2022-0064_EN.html
European Parliamentary Research Service. (2020, April 9). Disinformation and science: A survey of the gullibility of students with regard to false scientific news. European Parliament, Panel for the Future of Science and Technology (STOA). https://europarl.europa.eu/RegData/etudes/STUD/2020/656300/EPRS_STU(2020)656300_EN.pdf
Falk, R. F., & Miller, N. B. (1992). A primer for soft modeling. The University of Akron Press.
Fallis, D. (2014). The varieties of disinformation. In L. Floridi & P. Illari (Eds.), The philosophy of information quality (pp. 135–161). Synthese Library. https://doi.org/10.1007/978-3-319-07121-3_8
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39–50. https://doi.org/10.2307/3151312
Hair, J. F., Jr., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2017). A primer on partial least squares structural equation modeling (PLS-SEM). Sage.
Hair, J. F., Jr., Hult, G. T. M., Ringle, C. M., Sarstedt, M., Danks, N. P., & Ray, S. (2021). Partial least squares structural equation modeling (PLS-SEM) using R: A workbook. Springer Nature. https://doi.org/10.1007/978-3-030-80519-7
Jankelová, N., Joniaková, Z., & Skorková, Z. (2021). Perceived organizational support and work engagement of first-line managers in healthcare – The mediation role of feedback seeking behavior. Journal of Multidisciplinary Healthcare, 14, 3109–3123. https://doi.org/10.2147/JMDH.S326563
Jia, H., Yu, J., Feng, T., Ning, L., Cao, P., Shang, P., Gao, S., & Yu, X. (2022). Factors influencing medical personnel to work in primary health care institutions: An extended theory of planned behavior. International Journal of Environmental Research and Public Health, 19(5), 1–15. https://doi.org/10.3390/ijerph19052785
Jungherr, A., & Schroeder, R. (2021). Disinformation and the structural transformations of the public arena: Addressing the actual challenges to democracy. Social Media+ Society, 7(1), 113. https://doi.org/10.1177/2056305121988928Google Scholar
Khani Jeihooni, A., Layeghiasl, M., Yari, A., & Rakhshani, T. (2022). The effect of educational intervention based on the theory of planned behavior on improving physical and nutrition status of obese and overweight women. BMC Women’s Health, 22(1), 19. https://doi.org/10.1186/s12905-022-01-01593-5CrossRefGoogle ScholarPubMed
Knight Foundation. (2018, October). Disinformation, ‘fake news’ and influence campaigns on Twitter. Knight Foundation. https://s3.amazonaws.com/kf-site-legacy-media/feature_assets/www/misinfo/kf-disinformation-report.0cdbb232.pdfGoogle Scholar
Kock, N. (2014). Stable P value calculation methods in PLS-SEM. ScriptWarp Systems. https://researchgate.net/publication/269989910_Stable_P_value_calculation_methods_in_PLS-SEMGoogle Scholar
Krafft, P. M., & Donovan, J. (2020). Disinformation by design: The use of evidence collages and platform filtering in a media manipulation campaign. Political Communication, 37(2), 194214. https://doi.org/10.1080/10584609.2019.1686094CrossRefGoogle Scholar
Machete, P., & Turpin, M. (2020). The use of critical thinking to identify fake news: A systematic literature review. In Hattingh, M., Matthee, M., Smuts, H., Pappas, I., Dwivedi, Y. K., & Mantymaki, M. (Eds.), Responsible design, implementation and use of information and communication technology (pp. 235246). Springer International Publishing. https://doi.org/10.1007/978-3-030-45002-1_20CrossRefGoogle Scholar
Maheri, M., Rezapour, B., & Didarloo, A. (2022). Predictors of colorectal cancer screening intention based on the integrated theory of planned behavior among the average-risk individuals. BMC Public Health, 22(1), 111. https://doi.org/10.1186/s12889-022-14191-9CrossRefGoogle ScholarPubMed
Marikyan, M., & Papagiannidis, P. (2021). Unified theory of acceptance and use of technology. TheoryHub Book.Google Scholar
Mattioli, R., Malatras, A., Hunter, E. N., Biasibetti Penso, M. G., Bertram, D., & Neubert, I. (2023). Identifying emerging cyber security threats and challenges for 2030. ENISA.Google Scholar
Michaelis, M., Jafarian, J. H., & Biswas, A. (2022). The dangers of money and corporate power relating to online disinformation. In 2022 23rd IEEE international conference on mobile data management (MDM) (pp. 470475). IEEE.10.1109/MDM55031.2022.00101CrossRefGoogle Scholar
Nenadić, I. (2019). Unpacking the “European approach” to tackling challenges of disinformation and political manipulation. Internet Policy Review, 8(4), 122. https://doi.org/10.14763/2019.4.1436CrossRefGoogle Scholar
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. McGraw-Hill.Google Scholar
Ó Fathaigh, R., Helberger, N., & Appelman, N. (2021). The perils of legally defining disinformation. Internet Policy Review, 10(4), 125. https://doi.org/10.14763/2021.4.1584CrossRefGoogle Scholar
Olan, F., Jayawickrama, U., Arakpogun, E. O., Suklan, J., & Liu, S. (2022). Fake news on social media: The impact on society. Information Systems Frontiers, 26, 443–458. https://doi.org/10.1007/s10796-022-10242-zCrossRefGoogle Scholar
Park, E. S., & Park, M. S. (2020). Factors of the technology acceptance model for construction IT. Applied Sciences, 10(22), 115. https://doi.org/10.3390/app10228299CrossRefGoogle Scholar
Pérez-Escolar, M., Lilleker, D., & Tapia-Frade, A. (2023). A systematic literature review of the phenomenon of disinformation and misinformation. Media and Communication, 11(2), 7687. https://doi.org/10.17645/mac.v11i2.6453CrossRefGoogle Scholar
Pierri, F., Artoni, A., & Ceri, S. (2020). Investigating Italian disinformation spreading on Twitter in the context of 2019 European elections. PLOS One, 15(1), 123. https://doi.org/10.1371/journal.pone.0227821CrossRefGoogle ScholarPubMed
Rana, M. S., Nobi, M. N., Murali, B., & Sung, A. H. (2022). Deepfake detection: A systematic literature review. IEEE Access, 10, 2549425513.10.1109/ACCESS.2022.3154404CrossRefGoogle Scholar
Ringle, C. M., Hair, J. F., & Sarstedt, M. (2014). PLS-SEM: Looking back and moving forward. Long Range Planning, 47(3), 132137. https://doi.org/10.1016/j.lrp.2014.02.008Google Scholar
Romero-Galisteo, R.-P., Gonzalez-Sanches, M., Galvez-Ruiz, P., Palomo-Carrion, R., Casuso-Holgado, M. J., & Pinero-Pinto, E. (2022). Entrepreneurial intention, expectations of success and self-efficacy in undergraduate students of health sciences. BMC Medical Education, 22(1), 17. https://doi.org/10.1186/s12909-022-03731-xCrossRefGoogle ScholarPubMed
Sabi, H. M., Uzoka, F. M. E., Langmia, K., & Njeh, F. N. (2016). Conceptualizing a model for adoption of cloud computing in education. International Journal of Information Management, 36(2), 183191. https://doi.org/10.1016/j.ijinfomgt.2015.11.010CrossRefGoogle Scholar
Savari, M., Mombeni, A. S., & Izadi, H. (2022). Socio-psychological determinants of Iranian rural households’ adoption of water consumption curtailment behaviors. Scientific Reports, 12(1), 112. https://doi.org/10.1038/s41598-022-17560-xCrossRefGoogle ScholarPubMed
Schünemann, W. J. (2022). A threat to democracies?: An overview of theoretical approaches and empirical measurements for studying the effects of disinformation. In Cavelty, M. D. & Wenger, A. (Eds.), Cyber security politics (pp. 3247). Routledge.10.4324/9781003110224-4CrossRefGoogle Scholar
Shirahada, K., & Zhang, Y. (2022). Counterproductive knowledge behavior in volunteer work: Perspectives from the theory of planned behavior and well-being theory. Journal of Knowledge Management, 26(11), 2241. https://doi.org/10.1108/JKM-08-2021-0612CrossRefGoogle Scholar
Statista. (2023). Number of disinformation and pro-Russian Twitter posts in Poland 2023. Statista. https://statista.com/statistics/1365122/number-of-disinformation-and-pro-russian-twitter-posts-poland/Google Scholar
Tenove, C., & Tworek, H. J. S. (2019). Online disinformation and harmful speech: Dangers for democratic participation and possible policy responses. Journal of Parliamentary and Political Law, 13, 215232.Google Scholar
Ternovski, J., Kalla, J., & Aronow, P. (2022). The negative consequences of informing voters about deepfakes: Evidence from two survey experiments. Journal of Online Trust and Safety, 1(2), 116. https://doi.org/10.54501/jots.v1i2.28CrossRefGoogle Scholar
Trend Micro. (2020). Malicious uses and abuses of artificial intelligence. United Nations Interregional Crime & Justice Research Institute. https://unicri.it/sites/default/files/2020-11/AI%20MLC.pdfGoogle Scholar
Uyheng, J., Moffitt, J. D., & Carley, K. M. (2022). The language and targets of online trolling: A psycholinguistic approach for social cybersecurity. Information Processing and Management, 59(5), 115. https://doi.org/10.1016/j.ipm.2022.103012CrossRefGoogle Scholar
Vaccari, C. & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media and Society, 6 (1), 113. https://doi.org/10.1177/2056305120903408Google Scholar
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425478. https://doi.org/10.2307/30036540CrossRefGoogle Scholar
Venkatesh, V., Thong, J. Y. L., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157178. https://doi.org/10.2307/41410412CrossRefGoogle Scholar
Wardle, C., & Derakhshan, H. (2017, September 27). Information disorder: Toward an interdisciplinary framework for research and policy making. Council of Europe. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277cGoogle Scholar
Watson, A. (2021). Main sources of disinformation as ranked by journalists worldwide as of June 2020. Statista. https://statista.com/statistics/1249671/journalists-cite-sources-disinformation-worldwide/Google Scholar
Weikmann, T., & Lecheler, S. (2022). Visual disinformation in a digital age: A literature synthesis and research agenda. New Media & Society, 25(12), 36963713. https://doi.org/10.1177/14614448221141648.CrossRefGoogle Scholar
Zhang, H., Shi, Z., Chen, J., & Zhang, Z. (2022). Understanding combined health and business risk behavior: Small tourism firm owners reopening amid Covid-19 in Pingyao, China. Behavioral Sciences (Basel), 12(10), 118. https://doi.org/10.3390/bs12100358Google ScholarPubMed
Figure 5.1 Conceptual model.
Table 5.1a
Table 5.1b
Table 5.1c
Table 5.2 Construct reliability and validity – preliminary model estimated (sample size: 200 records from Poland)
Table 5.3 Construct reliability and validity – secondary model estimated (sample size: 200 records from Poland)
Figure 5.2 The final model with estimated coefficients (sample size: 200 records from Poland).
Table 5.4 PLS algorithm R² and path coefficients (sample size: 200 records from Poland)
Table 5.5 Bootstrapping path coefficients for the final model (sample size: 200 records from Poland)
Table 5.6 Construct reliability and validity – preliminary model estimated (sample size: 165 records from Romania)
Figure 5.3 The final model with estimated coefficients (sample size: 165 records from Romania).
Table 5.7 Construct reliability and validity – secondary model estimated (sample size: 165 records from Romania)
Table 5.8 PLS algorithm R² and path coefficients (sample size: 165 records from Romania)
Table 5.9 Bootstrapping path coefficients for the final model (sample size: 165 records from Romania)
