Research Article
This study examines how AI integration affects English learners psychologically and cognitively. It finds AI use increases anxiety (FoMO), harming reading comprehension, while digital burnout weakens the link between comprehension and vocabulary. The goal is to highlight the need for balanced AI use to support learning.
This study examines how artificial intelligence (AI) integration in language learning influences psychological and cognitive outcomes among English as a Foreign Language (EFL) learners. Specifically, it investigates the effects of perceived AI dependency, ethical concerns, and generative AI usage on Fear of Missing Out (FoMO), reading comprehension (measured via a reading strategies scale), and vocabulary acquisition, while considering the moderating role of digital burnout. A cross-sectional survey design was employed with n = 450 EFL learners, and data were analyzed using partial least squares structural equation modeling (PLS-SEM) to assess both direct and mediated relationships. The findings reveal that AI-related factors significantly predict higher FoMO, which in turn negatively impacts reading comprehension. While reading comprehension strongly supports vocabulary acquisition, digital burnout weakens this relationship. The study highlights the dual nature of AI in education, facilitating learning outcomes while simultaneously contributing to psychological strain. These results underscore the importance of balanced AI integration in language learning, emphasizing the need for strategies that mitigate digital anxiety and burnout. Theoretical implications extend research on technology-mediated learning by identifying FoMO as a critical affective pathway in AI-driven educational contexts. Practical recommendations include incorporating digital wellness practices alongside AI tools to optimize learning benefits while minimizing adverse psychological effects.
The rapid integration of Generative Artificial Intelligence (AI) into education has transformed traditional language learning paradigms, particularly in English as a Foreign Language (EFL) contexts1. Tools such as AI-powered vocabulary assistants, automated writing feedback systems, and adaptive learning platforms promise enhanced efficiency and personalized instruction2. However, this technological shift has also introduced psychological and pedagogical challenges, including perceived AI dependency, digital burnout, and ethical concerns, which may mediate learning outcomes3. One understudied yet critical phenomenon in this domain is the Fear of Missing Out (FoMO). In this psychological state, learners feel compelled to adopt AI tools due to social comparison pressures, potentially at the expense of deep cognitive engagement4. While existing research has explored AI's role in language acquisition5, few studies have examined how Generative AI interacts with perceived dependency and FoMO to shape vocabulary acquisition, leaving a significant gap in understanding the dual forces of AI in education, both facilitative and detrimental6,7.
A review of the literature reveals three key gaps. First, studies on AI in EFL learning have predominantly focused on performance outcomes (e.g., test scores, fluency improvements) rather than the psychological mechanisms driving AI adoption8. For instance, while AI's efficiency in vocabulary retention is well-documented, little is known about how perceived dependency, the belief that one cannot learn effectively without AI, impacts motivation and long-term retention9. Second, although FoMO has been extensively studied in social media contexts, its role in AI-mediated learning environments remains unexplored. Given that AI tools often provide real-time, socially visible feedback (e.g., peer benchmarking features), learners may experience heightened anxiety about falling behind, exacerbating reliance on AI10. Third, while ethical concerns about AI (e.g., data privacy, algorithmic bias) are widely acknowledged11, their moderating effect on dependency and learning efficacy has not been empirically tested in EFL settings. These gaps collectively underscore the need for a holistic investigation that bridges technology acceptance, motivational psychology, and cognitive learning theories.
This study addresses these gaps by proposing a novel conceptual model that examines (a) how Generative AI usage, perceived AI dependency, and perceived AI ethical concerns influence FoMO among EFL learners; (b) the mediating role of FoMO in linking these three independent variables to EFL students' reading comprehension; and (c) the moderating effect of digital burnout on the relationship between reading comprehension and vocabulary acquisition. The proposed research contributes a comprehensive theoretical framework for studying the effects of AI dependency on intrinsic motivation and learner autonomy through the lens of Self-Determination Theory (SDT)12, the psychological mechanisms behind FoMO in AI-mediated learning environments through Social Comparison Theory13, and the effects of AI dependency on cognitive load through Cognitive Load Theory (CLT)14.
The theoretical contributions of this study are threefold. First, it extends SDT by conceptualizing AI dependency as a potential threat to learner autonomy, challenging the assumption that AI universally enhances motivation. Second, it applies Social Comparison Theory to AI-mediated learning, revealing how FoMO transforms from a social-media phenomenon into a driver of educational tool adoption15. Third, it refines CLT by demonstrating that AI's cognitive benefits (reduced extraneous load) may be counteracted by dependency-induced germane load16, offering a nuanced view of AI's impact on memory encoding. These theoretical advancements provide a foundation for future research on the psychology of AI-assisted learning.
To ensure methodological rigor and practical applicability, several key considerations guided the design. The target population of Chinese university EFL learners was selected due to their high exposure to and institutional integration of AI tools, providing a relevant context for examining AI dependency. Construct boundaries are clearly delineated using validated, multi-item scales for each latent variable, preventing conceptual overlap. The selection of these specific scales was justified by their established reliability and their direct operationalization of the complex psychological and behavioral constructs under investigation (multidimensional ethical concerns, learning-specific digital burnout). For the planned mediation and moderation analyses, we acknowledge that the cross-sectional data, while sufficient for predictive modeling via PLS-SEM, limit definitive causal inference for the mediated and moderated pathways. The sample size of 465 far exceeds the minimum requirements for statistical power in both regression and PLS-SEM, adhering to the '10-times rule' for the maximum number of structural paths, thus ensuring stable and generalizable parameter estimates for the proposed model.
The study's focal constructs are defined as follows: Perceived AI Dependency refers to a learner's belief in their reliance on AI tools for efficiency and task completion. Perceived AI Ethical Concerns encompass apprehensions regarding algorithmic transparency, fairness, safety, and accountability. Generative AI Usage denotes the frequency and extent of employing AI models for generating text or images. Fear of Missing Out (FoMO) is the anxiety that others might be having rewarding experiences from which one is absent, here contextualized to AI-driven learning. Digital Burnout is the state of emotional and mental exhaustion resulting from prolonged digital engagement. Finally, the learning outcomes are captured by Reading Comprehension, the ability to understand and employ strategies with written material, and Vocabulary Acquisition, the self-perceived effectiveness in learning and using new words.
The structure of this article is as follows. Section 1 covers the background, significance, research gaps, and contributions of the present study. Section 2 reviews prior work on AI in EFL learning, dependency, and FoMO, synthesizing key debates. Section 3 details the methodology, including the cross-sectional survey design and the measures of perceived dependency, FoMO, reading comprehension, and vocabulary acquisition. Section 4 presents the findings, including mediation/moderation analyses. Section 5 discusses implications for theory, pedagogy, and AI design, while Section 6 concludes with limitations and future directions.
This study was conducted in compliance with the ethical guidelines for human subjects research at Yangzhou University. Informed consent was obtained from all participants prior to their involvement (electronically for the online mode), with clear information on the study's purpose, survey-based procedures, data usage and storage, the voluntary nature of participation, and the right to withdraw at any stage without consequence. Data collection was non-invasive and survey-based, posing minimal risk to participants; potential risks such as survey fatigue were mitigated through concise questionnaire design. No personally identifiable information was collected, responses are stored securely on password-protected servers, and findings are reported in aggregate form to prevent individual identification, ensuring participant confidentiality and data anonymization throughout the study.
Literature Review
Perceived AI Dependency, Perceived AI Ethical Concerns, and Fear of Missing Out (FoMO)
The dynamic incorporation of Artificial Intelligence (AI) into education has dramatically reshaped language learning, especially among English as a Foreign Language (EFL) learners17. AI-based intelligent tutoring systems, automatic writing evaluation, and adaptive learning platforms provide efficient, individualized learning experiences18. Nevertheless, this technological development has brought psychological issues, such as escalated Fear of Missing Out (FoMO), a form of social anxiety arising from the fear of being excluded from rewarding experiences19. Research findings suggest that perceived AI dependency and perceived AI ethical concerns can reinforce FoMO levels among EFL learners and alter their future interactions with AI-based education.
In addition, as AI finds its place in the language teaching process, students can become dependent on these tools to fix their grammar and vocabulary and improve their pronunciation20. This dependency can create psychological pressure, as students may feel they must use AI tools consistently to avoid falling behind academically. Research on digital dependency concludes that overreliance on technology creates anxiety driven by the fear of losing access to such devices21. In EFL learning, students who believe AI is necessary may experience FoMO because they assume fellow students are harnessing more of such technologies22. This phenomenon is consistent with social comparison theory23, which postulates that people evaluate their progress against that of others. When AI-enhanced learning is perceived as a competitive signal, students may fear missing out on academic success unless they maximize their use of such tools24. Moreover, the perpetual connectivity fostered by AI tools can blur the boundary between education and entertainment, reinforcing habitual usage behaviours that further intensify FoMO25.
H1: Perceived AI dependency positively influences Fear of Missing Out (FoMO) among EFL learners.
H2: Perceived AI ethical concerns positively influence Fear of Missing Out (FoMO) among EFL learners.
Generative AI (Use of ChatGPT) and Fear of Missing Out (FoMO)
The recent surge in the popularity of generative AI (e.g., ChatGPT) in education has created new psychological conditions in the language learning process26,27. Recent research has identified that the use of such advanced AI tools may significantly transform learners' emotional states, particularly Fear of Missing Out (FoMO)28. The more generative AI provides instantaneous feedback, generates learning material, and simulates conversation practice, the more students may develop anxiety, worrying that they will be left behind if they do not employ these tools to their fullest potential29. This pattern reflects a broader trend in digital learning whereby technological advancement not only expands what is possible in learning but also exposes learning experiences to new levels of anxiety30.
The psychological effects of generative AI tools on language learners may be explained by self-determination theory, according to which competence, autonomy, and relatedness are basic psychological needs14. When students believe that their peers are achieving superior results with the help of AI, their sense of competence is threatened, which leaves FoMO unaddressed and exacerbates it31. This impact can be especially severe in challenging academic settings, where students constantly compare their progress and abilities32. Moreover, unlimited access to generative AI tools may encourage constant use, as learners fear falling behind and losing significant progress.
FoMO also emerges from the social dynamics of AI implementation in learning communities33. Once generative AI becomes normalized in an educational setting, it is likely to create social pressure to align with the new digital practices encouraged in that environment34. Such pressure may be especially strong in foreign language learning, where AI can help with writing, grammar, and communication with little apparent effort from the learner35. The prospect of being left out of this transformation may motivate compulsive usage patterns, even among learners who would otherwise prefer more conventional ways of learning36. New studies stress the need to understand in more detail how learner characteristics mediate the relationship between generative AI use and FoMO. Factors such as digital literacy, self-regulation skills, and learning motivation can determine whether AI tools become empowering resources or sources of anxiety37. Based on the above literature, the following hypothesis is proposed:
H3: Increased Generative AI Usage positively influences Fear of Missing Out (FoMO) among EFL learners.
Fear of Missing Out (FoMO), Reading Comprehension, and Vocabulary Acquisition
The psychological concept of Fear of Missing Out (FoMO) has become increasingly influential in online learning behaviors, revealing a complex relationship with second language learning and academic outcomes38. Recent research shows that FoMO may have differentiated effects across dimensions of language learning39, especially reading comprehension and vocabulary acquisition40. This relationship appears to be driven by cognitive load41 and attentional control mechanisms42, because individuals with high FoMO show distinct patterns of interaction with textual material.
Empirical studies indicate that a moderate degree of FoMO increases engagement with reading materials when learners perceive reading as a socially and academically worthwhile pursuit43. Excessive FoMO, however, is associated with incomplete reading and shallow information processing44, which can degrade comprehension. This effect appears more pronounced in digital reading environments, where notifications and social media entrapment increase attentional disruption45. FoMO has a more complicated relationship with vocabulary acquisition: although exposure to new lexical items may be enhanced through increased engagement46,47, retention is harmed by compulsive learning behaviours.
Emerging evidence suggests cultural and individual variation in these relationships. Learners from collectivist cultures appear more susceptible to the negative consequences of FoMO on reading comprehension48, possibly because of their greater tendency toward social comparison49. Conversely, FoMO experiences have little impact on comprehension and vocabulary learning among learners with strong self-regulation strategies50. Based on the above literature, the following hypotheses are proposed:
H4: Fear of Missing Out (FoMO) negatively influences Reading Comprehension.
H5: Reading Comprehension positively influences Vocabulary Acquisition.
Moderating Effect of Digital Burnout in Learning
Digital burnout is an emerging issue with psychological and social implications for language acquisition51,52. It denotes a state of emotional and cognitive exhaustion, accompanied by diminished motivation, that results from prolonged technology-mediated academic learning, and it adversely affects the cognitive capacities required for language processing53. Neurocognitive studies indicate that burnout-related fatigue interferes with working memory capacity54, a key component of both vocabulary memory and reading55. This mental fatigue may underlie emerging evidence of weak semantic encoding when digitally burned-out learners perform vocabulary learning tasks56.
Digital burnout manifests uniquely in EFL contexts due to the distinct cognitive and affective demands of language acquisition. Unlike general academic burnout, EFL digital burnout is exacerbated by the constant cognitive load of processing unfamiliar phonology, syntax, and semantics through digital interfaces, which can overwhelm working memory more severely than content-based learning57. Furthermore, the affective filter is heightened by the performative nature of language practice on AI platforms, where learners may experience anxiety from immediate, algorithm-driven corrections and the pressure to achieve native-like proficiency58. This combination of intensified cognitive load and unique socio-affective pressure in digitally mediated language environments creates a specific burnout profile that directly impedes the nuanced processes of vocabulary encoding and reading comprehension59.
Digital burnout moderates the relationship between reading comprehension and vocabulary development in subtle ways. Although incidental vocabulary knowledge is typically gained through contextual inference when students have strong reading skills60, the positive correlation between vocabulary and reading performance weakens in students with high burnout61. Eye-tracking measurements indicate that burned-out learners show decreased fixation times on new lexical items during reading62, implying reduced attentional focus in vocabulary-dense contexts. This is consistent with attentional control theory, which holds that cognitive fatigue impairs the executive functions involved in simultaneous comprehension and lexical processing. Notably, the moderating role of digital burnout may depend on individual differences in self-regulation and the design of learning environments63. Learners with strong metacognitive strategies exemplify resilience to the adverse effects of burnout, maintaining an advantage in vocabulary even when faced with comprehension impediments. Based on the above literature, the following hypotheses are proposed:
H6: Digital Burnout in Learning negatively influences Vocabulary Acquisition.
H7: Digital Burnout in Learning negatively moderates the relationship between Reading Comprehension and Vocabulary Acquisition, such that higher burnout weakens the positive effect of comprehension on vocabulary gains.
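H7 predicts a negative interaction term in a moderated regression of vocabulary acquisition on reading comprehension and digital burnout. The sketch below, using simulated standardized scores rather than the study's actual data (all effect sizes are hypothetical), shows how such a moderation effect would be estimated and interpreted:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# Simulated standardized scores (hypothetical, for illustration only)
comprehension = rng.normal(size=n)
burnout = rng.normal(size=n)

# Data-generating process mirroring H6/H7: a negative main effect of
# burnout and a negative comprehension x burnout interaction
vocab = (0.5 * comprehension - 0.3 * burnout
         - 0.25 * comprehension * burnout
         + rng.normal(size=n))

# Moderated regression: vocabulary ~ comprehension + burnout + interaction
X = np.column_stack([np.ones(n), comprehension, burnout,
                     comprehension * burnout])
beta, *_ = np.linalg.lstsq(X, vocab, rcond=None)
b0, b_comp, b_burn, b_int = beta
print(f"comprehension slope: {b_comp:.2f}, interaction: {b_int:.2f}")
```

A negative interaction coefficient means the comprehension-to-vocabulary slope shrinks as burnout rises, which is exactly the pattern H7 describes; in practice, predictors would be mean-centered before forming the product term.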
Mediating Effect of Fear of Missing Out (FoMO)
The introduction of artificial intelligence into education has added complex psychological mechanisms that shape reading comprehension outcomes64. Emerging data indicate that Fear of Missing Out (FoMO) is a vital mediator of the link between learner engagement with AI technologies and reading achievement65. This mediation effect appears especially prominent in settings where AI tools are promoted as means of more efficient learning, generating psychological pressures that can end up hindering higher-order thinking66. Recent studies have accordingly focused on how perceived AI dependency fosters anxiety about autonomous learning capabilities, as learners come to recognize an irrational reliance on AI assistance67,68. This anxiety converts into FoMO when students compare what they can do unaided with AI-enhanced results69, diverting cognitive resources away from deep comprehension processes.
Ethical concerns about AI use also feed into FoMO pathways: learners experiencing moral conflict over AI use show increased vigilance toward the actions of their peers, creating attentional interference between ethical deliberation and reading comprehension70. This is reflected in eye-tracking studies, where readers make more regressive eye movements while reading65. The mediation effect appears especially pronounced when ethical issues remain unresolved, leaving a persistent cognitive burden that impairs reading71. Generative AI tools, with their ability to produce outputs almost instantly, create new forms of this mediation. Based on the above literature, the following hypotheses are proposed:
H8: Fear of Missing Out (FoMO) mediates the relationship between Perceived AI Dependency and Reading Comprehension
H9: Fear of Missing Out (FoMO) mediates the relationship between Perceived AI ethical concerns and Reading Comprehension
H10: Fear of Missing Out (FoMO) mediates the relationship between Generative AI Usage and Reading Comprehension
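Mediation hypotheses such as H8-H10 are commonly tested via the product of the a-path (predictor to FoMO) and b-path (FoMO to comprehension) coefficients, with a percentile bootstrap confidence interval, which is also how PLS-SEM software evaluates indirect effects. The sketch below illustrates the procedure on simulated data; the variable names and effect sizes are hypothetical, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 450

# Hypothetical standardized scores: x = perceived AI dependency,
# m = FoMO, y = reading comprehension (simulated for illustration)
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)              # a-path: dependency -> FoMO
y = -0.35 * m + 0.1 * x + rng.normal(size=n)  # b-path: FoMO -> comprehension

def indirect_effect(x, m, y):
    """a*b product-of-coefficients estimate of the indirect effect."""
    a = np.polyfit(x, m, 1)[0]                       # slope of M on X
    Xmat = np.column_stack([np.ones_like(x), m, x])  # Y on M, controlling X
    b = np.linalg.lstsq(Xmat, y, rcond=None)[0][1]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect = {indirect_effect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval that excludes zero (here, entirely negative) would support mediation: dependency raises FoMO, which in turn lowers comprehension.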
Theoretical Framework
This study is principally guided by Self-Determination Theory (SDT)12, which provides a cohesive framework for understanding how AI tools impact the fundamental psychological needs underpinning learner motivation and cognitive engagement. According to the model, Perceived AI Dependency directly jeopardizes the primary SDT needs of autonomy and competence and may undermine intrinsic motivation to learn the language by fostering excessive dependence on external regulation for goals such as reading comprehension and vocabulary acquisition. In this context, Fear of Missing Out (FoMO) is understood through the need for relatedness, as elaborated by the complementary Social Comparison Theory72. The social norm that develops among peers through extensive AI use triggers FoMO, an anxiety about missing out that subsequently increases dependency and further lowers the motivation to learn independently.
The cognitive consequences of this motivational dynamic are described by Cognitive Load Theory (CLT)73. Although generative AI can enhance learning by minimizing extraneous cognitive load (e.g., through instant definitions), over-reliance on it can be counterproductive for vocabulary acquisition by reducing the germane cognitive processing needed to encode lexical items deeply and store them in long-term memory. This is further aggravated by Digital Burnout, which drains cognitive resources and undermines the link between comprehension and learning. Thus, SDT offers the general account of the motivational routes, CLT elaborates the ensuing cognitive processes, and Social Comparison Theory describes the social-affective stimulus (FoMO); together, they provide a composite explanation of AI's dual influence on language learning.
Research Design
This study adopts a quantitative empirical research design to examine the relationships between AI-related factors (perceived dependency, ethical concerns, generative AI usage), Fear of Missing Out (FoMO), digital burnout, and language learning outcomes (reading comprehension and vocabulary acquisition). A cross-sectional survey-based approach is employed, allowing for the collection of data at a single point in time to assess correlations and mediation/moderation effects. The design is particularly suited for testing the proposed theoretical model, as it enables the use of structural equation modelling (SEM) to analyse complex relationships between latent constructs.
Population and sampling
The target population consists of EFL learners in Chinese universities, a context where AI-assisted language learning tools are increasingly integrated into curricula. A convenience sampling technique was used to recruit 465 participants, with the sample size determined through a priori power analysis using G*Power 3.174. For the planned multiple regression analyses (α = 0.05, power = 0.95, medium effect size f² = 0.15), the analysis indicated a minimum required sample size of 166 participants, confirming that our sample of 465 provides sufficient statistical power. The sample size is also adequate for SEM, satisfying the PLS-SEM requirement that the minimum number of observations be 10 times the maximum number of structural paths directed at any construct in the model75. Participants were recruited from several universities across various geographical regions of China to enhance generalizability, and demographic variables (age, gender, English proficiency level, and frequency of AI tool use) were recorded for use as control variables.
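The G*Power calculation reported above can be reproduced programmatically from the noncentral F distribution, where the noncentrality parameter for the overall regression F test is λ = f² × N. The sketch below hypothetically assumes seven predictors in the largest regression; the exact minimum N depends on that predictor count (which the text does not specify), so the result may not match the reported 166 exactly:

```python
from scipy.stats import f as f_dist, ncf

def regression_power(n, n_predictors, f2=0.15, alpha=0.05):
    """Power of the overall F test in multiple regression."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    crit = f_dist.ppf(1 - alpha, df1, df2)      # critical F under H0
    return 1 - ncf.cdf(crit, df1, df2, f2 * n)  # noncentrality = f^2 * N

def min_sample_size(n_predictors, f2=0.15, alpha=0.05, power=0.95):
    """Smallest N whose overall F test reaches the target power."""
    n = n_predictors + 2
    while regression_power(n, n_predictors, f2, alpha) < power:
        n += 1
    return n

# Hypothetical: seven predictors in the largest regression model
print(min_sample_size(7))
```

Running this for different predictor counts shows how the required N grows with model complexity, which is why reporting the assumed number of predictors alongside f², α, and power aids reproducibility.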
To ensure the analytical integrity of our findings, multiple quality control measures were implemented throughout the data collection and preparation process. The initial survey distribution yielded an 87% response rate. Following data collection, rigorous screening procedures were applied: incomplete surveys with over 10% missing data were excluded listwise, and any remaining sporadic missing values at the item level were addressed using the Expectation-Maximization imputation algorithm in SPSS to preserve statistical power while minimizing bias. Additionally, embedded attention-check questions identified and facilitated the removal of inattentive respondents. These comprehensive procedures, combined with screening for multivariate outliers, ensured the final analytical dataset (N=450) met high standards of reliability for subsequent structural equation modeling.
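The screening rules described above (listwise exclusion beyond 10% missingness, attention-check filtering, then imputation of sporadic gaps) can be sketched as follows on a toy dataset; the column names are hypothetical, and simple per-item mean imputation stands in for SPSS's Expectation-Maximization algorithm:

```python
import numpy as np
import pandas as pd

# Toy survey export; column names are hypothetical, not the real codebook
raw = pd.DataFrame({
    "fomo_1": [4, 5, np.nan, 3, 2],
    "fomo_2": [3, np.nan, np.nan, 4, 2],
    "dep_1":  [5, 4, np.nan, 2, 1],
    "attn_1": [1, 1, 1, 5, 1],  # attention check: instructed answer is 1
})
item_cols = ["fomo_1", "fomo_2", "dep_1"]

# Rule 1: exclude respondents with more than 10% missing item responses
keep = raw[item_cols].isna().mean(axis=1) <= 0.10
# Rule 2: exclude respondents who fail the embedded attention check
keep &= raw["attn_1"] == 1
clean = raw.loc[keep].copy()

# Remaining sporadic gaps would go to EM imputation in SPSS; simple
# per-item mean imputation is a stand-in here
clean[item_cols] = clean[item_cols].fillna(clean[item_cols].mean())
print(len(clean))  # prints 2
```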
Data collection procedure
Data were collected through a mixed-mode approach, utilizing both online and on-site methods to maximize participation and accessibility. The research team collaborated with course instructors in the targeted universities, who distributed the unique survey link directly to students through official WeChat class groups. This method leveraged trusted channels to improve response rates. For on-site data collection, the lead researcher and trained assistants visited designated classrooms during scheduled sessions. After a brief introduction to the study, they distributed the paper-based survey packets. The procedure for obtaining informed consent was tailored to each mode. In the online version, participants were presented with a digital consent form on the first page. They were required to select "I have read and understood the information above, and I voluntarily agree to participate" before the questionnaire items were activated. For on-site participants, a detailed information sheet was attached to the survey packet, and written consent was obtained by having participants sign a physical consent form before receiving the questionnaire. The complete questionnaire was designed to be completed within 15-20 min to minimize respondent fatigue. To ensure data quality, two attention-check items ("Please select 'Strongly Disagree' for this statement") were embedded within the survey. Incomplete submissions with more than 10% missing data and those failing the attention checks were automatically flagged by the system and subsequently excluded. The data collection period spanned four weeks, during which two reminder messages were sent via the initial distribution channels. Following collection, the dataset underwent a rigorous cleaning process, including checks for missing values, univariate and multivariate outliers, and violations of normality assumptions, prior to statistical analysis.
Measurements and scale adaptation
Perceived AI dependency
Perceived AI dependency was measured using a 5-item scale adapted from Morales-García et al.76. The scale measures how dependent people are on artificial intelligence in their daily activities and decision-making. The items reflect several facets of dependency, including the perceived need to use AI tools to be efficient and the inability to work without AI support.
Perceived AI ethical concerns
Ethical concerns were assessed using a multidimensional scale developed by Kim and Ko77. To strengthen the validity and relevance for an EFL population, the scale's four dimensions were framed within the specific context of AI-driven language learning. This scale consists of four critical dimensions: transparency (8 items), fairness (5 items), safety (5 items), and responsibility (8 items). Each dimension evaluates different ethical challenges, such as the clarity of AI decision-making processes, biases in AI algorithms, risks associated with AI usage, and accountability in AI deployment. This focused operationalization ensures the items map directly onto the ethical dilemmas EFL learners are likely to encounter.
Generative AI usage
Generative AI usage was operationalized using an 8-item scale from Abbas et al.78. This scale measures the level and regularity of use of generative AI models, including text- and image-generation tools. The items cover academic, professional, and personal applications, helping to capture how learners incorporate these tools into their daily routines. This comprehensive approach allows us to capture the holistic integration of Generative AI into the learners' ecosystem, which is a prerequisite for understanding its broader psychological and cognitive effects.
Fear of Missing Out (FoMO)
Fear of Missing Out was measured using the 17-item scale by Mazlum and Atalay79. This scale was selected for its nuanced assessment of the anxiety that arises from the sense that peers are having rewarding experiences from which one is absent. It is particularly appropriate for this study because it captures elements highly relevant to digitally mediated learning environments, including the compulsive need to stay connected online and the fear of missing superior learning strategies, resources, or peer advancements facilitated by AI, thereby directly linking AI usage to psychosocial anxiety.
Reading comprehension
Reading comprehension was measured using the reading strategies inventory adapted from Mokhtari et al.80, which includes three dimensions: Global Reading Strategies (5 items), Problem-Solving Strategies (5 items), and Support Reading Strategies (5 items). This scale evaluates how individuals approach and understand written material, focusing on their ability to synthesize information, resolve comprehension difficulties, and utilize external aids to enhance understanding.
Digital burnout in learning
Digital burnout in learning was measured using a scale adapted from Erten and Özdemir81, consisting of three dimensions: Digital Aging (12 items), Digital Deprivation (6 items), and Emotional Exhaustion (6 items). This scale assesses the negative psychological effects of prolonged digital engagement, including fatigue from excessive screen time, feelings of disconnection, and mental exhaustion due to online learning demands.
Vocabulary acquisition
Vocabulary acquisition was evaluated using a 4-item scale from Li et al.82. The scale measures the effectiveness of vocabulary learning strategies, i.e., how people acquire, remember, and use new words in various contexts. The items assess self-reported competence and confidence in vocabulary use.
Data analysis plan
The study adopts a comprehensive, two-phase analytical strategy to examine the hypothesized relationships rigorously. Initial data screening and preliminary analyses are conducted using SPSS 28, focusing on descriptive statistics to assess data distributions, scale reliability analyses (Cronbach's alpha), and bivariate correlations to examine preliminary associations between key constructs. All Likert-scale items were coded on a 5-point interval scale (1=Strongly Disagree to 5=Strongly Agree). This foundational analysis ensures data quality and provides essential insights into the basic characteristics of the dataset before proceeding to more complex modeling. For the primary analysis, SmartPLS 4 is employed to implement Partial Least Squares Structural Equation Modeling (PLS-SEM), selected for its robust handling of complex predictive models involving latent variables83. The analysis begins with a thorough evaluation of the measurement model, where confirmatory factor analysis establishes the psychometric properties of the scales. Key assessments include tests of internal consistency (composite reliability), convergent validity (average variance extracted), and discriminant validity (Fornell-Larcker criterion and heterotrait-monotrait ratio). These steps ensure the constructs are both theoretically sound and empirically distinct.
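The reliability step can be illustrated with a direct computation of Cronbach's alpha from item scores; the data below are toy values, not the study's dataset:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals).

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per item."""
    k = len(item_scores)
    item_var_sum = sum(variance(item) for item in item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]  # per-respondent total scores
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Three toy items answered by five respondents on a 1-5 scale
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(round(cronbach_alpha(items), 2))  # 0.89
```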
Subsequently, the structural model is examined to test the hypothesized relationships among constructs. Path coefficients are analyzed for their magnitude, direction, and statistical significance, while effect sizes (f²) are computed to assess substantive impact. The model's predictive relevance is evaluated using the Stone-Geisser Q² statistic, with blindfolding procedures to validate the model's capability to predict endogenous variables. For mediation analyses, the study employs the Preacher and Hayes84 bootstrapping approach, which provides robust confidence intervals for indirect effects while controlling for potential confounding variables. Moderation effects are tested through interaction terms created using the PLS product indicator method, with simple slope analysis conducted to interpret significant interaction effects. All analyses use 5,000 bootstrap samples to generate stable estimates and ensure the robustness of the findings.
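The percentile bootstrap for an indirect effect can be sketched as follows. This is a simplified illustration with synthetic data: the full Preacher and Hayes procedure estimates the b-path while controlling for the predictor, whereas plain simple-regression slopes are used here for brevity:

```python
import random

def slope(x, y):
    """OLS slope of y regressed on x (simple regression)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def boot_indirect(x, m, y, n_boot=2000, seed=1):
    """Percentile-bootstrap 95% CI for the indirect effect a*b (simplified)."""
    rng = random.Random(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        bx, bm, by = [x[i] for i in idx], [m[i] for i in idx], [y[i] for i in idx]
        est.append(slope(bx, bm) * slope(bm, by))  # a-path times b-path
    est.sort()
    return est[int(0.025 * n_boot)], est[int(0.975 * n_boot)]

# Synthetic predictor -> mediator -> outcome data (illustrative only)
rng = random.Random(7)
x = [rng.gauss(0, 1) for _ in range(60)]
m = [0.5 * xi + rng.gauss(0, 1) for xi in x]
y = [0.6 * mi + rng.gauss(0, 1) for mi in m]
lo, hi = boot_indirect(x, m, y)
print(lo < hi)  # the percentile CI has positive width
```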
The choice of PLS-SEM over covariance-based SEM (CB-SEM) is justified by the study's primary objective of predicting endogenous variables (vocabulary acquisition) and by its complex model featuring mediating and moderating mechanisms85. Furthermore, PLS-SEM is preferred over multiple regression because it simultaneously estimates the relationships among all latent constructs in the model while accounting for measurement error. While experimental designs offer stronger causal inference, this study's cross-sectional design aims to establish predictive relationships in a real-world learning context, for which PLS-SEM is well-suited86. The outcomes of this analytical strategy are reported in the Results section below.
Descriptive statistics
The descriptive statistics offer an overview of the key variables in this study: perceived AI dependency, ethical concerns, generative AI usage, Fear of Missing Out (FoMO), reading comprehension, digital burnout, and vocabulary acquisition. The analysis reports central tendencies, dispersion, and distributional characteristics of the data, providing an understanding of participants' engagement with AI technologies and the associated psychological and cognitive outcomes. Normality and variability across constructs were assessed using the mean, standard deviation, skewness, and kurtosis. The findings reveal differing degrees of AI integration, ethical concern, and learning-related difficulty, which inform the subsequent inferential analyses.
Table 1 gives an in-depth description of the most important variables analyzed in this paper, covering central tendency, variability, distributional characteristics, and reliability. The respondents reported moderate to high levels of perceived AI dependency (M = 3.72, SD = 0.91) and AI ethical concerns (M = 4.05, SD = 0.76), with the fairness subscale scoring highest (M = 4.12). Generative AI use was moderate (M = 3.45, SD = 1.02), whereas Fear of Missing Out (FoMO) was relatively high (M = 3.88, SD = 0.95). Reading comprehension and vocabulary acquisition showed moderate means (M = 3.65 and M = 3.78, respectively), but digital burnout was high (M = 3.95, SD = 0.93), especially on the digital aging subscale (M = 4.02). Reliability was good for all variables: Cronbach's alpha ranged from 0.84 to 0.94 and composite reliability (CR) from 0.86 to 0.95, confirming the internal consistency of the scales. The skewness and kurtosis values indicate that most variables approximated normal distributions, with some showing slight negative skewness, reflecting a tendency toward higher scores.
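The distributional checks reported in Table 1 rest on standard moment formulas; a minimal sketch with toy Likert responses (not the study's data):

```python
import math

def describe(xs):
    """Mean, SD, skewness, and excess kurtosis (population formulas)."""
    n = len(xs)
    mu = sum(xs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)
    skew = sum(((x - mu) / sd) ** 3 for x in xs) / n   # third standardized moment
    kurt = sum(((x - mu) / sd) ** 4 for x in xs) / n - 3  # excess kurtosis
    return mu, sd, skew, kurt

scores = [5, 4, 4, 5, 3, 4, 5, 2, 4, 5]  # toy responses on a 5-point Likert item
mu, sd, skew, kurt = describe(scores)
print(round(mu, 2), round(skew, 2))  # 4.1 -0.91: slight negative skew, higher scores
```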
Figure 2 shows descriptive statistics for all key variables, including mean scores (with standard deviations) and reliability metrics (Cronbach's α and Composite Reliability). This dual-panel approach enables rapid comparison of central tendencies and measurement quality across constructs.
Table 2 presents the correlations between the main variables of the study, revealing both anticipated and noteworthy relationships. Perceived AI dependency is positively correlated with AI ethical concerns (r = .32, p < .01), generative AI use (r = .45, p < .01), and digital burnout (r = .38, p < .01), indicating that greater perceived reliance on AI accompanies stronger ethical concerns and heavier AI tool use. FoMO is positively related to generative AI use (r = .39, p < .01), suggesting that intense AI use can be accompanied by anxiety about missing out. Interestingly, reading strategies are moderately and positively correlated with vocabulary acquisition (r = .51, p < .01) but weakly and negatively correlated with AI dependency (r = -.12, p < .05), indicating divergent cognitive effects. Digital burnout shows positive and significant correlations with the AI-related variables (rs = .25 to .44) and a negative correlation with reading strategies (r = -.19), suggesting that digital fatigue can erode learning outcomes.
Common Method Variance (CMV) Bias
Since the research relies on self-reported measures collected through a single survey, it is essential to assess Common Method Variance (CMV) bias to safeguard the validity of the results. CMV can inflate or deflate observed relationships between variables because of the shared measurement method rather than the underlying constructs. To address this issue, we applied both procedural and statistical remedies, including the psychological separation of scale items and Harman's single-factor test.
Table 3 presents a comprehensive assessment of Common Method Variance (CMV) using five different statistical approaches, all indicating minimal bias in the study's measurements. The key findings show: (1) Harman's test reveals the first factor explains only 38.7% of variance (below the 50% threshold), (2) all VIF values fall within acceptable limits (1.12-2.89), (3) a non-significant marker variable correlation (r = .04), (4) minimal R² change (Δ = .08) in the LMF approach, and (5) only 12.3% method variance in the CLF analysis. Collectively, these results confirm that common method bias does not significantly distort the study's findings, supporting the validity of the observed relationships between constructs. The multiple-method approach strengthens this conclusion by demonstrating consistent evidence across different CMV detection techniques.
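The logic of Harman's single-factor test can be sketched as follows. As a simplification, the first unrotated factor is approximated here by the leading principal component of a toy item correlation matrix, extracted by power iteration; the matrix values are illustrative only:

```python
# If the leading component of the item correlation matrix explains more than
# 50% of total variance, a single common factor dominates and CMV is a concern.

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def leading_eigenvalue(A, iters=200):
    """Largest eigenvalue of a symmetric positive matrix via power iteration."""
    v = [1.0] * len(A)
    for _ in range(iters):
        w = matvec(A, v)
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    Av = matvec(A, v)
    return sum(a * b for a, b in zip(Av, v))  # Rayleigh quotient

# Toy 4-item correlation matrix with modest inter-item correlations
R = [
    [1.0, 0.3, 0.2, 0.1],
    [0.3, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.3],
    [0.1, 0.2, 0.3, 1.0],
]
share = leading_eigenvalue(R) / len(R)  # variance share of the first component
print(share < 0.5)  # True: below the 50% threshold
```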
Measurement model results
Establishing the validity and reliability of the measurement model is essential for assuring the accuracy of the structural relationships in this study. The measurement model analysis assesses how well the observed indicators reflect their corresponding latent constructs in terms of factor loadings, composite reliability (CR), average variance extracted (AVE), and variance inflation factors (VIF). Strong psychometric properties indicate that the constructs are distinct, internally consistent, and free of multicollinearity, providing a sound basis for testing the formulated hypotheses in the structural model.
Table 4 presents a comprehensive validation of the measurement model, demonstrating strong psychometric properties for all constructs. The results show excellent reliability with composite reliability (CR) values ranging from 0.85 to 0.95 (all exceeding the 0.7 threshold) and convergent validity with average variance extracted (AVE) values between 0.58 and 0.73 (surpassing the 0.5 benchmark). All indicator loadings are statistically significant (p<0.001) and substantial (0.81-0.92), while outer VIF values (1.28-1.62) indicate no concerning multicollinearity. The HTMT ratios (0.33-0.49) are well below the 0.85 discriminant validity threshold, and the maximum shared variance (MSV) and average shared variance (ASV) values are appropriately lower than corresponding AVE values. These results collectively confirm that the measurement model meets rigorous standards for reliability, convergent validity, and discriminant validity, establishing a solid foundation for subsequent structural model analysis. The t-values (16.78-27.89) further reinforce the statistical significance of all measured relationships.
Figure 3 illustrates the structural and measurement model tested through Partial Least Squares Structural Equation Modeling (PLS-SEM). Latent constructs are denoted by blue circles and observed indicators by yellow rectangles. Path coefficients are shown on the connecting arrows, whereas outer loadings indicate the strength of the relationships between constructs and their indicators.
Table 5 presents the Heterotrait-Monotrait (HTMT) ratio of correlations to assess discriminant validity between constructs, with all values well below the conservative threshold of 0.85 (significance levels indicated by **p<0.01 and *p<0.05). The strongest relationships emerge between Generative AI Usage and Perceived AI Dependency (0.52), Digital Burnout and FoMO (0.51), and Reading Comprehension and Vocabulary Acquisition (0.57), while weaker associations (0.05-0.22) demonstrate distinctness between unrelated constructs. The pattern of correlations aligns with theoretical expectations, showing moderate connections between technology-related factors (AI Dependency, AI Usage) and psychological states (FoMO, Digital Burnout), while maintaining appropriate separation between conceptually distinct variables. These results robustly confirm that all seven constructs in the model exhibit sufficient discriminant validity, as no HTMT values approach the 0.85 threshold and all confidence intervals (not shown but available) excluded 1.0, supporting their treatment as empirically distinct in subsequent analyses.
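The HTMT computation behind Table 5 can be sketched directly from item-level correlations; the toy scores below are illustrative, and absolute correlations are averaged as in the standard HTMT definition:

```python
def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    """Pearson correlation between two score vectors."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def htmt(items_a, items_b):
    """HTMT: mean absolute heterotrait correlation over the geometric mean
    of the mean absolute monotrait (within-construct) correlations."""
    hetero = mean([abs(pearson(a, b)) for a in items_a for b in items_b])
    mono_a = mean([abs(pearson(items_a[i], items_a[j]))
                   for i in range(len(items_a)) for j in range(i + 1, len(items_a))])
    mono_b = mean([abs(pearson(items_b[i], items_b[j]))
                   for i in range(len(items_b)) for j in range(i + 1, len(items_b))])
    return hetero / (mono_a * mono_b) ** 0.5

# Toy item scores for two constructs (five respondents each)
a1, a2 = [1, 2, 3, 4, 5], [2, 1, 3, 5, 4]
b1, b2 = [4, 2, 5, 1, 3], [5, 2, 4, 1, 3]
ratio = htmt([a1, a2], [b1, b2])
print(round(ratio, 2))  # 0.44, below the 0.85 threshold
```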
Hypotheses Testing
Hypothesis testing is a central component of this research, examining the proposed relationships among AI-related factors, psychological states, and learning outcomes in EFL learners. This section reports the path analysis results for the direct effects (H1-H7) and the mediation/moderation effects (H8-H10). The analysis uses bootstrapping (5,000 samples) to obtain robust estimates of path coefficients, significance levels, and effect sizes, providing empirical support for the theoretical framework.
Table 6 presents the results of direct hypothesis testing through path analysis, revealing several statistically significant relationships (all p < 0.001) with varying effect sizes. The strongest effects emerge for Generative AI Usage's positive influence on FoMO (β = 0.34, large effect f² = 0.22) and Reading Strategy's positive impact on Vocabulary Acquisition (β = 0.52, very large effect f² = 0.37), while AI Ethical Concerns show a weaker but still significant effect on FoMO (β = 0.15, small effect f² = 0.09). All hypothesized direct paths were supported, with FoMO demonstrating a substantial negative effect on Reading Comprehension (β = -0.31) and Digital Burnout negatively affecting Vocabulary Acquisition (β = -0.19). The tight confidence intervals (none including zero) and robust t-values (all > 4.0) confirm the reliability of these estimates, while the effect sizes (ranging from small to large) provide meaningful insight into the relative strength of each relationship within the model's nomological network.
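The reported f² effect sizes follow Cohen's formula, comparing the model's R² with and without the focal predictor; the R² values below are hypothetical:

```python
def f_squared(r2_included, r2_excluded):
    """Cohen's f2: the drop in R2 when a predictor is removed, scaled by 1 - R2."""
    return (r2_included - r2_excluded) / (1 - r2_included)

# Hypothetical R2 values; 0.02 / 0.15 / 0.35 mark small / medium / large effects
print(round(f_squared(0.45, 0.33), 2))  # 0.22
```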
Table 7 shows the results of mediation and moderation analyses, revealing significant indirect and interaction effects that deepen our understanding of the study's conceptual framework. The mediation analyses (H8-H10) demonstrate that FoMO partially mediates the negative relationships between all three AI-related factors (Dependency, Ethical Concerns, and Usage) and Reading Strategies, with the strongest indirect effect occurring for Generative AI Usage (β=-0.11, p<0.001). The moderation analysis (H7) reveals a significant buffering effect (β=-0.14, p<0.001), indicating that Digital Burnout weakens the positive relationship between Reading Strategies and Vocabulary Acquisition. All effects are statistically significant (p≤0.001) with confidence intervals excluding zero, and the partial mediation effects suggest FoMO operates alongside other mechanisms in transmitting AI-related influences to learning outcomes. These findings collectively highlight the complex interplay between technological, psychological, and cognitive factors in AI-enhanced language learning environments.
Figure 4 shows that the positive association between Reading Comprehension and Vocabulary is strongest when Digital Burnout is low (steepest blue line). As Digital Burnout increases (orange and red lines), the slope becomes flatter, indicating that the beneficial effect of Reading Comprehension on Vocabulary is buffered: it weakens as Digital Burnout rises.
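The simple-slope pattern in Figure 4 can be reproduced arithmetically; the coefficients below reuse the reported path (β = 0.52) and interaction (β = -0.14) values purely for illustration, with the moderator standardized:

```python
def simple_slope(b_predictor, b_interaction, moderator_value):
    """Conditional slope of the outcome on the predictor at a given
    (mean-centered) moderator value: b1 + b3 * W."""
    return b_predictor + b_interaction * moderator_value

b_reading, b_inter = 0.52, -0.14  # illustrative, taken from the reported model
for w in (-1.0, 0.0, 1.0):  # low (-1 SD), mean, and high (+1 SD) burnout
    print(w, round(simple_slope(b_reading, b_inter, w), 2))
# Slopes flatten from 0.66 (low burnout) to 0.38 (high burnout)
```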
Predictive Validity of Inner Model using PLS-Predict
Evaluating the predictive validity of the structural model is important for ascertaining its strength in predicting out-of-sample observations. PLS-Predict uses 10-fold cross-validation (k = 10) to generate prediction errors and compares the root mean squared error (RMSE) of the PLS-SEM model against naive benchmarks (e.g., the linear model (LM) benchmark). This assessment establishes whether the model predicts better than simple linear models, supporting its practical use by future researchers and by practitioners in real-life situations.
Table 8 shows the PLS-Predict results assessing the model's out-of-sample predictive power for key endogenous variables (FoMO, Reading Strategies, and Vocabulary Acquisition). The consistently negative RMSE differences (ranging from -0.05 to -0.06) across all indicators demonstrate that the PLS-SEM model outperforms the linear model (LM) benchmark, with statistically significant improvements (**p<0.01, *p<0.05). The Q² predict values (0.17-0.29) indicate small-to-medium predictive relevance, with Vocabulary Acquisition showing the strongest predictive power (Q²=0.27-0.29). Practical prediction accuracy is further supported by low MAE (0.55-0.74) and MAPE values (8.3%-13.1%), along with excellent prediction interval coverage (88.9%-93.0%) close to the nominal 90% level.
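The RMSE comparison and Q²predict statistic behind Table 8 reduce to simple error sums; the holdout values and predictions below are hypothetical:

```python
import math

def rmse(actual, predicted):
    """Root mean squared prediction error."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def q2_predict(actual, predicted, train_mean):
    """Q2predict: 1 - SSE of the model over SSE of the naive training-mean benchmark."""
    sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    sso = sum((a - train_mean) ** 2 for a in actual)
    return 1 - sse / sso

# Hypothetical holdout values and model predictions, for illustration only
actual = [3.0, 4.0, 5.0, 2.0, 4.0]
pls_pred = [3.2, 3.8, 4.6, 2.5, 3.9]
lm_pred = [3.5, 3.5, 4.2, 2.9, 3.6]

print(rmse(actual, pls_pred) < rmse(actual, lm_pred))  # True: PLS beats the LM benchmark
print(q2_predict(actual, pls_pred, train_mean=3.6) > 0)  # True: positive predictive relevance
```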
Figure 5 presents the PLS-Predict results for the key endogenous variables, with PLS-SEM RMSE values shown as bars and 90% prediction interval coverage shown as a line graph per indicator, grouped by construct. The dual-axis visualization allows a direct comparison of predictive accuracy and reliability across constructs; smaller RMSE values and larger coverage percentages indicate stronger predictive performance. Most notably, the Vocabulary Acquisition indicators show the lowest RMSE values and the broadest coverage among the constructs.
Data Availability
The data are uploaded as a supplementary file.

Figure 1: Research Model.

Figure 2: Descriptive statistics and reliability metrics comparison. (Top Panel) Mean scores with standard deviation error bars; (Bottom Panel) Cronbach's α and Composite Reliability (CR) for each variable.

Figure 3: Finalized measurement model using PLS-SEM.

Figure 4: The moderating effect of Digital Burnout on the relationship between Reading Comprehension and Vocabulary.

Figure 5: PLS-Predict Results: RMSE (bars) and 90% Prediction Interval Coverage (line) by Indicator, grouped by Construct.
Table 1: Descriptive Statistics of Key Variables.
Table 2: Correlation Matrix.
Table 3: Common Method Variance (CMV) Assessment.
Table 4: Results of Measurement Model.
Table 5: Discriminant Validity Assessment (HTMT Ratio with 95% Confidence Intervals).
Table 6: Direct Effects Testing Results.
Table 7: Mediation and Moderation Analysis Results.
Table 8: PLS-Predict Results for Key Endogenous Variables.
The current study examined the complex interplay between AI-related factors, psychological states, and learning outcomes among EFL learners, yielding several key findings. Our methodological approach, specifically the use of PLS-SEM, was critical for modeling these complex relationships. Unlike covariance-based SEM (CB-SEM), which is better suited for theory confirmation, PLS-SEM's predictive orientation and ability to handle complex models with mediating and moderating variables made it the optimal choice for this exploratory research85. A key methodological step that bolstered reproducibility was the rigorous two-stage analytical protocol in SmartPLS 4, beginning with the validation of the measurement model. This ensured that constructs like "Generative AI Usage" and "Reading Strategies" were empirically distinct and reliable before testing the structural paths, thereby strengthening the validity of the identified relationships.
The analysis revealed that perceived AI dependency, ethical concerns, and generative AI usage were all significant predictors of Fear of Missing Out (FoMO), with generative AI usage demonstrating the most substantial effect (β = 0.34). This aligns with prior research linking technology engagement with anxiety87,88, but extends it by differentiating between various AI-specific antecedents. The powerful association with generative AI usage may reflect the immersive nature of these tools, which constantly update content and capabilities, potentially exacerbating users' concerns about being left behind. Our findings regarding FoMO's negative impact on reading comprehension (β = -0.31) further specify this relationship in AI-driven learning environments. The mediation analyses confirmed that FoMO serves as a psychological mechanism through which AI-related factors impair reading skills, a finding robustly tested using the bootstrapping approach. This pattern echoes the attention-drain hypothesis89, suggesting that the cognitive resources expended on managing AI-related anxieties may deplete the mental capacity available for deep reading processes.
The model also illuminated AI's dual role. The positive path from generative AI usage to reading strategies (β = 0.28), and subsequently to vocabulary acquisition (β = 0.52), suggests a facilitative cognitive pathway. The model shows moderate predictive relevance (Q² = 0.27-0.29), which can be illustrated by a student using a generative AI application to simplify a complex piece of English text and place it in context. The AI offers direct definitions and sample sentences for unknown vocabulary, increasing the efficacy of the student's reading strategies and contributing directly to vocabulary learning through contextual inference. This advantage is, however, offset by the indirect negative impacts of AI. Most importantly, the moderating role of digital burnout (β = -0.14) establishes a significant boundary condition, which was tested through the product indicator approach in PLS. It indicates that the cognitive utility of reading can be significantly reduced among learners with elevated burnout, which accords with conservation of resources theory90. This observation helps reconcile the contradictory findings of earlier research by showing how differences in digital fatigue may alter the very process of language learning across individuals.
In part, our findings contradict the uniformly positive perspectives on AI in education91 by documenting a two-sided impact. The methodological approach successfully captured these conflicting forces, though some limitations should be noted. Although the scales were well chosen, the self-reported measure of vocabulary acquisition, while practical, is a limitation; future deployments of the method could be enhanced with objective tests. Moreover, the cross-sectional design, although effective in uncovering these intricate relationships for later experimental confirmation, does not permit causal conclusions. Finally, the practical value of the PLS-SEM protocol lies in its ability to chart the positive and negative routes of educational technology simultaneously, offering a more detailed and actionable model for educators and policymakers who want to implement AI tools in their institutions effectively and without significant psychological harm.
The results of the current study have significant implications for language learning institutions that introduce AI technologies into their curricula. To mitigate the psychological risks identified, educators can implement specific pedagogical strategies. For instance, to counter FoMO, instructors can design structured, collaborative AI tasks such as peer-reviewed AI essay editing or group projects where AI-generated outlines are critically evaluated to emphasize process over product and reduce social comparison. To address AI dependency, "metacognitive wrappers" can be used, where students briefly reflect on their reasoning before and after using an AI tool, reinforcing their autonomous learning skills. Furthermore, to combat digital burnout, curricula should explicitly schedule "unplugged" retrieval practice sessions focused on using newly acquired vocabulary in low-stakes, interpersonal discussions, thereby strengthening memory without digital mediation. These targeted interventions should be supported by institutional policies that establish reasonable use guidelines and provide educator training on identifying AI-related fatigue, aligning with UNESCO's human-centered approach to educational technology.
This article presents a systematic analysis of how the integration of AI into EFL learning settings affects psychological conditions and performance, revealing both challenges and opportunities. The results indicate that AI tools hold promise for vocabulary learning and reading comprehension, but also bring greater Fear of Missing Out (FoMO) and digital burnout, which act as barriers to learning. The emergence of FoMO as a mediating mechanism and digital burnout as a significant moderator makes a theoretical contribution to the growing understanding of the complex pathways through which technology influences language acquisition. These observations highlight the importance of refined, evidence-based approaches to implementing AI in education that maximize its cognitive advantages while minimizing its psychological costs. The study's strong predictive validity demonstrates that these relationships have practical relevance and must be considered by educators and policymakers when designing AI-integrated learning settings. Future studies should develop personalized interventions that address AI-related anxieties and maximize AI's instructional value, so that technological innovation can meaningfully serve the overall needs of learners.
Although the research offers valuable insights into the psychological and cognitive implications of AI in EFL education, it has several limitations. First, self-reported data may involve response bias, especially for sensitive variables such as FoMO and digital burnout; future research would benefit from multi-method designs that include behavioral or physiological metrics. Second, the study is cross-sectional; the data cannot establish causal direction or rule out reverse causality, meaning we can only identify associations rather than confirm that AI dependency increases FoMO. Consequently, longitudinal or experimental designs are necessary to validate the proposed causal pathways. Third, the sample was limited to university-level EFL learners, which may preclude extrapolation to other educational levels or age groups; sampling K-12 learners or learners in workplace language training programs would be valuable. Fourth, the research considered AI tools in general without differentiating among them, and future research should assess the relative impact of specific AI applications. These shortcomings point to several promising research avenues, including: (1) experimental designs that manipulate the level of AI exposure to determine dosage effects, (2) cross-cultural comparisons to determine how sociocultural variables moderate the results, (3) creation of AI-specific digital wellbeing interventions, and (4) investigation of factors that buffer the adverse effects of AI, particularly self-regulation skills.
All authors declare no conflicts of interest.
This research is funded by Jiangsu Provincial Social Science Fund, Grant No: (23YYB011). This work was supported by Prince Sultan University under the Language and Communication Research Laboratory under Grant RL-CH-2019/9/1.
| Name | Company | Version/Catalog Number | Comments |
| --- | --- | --- | --- |
| Laptop Computer | Dell Technologies | N/A | Used for data processing and documentation |
| Statistical Software (SPSS) | IBM Corp. | Version 26.0 | Used for statistical analysis |
| Microsoft Excel | Microsoft | Office 365 | Data entry and basic data handling |