OBM Neurobiology

(ISSN 2573-4407)


Open Access Original Research

The Psychometric Properties of the COVID Stress Scales in Korean University Students

Boram Lee, Hyelin Jeong †,*

  1. Department of Early Childhood Education, Woosong University, 171 Dongdaejeon-ro, Dong-gu, Daejeon, 34606, South Korea

† These authors contributed equally to this work.

Correspondence: Hyelin Jeong

Academic Editors: Ines Testoni, Adriano Zamperini and Lorenza Palazzo

Special Issue: How COVID-19 Changed Individual and Social Life: Psychological and Mental illness Studies on the Pandemic Outcomes

Received: April 13, 2023 | Accepted: August 03, 2023 | Published: August 07, 2023

OBM Neurobiology 2023, Volume 7, Issue 3, doi:10.21926/obm.neurobiol.2303177

Recommended citation: Lee B, Jeong H. The Psychometric Properties of the COVID Stress Scales in Korean University Students. OBM Neurobiology 2023; 7(3): 177; doi:10.21926/obm.neurobiol.2303177.

© 2023 by the authors. This is an open access article distributed under the conditions of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original work is correctly cited.

Abstract

The COVID-19 pandemic and its associated disruptions have significantly impacted university students’ lives worldwide. The COVID Stress Scales (CSS) is a 36-item self-report instrument designed to measure stress caused by the COVID-19 pandemic. This study aimed to examine the psychometric properties of the Korean version of the CSS for use with Korean university students. The study sample comprised 402 undergraduate students enrolled in a four-year private university in central South Korea. This cross-sectional investigation employed an anonymous online survey conducted during the peak of the COVID-19 pandemic. The forward-backward translation method was adopted to convert the original English version of the CSS into Korean. Confirmatory factor analysis (CFA) was performed to determine the structure of the CSS. Convergent validity was assessed using correlation analysis with the Hospital Anxiety and Depression Scale (HADS). McDonald’s omega and Cronbach’s alpha reliability coefficients were used to evaluate reliability. The results revealed that a bifactor model specifying a general factor and the six specific factors of danger, contamination, socioeconomic characteristics, xenophobia, traumatic stress symptoms, and compulsive checking and reassurance seeking provided the best fit among all alternatives. Further investigation demonstrated that the general factor of COVID-19-related stress accounted for substantially more of the CSS variance than the six specific factors, highlighting the unidimensionality of the measure. Additionally, the scales displayed excellent internal consistency. Our findings endorse the use of the Korean version of the CSS as a tool for measuring general stress experienced in reaction to the COVID-19 pandemic, and we support using the instrument’s total score in this context.

Keywords

COVID-19; factor structure; the COVID Stress Scale; university students; validity

1. Introduction

The emergence and rapid spread of coronavirus disease (COVID-19), with its high rates of morbidity and mortality, caused an unprecedented public health crisis. The high infectivity and mortality rates of COVID-19 led the Korean government to implement several measures to prevent the spread of the virus, for instance, imposing lockdowns and quarantines [1]. However, the lockdowns enforced in response to COVID-19 disrupted almost all aspects of human life, including work, education, health care, the economy, and relationships. Meta-analyses have suggested that emergency measures such as quarantines and social and physical distancing mandates were probably crucial in controlling the spread of the infectious disease. However, these measures were also associated with anxiety, depression, stress, loneliness, and isolation and negatively impacted mental health [2,3]. Hence, researchers must seriously attend to the negative effects of COVID-19 on public mental health. University students are no exception: the pandemic also adversely influenced their psychological well-being. A systematic review and meta-analysis of 60 studies from 10 countries concluded that university students may have been particularly vulnerable to the mental health consequences of COVID-19 because of the extensive closure of universities, the consequent shift to online learning, and the disruptions in their daily routines in terms of activities, goals, and social relationships [4,5]. In addition, COVID-19 severely affected the higher-education system in developing countries, as many students lacked adequate internet access at home to avail themselves of remote learning [6,7]. A global increase in the prevalence of mental health issues has also been observed; studies have consistently found that university students suffered increased psychological stress and experienced more depressive and anxiety symptoms during the pandemic [8,9,10]. The growing prevalence of mental health problems in university students has reinforced the need for valid instruments that can identify and assess specific sources of stress emanating from the COVID-19-related experiences of university students.

However, specific robust screening tools that could promptly identify stress related to COVID-19 exposure or infection are lacking [11]. Most studies investigating psychological responses to COVID-19 during the pandemic have utilized traditional assessment tools such as the Patient Health Questionnaire-9 (PHQ-9) [12], the Generalized Anxiety Disorder-7 (GAD-7) [13], and the General Health Questionnaire (GHQ-12) [14]. Such measures are not specific to a single disease such as COVID-19; their use could thus lead to under- or over-diagnosis stemming from the absence of face validity with respect to COVID-19 [11]. Therefore, the COVID Stress Scales (CSS) were recently developed in English to offer a multifaceted assessment of COVID-19-related distress in the Canadian and American populations [15]. The CSS measures five facets of COVID-19-related distress: fears about the dangerous nature of COVID-19, socioeconomic concerns, COVID-19-related xenophobia, traumatic stress symptoms related to COVID-19, and compulsive checking and reassurance-seeking symptoms. The CSS was valid and reliable in the North American context [15]; however, whether it measures the same construct in different populations remains a question that requires an empirical response.

The CSS has already been translated into several languages and empirically validated in diverse cultures. Recent studies have confirmed the psychometric properties of the CSS, yielding consistent and adequate results. However, at least three limitations currently prevent the use of this instrument in Korean university and research contexts. First, the participants of Khosravani et al.’s study [16] included persons with anxiety disorders (ADs) and obsessive-compulsive disorder (OCD) who were recruited from psychiatric hospitals and several clinical centers in Iran. Our university student sample could differ in crucial ways from such clinical samples comprising psychiatric patients. Second, the age span of the samples (18 to 86 years) used in the extant investigations (e.g., [17,18,19,20,21]) typifies questionnaire validation studies but is much broader than the ages of typical university students; these samples also represent general adult populations. Third, Abbady et al.’s study [22] included university students from Egypt and Saudi Arabia but employed the robust maximum likelihood method to estimate the confirmatory factor analysis (CFA) model, whereas our study used the diagonally weighted least squares with mean and variance corrected (WLSMV) estimator. Given these limitations, the existing literature may not sufficiently validate the applicability of the CSS to Korean university students.

Further, several issues still require exploration despite the abovementioned potential of the CSS. First, none of the studies mentioned above has tested a bifactor model. Taylor et al. developed the CSS assuming a multidimensional structure (a five- or six-factor structure), and previous psychometric work has supported a five- or six-factor structure of the CSS. Still, such a multidimensional structure may be problematic. Theoretically, the assumed five- and six-factor CSS models do not align with the instrument’s scoring system. A five- or six-factor CSS structure would translate into a separate calculation and interpretation of the scores for danger, contamination, socioeconomic characteristics, xenophobia, traumatic stress symptoms, and compulsive checking and reassurance seeking. However, researchers typically calculate a total CSS score (i.e., use a continuous scoring system) by aggregating the 36 items (e.g., [16,21]). This practice suggests the presence of a general factor among the 36 items and conflicts with the initial aim of the CSS, which is to measure five or six distinct components associated with COVID-19-related stress. Abbady et al. included a second-order factor in their analyses, revealing the presence of a general factor (the five-factor model was preferred, but the second-order model was associated with a marginally acceptable fit). However, second-order factor analyses do not allow a direct comparison of the strengths of the general and specific group factors and are thus incapable of evaluating any contribution of unique variance by the discrete subscales [23]. Whether the subscales contribute unique variance beyond a general factor therefore remains untested. Additionally, only one extant study has suggested that the CSS may be assessed either through subscale scores or through a unitary total score; still, those authors recommended that the choice of scoring method depend on the particular situation and its suitability [18]. Thus, scholars have not reached a consensus regarding the structure and scoring system of the CSS. Second, no study has thus far explored the psychometric properties of the Korean version of the CSS in a sample of Korean university students. Identifying an efficient measurement of these constructs in this population is especially important.

Therefore, our study aimed to deliver data on the factorial structure and psychometric properties of the Korean version of the CSS by administering it to university students, given its potential utility and current unavailability in Korea. Specifically, we sought to further elucidate the structural model of the CSS by comparing the fit of previously proposed models (i.e., the one-, four-, five-, and six-factor frameworks) and an alternative bifactor structure. We also intended to better inform the computation and interpretation of CSS scores. Further, we aimed to explore the internal consistency of the CSS and to investigate the associations between the CSS and corresponding psychiatric instruments for these constructs.

2. Materials and Methods

2.1 Participants

A convenience sample of 402 undergraduate students (110 male and 292 female) currently enrolled at a four-year private university in central Korea was recruited for the study. The participants were aged between 19 and 50 years, with a mean age of 21.9 years (SD = 2.71). Most respondents (92%) were 19–24 years old, and the ages of the remainder (8%) spanned between 25 and 50. Only three students were aged 40 years and above: one each at 40, 48, and 50 years. The study sample comprised students in degree programs majoring in arts, culinary arts, education, social work, and public health.

2.2 Procedures

Data were collected anonymously via an online survey administered on the Google platform between January 8, 2022, and March 8, 2022. This period corresponded to the pandemic-caused lockdown in Korea because of a significant increase in COVID-19 infections. Students were then experiencing the consequences of university closures along with social restrictions. Instructors in different faculties at the university were sent a link to the survey via email. They forwarded the link to their students using the university’s mailing lists. The email messages described the purpose of the study, apprised students of the voluntary and confidential nature of their participation, and included a link through which they could access the online questionnaire. Participants visiting the linked website were required to tender online consent before completing the survey. The complete set of questions was presented after they agreed to participate in the study, which was approved by the relevant Research Ethics Committee of the university where the study was conducted (Protocol Code: 1041549-190709-SB-76).

2.3 Measures

2.3.1 The COVID Stress Scale (CSS)

The CSS is a 36-item self-report questionnaire encompassing six domains related to COVID-19: (1) fears about its dangerousness (DAN; six items), (2) qualms about its socioeconomic consequences (SEC; six items), (3) xenophobia resulting from its presence (XEN; six items), (4) fears about sources of COVID-19-related contamination (CON; six items), (5) its associated traumatic stress symptoms (TSS; six items), and (6) disease-related checking and reassurance seeking (CHE; six items) [15]. The items are rated on a five-point Likert-type scale ranging from 0 (not at all) to 4 (extremely). The sum of the six domain scores yields the total score, which ranges from 0 to 144; a higher total score indicates a greater level of COVID-19 pandemic-related stress. Taylor et al.’s study of U.S. and Canadian samples reported Cronbach’s alpha coefficients of >0.80 for each of the five scales, indicating good to excellent reliability and internal consistency.
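For illustration only, a minimal scoring sketch (not the authors’ code) is given below in Python. The column names and item-to-domain mapping are hypothetical placeholders that merely mirror the six-items-per-domain structure described above; responses are assumed to be coded 0–4 in a pandas DataFrame.

```python
# Hypothetical CSS scoring sketch: six domains of six items each, responses 0-4,
# total score = sum of domain scores (0-144). Column names and item groupings
# are placeholders, not the instrument's actual item keys.
import pandas as pd

SUBSCALES = {
    "DAN": [f"css_{i}" for i in range(1, 7)],    # danger
    "SEC": [f"css_{i}" for i in range(7, 13)],   # socioeconomic consequences
    "XEN": [f"css_{i}" for i in range(13, 19)],  # xenophobia
    "CON": [f"css_{i}" for i in range(19, 25)],  # contamination
    "TSS": [f"css_{i}" for i in range(25, 31)],  # traumatic stress symptoms
    "CHE": [f"css_{i}" for i in range(31, 37)],  # checking / reassurance seeking
}

def score_css(responses: pd.DataFrame) -> pd.DataFrame:
    """Return per-domain sums and the total CSS score for each respondent."""
    scores = pd.DataFrame(
        {name: responses[items].sum(axis=1) for name, items in SUBSCALES.items()}
    )
    scores["CSS_total"] = scores.sum(axis=1)  # ranges from 0 to 144
    return scores
```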

2.3.2 The Hospital Anxiety and Depression Scale (HADS)

The HADS is a self-report questionnaire developed to assess the severity of the core symptoms of anxiety and depression in people with a physical illness [24]. The questionnaire contains 14 items and consists of two subscales: anxiety (7 items) and depression (7 items). Each item is scored on a four-point Likert-type scale ranging from 0 to 3, yielding a maximum score of 21 for each subscale. Higher scores on the HADS represent higher levels of anxiety and depression. Oh et al. translated and validated the Korean version used in this study [25]; their study computed the reliability of the Korean HADS-A and HADS-D at 0.89 and 0.80, respectively. The internal consistency in this study was adequate, at 0.87 for the HADS-A and 0.81 for the HADS-D.

2.4 Translation of the COVID Stress Scale

The forward-backward translation method was applied to convert the original English version of the CSS into Korean. The bilingual author of the present study completed the forward translation, while an independent professional translator blinded to the original English questionnaire performed the backward translation. The author and translator subsequently discussed the translations to identify and correct any discrepancies. The final Korean version of the CSS was generated after the necessary modifications were made and consensus between the author and translator was reached. The final questionnaire was pilot tested with ten participants to elicit additional comments about clarity; no substantive issues were raised during this process.

2.5 Statistical Analyses

The data for all the study items were screened for missing values and normality before the analyses were conducted. Less than 5% of the total cases in the data set displayed missing values, which were replaced using the expectation-maximization (EM) algorithm. All items were normally distributed and demonstrated skewness and kurtosis values within the acceptable ±1.5 range. Next, CFA was performed using the diagonally weighted least squares with mean and variance corrected (WLSMV) estimation. Flora and Curran [26] showed that WLSMV performed better than weighted least squares (WLS) and maximum likelihood (ML) estimation for small to moderate samples and complex models, even when non-normally distributed ordinal data with a small number of categories were analyzed; thus, we used WLSMV estimation in our study.
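As a rough illustration of the screening step (again, not the authors’ code), the skewness and kurtosis check against the ±1.5 range could be sketched in Python as follows; the EM imputation itself was run in the statistical software, so no imputer is shown here.

```python
# Sketch of the normality screening described above: flag any item whose
# skewness or (excess) kurtosis falls outside the +/-1.5 range. Assumes the
# item responses are already in a NumPy array with one column per item.
import numpy as np
from scipy import stats

def flag_nonnormal_items(items: np.ndarray, limit: float = 1.5) -> list:
    """Return column indices whose skewness or kurtosis exceeds +/-limit."""
    flagged = []
    for j in range(items.shape[1]):
        col = items[:, j]
        col = col[~np.isnan(col)]  # screen on observed responses only
        if abs(stats.skew(col)) > limit or abs(stats.kurtosis(col)) > limit:
            flagged.append(j)
    return flagged
```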

We tested five competing models suggested in the extant literature and compared the resulting fit indices to determine the best fit to our data. Adamczyk et al.’s one-factor model, with all 36 CSS items loading onto a single factor, was designated Model 1. Adamczyk et al.’s four-factor model was labeled Model 2: it combined the danger and contamination scales into a single factor and specified the traumatic stress symptoms and checking scales to load on a common factor. Taylor et al.’s empirical five-factor model from the EFA represented Model 3: it denotes the five latent variables as danger and contamination, socioeconomic characteristics, xenophobia, traumatic stress symptoms, and compulsive checking and reassurance seeking. Taylor et al.’s original theoretical six-factor model encompassing danger, contamination, socioeconomic characteristics, xenophobia, traumatic stress symptoms, and compulsive checking and reassurance seeking comprised Model 4. Finally, Model 5 was posited as a bifactor model in which COVID-19-related stress was viewed as a general factor distinct from six specific factors. In particular, the bifactor model included a general factor onto which all items were allowed to load, together with six orthogonal specific factors of danger, socioeconomic characteristics, xenophobia, contamination, traumatic stress symptoms, and compulsive checking and reassurance seeking. A bifactor CFA model offers a more flexible alternative to traditional higher-order factor models, as items can simultaneously reflect the general factor of COVID-19-related stress and the six specific factors, which capture the unique variance shared among items forming each of the six subscales that the general factor does not explain. Hence, the general factor represents the variance shared by all items in the model, whereas the specific factors explain the parts of the variance not accounted for by the general factor [27]. The following fit indices and criteria were utilized to evaluate model fit through the CFA: the chi-square (χ2) and its related degrees of freedom (df); the comparative fit index (CFI); the Tucker–Lewis index (TLI); the root mean square error of approximation (RMSEA) and its 90% confidence interval (90% CI); and the standardized root mean square residual (SRMR) (i.e., χ2/df < 5; CFI and TLI ≥ 0.95; RMSEA < 0.06; and SRMR < 0.08) [28,29,30].
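The cut-offs listed above amount to a simple decision rule. A small helper (ours, not the authors’) that applies them to the fit statistics reported by the CFA software might look like this:

```python
# Apply the fit-index criteria used in this study (chi2/df < 5; CFI, TLI >= 0.95;
# RMSEA < 0.06; SRMR < 0.08) to values produced by the CFA software.
def fit_checks(chi2: float, df: int, cfi: float, tli: float,
               rmsea: float, srmr: float) -> dict:
    """Return a pass/fail flag for each criterion."""
    return {
        "chi2/df < 5":  chi2 / df < 5,
        "CFI >= 0.95":  cfi >= 0.95,
        "TLI >= 0.95":  tli >= 0.95,
        "RMSEA < 0.06": rmsea < 0.06,
        "SRMR < 0.08":  srmr < 0.08,
    }

# Example with the one-factor values reported in Table 3:
# fit_checks(6445.38, 594, 0.69, 0.68, 0.157, 0.123) flags every criterion.
```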

No consensus has been reached regarding the acceptable sample size for factor analysis; some scholars have highlighted the importance of the absolute number of cases (n), while others have emphasized the case-to-variable ratio. Hence, we considered both recommendations in evaluating the sufficiency of our sample size. One suggested guideline is to recruit 5 or 10 participants per estimated parameter [31]. Meanwhile, others have indicated that 300 is a good sample size for CFA [32]. In the case-to-variable category, some studies have recommended a ratio of 10:1 [33,34], with an absolute minimum sample size of 250 participants [31]. Regardless of the method of evaluation, our sample was large enough for the intended analysis: the n of 402 for 36 items meets both the n > 300 and the 10:1 requirements.

The item difficulty index was computed to examine how well an item differentiated between different groups of participants. The item difficulty index ranges from 0 to 1: the higher the value, the easier the item, and vice versa. Finally, we assessed Cronbach’s coefficient (α) and its 95% confidence interval to evaluate the reliability of the CSS; coefficient α values of at least 0.70 were considered adequate. We also calculated McDonald’s omega (ω), omega hierarchical (ωH), and omega subscale (ωS) coefficients. Omega (ω) estimates the proportion of variance in the observed scores attributable to all sources of common variance, including the general factor, within the bifactor framework. According to Reise et al. [27], it corresponds to coefficient alpha for the total score, and our study therefore established a computed value of 0.70 as minimally acceptable and 0.80 as good. Omega hierarchical (ωH) estimates the proportion of total score variance that can be attributed to a single general factor, whereas omega subscale (ωS) indicates the unique share of variance contributed by each subscale after excluding the contribution of the general factor. The scores should be considered unidimensional if the ωH value is >0.80. We also evaluated the explained common variance (ECV) to determine the relative strength of the general factor over the specific factors; higher values of ECV (>0.70) suggest that the measure is essentially unidimensional [28]. The associations between the CSS and the HADS were examined using Pearson’s r correlations to assess the convergent validity of the CSS. All analyses were executed using IBM SPSS Statistics 23 and Mplus 7.11.
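For readers who wish to reproduce the model-based reliability indices described above from a set of standardized bifactor loadings, a self-contained sketch following the Reise et al. [27] formulas is given below. It is an illustration under the stated assumptions (standardized loadings, one specific factor per item), not the software used in this study.

```python
# Bifactor-based reliability indices (omega, omega-hierarchical, omega-subscale,
# ECV) computed from standardized loadings, following Reise et al. [27].
import numpy as np

def bifactor_reliability(gen, spec, groups):
    """gen, spec: standardized general- and specific-factor loadings (one
    specific loading per item); groups: lists of item indices per subscale."""
    gen, spec = np.asarray(gen, float), np.asarray(spec, float)
    uniq = 1.0 - gen**2 - spec**2                      # item unique variances
    common = gen.sum()**2 + sum(spec[g].sum()**2 for g in groups)
    total_var = common + uniq.sum()
    omega = common / total_var                         # reliability of total score
    omega_h = gen.sum()**2 / total_var                 # general factor only
    ecv = (gen**2).sum() / ((gen**2).sum() + (spec**2).sum())
    omega_s = []
    for g in groups:
        sub_var = gen[g].sum()**2 + spec[g].sum()**2 + uniq[g].sum()
        omega_s.append(spec[g].sum()**2 / sub_var)     # subscale-specific reliability
    return {"omega": omega, "omega_h": omega_h, "ECV": ecv, "omega_s": omega_s}
```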

3. Results

3.1 Descriptive Statistics

Table 1 presents the mean scores and standard deviations for the individual CSS items. The mean total score of the current sample was 83.9 (SD = 28.7), suggesting that, overall, the participants perceived moderate to severe stress levels due to the COVID-19 pandemic. The DAN domain received the highest average score, while the TSS domain displayed the lowest average score (Table 2).

Table 1 Descriptive statistics for CSS items.

Table 2 Average values, Cronbach’s α and McDonald’s ω reliability coefficients, correlation coefficients between domains, and correlations between the CSS and the HADS.

3.2 Confirmatory Factor Analysis (CFA)

The fit indices for all five models are shown in Table 3. The one-factor model exhibited the poorest fit of the five models tested, with values for the CFI, TLI, RMSEA, and SRMR beyond the recommended cut-offs (χ2 = 6445.38, df = 594; χ2/df = 10.9; CFI = 0.69; TLI = 0.68; RMSEA = 0.157 (90% CI = 0.153-0.160); SRMR = 0.123). Next, the four-factor model could not be considered acceptable even though it displayed a better fit than the one-factor model, as evidenced by a decrease in the chi-square value and improved CFI, TLI, RMSEA, and SRMR statistics (χ2 = 4066.64, df = 588; χ2/df = 6.9; CFI = 0.76; TLI = 0.75; RMSEA = 0.121 (90% CI = 0.118-0.125); SRMR = 0.100). The five-factor model resulted in an improved but still not acceptable fit to the data (χ2 = 3736.16, df = 584; χ2/df = 6.4; CFI = 0.86; TLI = 0.85; RMSEA = 0.116 (90% CI = 0.112-0.119); SRMR = 0.093).

Table 3 Goodness-of-fit indices for competing models of the CSS.

The six-factor model fit the data better than the one-, four-, and five-factor models, with improvements across all indices (χ2 = 2752.34, df = 579; χ2/df = 4.8; CFI = 0.91; TLI = 0.90; RMSEA = 0.072 (90% CI = 0.068-0.076); SRMR = 0.068). The bifactor model with six specific factors demonstrated the best overall model fit, with significant improvements over the one-, four-, five-, and six-factor models, and its fit indices fell within the recommended ranges (χ2 = 1662.23, df = 544; χ2/df = 3.1; CFI = 0.93; TLI = 0.93; RMSEA = 0.038 (90% CI = 0.037-0.038); SRMR = 0.029). The bifactor model also showed a lower RMSEA value than the other models and was therefore deemed the best-fitting model (Figure 1). The bifactor model evidenced moderate to strong loadings on the general factor, ranging from 0.41 to 0.89 (Table 4), all significant at p < 0.001. The loadings on the general factor were stronger than the loadings on the specific factors, which were mostly weak and not meaningful except for items 13, 14, 30, 32, and 36. These results suggest that a general factor can explain a large proportion of the variance in the items.


Figure 1 The conceptual bifactor model of the CSS.

Table 4 Standardized factor loadings of bifactor model of the CSS.

3.3 Item Analysis and Internal Consistency

Because a bifactor structure was applied, model-based reliability estimates should be computed to determine how precisely a scale assesses the combination of general and specific factors. Thus, we estimated McDonald’s ω, ωH, and ωS coefficients and the ECV for the bifactor model to further appraise the unidimensionality of the CSS (Table 2). The coefficient omega values for the total score and for the danger, socioeconomic characteristics, xenophobia, contamination, traumatic stress symptoms, and compulsive checking and reassurance-seeking subscales were ω = 0.97, 0.92, 0.87, 0.96, 0.94, 0.95, and 0.83, respectively, indicating robust reliability. The omega hierarchical coefficient (ωH) for the total score based on the bifactor solution was 0.85, supporting the presence of a relatively strong general CSS factor. It can thus be concluded that, if a composite were formed by summing the CSS items, 85% of the variance in the total score could be attributed to the general factor. These values also suggest that only 12% of the total score variance was explained by the six specific factors combined, with 3% attributable to error variance. The ECV was 0.81, signaling that the general factor accounted for most of the common variance. Conversely, the omega subscale coefficients (ωS) for danger (ωS = 0.11), socioeconomic characteristics (ωS = 0.09), xenophobia (ωS = 0.18), contamination (ωS = 0.14), traumatic stress symptoms (ωS = 0.06), and compulsive checking and reassurance seeking (ωS = 0.21) indicated that the majority of their variance was explained by the general factor and that the reliability of the subscale scores was substantially diminished after the general factor was controlled.
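Putting the reported coefficients together, the variance decomposition implied above can be written out as a simple worked calculation (using ω = 0.97 and ωH = 0.85 from this sample):

```latex
\begin{align*}
\text{general factor:}\quad   & \omega_H = 0.85 \\
\text{specific factors:}\quad & \omega - \omega_H = 0.97 - 0.85 = 0.12 \\
\text{error:}\quad            & 1 - \omega = 1 - 0.97 = 0.03
\end{align*}
```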

In addition, internal consistency estimates (i.e., Cronbach’s alphas) were obtained for each scale as well as for the overall CSS, along with other item statistics (e.g., corrected item-total correlations, alpha if item deleted). Table 2 also presents the Cronbach’s alpha coefficients for the scales. Cronbach’s α was 0.97 for the total CSS, 0.92 for danger, 0.88 for socioeconomic characteristics, 0.96 for xenophobia, 0.94 for contamination, 0.95 for traumatic stress symptoms, and 0.83 for compulsive checking and reassurance seeking. Items 34 and 36 exhibited the lowest corrected item-total correlations, at 0.49 and 0.46, respectively; the corrected item-total correlations for the remaining items ranged from 0.51 to 0.80. Cronbach’s alpha coefficients remained appropriate when any single item was deleted. In addition, the six CSS domains were significantly intercorrelated, with correlations ranging from 0.46 to 0.84 (Table 2). The item difficulty indices of the CSS ranged between 0.47 and 0.82.

3.4 Convergent Validity

We assessed the correlations of the CSS total scale and its subscales with the HADS-Anxiety and HADS-Depression measures (Table 2). The total CSS score was positively correlated with the HADS-Depression (r = 0.38) and HADS-Anxiety scores (r = 0.49); it was more strongly associated with anxiety than with depression. The CSS subscales demonstrated small to moderate positive correlations with the HADS-Depression (0.27 to 0.35) and HADS-Anxiety scores (0.35 to 0.46). All correlations were significant at the 0.01 level and medium in magnitude. These findings support the convergent validity of the CSS.

4. Discussion

The current study aimed to examine the psychometric properties of the CSS by evaluating its factorial structure, internal consistency, and validity in a sample of Korean university students at the peak of the COVID-19-related lockdown. Additionally, the CSS was translated and validated for use in the Korean university context as a robust instrument for testing and assessing stress related to the pandemic. This study thus strengthens the evidence base for a validated research tool to assess and better understand COVID-19-related distress. Through CFA, this study extends the literature by replicating the factor structure of the CSS in university students, who are thought to be at heightened risk of COVID-19-related distress. Our results suggest that Korean university students experienced COVID-19-related distress comparable to the outcomes reported by Khosravani et al. in clinical samples from Iran and by Milic et al. in adult samples from Serbia: the overall mean score obtained in our study was higher than the mean obtained by Milic et al. (M = 35.4; SD = 25.9) but lower than the means for Khosravani et al.’s samples of AD (M = 99.3; SD = 18.5) and OCD patients (M = 106.3; SD = 16.5).

The COVID-19 outbreak increased perceived threats and fears: about the dangerousness of COVID-19, the socioeconomic consequences of the pandemic (e.g., job loss), foreigners who could carry the infection (i.e., disease-related xenophobia), sources of contamination (e.g., objects and surfaces), traumatic stress symptoms (e.g., unwanted thoughts and nightmares), and compulsive behaviors such as checking and reassurance seeking (e.g., monitoring news media about COVID-19 and repeatedly seeking reassurance from friends or medical professionals) [19]. The current study’s participants accorded the highest score to the danger domain, consistent with the outcomes reported by previous studies conducted in Iran and Serbia. However, the current study’s participants registered the lowest scores on the traumatic stress symptoms domain, whereas the socioeconomic characteristics scale was the least endorsed in a sample of patients with anxiety and obsessive-compulsive disorders in Iran. The present sample of Korean university students could differ qualitatively from a psychiatric sample of patients suffering from anxiety and a clinical sample of respondents diagnosed with obsessive-compulsive disorder, and these distinctions could contribute to the observed differences.

In terms of the scale's dimensionality, we applied the confirmatory factor analytic method, which clarified and confirmed the latent structure of the CSS. While the best CSS model remains a subject for debate, this study further supports an alternative bifactor model with one general factor and six specific factors (danger, socioeconomic characteristics, xenophobia, contamination, traumatic stress symptoms, and compulsive checking and reassurance seeking), which demonstrated slightly better overall fit indices for our sample. This strengthens the argument that the CSS measures overall COVID-19-related distress. On the other hand, it offered limited support for the multidimensionality of the Korean version of the CSS. First, all CSS items loaded significantly on the general factor of COVID-19-related stress, and only five items displayed substantial loadings on both the general and specific factors; most items did not display substantial loadings on their specific factors, and their specific-factor loadings were lower than their general-factor loadings. Second, the general factor explained most of the variance, and the reliability of the subscale scores was substantially reduced when the general factor was controlled. Indeed, the ωS coefficients for the six subscales ranged from 0.06 to 0.21, and none of them met the minimum standard of 0.50 suggested by Reise et al. [35]. Third, the low reliability of the six subscales as estimated by the omega subscale coefficients also indicated that the six subscales did not yield measures of COVID-19-related stress that were precise and distinct enough to facilitate practical applications, because they comprised a very small amount of reliable variance for interpretation [36]. At the same time, the general factor, as evidenced by ωH, accounted for 85% of the total scale variance and 81% of the common variance. The strong loadings of the items on the general factor indicate that the CSS should be deemed essentially unidimensional, and the calculation of separate subscale scores is therefore not empirically justified; the values computed for the six subscales fell below the threshold of 0.50 required for a subscale to be considered a valid representation of a separable dimension. The specific factors do explain some variance beyond the general factor, but not enough to warrant the use of the subscale scores. Therefore, our findings suggest that using the total CSS score as a measure of COVID-19-related stress is more appropriate, at least within the Korean context, and that subscale scores should be used cautiously because most of the reliable variance in subscale scores could be attributed to the general factor.

In addition, the total CSS score exhibited substantial correlations with all six measures of danger, contamination, socioeconomic characteristics, xenophobia, traumatic stress symptoms, and compulsive checking and reassurance seeking. This result may indicate that the correlations were primarily attributable to relationships with the general factor rather than to the orthogonal specific factors. Accordingly, the total score can be used to measure general COVID-19-related stress.

The reliability analyses of the Korean version of the CSS revealed excellent internal consistency: Cronbach’s α and McDonald’s ω values across the six specific factors and the total scale were all >0.80. These findings align with the results reported for the original CSS and other foreign-language validations of the questionnaire [16,17,19,20,22]. Moreover, statistically significant intercorrelations were observed among all six CSS domains, indicating that COVID-19-associated symptoms form a coherent COVID-related stress syndrome for people with high scores on the CSS [15].

COVID-19 is strongly associated with anxiety; therefore, the HADS scales were used to evaluate the convergent validity of the CSS. As expected, the total CSS displayed significant positive correlations with the HADS-Anxiety and HADS-Depression scales, supporting its convergent validity. Hence, our findings are consistent with studies conducted in Korea, which have shown that many university students reported anxiety and depressive symptoms during the COVID-19 outbreak [37,38]. This study also corroborates the findings of other international studies that noted a strong correlation between the CSS and anxiety-related traits [16,17].

Taylor et al. indicated that the CSS is an appropriate instrument for assessing COVID-19-related stress in vulnerable populations such as university students. The mental health of university students was a growing concern even before the COVID-19 pandemic [37]. The global outbreak of COVID-19 changed the lifestyles of this population dramatically. The stresses and strict restrictions associated with the pandemic placed university students at greater risk of developing mental health problems (i.e., stress, anxiety, and depression) [8,9,10,39]. Zurlo et al.’s [40] longitudinal study found that varied aspects of COVID-19-related stress were significantly associated with several psychopathological symptoms in their sample of university students as the pandemic progressed.

To the best of our knowledge, our study is the first to examine the bifactor structure of the CSS. Still, our findings should be interpreted cautiously because we acknowledge several limitations. First, our study found scant justification for using the CSS subscale scores because they primarily reflect variation in the general factor of COVID-19-related stress, and compelling evidence is currently lacking to support the individual use of the six subscales for the Korean student population. Future research is warranted to explore the dimensionality of the Korean version of the CSS in more representative samples, and prospective studies should also further assess the validity of the total CSS score. Next, the demographic information obtained from the students in our study included only age, gender, and field of study; notably, the student sample was relatively homogeneous, and the age range was small. We did not gather demographic data regarding the students’ or their families’ experience of exposure to the virus; future studies may explore this aspect further for a better understanding and assessment of the distress associated with COVID-19 and for the identification of people who need mental health services. Another limitation is that, compared to when the pandemic was at its peak, the effect of COVID-19-related stressors and the associated psychological distress on students may have diminished toward the later stage of the pandemic, when we conducted our study. This may have prevented us from fully capturing their experience during the crisis. Finally, the study relied solely on self-reports and, therefore, its findings could be affected by method bias. Prospective research projects should include a broader range of data sources to explore the stress associated with COVID-19 (e.g., in-depth interviews).

5. Conclusions

To conclude, notwithstanding the stated limitations, this study assessed the psychometric properties of the Korean version of the CSS and provides preliminary evidence regarding its usefulness. The present study’s psychometric findings highlight the bifactor structure and confirm the robustness of the Korean version of the CSS as a tool for assessing COVID-19-related distress in Korean university students. This version comprises a general factor and six specific factors: danger, contamination, socioeconomic characteristics, xenophobia, traumatic stress symptoms, and compulsive checking and reassurance seeking. The CSS displayed good internal consistency and adequate convergent validity with anxiety. Therefore, the Korean version of the CSS could be useful for studying stressors associated with COVID-19 or similar crises in the future. The Korean version of the CSS also appears to function as a unidimensional tool suitable for use by mental health professionals on campuses, and we advocate the use of an aggregated scoring system (i.e., the total score). It can potentially promote a more comprehensive understanding of the stressors associated with the COVID-19 pandemic as perceived by students [15,39].

Author Contributions

Conceptualization, B.L.; methodology, H.J.; formal analysis, B.L.; investigation, B.L.; data curation, H.J.; writing—original draft preparation, B.L., H.J.; writing—review and editing, visualization, B.L.; supervision, B.L.; project administration, H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Woosong University.

Competing Interests

The authors have declared that no competing interests exist.

References

  1. Jeong E, Hagose M, Jung H, Ki M, Flahault A. Understanding South Korea's response to the COVID-19 outbreak: A real-time analysis. Int J Environ Res Public Health. 2020; 17: 9571. [CrossRef]
  2. Brooks SK, Webster RK, Smith LE, Woodland L, Wessely S, Greenberg N, et al. The psychological impact of quarantine and how to reduce it: Rapid review of the evidence. Lancet. 2020; 395: 912-920. [CrossRef]
  3. Jin Y, Sun T, Zheng P, An J. Mass quarantine and mental health during COVID-19: A meta-analysis. J Affect Disord. 2021; 295: 1335-1346. [CrossRef]
  4. Deng J, Zhou F, Hou W, Silver Z, Wong CY, Chang O, et al. The prevalence of depressive symptoms, anxiety symptoms and sleep disturbance in higher education students during the COVID-19 pandemic: A systematic review and meta-analysis. Psychiatry Res. 2021; 301: 113863. [CrossRef]
  5. Mao J, Gao X, Yan P, Ren X, Guan Y, Yan Y. Impact of the COVID-19 pandemic on the mental health and learning of college and university students: A protocol of systematic review and meta-analysis. BMJ Open. 2021; 11: e046428. [CrossRef]
  6. Sifat RI, Ruponty MM, Shuvo MK, Chowdhury M, Suha SM. Impact of COVID-19 pandemic on the mental health of school-going adolescents: Insights from Dhaka city, Bangladesh. Heliyon. 2022; 8: e09223. [CrossRef]
  7. Saha A, Dutta A, Sifat RI. The mental impact of digital divide due to COVID-19 pandemic induced emergency online learning at undergraduate level: Evidence from undergraduate students from Dhaka City. J Affect Disord. 2021; 94: 170-179. [CrossRef]
  8. Chang J, Yuan Y, Wang D. Mental health status and its influencing factors among college students during the epidemic of COVID-19. Nan Fang Yi Ke Da Xue Xue Bao. 2020; 40: 171-176.
  9. Le Vigouroux S, Goncalves A, Charbonnier E. The psychological vulnerability of French university students to the COVID-19 confinement. Health Educ Behav. 2021; 48: 123-131. [CrossRef]
  10. von Keyserlingk L, Yamaguchi-Pedroza K, Arum R, Eccles JS. Stress of university students before and after campus closure in response to COVID-19. J Community Psychol. 2022; 50: 285-301. [CrossRef]
  11. Iversen MM, Norekvål TM, Oterhals K, Fadnes LT, Mæland S, Pakpour AH, et al. Psychometric properties of the Norwegian version of the Fear of COVID-19 Scale. Int J Ment Health Addict. 2022; 20: 1446-1464. [CrossRef]
  12. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: Validity of a brief depression severity measure. J Gen Intern Med. 2001; 16: 606-613. [CrossRef]
  13. Spitzer RL, Kroenke K, Williams JB, Löwe B. A brief measure for assessing generalized anxiety disorder: The GAD-7. Arch Intern Med. 2006; 166: 1092-1097. [CrossRef]
  14. Goldberg DP. The detection of psychiatric illness by questionnaire: A technique for the identification and assessment of non-psychotic psychiatry illness. London: Oxford University Press; 1972.
  15. Taylor S, Landry CA, Paluszek MM, Fergus TA, McKay D, Asmundson GJ. Development and initial validation of the COVID Stress Scales. J Anxiety Disord. 2020; 72: 102232. [CrossRef]
  16. Khosravani V, Asmundson GJ, Taylor S, Sharifi Bastan F, Samimi Ardestani SM. The Persian COVID stress scales (Persian-CSS) and COVID-19-related stress reactions in patients with obsessive-compulsive and anxiety disorders. J Obsessive Compuls Relat Disord. 2021; 28: 100615. [CrossRef]
  17. Adamczyk K, Clark DA, Pradelok J. The Polish COVID Stress Scales: Considerations of psychometric functioning, measurement invariance, and validity. PLoS One. 2021; 16: e0260459. [CrossRef]
  18. Carlander A, Lekander M, Asmundson GJ, Taylor S, Olofsson Bagge R, Lindqvist Bagge AS. COVID-19 related distress in the Swedish population: Validation of the Swedish version of the COVID Stress Scales (CSS). PLoS One. 2022; 17: e0263888. [CrossRef]
  19. Mahamid FA, Veronese G, Bdier D, Pancake R. Psychometric properties of the COVID stress scales (CSS) within Arabic language in a Palestinian context. Curr Psychol. 2022; 41: 7431-7440. [CrossRef]
  20. Milic M, Dotlic J, Rachor GS, Asmundson GJ, Joksimovic B, Stevanovic J, et al. Validity and reliability of the Serbian COVID Stress Scales. PLoS One. 2021; 16: e0259062. [CrossRef]
  21. Noe-Grijalva M, Polo-Ambrocio A, Gómez-Bedia K, Caycho-Rodríguez T. Spanish translation and validation of the COVID Stress Scales in Peru. Front Psychol. 2022; 13: 840302. [CrossRef]
  22. Abbady AS, El-Gilany AH, El-Dabee FA, Elsadek AM, ElWasify M, Elwasify M. Psychometric characteristics of the COVID Stress Scales-Arabic version (CSS-Arabic) in Egyptian and Saudi university students. Middle East Curr Psychiatry. 2021; 28: 14. [CrossRef]
  23. Chen FF, Hayes A, Carver CS, Laurenceau JP, Zhang Z. Modeling general and specific variance in multifaceted constructs: A comparison of the bifactor model to other approaches. J Pers. 2012; 80: 219-251. [CrossRef]
  24. Zigmond AS, Snaith RP. The hospital anxiety and depression scale. Acta Psychiatr Scand. 1983; 67: 361-370. [CrossRef]
  25. Oh SM, Min KJ, Park DB. A Study on the standardization of the Hospital Anxiety and Depression Scale for Koreans: A comparison of normal, depressed and anxious groups. J Korean Neuropsychiatr Assoc. 1999; 38: 289-296.
  26. Flora DB, Curran PJ. An empirical evaluation of alternative methods of estimation for confirmatory factor analysis with ordinal data. Psychol Methods. 2004; 9: 466-491. [CrossRef]
  27. Reise SP, Bonifay WE, Haviland MG. Scoring and modeling psychological measures in the presence of multidimensionality. J Pers Assess. 2013; 95: 129-140. [CrossRef]
  28. Marsh HW, Hau KT, Wen Z. In search of golden rules: Comment on hypothesis testing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu & Bentler’s (1999) findings. Struct Equ Model. 2004; 11: 320-341. [CrossRef]
  29. Browne MW, Cudeck R. Alternative ways of assessing model fit. Sociol Methods Res. 1992; 21: 230-258. [CrossRef]
  30. Fan X, Sivo SA. Sensitivity of fit indices to model misspecification and model types. Multivariate Behav Res. 2007; 42: 509-529. [CrossRef]
  31. Kyriazos TA. Applied psychometrics: Sample size and sample power considerations in factor analysis (EFA, CFA) and SEM in general. Psychology. 2018; 9: 2207-2230. [CrossRef]
  32. Comrey AL, Lee HB. A first course in factor analysis. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum; 1992.
  33. Bentler PM, Chou C. Practical issues in structural modeling. Sociol Methods Res. 1987; 16: 78-117. [CrossRef]
  34. Kline RB. Principles and practice of structural equation modeling. New York, NY: Guilford; 2011.
  35. Reise SP, Moore TM, Haviland MG. Bifactor models and rotations: Exploring the extent to which multidimensional data yield univocal scale scores. J Pers Assess. 2010; 92: 544-559. [CrossRef]
  36. de Bruin GP, du Plessis GA. Bifactor analysis of the mental health continuum-short form (MHC-SF). Psychol Rep. 2015; 116: 438-446. [CrossRef]
  37. Jung NH, Park H, Jo H. Korean college students’ psychological distress surrounding COVID-19. J Asia Pac Couns. 2021; 11: 41-55. [CrossRef]
  38. Chon SH, Lee SH, Bae EJ, Kim JS. The college students’ depressive symptoms associated with levels of physical and social activities during the COVID-19 pandemic in South Korea: A web-based cross-sectional survey. Nurs Health Issues. 2021; 26: 10-17. [CrossRef]
  39. Zurlo MC, Cattaneo Della Volta MF, Vallone F. COVID-19 Student Stress Questionnaire: Development and validation of a questionnaire to evaluate students' stressors related to the Coronavirus pandemic lockdown. Front Psychol. 2020; 11: 576758. [CrossRef]
  40. Zurlo MC, Cattaneo Della Volta MF, Vallone F. Psychological health conditions and COVID-19-related stressors among university students: A repeated cross-sectional survey. Front Psychol. 2022; 12: 741332. [CrossRef]