Original research

Children and adolescents’ experiences of mandatory SARS-CoV-2 testing in schools: a cross-sectional survey

Abstract

Background Public health measures during the COVID-19 pandemic had dramatic consequences for children and adolescents. However, policy-makers and healthcare researchers did not give sufficient weight to children’s perspectives. One common public health measure was mandatory SARS-CoV-2 testing in schools. This study examines how children and adolescents evaluated such mandatory testing.

Methods We investigated the effects of test type (pooled PCR tests vs antigen rapid tests) and demographic and psychological factors on evaluations of the experience of being tested. A total of 569 children (8–17 years) in two major German cities completed online questionnaires between October and December 2021. Participants answered questions addressing test evaluation, vaccination status, pandemic-related stress, mental health difficulties and health-related quality of life.

Results Our results showed that overall test ratings were better for pooled PCR tests (p<0.001). Vaccine-willing students evaluated SARS-CoV-2 tests more positively than vaccine-unwilling students, regardless of test type (p<0.001). Children with mental health difficulties (abnormal/borderline Strengths and Difficulties Questionnaire (SDQ) scores) evaluated SARS-CoV-2 tests more negatively than children with normal SDQ scores (p<0.001). Additionally, children who reported better health-related quality of life and children with less pandemic-related stress rated the tests more positively.

Conclusions Our results suggest that there are differences in the appraisal of the test types and that specific subgroups’ experiences of regular testing vary. Our study provides insights for policy-makers in future pandemics and raises questions regarding parallels between testing and vaccination hesitancy. Moreover, our study demonstrates the feasibility and value of collecting data directly from a large cohort of children in order to understand their experiences.

What is already known on this topic

  • Data on the experience of routine SARS-CoV-2 testing in asymptomatic children and adolescents are scarce. Moreover, factors associated with test appraisal are not sufficiently understood and there are no comparisons of different test types.

What this study adds

  • We found that young people give pooled PCR tests better ratings than rapid antigen tests. Additionally, children who are vaccine willing, children without mental health difficulties and children who report better health-related quality of life give better overall test scores.

How this study might affect research, practice or policy

  • This study provides data directly from children about their experiences of a mandatory public health measure; this could help policy-makers, practitioners and researchers to take young people’s perspectives into account and better balance competing goals in implementing public health measures. Moreover, it demonstrates the feasibility and value of rapidly collecting direct data from large cohorts of children.

Introduction

Public health measures implemented during the COVID-19 pandemic had—and continue to have—dramatic consequences for children and adolescents. Decreased in-person social contact, isolation and increased screen time through home schooling are just some of the many ways that young people’s everyday lives were affected, regardless of whether or not they were infected.1

Given that the medical risks of COVID-19 for most young people are low, the most significant and widespread risks of the pandemic for this age group arose from the public health measures themselves. Thus, one major challenge during the pandemic was to balance the negative effects of public health measures on children and adolescents against the (high) medical risks for other demographic groups if such measures were not implemented. This consideration is critical because childhood and adolescence are important and potentially vulnerable periods of sociocognitive development.2 3 Investing in children’s health is critical not only for individual flourishing but also to ensure the beneficial development of whole societies, as highlighted by the WHO-UNICEF-Lancet commission ‘A future for the world’s children’.3

Yet early in the pandemic, policy-makers did not give sufficient weight to children’s rights, and children had no feasible opportunities to raise possible concerns regarding public health measures. Moreover, healthcare researchers investigating COVID-19 did not adequately consider children’s and adolescents’ experiences.4 As a result, young people’s perspectives on public health measures that directly and significantly affected them were neglected. To address this, Jörgensen et al suggested adding the pillars ‘preparation (for future child health crisis)’ and ‘power (authority of children’s voices, which requires meaningful participation)’5 to the existing 3P-Network (provision, protection and participation), anchored in the United Nations Convention on the Rights of the Child.6

Although the SARS-CoV-2 pandemic is formally over, considering children’s opinions on public health measures continues to be important. The frequency of pandemics has increased over the past century7 and estimates for the lifetime risk of another pandemic range from 17% to 44%.8 Consequently, we need to prepare for future situations where tensions arise between the need to prevent the spread of infection and the desire to avoid subjecting children to mandatory public health measures. Data on perspectives from children themselves could yield new insights into how policy-makers, public health authorities, schools and researchers could better balance such considerations.

Here, we present data on children’s perspectives regarding mandatory SARS-CoV-2 testing in schools. In Germany, schools were fully or partially closed for 38 weeks in total9 (although evidence on the efficacy of school closures is equivocal10). To mitigate the risks of reopening schools, many governments required children to undergo regular SARS-CoV-2 testing.

Although the medical risks of testing in schools were low, little is known about how children perceived this testing. Children’s experience of being subjected to mandatory testing could influence their views and behaviour regarding other public health measures, both now and in the future, particularly if their experiences were negative. People’s thoughts and feelings play a critical role in their acceptance of public health measures11 and low trust in such measures is associated with low compliance.12 Although there was, overall, high acceptance of public health measures during the pandemic,13 most data come from adults. Regarding SARS-CoV-2 testing in particular, the limited data from adults show high acceptance.14–17 A large Norwegian cross-sectional study of voluntary regular testing showed high compliance, especially among secondary school students.18 Beyond this, data on acceptance of routine testing in asymptomatic children are scarce and come from small cohorts.19–21

In line with Jörgensen et al’s pillars ‘power’ and ‘preparation’,5 our study aims to close this knowledge gap by investigating children’s appraisals of routine SARS-CoV-2 testing in schools. We sought to address the following questions:

  1. How do children appraise two different routine SARS-CoV-2 test types (rapid antigen tests and pooled PCR tests)? What are the effects of demographic factors? What emotions do children associate with the two different SARS-CoV-2 test types?

  2. What is the relationship between test ratings and SARS-CoV-2 vaccine hesitancy?

  3. How do test ratings relate to mental health difficulties, pandemic-related stress/difficulties and health-related quality of life?

Methods

Questionnaires and the recruitment strategy were developed by a multiprofessional team of healthcare researchers. Data were collected between October and December 2021 using online questionnaires. Participants were recruited by distributing links and QR codes in schools, day care facilities, hospitals and parent organisations in two major German cities (Freiburg and Cologne) and inviting children aged 8–17 years to complete the online questionnaires via REDCap.22 23 Data from parents and caregivers of children aged 4–17 years were collected in parallel and have been reported separately.24

During the data collection period, SARS-CoV-2 incidence in Germany ranged from 91 per 100 000 in October to >200 per 100 000 in November–December 2021.25 The Delta variant, which was associated with more hospitalisations than other variants, was predominant during this period. Vaccination against SARS-CoV-2 was approved and recommended for children aged ≥12 years; vaccination for children aged ≥5 years was approved only after our data collection period. By 20 November 2021, 61.1% of children aged ≥12 years in Germany had been vaccinated at least once; 50.6% were fully vaccinated.26

Participants provided demographic data and test information and completed a set of questionnaires addressing test evaluation, vaccination status, pandemic-related stress, mental health difficulties and health-related quality of life (HRQoL).

Patient and public involvement

Neither patients nor the public were involved.

SARS-CoV-2 test type(s) and evaluation

Regular SARS-CoV-2 testing for pupils attending in-person lessons was mandatory in Germany during the data collection period. The most common test methods were rapid antigen tests and saliva-based pooled PCR tests (the ‘Lolli’ method).27 28 Both methods entailed multiple tests each week. If a school used both methods, participants reported on the test type they had most recently experienced. Participants rated the SARS-CoV-2 tests using the standard German school grading system (1=excellent to 6=fail). Additionally, participants received an Emotional Words List and reported on a 4-point Likert scale (0=not at all to 3=very) how strongly they experienced each of 22 emotions (eg, ängstlich (fearful), beruhigt (reassured), missgestimmt (grumpy, ill tempered), fröhlich (cheerful)) when performing the SARS-CoV-2 test.29 Item scores are summed to give scores on the positive domain and the negative domain, as well as positive and negative subdomains. We focused on the positive and negative domains and the three negative subdomains (A: bad temperedness/annoyance, B: anxiety/sadness and C: deactivation).
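
To make the scoring concrete, the Python sketch below sums the 0–3 Likert ratings into the positive domain and the three negative subdomains. The item-to-domain assignment shown is a hypothetical illustration only, not the instrument’s published scoring key, and only a few of the 22 items are listed.

```python
# Illustrative scoring sketch for the Emotional Words List.
# NOTE: the mapping of emotions to domains below is a hypothetical example;
# the published instrument defines the actual item-domain key.

# Example item-to-domain assignment (assumed for illustration only)
ITEM_DOMAINS = {
    "froehlich (cheerful)": "positive",
    "beruhigt (reassured)": "positive",
    "missgestimmt (grumpy)": "neg_A_bad_temper",  # bad temperedness/annoyance
    "aengstlich (fearful)": "neg_B_anxiety",      # anxiety/sadness
    "lustlos (listless)": "neg_C_deactivation",   # deactivation
}

def score_emotional_words(responses: dict[str, int]) -> dict[str, int]:
    """Sum 0-3 Likert responses into domain scores.

    The negative domain is the sum of its three subdomains (A, B, C).
    """
    scores = {"positive": 0, "neg_A_bad_temper": 0,
              "neg_B_anxiety": 0, "neg_C_deactivation": 0}
    for item, rating in responses.items():
        if not 0 <= rating <= 3:
            raise ValueError(f"Rating for {item!r} must be 0-3, got {rating}")
        scores[ITEM_DOMAINS[item]] += rating
    scores["negative"] = (scores["neg_A_bad_temper"]
                          + scores["neg_B_anxiety"]
                          + scores["neg_C_deactivation"])
    return scores

# Example: one child's (hypothetical) responses
print(score_emotional_words({
    "froehlich (cheerful)": 2,
    "beruhigt (reassured)": 3,
    "missgestimmt (grumpy)": 1,
    "aengstlich (fearful)": 0,
    "lustlos (listless)": 1,
}))
```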

Mental health difficulties

Participants completed the 25-item version of the Strengths and Difficulties Questionnaire (SDQ)30 to screen for emotional and behavioural difficulties. Here, we focused on the total difficulties score, which was categorised as being within the normal range (≤14) or borderline/abnormal (>14).
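
A minimal sketch of this categorisation, assuming the total difficulties score has already been computed from the 25 items:

```python
def sdq_category(total_difficulties: int) -> str:
    """Categorise an SDQ total difficulties score as used in this study:
    normal range (<=14) vs borderline/abnormal (>14)."""
    return "normal" if total_difficulties <= 14 else "borderline/abnormal"

print(sdq_category(12))  # -> normal
print(sdq_category(17))  # -> borderline/abnormal
```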

Health-related quality of life

Participants completed the KIDSCREEN-10,31 a short questionnaire with 10 items rated on a 5-point Likert scale, to assess general HRQoL. The KIDSCREEN-10 has good test–retest reliability (r=0.73; intraclass correlation (ICC)=0.72) and internal consistency (Cronbach’s alpha for the current study=0.87).

Pandemic-related stress

To evaluate COVID-19 pandemic-related stress, we used a questionnaire (CBB)32 that assesses the quality of social interactions, educational burden at school, leisure time activities and emotional responses to the pandemic. Responses are given on a 5-point Likert scale ranging from ‘much worse’ to ‘much better’.

Statistical analyses

Statistical analyses were performed in SPSS V.29.0 (IBM). To examine the effects of gender, school type, age (within the subgroup who attended secondary grammar school), vaccination status and mental health difficulties on ratings of the two SARS-CoV-2 test types, we used ordinary multiway analysis of variance (ANOVA). For vaccination status, we conducted separate analyses for participants aged 12–17 years and those under 12 years, since at the time of data collection, vaccination was recommended for children aged ≥12 years but not for younger children. We also used multiway ANOVA to examine the effects of test type on positive domain scores on the Emotional Words List. Scores on the negative domain and its three subdomains were strongly right skewed; we therefore used non-parametric (Mann-Whitney) tests to analyse effects of test type on these scores. We examined associations between test ratings and HRQoL and pandemic-related stress using Pearson correlations. Some children had experienced both test types and reported on their most recent test. Because mixed test types may have influenced the results, we conducted sensitivity analyses by repeating each analysis with data from children who had experienced only one test type. Since the analyses were exploratory, all tests were two tailed. We did not attempt to replace missing values; rather, we excluded them from the statistical analyses.
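
Although the analyses were run in SPSS, the Python sketch below illustrates the analytic approach (multiway ANOVA, Mann-Whitney tests, Pearson correlations and the sensitivity analysis) on simulated data; all variable names and values are hypothetical and do not reproduce the study data.

```python
# Illustrative re-implementation of the analysis pipeline on simulated data.
# The study itself used SPSS V.29.0; this sketch only mirrors the approach.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 569
df = pd.DataFrame({
    "test_type": rng.choice(["pooled_pcr", "antigen"], size=n),
    "gender": rng.choice(["girl", "boy"], size=n),
    "rating": rng.integers(1, 7, size=n),     # school grade 1 (excellent) to 6 (fail)
    "neg_domain": rng.poisson(3, size=n),     # right-skewed negative-emotion score
    "kidscreen": rng.normal(50, 10, size=n),  # HRQoL (higher = better)
    "only_one_test_type": rng.choice([True, False], size=n, p=[0.7, 0.3]),
})

# 1. Multiway ANOVA: effects of test type and gender on overall test rating
model = ols("rating ~ C(test_type) * C(gender)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# 2. Non-parametric comparison of the right-skewed negative domain scores
pcr = df.loc[df.test_type == "pooled_pcr", "neg_domain"]
ag = df.loc[df.test_type == "antigen", "neg_domain"]
print(stats.mannwhitneyu(pcr, ag, alternative="two-sided"))

# 3. Pearson correlation between test ratings and HRQoL
print(stats.pearsonr(df["rating"], df["kidscreen"]))

# 4. Sensitivity analysis: repeat an analysis using only children who
#    experienced a single test type
sens = df[df["only_one_test_type"]]
sens_model = ols("rating ~ C(test_type) * C(gender)", data=sens).fit()
print(sm.stats.anova_lm(sens_model, typ=2))
```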

Results

Full data sets were available for 589 children. Due to low numbers, we excluded gender-diverse participants (n=4), those whose most recent test was an antigen spit test (n=8) and those whose last test was >7 days prior to the survey (n=8). The final sample, therefore, included 569 children. Data regarding demographics, SARS-CoV-2 testing and vaccination status are summarised in table 1.

Table 1
Demographics of participants, including SARS-CoV-2 testing data and vaccination status

We examined children’s overall ratings of the two test types, together with differences based on gender and school level (primary/secondary). Pooled PCR tests received better ratings than rapid antigen tests (main effect of test type, F(1, 549)=28.400, p<0.001, partial η2=0.049; estimated mean difference 0.95, 95% CI (0.60, 1.30)). The sensitivity analysis showed the same pattern of results. We found no statistically significant effects of age among secondary grammar school students (see online supplemental material 1 for details). Participants with unclear (eg, Steiner school) or missing school type were excluded from this analysis (n=13).

Regarding emotions associated with SARS-CoV-2 testing, the pooled PCR test group reported higher mean positive domain scores than the rapid antigen test group (main effect of test type, F(1, 565)=36.524, p<0.001, partial η2=0.061; estimated mean difference 2.17, 95% CI (1.46, 2.87)). The sensitivity analysis yielded the same pattern of effects. The pooled PCR test group also reported lower scores for the negative domain (mean rank 274.61) than the antigen test group (mean rank 308.0; Mann-Whitney U test, Z=−0.261, p=0.024). The same held for negative subdomain A (bad temperedness/annoyance), Z=−3.394, p<0.001, and negative subdomain B (anxiety/sadness), Z=−3.987, p<0.001. For negative subdomain C (deactivation), the pooled PCR test group associated testing with higher deactivation levels (mean rank 295.8) than the antigen test group (mean rank 261.1; Mann-Whitney U test, Z=−2.505, p=0.012). The sensitivity analysis showed the same pattern of results for the negative domain and for negative subdomains A and B. For negative subdomain C, the difference was no longer statistically significant in the sensitivity analysis (p=0.061).

We also examined effects of vaccination status. Among participants aged 12–17 years, vaccinated or vaccine-willing adolescents gave the tests significantly better ratings than unvaccinated and vaccine-unwilling adolescents (figure 1; main effect of vaccination status, F(1, 367)=110.650, p<0.001, partial η2=0.232; estimated mean difference 1.69, 95% CI (1.38, 2.01)). The pooled PCR tests received significantly better ratings than the antigen tests (main effect of test type, F(1, 367)=29.088, p<0.001, partial η2=0.073; estimated mean difference 0.87, 95% CI (0.55, 1.18)). The interaction was not significant. In the sensitivity analysis, the two main effects remained significant; in addition, a significant interaction (F(1, 265)=4.211, p=0.041, partial η2=0.016) arose because the difference between the two test types was significantly larger for unvaccinated/vaccine-unwilling participants (mean difference 1.68, 95% CI (0.89, 2.47)) than for vaccinated/vaccine-willing participants (mean difference 0.80, 95% CI (0.49, 1.10)). However, some subgroups in this analysis were extremely small.

Figure 1

Test ratings for the two test types (pooled PCR (‘Lolli’) tests and rapid antigen tests), separated by vaccination status. Test appraisal was measured with the standard German school grading system (1=excellent to 6=fail).

Among children under 12 years, vaccine-willing children rated the tests statistically significantly better than those who were vaccine unwilling (main effect of vaccination status, F(1, 146)=36.786, p<0.001, partial η2=0.201; estimated mean difference 1.40, 95% CI (0.94, 1.86)). The effect of test type and the interaction were not statistically significant. In the sensitivity analysis, both main effects were statistically significant: not only did vaccine-willing children give the tests significantly better ratings than vaccine-unwilling children (as in the main analysis), but the pooled PCR tests also received better ratings than the antigen tests (F(1, 126)=5.175, p=0.025, partial η2=0.002), consistent with the earlier analyses. The interaction was not statistically significant.

An ANOVA examining the influence of mental health difficulties on testing experiences yielded three significant main effects: test type, F(1, 561)=51.108, p<0.001, partial η2=0.083; SDQ category, F(1, 561)=38.830, p<0.001, partial η2=0.065; and gender, F(1, 561)=11.204, p<0.001, partial η2<0.020. In addition, the interaction between gender and SDQ category was significant, F(1, 561)=5.401, p=0.020, partial η2=0.010. This arose because the gender difference in test ratings was statistically significant among those with borderline/abnormal SDQ scores (estimated mean difference 0.75, 95% CI (0.28, 1.22)) but not among those with normal SDQ scores (estimated mean difference 0.14, 95% CI (−0.08, 0.35)) (see figure 2). In the sensitivity analysis, the main effects of SDQ category and test type remained statistically significant, ps<0.001, but the main effect of gender and the interaction were no longer statistically significant.

Figure 2

Mean SARS-CoV-2 test ratings for boys and girls by SDQ score category (normal or borderline/abnormal), not separated by test type. Test appraisal was measured with the standard German school grading system (1=excellent to 6=fail). SDQ, Strengths and Difficulties Questionnaire.

Better HRQoL, as measured by KIDSCREEN scores, was statistically significantly correlated with better (ie, numerically lower) test ratings, r(567)=−0.283, 95% CI (−0.357, −0.206). Similarly, children who reported lower levels of pandemic-related stress/difficulties, as indicated by CBB scores, gave the tests better ratings, r(567)=0.308, 95% CI (0.232, 0.380). Neither correlation differed statistically significantly between test types, and the sensitivity analyses yielded similar patterns of effects.

Discussion

Our data, gathered directly from a large cohort of children, help to narrow a knowledge gap in understanding children’s experiences of being subjected to a regular, mandatory public health measure in school. In summary, our main findings were as follows:

  1. Overall test ratings were better for pooled PCR tests. We found no significant effects of school type or age on test ratings. Children in the pooled PCR group reported more positive test-related emotions and fewer negative emotions (eg, anxiety, annoyance). Interestingly, however, children in the pooled PCR group also reported more deactivation emotions (eg, tiredness, sleepiness and listlessness).

  2. COVID-19 vaccinated or vaccine-willing students evaluated SARS-CoV-2 tests more positively than unvaccinated or vaccine-unwilling students, regardless of test type.

  3. Children with mental health difficulties (abnormal/borderline SDQ scores) evaluated SARS-CoV-2 tests more negatively than children with normal SDQ scores. Similarly, children who reported better HRQoL and children with less pandemic-related stress also gave the tests better scores. These results were independent of the test type.

One strength of our study is the large sample size, which includes participants from two different areas in Germany (Cologne and Freiburg). Another major strength is that the reported data come directly from children and adolescents, allowing us to explore their emotional experience directly rather than via proxy reports through caregivers.

An important limitation is that the sample was not representative; this is evident from the fact that 80% of our data were collected from secondary grammar school students, whose experiences may not reflect those of other groups. Further, we differentiated only between test types (PCR vs antigen) and could not differentiate between nasal and oral rapid antigen swabs. Swab location might influence the test experience; however, most rapid antigen tests used at the time were nasal.

Overall, our study adds to existing data on the acceptance of SARS-CoV-2 tests in general and the comparison of different sampling techniques. Schuster et al found that students report a preference for nasal swabs over saliva tests.19 However, they did not compare different test types (PCR vs antigen) and their sample size was rather small (67 students). Our study extends their findings with data from a large cohort and a comparison of different test types (not just sampling location). Franconeri et al found good compliance and high satisfaction with regular voluntary testing among primary and secondary students; they focused on satisfaction with the implementation of regular testing rather than on the emotional experience of being tested.18 Our data extend this work to mandatory tests and to the actual experience of being tested. Moreover, we collected data from a wide age spectrum, which complements Unger et al, who explored the acceptance of regular testing in focus group discussions and found that students were in favour of testing at schools because it facilitated the return to in-person classes; however, they interviewed only high school students (grades 10–12).21

In a parallel project,24 we investigated how parents evaluated the testing experience for their children, along with parents’ reports on their children’s responses and attitudes towards SARS-CoV-2 tests. The results were mostly in line with the data reported here: parents also preferred pooled PCR tests for their children, parents of unvaccinated children tended to give the tests worse ratings in general and parents of children with mental health difficulties gave worse ratings.

Considering our analysis, existing literature and the likelihood of future pandemics, our study aims to help prepare policy-makers for ‘the next pandemic’.

First, pooled PCR tests seem to be the preferred test option among children and adolescents. A recommendation for a specific test should always be made in light of current pandemic epidemiology and infection rates. Fear of infection can play an important role in how tests are perceived, and SARS-CoV-2 incidence during our data collection period was high, which may have heightened fear of COVID-19. Second, we advocate for age-specific support that weighs the specific needs of primary school students against those of adolescents. Third, children with mental health issues should be specifically prepared and supported in order to ensure a comfortable testing experience.

Our suggestions for ‘the next pandemic’ stem from direct insight into children’s and adolescents’ experiences. Our study demonstrates the feasibility of rapidly collecting data directly from a large cohort of children to obtain insights into their experiences with a public health measure that influences their everyday lives. In future pandemics, when public health measures might again be necessary, those measures could be adapted in real time to children’s needs. Our study could serve as an example of how to prepare for future crises (‘preparation’) and to give children ‘power’ and the chance to make a change.5 It demonstrates that it is possible and necessary to involve large cohorts of children in research about public health measures. Children’s rights are not a luxury and their right to participation6 should not be disregarded, not even in light of a worldwide pandemic.

Finally, our study raises relevant questions and ideas for future research. As one example, we found an association between vaccination status and testing acceptance. There is plenty of research regarding vaccine acceptance/hesitancy and associated factors.11 33 We argue that parallels between vaccine acceptance and testing acceptance should be investigated. For example, what is the relationship between these two forms of acceptance? Are there common factors? Does this have implications for the implementation of public health measures? We consider this an important area for future research, with the goal of understanding children’s experiences and motivations to comply with public health measures.