
Self-tests for COVID-19: What is the evidence? A living systematic review and meta-analysis (2020–2023)

  • Apoorva Anand,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada, Infectious Diseases and Immunity in Global Health, Research Institute of McGill University Health Centre, Montreal, Quebec, Canada

  • Fiorella Vialard,

    Roles Data curation, Formal analysis, Methodology, Writing – review & editing

    Affiliations Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada, Infectious Diseases and Immunity in Global Health, Research Institute of McGill University Health Centre, Montreal, Quebec, Canada, Faculty of Medicine, McGill University, Montreal, Quebec, Canada

  • Aliasgar Esmail,

    Roles Writing – review & editing

    Affiliation Centre for Lung Infection and Immunity, Division of Pulmonology, UCT Lung Institute and Department of Medicine, University of Cape Town, Cape Town, Western Cape, South Africa

  • Faiz Ahmad Khan,

    Roles Writing – review & editing

    Affiliations Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada, Faculty of Medicine, McGill University, Montreal, Quebec, Canada

  • Patrick O’Byrne,

    Roles Writing – review & editing

    Affiliation Faculty of Health Sciences, University of Ottawa, Ottawa, Ontario, Canada

  • Jean-Pierre Routy,

    Roles Writing – review & editing

    Affiliations Infectious Diseases and Immunity in Global Health, Research Institute of McGill University Health Centre, Montreal, Quebec, Canada, Faculty of Medicine, McGill University, Montreal, Quebec, Canada

  • Keertan Dheda,

    Roles Writing – review & editing

    Affiliation Centre for Lung Infection and Immunity, Division of Pulmonology, UCT Lung Institute and Department of Medicine, University of Cape Town, Cape Town, Western Cape, South Africa

  • Nitika Pant Pai

    Roles Conceptualization, Funding acquisition, Methodology, Resources, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    nitika.pai@mcgill.ca

    Affiliations Centre for Outcomes Research and Evaluation, Research Institute of the McGill University Health Centre, Montreal, Quebec, Canada, Infectious Diseases and Immunity in Global Health, Research Institute of McGill University Health Centre, Montreal, Quebec, Canada, Faculty of Medicine, McGill University, Montreal, Quebec, Canada

Abstract

A COVID-19 self-testing strategy (COVIDST) can rapidly identify symptomatic and asymptomatic SARS-CoV-2-infected individuals and their contacts, potentially reducing transmission. In this living systematic review, we evaluated the evidence for real-world COVIDST performance. Two independent reviewers searched six databases (PubMed, Embase, Web of Science, World Health Organization database, Cochrane COVID-19 registry, Europe PMC) for the period April 1st, 2020, to January 18th, 2023. Data from studies evaluating COVIDST against laboratory-based conventional testing that reported diagnostic accuracy, feasibility, acceptability, impact, and qualitative outcomes were abstracted. Bivariate random-effects meta-analyses of COVIDST accuracy were performed (n = 14). Subgroup analyses (by sampling site, symptomatic/asymptomatic infection, supervised/unsupervised strategy, with/without digital supports) were conducted. Data from 70 included studies, conducted across 25 countries with a median sample size of 817 (range: 28–784,707), were pooled. Specificity (98.37–99.71%) and the diagnostic odds ratio (DOR) were high overall, irrespective of subgroup. The highest sensitivities were reported for: a) symptomatic individuals (73.91%, 95%CI: 68.41–78.75%; n = 9), b) mid-turbinate nasal samples (77.79%, 95%CI: 56.03–90.59%; n = 14), c) the supervised strategy (86.67%, 95%CI: 59.64–96.62%; n = 13), and d) use of digital interventions (70.15%, 95%CI: 50.18–84.63%; n = 14). Lower sensitivity was attributed to absence of symptoms, errors in test conduct, and absence of supervision or digital support. We found no difference in COVIDST sensitivity between the delta- and omicron-predominant periods. Digital supports increased confidence in COVIDST reporting and interpretation (n = 16). Overall acceptability was 91.0–98.7% (n = 2), with lower acceptability reported for daily self-testing (39.5–51.1%). Overall feasibility was 69.0–100.0% (n = 5), with lower feasibility (35.9–64.6%) for serial self-testing. COVIDST decreased closures of schools, workplaces, and social events (n = 4). COVIDST is an effective rapid screening strategy for home-, workplace-, or school-based screening, for symptomatic persons, and for preventing transmission during outbreaks. These data will guide COVIDST policy. Our review demonstrates that COVIDST has paved the way for self-testing in pandemics worldwide.

Introduction

COVID-19 cases are rapidly declining due to extensive vaccine coverage, but clustering is reported in select subgroups (i.e., unvaccinated and immunosuppressed individuals) [1]. A shift towards greater use of self-tests was observed towards the end of 2021. Widespread availability of rapid self-test kits, whether through public distribution systems or through private pharmacies, convenience stores, and online websites, empowered individuals to exercise autonomy in managing their exposures and guiding their actions.

COVID-19 self-testing (COVIDST) is defined as a strategy in which individuals collect their own samples, test themselves, interpret the results, and use those results to guide their post-test actions. COVIDST has particularly expanded access to testing in the global north. However, it has greater value in areas with limited resources, where laboratory-based conventional testing is expensive or absent, and during outbreaks [2].

COVIDST is performed with rapid diagnostic tests (RDTs). It detects active COVID-19 infection with a rapid turnaround time (TAT), thereby offering a convenient, user-friendly alternative to conventional lab-based reverse transcription polymerase chain reaction (RT-PCR) tests. Conventional tests require long wait times and longer TATs, which increase the risk of COVID-19 exposure [3, 4]. Alternatively, COVIDST can reduce dependence on healthcare workers (HCWs) and reduce exposure in healthcare settings by allowing self-testing in safe, private spaces. Rapid identification of symptomatic SARS-CoV-2-infected individuals prevents exposure of community contacts and allows timely knowledge of infection status, prompting informed action plans. Initiation of an action plan can greatly reduce transmission and mitigate the burden on healthcare systems.

A Cochrane systematic review that assessed the diagnostic accuracy of HCW-performed RDTs reported an average sensitivity of 72% in symptomatic individuals and 58% in asymptomatic individuals. However, the researchers did not report outcomes beyond accuracy where RDTs were used as self-tests [5]. The World Health Organization (WHO) guidelines on COVIDST implementation released in early 2022 provide evidence on diagnostic accuracy [6]. Although the COVID-19 pandemic has waned, a few randomized controlled trials (RCTs) and observational studies on COVID-19 self-testing, and on multiplexed point-of-care self-testing for multiple respiratory infections including COVID-19, are still being conducted and published. An explosion of literature in 2022–2023 on real-world performance underscores the need for a comprehensive, living review of evidence beyond diagnostic performance.

The overarching goal of this living systematic review is to update existing policies, fill evidence gaps, and provide guidance to enhance quality of tests and reporting systems in line with WHO guidelines, and to guide future outbreaks of COVID-19.

The review aims to: a) explore variability in COVIDST diagnostic performance across the spectrum of its use in a meta-analysis; b) summarize feasibility, acceptability, accessibility, and public health impact of COVIDST; and c) document qualitative outcomes.

Methods

We registered our protocol on PROSPERO (CRD42022314799) [7]. No patients, study participants, or members of the public were involved in the design, conduct, or reporting of this review.

Data sources and searches

Two independent reviewers (AA, FV) searched five electronic databases (PubMed, Embase, Web of Science, the WHO database, and the Cochrane COVID-19 registry) from April 1st, 2020, to January 18th, 2023, for peer-reviewed journal articles and conference abstracts. Grey literature was searched through the Europe PMC pre-prints database (Fig 1). No restrictions were placed on language or publication year. We will update the review until August 1st, 2023.

Search string

(COVID-19* OR covid* OR “SARS-CoV-2*”) AND

(“Self-test*” OR “Self test*” OR “Self-screen*” OR “Self screen*” OR “home test*” OR “at home test*” OR “at-home test*”) (S1 Box).

Study selection

All study designs (observational and experimental) evaluating COVIDST strategies were included. Modelling studies, commentaries, narratives, opinion pieces, review articles, and case reports were excluded. Titles, abstracts, and full texts were independently screened for eligibility based on pre-specified inclusion and exclusion criteria. Disagreements were resolved by discussion and consultation with a senior reviewer (NPP) (Fig 1).

Data extraction and quality assessment

Data across all global geographic regions (i.e., low-, middle-, and high-income) were independently abstracted. Interventions included molecular/antigen/antibody COVID-19 self-tests as the index tests. Comparators included conventional RT-PCR testing by HCWs or other trained professionals.

The primary outcome was diagnostic accuracy (i.e., sensitivity, specificity, and diagnostic odds ratio [DOR]) [8]. Study authors were contacted when data were not completely available.

Secondary outcome data on feasibility, acceptability, new infections detection, preferences, and impact were abstracted and reported with summary estimates of proportions and 95% confidence intervals (CI) [8, 9]. Tertiary outcomes included qualitative measures on motivations, facilitators, and barriers to test (S1 Table).

The Quality Assessment of Diagnostic Accuracy Studies tool 2 (QUADAS-2) was used to assess risk of bias in diagnostic accuracy studies (DAS). The Newcastle-Ottawa Scale (NOS) was used for observational studies, and the Cochrane Risk of Bias tool 2 (RoB2) was deployed for randomized controlled trials (RCTs) [10–14].

Data synthesis and meta-analyses

Diagnostic accuracy was explored in forest plots, and heterogeneity was evaluated using the I² statistic. Variability in COVIDST diagnostic performance was first explored using bivariate random-effects meta-analysis.

Next, subgroup analyses were conducted for: 1) Symptom status (asymptomatic versus symptomatic individuals); 2) Strategy (supervised versus unsupervised testing strategy); 3) Site of self-sampling specimens (anterior nasal versus mid-turbinate nasal versus combined nasal-oropharyngeal versus saliva); 4) Digital support (i.e., websites, smartphone applications, test readers, other online tools) presence versus absence.

All analyses were conducted in R and RStudio statistical software (Version 2021.09.01, Build 372) using mada and meta packages [15, 16].
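
To illustrate the approach, the sketch below shows how a bivariate (Reitsma) random-effects model can be fitted with the mada package; the 2x2 counts are illustrative placeholders rather than data from the included studies, and the exact model options used in this review may differ.

```r
# Minimal sketch of a bivariate (Reitsma) random-effects meta-analysis with mada.
# All 2x2 counts below are illustrative placeholders, not data from this review.
library(mada)

accuracy <- data.frame(
  TP = c(45, 60, 30, 52),    # true positives per study
  FN = c(15, 10, 20, 18),    # false negatives per study
  FP = c( 5,  8,  2,  6),    # false positives per study
  TN = c(500, 700, 300, 450) # true negatives per study
)

madad(accuracy)              # per-study sensitivity/specificity with 95% CIs

fit <- reitsma(accuracy)     # bivariate random-effects model
summary(fit)                 # pooled sensitivity and false positive rate

plot(fit, sroclwd = 2, main = "SROC curve")  # summary ROC with confidence region
points(fpr(accuracy), sens(accuracy))        # individual study points
```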

Results

Study selection

Of 146 studies assessed during full-text review, 85 were excluded. Reasons for exclusion were duplicate studies (n = 7), not self-testing (n = 42), no outcomes of interest (n = 6), serology studies (n = 8), study design (n = 8), and not COVID-19 (n = 14). Seventy studies (peer-reviewed = 65, preprints = 5) were included. Nine of these studies were retrieved through bibliographic search (Fig 1).

Study characteristics

Of the seventy studies conducted across 25 countries, a majority, i.e., sixty-three (90.0%), were conducted in high-income countries (HICs) and eight (11.43%) in low- and middle-income countries (LMICs) [17]. Three studies were conducted in multiple countries [3, 18, 19]. Sample sizes ranged from 28 to 784,707, with a median of 817 (S2 Table).

COVIDST strategies included mass screening (n = 32), targeted screening (i.e., school, college, university, nursing home, sports club) (n = 28), and healthcare facility-based screening (n = 8).

Populations studied were: 1) general population members (n = 39), 2) teachers, parents, school, and university students (n = 11), 3) healthcare and laboratory staff (n = 10), 4) hospital patients (n = 5), 5) drug addiction treatment patients (n = 1), 6) office employees (n = 1), 7) nursing homes residents and staff (n = 1), 8) music festival attendees (n = 1), and 9) Black, Indigenous, and People of Colour (BIPOC) community (n = 1).

Sampling sites used were anterior and mid-turbinate nasal, salivary, nasopharyngeal, and oropharyngeal.

Studies were conducted in asymptomatic (n = 17), symptomatic (n = 3), or both asymptomatic and symptomatic (n = 27) individuals.

Thirty-four studies reported an unsupervised/at-home self-testing strategy, ten studies evaluated a supervised self-testing strategy, and two studies evaluated both. In supervised self-testing, the entire procedure was observed by trained HCWs or research staff, who may or may not have intervened when the test was being conducted incorrectly or when assistance was required. In unsupervised COVIDST, unobserved testing was performed in test centres or at home.

Digital supports for COVIDST (n = 20 studies) included websites, smartphone applications, and video-based instructions. Of these, nine studies reported digital components that aided in improving self-test accuracy.

Synthesized results for primary outcome (diagnostic accuracy)

Diagnostic performance of COVIDST was evaluated with: A) narrative synthesis and B) meta-analysis.

First, we reported sensitivity and specificity by test device, day of symptom onset, SARS-CoV-2 variant, and cycle threshold (CT) value for studies we were unable to meta-analyze owing to a paucity of data (Narrative synthesis, Primary outcome). We reported 95% confidence intervals (95% CIs) where available. Subsequently, we generated forest plots from pooled sensitivity and specificity (n = 14) where possible. For subgroups, we conducted a meta-analysis with pooled data (Meta-analysis results, Primary outcome).

Narrative synthesis.

Diagnostic accuracy results from individual studies were summarized across test devices (n = 14), by symptom onset (n = 4), by CT value (n = 3), and by variants (n = 2). We could not perform a meta-analysis for these categories.

Four studies reported on diagnostic performance across 15 different COVIDST devices (S3 Table). Of these, four test devices reported WHO-recommended sensitivities above 80%: Boson SARS-CoV-2 antigen test card (98.18%, 95% CI: 96.74%–99.62%), Biosynex in symptomatic populations (93.8%; 95% CI: 79.3%–98.4%), Biosynex in asymptomatic populations (83.3%; 95% CI: 73.4%-90.0%), Standard Q by SD Biosensor (82.50%, 95% CI: 68.1%–91.3% and 94.38%, 95% CI: 87.54%-98.60%), and MP Bio (83.01%, 95% CI: 78.8%-86.7%). Specificities were above 91% for all devices [2026].

Four studies reported accuracies by day of symptom onset. Two studies reported sensitivities of 99.18% one day prior to symptom onset, 98.77–100% in the first 2 days, and 100% from day 2 to 7 after symptom onset [20, 25]. Conversely, a community-based study reported a sensitivity of 23% within 0–1 days and 66.67% within 2–4 days of symptom onset [23]. Finally, another study reported a sensitivity of 73% when the self-test was conducted within 0–5 days of symptom onset, as compared to 22% when conducted after 5 days [27].

Three studies reported on performance of self-test by cycle threshold (CT) values. Low CT values of positive RT-PCR results indicated a high viral load in swab samples. RT-PCR and self-test results were compared; CT value was checked for each self-test result. One study detected 100% of infections with COVIDST when CT values were below 20, 92% when CT values were between 20–30, and 33.33% when CT values were above 30 [20]. COVIDST in 2 studies detected: 1) symptomatic cases when mean CT value was 23.1 (IQR: 19.5–30.0) and median CT value was 14 (IQR: 12.0–18.0); 2) asymptomatic cases when mean CT value was 28.2 (IQR: 25.0–33.0) [23, 28].

Two studies compared COVIDST performance in delta versus omicron variant infected populations. In one study, sensitivity decreased from 87.0% in the delta period to 80.9% in the omicron period [26]. Conversely, in another study, same-day sensitivity of self-tests was higher (22.1%, 95%CI: 15.5–28.8%) in omicron period versus 15.5% (95%CI: 6.2–24.8%) in delta period [29].

Meta-analyses results.

Fourteen studies reported data on accuracy [19–26, 30–35]. First, we pooled sensitivities and specificities to create forest plots (Figs 2 and 3). Following this, we assessed heterogeneity and conducted subgroup analyses; results are summarized below.

Fig 2. Forest plot—Sensitivity (Sn) of included studies (n = 14).

https://doi.org/10.1371/journal.pgph.0002336.g002

Fig 3. Forest plot—Specificity (Sp) of included studies (n = 14).

https://doi.org/10.1371/journal.pgph.0002336.g003

Our forest plots showed a point estimate for pooled sensitivity (n = 14) of 75.0% (95%CI: 59.0%-86.0%) (Fig 2). Sensitivities varied from 25% to 98%. The random-effects model heterogeneity statistic (I²) was high at 97%. The point estimate for pooled specificity (n = 14) was 100% (95%CI: 99.0%-100.0%) (Fig 3). Specificities varied from 97% to 100%. The random-effects model I² was high at 94%.
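
One plausible way to obtain such pooled proportions, forest plots, and I² estimates is with the meta package's metaprop function, sketched below with placeholder counts; the review's exact settings (transformation and pooling method) may have differed.

```r
# Sketch: pooling per-study sensitivity as a proportion (TP out of all reference-
# standard positives) with the meta package, giving a forest plot and an I^2 value.
# Counts are illustrative placeholders, not data from this review.
library(meta)

dat <- data.frame(
  study = c("Study A", "Study B", "Study C", "Study D"),
  TP    = c(45, 60, 30, 52),
  FN    = c(15, 10, 20, 18)
)

m_sens <- metaprop(
  event   = TP,
  n       = TP + FN,        # denominator: all RT-PCR positives
  studlab = study,
  data    = dat,
  sm      = "PLOGIT",       # logit-transformed proportions
  random  = TRUE
)

summary(m_sens)             # pooled sensitivity, 95% CI, and I^2
forest(m_sens)              # forest plot of per-study and pooled estimates
```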

We performed subgroup analyses to explore this heterogeneity further. Summary receiver operating characteristic (SROC) curves were plotted for all subgroups (S1A–S1D Fig). Pooled sensitivity, specificity, and DOR estimates are provided in Table 1.
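
As a sketch of how such subgroup comparisons can be visualized, the bivariate model can be refitted within each subgroup and the resulting SROC curves overlaid; the `site` labels and all counts below are hypothetical, not data from the included studies.

```r
# Sketch: subgroup comparison by refitting the bivariate model per subgroup and
# overlaying SROC curves. The 'site' labels and all counts are hypothetical.
library(mada)

dat <- data.frame(
  TP   = c(40, 55, 20, 35, 48, 25, 30, 18),
  FN   = c(20, 15, 30, 25, 12, 28, 33, 27),
  FP   = c( 6,  4,  3,  7,  5,  2,  4,  6),
  TN   = c(400, 600, 250, 350, 500, 280, 320, 260),
  site = rep(c("mid-turbinate", "saliva"), each = 4)
)

fit_mt  <- reitsma(subset(dat, site == "mid-turbinate"))
fit_sal <- reitsma(subset(dat, site == "saliva"))

plot(fit_mt, sroclwd = 2, main = "SROC curves by sampling site")
lines(sroc(fit_sal), lty = 2)               # overlay the second subgroup's curve
legend("bottomright", legend = c("mid-turbinate", "saliva"), lty = c(1, 2))
```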

In subgroup analyses by sampling sites (n = 14), highest sensitivity was reported in samples from mid-turbinate sampling (77.79%, 95%CI: 56.03%-90.59%), followed by combined nasal-oropharyngeal sampling (69.69%, 95%CI: 58.96%-78.62%), and anterior nasal sampling (63.80%, 95%CI: 46.68%-78.0%, statistically significant). Sensitivity was lowest with salivary sampling (39.10%, 95%CI: 18.45%-64.57%). Specificity was above 98% irrespective of sampling site. DOR was highest for combined nasal-oropharyngeal specimens (303.00) and lowest for saliva specimens (98.80).

Nine out of fourteen studies reported diagnostic accuracy data based on presence/absence of symptoms. For symptomatic populations, sensitivity was 73.91% (95%CI: 68.41%-78.75%, statistically significant) versus 40.18% (95%CI: 21.52%-62.20%) for asymptomatic populations. Specificity was above 97% irrespective of symptomatic status. DOR was high at 249 for asymptomatic versus 175 in symptomatic populations.

Thirteen out of fourteen studies evaluated performance of supervised and unsupervised COVIDST. Supervised strategy reported a higher sensitivity of 86.67% (95%CI: 59.64%-96.62%) versus a sensitivity of 60.69% (95%CI: 50.31%-70.18%) in unsupervised strategy. Specificity was high at 99% irrespective of strategy. DOR was higher in supervised (1530.00) versus in unsupervised (181.00) COVIDST.

Fourteen studies analyzed COVIDST performance with/without digital supports. Sensitivity was higher with digital supports (70.15%, 95%CI: 50.08%-84.63%) than without (65.69%, 95%CI: 54.06%-75.70%). Specificity was 99% irrespective of presence/absence of digital supports. DOR was higher (409.00) with digital supports than without them (237.00).

All 14 studies included in our meta-analyses were conducted in high-income countries, and all were observational studies. Therefore, we were unable to explore heterogeneity by geographic region or study design. However, our subgroup analyses suggest that the heterogeneity could be attributed to sampling site, presence or absence of supervision, addition of digital supports to self-testing, and presence or absence of symptoms. Additional possible sources of heterogeneity include the day of testing after exposure to the virus or after developing symptoms, the type/brand of test used, the population conducting self-testing (general population vs. healthcare workers vs. hospital patients, etc.), different variants, and CT values.

Synthesized results for secondary outcomes

Test positivity (new infections detected).

Across twenty studies, new infections detected by COVIDST varied from 0.02% to 27% [22, 25, 27, 28, 31, 36–48]. In two other studies, test positivity varied from 12% to 83.3% during the delta wave and 41.7% to 87.2% during the omicron wave [29, 49]. In one study, point prevalence for at-home COVIDST was 3.7% compared to 5.5% for testing by HCWs [47].

Acceptability and willingness to use.

Thirteen studies reported an overall high acceptability and willingness to use COVIDST. COVIDST acceptability was high (91%-98.7%) in two studies, with higher acceptability in females (73.91%) versus males (60.09%) reported in another study [50–52]. Acceptability was lower (39.48%-51.1%) for daily self-testing [38, 40, 52]. Hesitancy to test (33.8%) and concerns about test accuracy (1%) made people decline COVIDST [40].

Across three studies in different populations, COVIDST uptake was 97% in school children, 92.5% in children with medical problems, and 45.2% in a mass self-testing study [41, 43, 53]. Across seven studies, willingness to use nasal self-tests ranged from 77% to 95.8% [2, 54–59].

Feasibility and usability.

Eighteen studies reported high COVIDST feasibility and ease of use. Usability threshold, defined as the ability to correctly conduct all critical self-test steps, was higher with digital supports.

An overall high feasibility was reported (69.6%-100%) across five studies [23, 40, 45, 60, 61]. In three studies, feasibility was lower for serial-testing COVIDST (35.9%-64.6%) [41, 50, 62]. The average completion rate was 4.3 self-tests over 4.8 weeks in another serial-testing study [62].

Across seven studies, participants found COVIDST easy to use (81%-100%) [22, 30, 34, 45, 59, 63]. Specifically, two studies reported a high ease of conducting at-home self-tests (95.7%), ease of reading self-test results (92%), and ease of remembering to test regularly (96%) [22, 38].

Across four studies, confidence in reporting test results and testing abilities was high (70%-98%) [30, 34, 38, 64]. Regular COVIDST by dentists improved perception of safety while treating patients by 49% [65].

Usability threshold was assessed in three studies. A high usability threshold was reported from Malawi (82.4%-90.4%) and Zimbabwe (65.4%-70.6%) [2]. In Germany, usability was 61.2%, while in France, it increased from 99.1% to 100% with video supports [23, 66].

Preference.

Across six studies, preference for COVIDST varied from 29% to 87.9% [32, 45, 51, 63, 64, 66]. Overall, COVIDST preference was higher among Caucasian people, urban populations, individuals with a college degree, and healthcare workers, as compared to ethnic minorities, rural populations, individuals with lower education levels, and those working in other occupations [32, 51, 59, 63, 67, 68]. In one study, 94% of participants preferred a throat swab-based self-test and 90% preferred saliva-based self-tests [55]. In another study, 95.4% of participants preferred over-the-counter vending machines to obtain self-test kits [69].

Impact outcomes.

Impact outcomes were evaluated in eighteen studies. In four studies, COVIDST reduced closures of institutions and public events. Regular COVIDST in a peri-urban primary school resulted in fewer school closures and decreased secondary infections in one study [70]. In another, daily mass COVIDST resulted in 8,292 workdays saved for essential workers [41].

Self-tests were also used as daily testing tools in high exposure HCWs, allowing them to quarantine immediately in case of a positive result and prevent transmission of infection [28]. In addition to healthcare settings, COVIDST facilitated the continuation of work of co-working health laboratory sites in a pandemic setting [31]. Furthermore, pre-event COVIDST allowed attendees to safely enjoy music concerts, wherein 87% of self-testers perceived a lower risk of contracting COVID-19 at the concert [71].

Three studies reported a faster TAT with COVIDST compared to conventional testing. In one study, a TAT of 15–30 minutes for COVIDST versus 24–48 hours for RT-PCR was reported [22]. Antigen self-tests had a mean TAT of 8.1 minutes (standard deviation: 1.3) [23]. In another study, self-tests identified 23.5% of infections within 24 hours, and 54.9% of infections within the next 48 hours, prior to obtaining RT-PCR results.

Impact of COVIDST on action plans (n = 7) and self-test result notification (n = 4) was reported. In four studies, willingness to notify close contacts and relevant authorities was 80%-97.6% [2, 52, 54, 57]. In two studies, a high proportion of respondents (80.78%-98.32%) were willing to seek post-test counselling following a positive result [52, 57]. In three studies, 93%-100% testers expressed willingness to self-isolate following a positive test result [2, 57, 72]. Although only 49% of HCWs believed that self-testers would self-isolate themselves following a positive result, they opined that self-testers would take steps to reduce infection transmission [2].

Across two studies, 54%-78.3% of participants preferred validating initial COVIDST results through repeat testing [23, 54, 57]. In three studies, 70.1%-92.6% self-testers sought confirmatory RT-PCR testing [39, 41, 54]. Children aged 5–11 years and 12–18 years with a positive unsupervised self-test result were more likely to obtain a confirmatory PCR test compared to supervised testers (Odds ratio = 3.48, 95%CI: 2.68–4.52 and Odds ratio = 2.16, 95%CI: 1.86–2.50, respectively) [36].

Qualitative outcomes.

Qualitative outcomes such as motivations, facilitators, and barriers were assessed in 26 studies.

Motivators to self-test were protecting one’s health and reducing infection transmission to close contacts, partaking in daily activities and physically accessing services, workplace safety, travelling, dining outside, and attending large gatherings [45, 54, 66, 71, 73, 74]. Higher motivations to test were linked to a higher socioeconomic status (SES) and ability to acquire test kits [45, 68, 71, 73].

COVIDST facilitators assessed in twelve studies included self-test training prior to use, the non-intrusive nature and ease of testing at home, an increased sense of safety, detailed self-test instructions, a faster turnaround time, and instructional videos [23, 28, 32, 57, 61, 62, 66, 68, 70, 75–79].

Across nine studies, COVIDST barriers included high costs, low trust in accuracy and reliability, anxiety, fear of stigma due to a positive result, hesitation in self-test conduct, uncomfortable self-swabbing procedures, difficulty following instructions and interpreting faint positive test lines, lack of perceived benefit, and inequitable access to COVIDST [21–23, 53, 62, 76, 78, 80–82].

Self-testing with digital supports.

Across fifteen studies, COVIDST digital supports used were: online platforms (n = 6), app-based COVIDST (n = 6), video-based instructions (n = 5), and online supervised COVIDST (n = 6) [22–24, 32, 34, 38–40, 45, 46, 49–51, 59–62, 64, 71, 83].

In four studies, app-assisted COVIDST allowed 98%-100% of participants to successfully interpret their test results [38, 50, 60, 83], while video-taped self-testing increased participants’ confidence (76%) in COVIDST results [71].

In another four studies, uploading a test result picture or reporting test results online was a requirement that allowed HCWs to monitor and isolate positive cases [40, 49, 51, 60, 64].

In a mass COVIDST study, digital supports increased result notification in 75% of self-testers [83]. A self-testing and COVID-19 exposure notification app utilized such self-reported COVIDST results to reduce risk of infection in non-infected app users [48, 49]. However, unincentivized and voluntary reporting with a digital assistant in one mass COVIDST study was low (4.6%) [83]. Also, digital reporting varied by test result; 3.2% reported positive test results and 1.8% reported negative test results [60]. One study reported that federal COVID-19 statistics did not include 42.8% of participants with a positive self-test result [48].

Risk of bias assessment

To assess publication bias in the studies included in the meta-analysis, a funnel plot was constructed (S2 Fig). A low risk of bias was estimated using Deeks’ method (p-value of 0.79).
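
Deeks’ test regresses the log diagnostic odds ratio on the inverse square root of the effective sample size, weighted by that sample size; a non-significant slope is read as little evidence of publication bias. Below is a minimal base-R sketch under that definition, with placeholder counts rather than the review’s data.

```r
# Sketch of Deeks' funnel plot asymmetry test: regress ln(DOR) on 1/sqrt(ESS),
# weighted by the effective sample size (ESS). Counts are placeholders.
deeks_test <- function(TP, FP, FN, TN) {
  lnDOR <- log(((TP + 0.5) * (TN + 0.5)) / ((FP + 0.5) * (FN + 0.5)))  # 0.5 correction
  ess   <- 4 * (TP + FN) * (FP + TN) / (TP + FN + FP + TN)             # effective sample size
  fit   <- lm(lnDOR ~ I(1 / sqrt(ess)), weights = ess)
  plot(lnDOR, 1 / sqrt(ess), xlab = "ln(DOR)", ylab = "1 / sqrt(ESS)")  # funnel plot
  summary(fit)$coefficients   # the slope's p-value indexes asymmetry
}

deeks_test(TP = c(40, 55, 20, 35), FP = c(6, 4, 3, 7),
           FN = c(20, 15, 30, 25), TN = c(400, 600, 250, 350))
```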

Using the QUADAS-2 tool (n = 14), we found a low risk of bias across all categories except for reference standards (unclear risk, n = 6) (S4A Table). Cohort studies (n = 13) had an average risk of bias in the comparability category (1 star, n = 7) (S4B Table). Similarly, cross-sectional studies (n = 41) also had an average risk of bias in the comparability category (1 star, n = 12) (S4C Table). One case-control study had an overall poor risk-of-bias score across all categories. Finally, the RoB2 tool was used for one qualitative RCT, wherein a low risk was observed for all domains except the selection of the reported result domain.

Discussion

This review demonstrates that COVIDST strategies are effective in screening for SARS-CoV-2 infections. Self-testing achieved a faster TAT to test result than conventional testing, and can be safely used in outbreak settings, prevent institutional closures, and reduce further transmission in occupational settings.

Diagnostic accuracy and caveats

Our meta-analyses demonstrated very high specificity and above-average sensitivity of COVIDST strategies. Specificity for COVIDST (across all tests) was consistently above 98%, regardless of subgroup. Specificity is computed as true negatives (TN) divided by the sum of true negatives and false positives (TN / [TN + FP]). If specificity is high, we can be confident that false-positive results are few, even when the person tested is asymptomatic.

In contrast, sensitivity for COVIDST varied across subgroups; highest sensitivities were reported for: a) mid-turbinate nasal specimens (77.79%, 95% CI: 56.03%-90.59%), b) tests conduct in supervised settings (86.67%, 95% CI: 59.64%-96.62%), c) symptomatic individuals (73.91%, 95% CI: 68.41%-78.75%), and d) digital COVIDST (70.15%, 95% CI: 50.08%-84.63%).

Sensitivity is computed as true positives (TP) divided by the sum of true positives and false negatives (TP / [TP + FN]); as false negatives increase, sensitivity drops. In symptomatic individuals, the highest sensitivities were reported within the first 5 days of symptom onset. In contrast, for asymptomatic individuals, sensitivities were consistently low (40.18%).
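
To make these definitions concrete, a small base-R helper that computes sensitivity, specificity, and the DOR from a single 2x2 table is sketched below; the counts in the example call are made up.

```r
# Sensitivity, specificity, and DOR from one 2x2 table, with exact binomial
# 95% CIs for the two proportions. The counts in the example call are made up.
diagnostic_2x2 <- function(TP, FP, FN, TN) {
  sens <- binom.test(TP, TP + FN)     # sensitivity = TP / (TP + FN)
  spec <- binom.test(TN, TN + FP)     # specificity = TN / (TN + FP)
  dor  <- (TP * TN) / (FP * FN)       # diagnostic odds ratio
  list(
    sensitivity = c(estimate = TP / (TP + FN), lower = sens$conf.int[1], upper = sens$conf.int[2]),
    specificity = c(estimate = TN / (TN + FP), lower = spec$conf.int[1], upper = spec$conf.int[2]),
    DOR         = dor
  )
}

diagnostic_2x2(TP = 60, FP = 8, FN = 20, TN = 700)
```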

Additionally, sensitivities were higher when the CT value was at or below 25. This is an important feature to note when sharing information on self-tests. The variation in sensitivity by CT value shows that self-tests detect infections most accurately at peak viral loads, when contagiousness is highest. These findings highlight that the value of self-testing, compared with RT-PCR, lies in the rapid identification and prompt isolation of highly contagious individuals. Because the median RT-PCR positivity period is 22–33 days, RT-PCR can return a positive result from viral particles that persist even after resolution of infection.

Comparatively, most false negative self-test results occur when individuals are outside the transmissibility window [84]. If a COVIDST result is negative but an RT-PCR test result is positive, it is likely that the individual is not very infectious and may not pose a public health threat [84].

As for the test devices used, some devices performed consistently per WHO criteria (for example, the Boson SARS-CoV-2 antigen test card, Biosynex, Standard Q at-home test, and MP Bio), while others did not (S3 Table). Regarding strains, in the two studies that evaluated COVIDST performance by variant, we noted no difference in sensitivities between the delta- and omicron-predominant periods. This is reassuring for future strains of the virus. Due to concerns regarding the sensitivity of COVIDST, the FDA released guidelines on serial testing using rapid antigen tests. We were unable to find diagnostic studies reporting data on serial self-testing; therefore, a knowledge gap remains. However, in one study, we found that two rapid antigen tests in symptomatic populations taken 48 hours apart increased sensitivity to 93.4%. In asymptomatic populations, testing twice 48 hours apart increased the sensitivity to 62.7%, while testing thrice increased it to 79% [85]. This is in line with the FDA’s guidelines on the use of serial testing to reduce the risk of false-negative tests [86]. The same could be applied to serial self-testing to improve its sensitivity.

Interpreting sensitivity and specificity is challenging at the population level, especially given the wide range of sensitivities across populations and settings. Therefore, messaging regarding interpretation is crucial for populations seeking to implement or use these self-tests. Our results show that the lower COVIDST sensitivity reported was due to unclear instructions for use, inadequate pre-test training, incorrect test conduct, non-adherence to instructions, and difficulty interpreting faint positive test lines. To improve COVIDST performance, diagnostic companies need to design self-test kits with consideration for low-literacy, rural, peri-urban, and senior populations. Self-test instructions for conduct and interpretation must be detailed, comprehensive, and provided in lay terms. In areas with high digital literacy and data connectivity, video-based instructions and virtual pre-test training sessions can be provided.

DORs were consistently high for all subgroups. The highest DOR was observed for supervised self-testing, albeit with wide confidence intervals. As the DOR is calculated from both sensitivity and specificity, a high DOR was found even in subgroups with low sensitivity (e.g., asymptomatic populations) because specificity was high across all subgroups. As a result, we were not able to use DORs for further analyses and interpretation of the diagnostic accuracy results.

Secondary and tertiary outcomes

COVIDST screening strategies offer benefits in pandemic settings, when access to laboratory testing is very limited and timely test results are of the essence. Our results show that COVIDST strategies consistently reported a rapid TAT and were overall highly acceptable, highly feasible, and convenient to populations around the world. Usability reached 100% with additional digital supports. These supports included video-based or app-based instructions, highlighting the potential of digital COVIDST.

Our results are consistent with the interim guidance on self-testing provided by WHO, which found self-testing acceptable, feasible, and easy to use by lay users; however, our results are updated and include data that can help WHO adapt its guidance. These results are also very similar to the proven benefits that have been demonstrated with HIV self-testing [87].

Despite established COVID-19 nucleic acid amplification testing (NAAT) surveillance systems in many countries, COVIDST became an important screening and decision-making tool for individuals during the peak of the pandemic [88]. Our results show that regular COVIDST was instrumental in reducing onward transmission during the pandemic. This impact was demonstrated by reduced school closures, resumption of in-person education, and attendees being able to safely attend social events. Healthcare workers were able to treat patients while monitoring themselves, thereby reducing the risk of nosocomial infections.

Serial testing during the pandemic, especially in high-exposure jobs, allowed essential workers to resume work without the fear of losing jobs or pay, and laboratories were able to remain operational. Participants were willing to report results, adhere to self-isolation guidelines, and seek confirmatory testing following a positive self-test result. Periodic self-testing reduced anxiety and created an environment of safety and reassurance when resuming normal activities.

Although serial self-testing may have higher diagnostic accuracy and demonstrated a high impact, lower acceptability and preference were noted compared to single-use self-testing in the general population. An unclear understanding of the importance of serial self-testing, a lack of convenience, and the increased effort and time commitment involved in testing repeatedly and notifying authorities may be reasons for this lower acceptability. Guidance on adapting testing frequency to community infection prevalence and epidemiologic burden must be provided to self-testers to reduce unnecessary testing and increase acceptability. Engaging the community and emphasizing the importance of serial self-testing may also prove beneficial. Additionally, if guidelines require serial testing but the costs of tests are high, willingness to retest is reduced, even in the presence of symptoms.

Overall, participants were motivated to use COVIDST strategies to know their infection status, resume daily activities, protect their loved ones, and exercise caution while attending large gatherings. Motivations and preference for COVIDST over lab-based testing increased with a higher SES and in urban areas.

Inequitable access to self-tests among ethnic minorities with lower SES was observed. This points to inequity in the distribution of self-tests, which was largely restricted to those with resources. This pattern could be changed for future pandemics by reducing the unit price of self-tests and by public procurement of tests for large-scale use.

Evidence on COVIDST parallels the vast evidence that has accumulated for HIVST.

Self-testing for both viruses has paved the way for greater use of self-testing solutions to know one’s status, and the increased accessibility offered by these solutions during the pandemic has made self-tests a common household name. This strategy holds promise for many infectious pathogens and pandemics in the near future.

Strengths and limitations

To our knowledge, ours is the first comprehensive and updated systematic review and meta-analysis on COVIDST. Although WHO released COVIDST guidelines, data on diagnostic accuracy were scarce at the time; therefore, a meta-analysis could not be performed. Additionally, those guidelines were based on studies published before February 2022, while our updated review contains recent studies (up to 2023) that complement prior publications and guidance.

Our review and meta-analysis are based on observational data. With a few RCTs on self-testing underway, new data will soon become available, which we plan to include in subsequent analyses of our living systematic review [89, 90]. We were unable to obtain complete data from study authors for a few diagnostic accuracy studies, so these studies were not included in our meta-analyses.

Most of our data are from HICs (n = 63), making our results difficult to generalize to LMICs. Although we had limited data on diagnostic accuracy and implementation of COVIDST in such settings, we were able to analyze some information on acceptability, feasibility, preference, willingness to utilize COVIDST, and barriers to COVIDST. Finally, no studies reported on highly accurate rapid molecular COVIDST-based strategies [91].

Implications for product development and research

Publicly distributed self-tests can guarantee widespread accessibility but should be implemented with evidence-based strategies to improve test conduct and result interpretation. Checks for counterfeit test kits are necessary, and regulating the sale of COVIDST kits can help improve public confidence in self-testing.

The public health sector and not-for-profit organizations, along with healthcare facilities and pharmacies, can increase access to self-tests through free-of-cost, widespread distribution of kits in urban and rural areas.

A strong and connected reporting system must be implemented by local authorities to avoid underestimating the true burden of infections. Future research can explore COVIDST diagnostic performance with digitally connected platforms, apps, test readers, and systems for result notification and linkage to care. Data from clinical trials are needed to fill the gaps in evidence from LMICs.

Conclusion

Self-testing complements conventional testing in the pandemic setting with its speed and efficiency when timing is of the essence. Our review demonstrates that COVIDST is a convenient and effective strategy for screening infections when used by the general population.

In symptomatic populations, in supervised settings with guided instructions, and with the addition of digital supports, self-tests improved in their performance. COVIDST had a high usability threshold, reduced institutional closures, and enabled results notification where reporting systems were in place. However, data from LMICs were limited due to the scarcity of self-testing.

Digital COVIDST is promising, and additional data will help improve accuracy and trust. Our results can aid policymakers, government bodies, and healthcare systems in updating their policies, and can support organizations aiming to integrate serial COVIDST strategies into their health ecosystems. COVIDST can alleviate the impact of the COVID-19 pandemic across all global settings, and the widespread availability of self-tests will help address global health inequities.

Both HIVST and COVIDST have demonstrated the impact that self-tests can have in empowering lay individuals to know their status and in preventing onward transmission. This approach holds promise for self-tests for related pathogens, and the use of similar strategies can aid in ending future waves of related pandemics.

Supporting information

S3 Table. Diagnostic accuracy across test devices.

https://doi.org/10.1371/journal.pgph.0002336.s004

(DOCX)

S1 Fig.

A-D Comparison of summary receiver operating characteristic (SROC) curves.

https://doi.org/10.1371/journal.pgph.0002336.s006

(DOCX)

S2 Fig. Funnel plot of studies included in the meta-analysis (n = 14).

https://doi.org/10.1371/journal.pgph.0002336.s007

(DOCX)

Acknowledgments

The authors would like to thank Olivia Vaikla and Melisa Eraslan for their assistance with proofreading and formatting the manuscript.

References

  1. 1. COVID-19 Situation Reports: The World Health Organization; 2022. https://www.who.int/publications/m/item/weekly-epidemiological-update-on-covid-19---9-november-2022
  2. 2. Use of SARS-CoV-2 antigen rapid diagnostic tests for COVID19 self-testing INTERIM GUIDANCE- Web Annex B.: The World Health Organization; 2022. https://apps.who.int/iris/bitstream/handle/10665/352345/WHO-2019-nCoV-Ag-RDTs-Self-testing-Web-annex-B-2022.1-eng.pdf
  3. 3. Use of SARS-CoV-2 antigen-detection rapid diagnostic tests for COVID-19 self-testing: The World Health Organization; 2022. https://www.who.int/publications/i/item/WHO-2019-nCoV-Ag-RDTs-Self_testing-2022.1.
  4. 4. Procop GW, Kadkhoda K, Rhoads DD, Gordon SG, Reddy AJ. Home testing for COVID-19: Benefits and limitations. Cleve Clin J Med. 2021. pmid:33579779
  5. 5. Dinnes J, Deeks JJ, Berhane S, Taylor M, Adriano A, Davenport C, et al. Rapid, point-of-care antigen and molecular-based tests for diagnosis of SARS-CoV-2 infection. Cochrane Database Syst Rev. 2021;3(3):CD013705-CD. pmid:33760236.
  6. 6. Web Annex A. GRADE table: Should COVID-19 self-testing, using SARS-CoV-2 Ag-RDTs, be offered as an additional approach?: The World Health Organization; 2022. https://apps.who.int/iris/bitstream/handle/10665/352344/WHO-2019-nCoV-Ag-RDTs-Self-testing-Web-annex-A-2022.1-eng.pdf.
  7. 7. Apoorva A, Nitika PP, Fiorella V, Faiz AK, Ali E, Patrick OB, et al. Diagnostic performance, feasibility, and real-world evaluation of COVID-19 self-tests: A living systematic review & meta-analysis protocol [Protocol]. PROSPERO-International prospective register of systematic reviews; 2022.
  8. 8. Pant Pai N, Sharma J, Shivkumar S, Pillay S, Vadnais C, Joseph L, et al. Supervised and unsupervised self-testing for HIV in high- and low-risk populations: a systematic review. PLoS Med. 2013;10(4):e1001414–e. Epub 2013/04/02. pmid:23565066.
  9. 9. Pant Pai N, Chiavegatti T, Vijh R, Karatzas N, Daher J, Smallwood M, et al. Measures and Metrics for Feasibility of Proof-of-Concept Studies With Human Immunodeficiency Virus Rapid Point-of-Care Technologies: The Evidence and the Framework. Point Care. (1533-029X (Print)). pmid:29333105
  10. 10. Penny FW, Anne W S R, Marie E W, Susan M, Jonathan J D, Johannes B R, et al. QUADAS-2: a revised tool for the quality assessment of diagnostic accuracy studies. Ann Intern, Med. (1539–3704 (Electronic)). pmid:22007046
  11. 11. G A W, B S, D OC, J P, V W, M L, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses 2011. http://www.ohri.ca/programs/clinical_epidemiology/oxford.asp.
  12. 12. RoB 2: A revised Cochrane risk-of-bias tool for randomized trials: Cochrane Methods. https://methods.cochrane.org/bias/resources/rob-2-revised-cochrane-risk-bias-tool-randomized-trials#:~:text=Version%202%20of%20the%20Cochrane,design%2C%20conduct%2C%20and%20reporting.
  13. 13. Modesti PA, Reboldi G, Cappuccio FP, Agyemang C, Remuzzi G, Rapi S, et al. Panethnic Differences in Blood Pressure in Europe: A Systematic Review and Meta-Analysis. PLoS one. 2016;11(1):e0147601. pmid:26808317
  14. 14. Modesti PA, Reboldi G, Cappuccio FP, Agyemang C, Remuzzi G, Rapi S, et al. Panethnic differences in blood pressure in europe: a systematic review and meta-analysis- S1: NEWCASTLE—OTTAWA QUALITY ASSESSMENT SCALE (adapted for cross sectional studies). PLoS one. 2016. pmid:26808317
  15. 15. Philipp D, Heinz H, Bernardo S-P. Meta-Analysis of Diagnostic Accuracy with mada: The Comprehensive R Archive Network. https://cran.r-project.org/web/packages/mada/vignettes/mada.pdf.
  16. 16. Schwarzer G. General Package for Meta-Analysis: The Comprehensive R Archive Network. https://cran.r-project.org/web/packages/meta/meta.pdf.
  17. 17. The World by Income and Region: The World Bank; 2021. https://datatopics.worldbank.org/world-development-indicators/the-world-by-income-and-region.html.
  18. 18. Boulliat C, Bilong CV, Dussart C, Massoubre B. Use of self-tests and rapid diagnostic tests: Survey of dispensing pharmacists in the Auvergne-Rhone-Alpes region. Ann Pharm Fr. 2021;79(5):547–57. pmid:33548277
  19. 19. García-Fiñana M, Hughes DM, Cheyne CP, Burnside G, Stockbridge M, Fowler TA, et al. Performance of the Innova SARS-CoV-2 antigen rapid lateral flow test in the Liverpool asymptomatic testing pilot: population based cohort study. BMJ. 2021;374:n1637. pmid:34230058
  20. 20. Kim S, Choi W. Performance of standard q covid-19 ag home test to detect sars-cov-2 within five days of disease onset. Clin Chem Lab Med. 2021;59(SUPPL 1):S606.
  21. 21. Lindner AK, Nikolai O, Rohardt C, Kausch F, Wintel M, Gertler M, et al. Diagnostic accuracy and feasibility of patient self-testing with a SARS-CoV-2 antigen-detecting rapid test. J Clin Virol. 2021;141 (no pagination). pmid:34144452
  22. 22. Schuit E, Venekamp R, Veldhuijzen I, van den Bijllaardt W, Pas S, Stohr J, et al. Accuracy and usability of saliva and nasal rapid antigen self-testing for detection of SARS-CoV-2 infection in the general population: a head-to-head comparison. medRxiv. 2021. PPR431858.
  23. 23. Tonen-Wolyec S, Dupont R, Awaida N, Batina-Agasa S, Hayette MP, Belec L. Evaluation of the practicability of biosynex antigen self-test covid-19 ag+ for the detection of sars-cov-2 nucleocapsid protein from self-collected nasal mid-turbinate secretions in the general public in france. Diagnostics. 2021;11(12) (no pagination). pmid:34943454
  24. 24. Zwart VF, van der Moeren N, Stohr JJJM, Feltkamp MCW, Bentvelsen RG, Diederen BMW, et al. Performance of Various Lateral Flow SARS-CoV-2 Antigen Self Testing Methods in Healthcare Workers: a Multicenter Study. medRxiv. 2022. PPR447982.
  25. 25. Leventopoulos M, Michou V, Papadimitropoulos M, Vourva E, Manias NG, Kavvadas HP, et al. Evaluation of the Boson rapid Ag test vs RT-PCR for use as a self-testing platform. Diagn Microbiol Infect Dis. 2022;104(3):115786. Epub 20220729. pmid:35998553.
  26. 26. Schuit E, Venekamp RP, Hooft L, Veldhuijzen IK, van den Bijllaardt W, Pas SD, et al. Diagnostic accuracy of covid-19 rapid antigen tests with unsupervised self-sampling in people with symptoms in the omicron period: cross sectional study. Bmj. 2022;378:e071215. Epub 20220914. pmid:36104069.
  27. 27. Bae S, Park H, Kim JY, Park S, Lim SY, Bae JY, et al. Daily, self-test rapid antigen test to assess SARS-CoV-2 viability in de-isolation of patients with COVID-19. Front Med (Lausanne). 2022;9:922431. Epub 20221019. pmid:36341265.
  28. 28. Downs LO, Eyre DW, O’Donnell D, Jeffery K. Home-based SARS-CoV-2 lateral flow antigen testing in hospital workers. J Infect. (1532–2742 (Electronic)). pmid:33573777
  29. 29. Soni A, Herbert C, Filippaios A, Broach J, Colubri A, Fahey N, et al. Comparison of Rapid Antigen Tests’ Performance between Delta (B.1.61.7; AY.X) and Omicron (B.1.1.529; BA1) Variants of SARS-CoV-2: Secondary Analysis from a Serial Home Self-Testing Study. J Intern Med. 2022. pmid:35262091
  30. 30. Frediani JK, Levy JM, Rao A, Bassit L, Figueroa J, Vos MB, et al. Multidisciplinary assessment of the Abbott BinaxNOW SARS-CoV-2 point-of-care antigen test in the context of emerging viral variants and self-administration. Sci Rep. 2021;11(1):14604. Epub 20210716. pmid:34272449.
  31. 31. Harmon A, Chang C, Salcedo N, Sena B, Herrera BB, Bosch I, et al. Validation of an At-Home Direct Antigen Rapid Test for COVID-19. JAMA Netw Open. 2021;4(8):e2126931. Epub 20210802. pmid:34448871.
  32. 32. Møller IJB, Utke AR, Ryesgaard UK, Østergaard LJ, Jespersen S. Diagnostic Performance, User Acceptability, and Safety of Unsupervised SARS-CoV-2 Rapid Antigen Detecting Tests Performed at Home. Int J Infect Dis. 2021. pmid:35038598
  33. 33. Tim P, UK C-LFOT. COVID-19: Rapid Antigen detection for SARS-CoV-2 by lateral flow assay: a national systematic evaluation for mass-testing. medRxiv. 2021:2021.01.13.21249563.
  34. 34. Stohr J, Zwart VF, Goderski G, Meijer A, Nagel-Imming CRS, Kluytmans-van den Bergh MFQ, et al. Self-testing for the detection of SARS-CoV-2 infection with rapid antigen tests for people with suspected COVID-19 in the community. Clin Microbiol Infect. 2021. Epub 20210804. pmid:34363945.
  35. 35. Venekamp RP, Schuit E, Hooft L, Veldhuijzen IK, van den Bijllaardt W, Pas SD, et al. Diagnostic accuracy of SARS-CoV-2 rapid antigen self-tests in asymptomatic individuals in the omicron period: a cross-sectional study. Clin Microbiol Infect. 2022. Epub 20221113. pmid:36379401.
  36. 36. Hughes DM, Bird SM, Cheyne CP, Ashton M, Campbell MC, García-Fiñana M, et al. Rapid antigen testing in COVID-19 management for school-aged children: an observational study in Cheshire and Merseyside, UK. J Public Health (Oxf). 2022. Epub 20220204. pmid:35137216.
  37. 37. Cassuto NG, Gravier A, Colin M, Theillay A, Pires-Roteira D, Pallay S, et al. Evaluation of a SARS-CoV-2 antigen-detecting rapid diagnostic test as a self-test: Diagnostic performance and usability. J Med Virol. 2021;93(12):6686–92. Epub 20210826. pmid:34331707.
  38. 38. Hirst JA, Logan M, Fanshawe TR, Mwandigha L, Wanat M, Vicary C, et al. Feasibility and Acceptability of Community Coronavirus Disease 2019 Testing Strategies (FACTS) in a University Setting. Open Forum Infect Dis. 2021;8(12):ofab495. Epub 20211004. pmid:34904117.
  39. 39. Lamb G, Heskin J, Randell P, Mughal N, Moore LS, Jones R, et al. Real-world evaluation of COVID-19 lateral flow device (LFD) mass-testing in healthcare workers at a London hospital; a prospective cohort analysis. J Infect. 2021;83(4):452–7. pmid:34364950
  40. 40. Love N, Ready D, Turner C, Yardley L, Rubin GJ, Hopkins S, et al. The acceptability of testing contacts of confirmed COVID-19 cases using serial, self-administered lateral flow devices as an alternative to self-isolation. J Med Microbiol. 2021. pmid:35947525
  41. 41. Institute of Population Health UoL. Covid-SMART Asymptomatic Testing Pilot in Liverpool City Region: Quantitative Evaluation 2022. https://www.liverpool.ac.uk/media/livacuk/coronavirus/Liverpool_City_Region_Covid_SMART_Evaluation-Feb.pdf.
  42. 42. Qasmieh SA, Robertson MM, Teasdale CA, Kulkarni SG, Nash D. Estimating the Period Prevalence of Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) Infection During the Omicron (BA.1) Surge in New York City (NYC), 1 January to 16 March 2022. Clin Infect Dis. pmid:35959571
  43. 43. Coller RJ, Kelly MM, Howell KD, Warner G, Butteris SM, Ehlenbach ML, et al. In-Home COVID-19 Testing for Children with Medical Complexity: Feasibility and Association With School Attendance and Safety Perceptions. Am J Public Health. 2022:e1–e5. Epub 20220915. pmid:36108256.
  44. 44. Papenburg J, Campbell JR, Caya C, Dion C, Corsini R, Cheng MP, et al. Adequacy of Serial Self-performed SARS-CoV-2 Rapid Antigen Detection Testing for Longitudinal Mass Screening in the Workplace. JAMA Netw Open. 2022;5(5):e2210559. Epub 20220502. pmid:35522284.
  45. 45. Agustí C, Martínez-Riveros H, González V, Fernández-Rivas G, Díaz Y, Montoro-Fernandez M, et al. Feasibility of an online antigen self-testing strategy for SARS-CoV-2 addressed to health care and education professionals in Catalonia (Spain). The TESTA’T- COVID Project. PLoS One. 2022;17(9):e0275006. Epub 20220927. pmid:36166432.
  46. 46. O’Byrne P, Orser L, Musten A, Ho N, Haines M, Lindsay J. Delivering COVID self-tests through GetaKit.ca: Creating testing access during a pandemic. Public Health Nurs. 2023. Epub 20230110. pmid:36625331.
  47. 47. Qasmieh SA, Robertson MM, Rane MS, Shen Y, Zimba R, Picchio CA, et al. The Importance of Incorporating At-Home Testing Into SARS-CoV-2 Point Prevalence Estimates: Findings From a US National Cohort, February 2022. JMIR Public Health Surveill. 2022;8(12):e38196. Epub 20221227. pmid:36240020.
  48. 48. Stemler J, Salmanton-García J, Weise B, Többen C, Joisten C, Fleig J, et al. A pilot surveillance report of SARS-CoV-2 rapid antigen test results among volunteers in Germany, 1st week of July 2022. Infection. 2022:1–5. Epub 20221024. pmid:36279033.
  49. 49. Daniore P, Nittas V, Ballouz T, Menges D, Moser A, Höglinger M, et al. Performance of the Swiss Digital Contact-Tracing App Over Various SARS-CoV-2 Pandemic Waves: Repeated Cross-sectional Analyses. JMIR Public Health Surveill. 2022;8(11):e41004. Epub 20221111. pmid:36219833.
  50. 50. Herbert C, Broach J, Heetderks W, Qashu F, Gibson L, Pretz C, et al. At-Home Serial Testing Using Over-the-Counter SARS-CoV-2 Tests with a Digital Smartphone App for Assistance: Findings of feasibility from a longitudinal cohort study. JMIR Form Res. 2022. Epub 20220825. pmid:36041004.
51. Martin AF, Denford S, Love N, Ready D, Oliver I, Amlôt R, et al. Engagement with daily testing instead of self-isolating in contacts of confirmed cases of SARS-CoV-2. BMC Public Health. 2021;21(1):1067. pmid:34090404.
52. Martinez-Perez GZ, Shilton S, Sarue M, Cesario H, Banerji A, Batheja D, et al. Self-testing for SARS-CoV-2 in Sao Paulo, Brazil: results of a population-based values and attitudes survey. BMC Infect Dis. 2022;22(1). pmid:36056299.
53. Marinos G, Lamprinos D, Georgakopoulos P, Oikonomou E, Zoumpoulis G, Garmpis N, et al. Evaluation of Knowledge, Attitudes and Practices Related to Self-Testing Procedure against COVID-19 among Greek Students: A Pilot Study. Int J Environ Res Public Health. 2022;19(8). Epub 20220410. pmid:35457427.
54. Betsch C, Sprengholz P, Siegers R, Eitze S, Korn L, Goldhahn L, et al. Empirical evidence to understand the human factor for effective rapid testing against SARS-CoV-2. Proc Natl Acad Sci USA. 2021;118(32). pmid:34362848.
55. Mistler CB, Sullivan M, Wickersham JA, Copenhaver MM, Shrestha R. Clinical and demographic differences in the willingness to use self-administered at-home COVID-19 testing measures among persons with opioid use disorder. Subst Abus. 2022;43(1):708–12. pmid:35100084.
56. Phillips G, Xu JY, Ruprecht MM, Costa D, Felt D, Wang XZ, et al. Associations with COVID-19 Symptoms, Prevention Interest, and Testing Among Sexual and Gender Minority Adults in a Diverse National Sample. LGBT Health. 2021;8(5):322–9. pmid:34115955.
57. Thomas C, Shilton S, Thomas C, Batheja D, Goel S, Iye CM, et al. Values and preferences of the general population in Indonesia in relation to COVID-19 self-testing: A cross-sectional survey. Trop Med Int Health. 2022. pmid:35332616.
58. LeRouge C, Durneva P, Lyon V, Thompson M. Health Consumer Engagement, Enablement, and Empowerment in Smartphone-Enabled Home-Based Diagnostic Testing for Viral Infections: Mixed Methods Study. JMIR Mhealth Uhealth. 2022;10(6):e34685. Epub 20220630. pmid:35771605.
59. Schilling J, Moeller FG, Peterson R, Beltz B, Joshi D, Gartner D, et al. Testing the Acceptability and Usability of an AI-Enabled COVID-19 Diagnostic Tool Among Diverse Adult Populations in the United States. Qual Manag Health Care. 2023;32(Suppl 1):S35–S44. pmid:36579707.
60. Herbert C, Kheterpal V, Suvarna T, Broach J, Marquez JL, Gerber B, et al. Design and Preliminary Findings of Adherence to the Self-Testing for Our Protection From COVID-19 (STOP COVID-19) Risk-Based Testing Protocol: Prospective Digital Study. JMIR Form Res. 2022;6(6):e38113. Epub 20220616. pmid:35649180.
61. Hoehl S, Schenk B, Rudych O, Göttig S, Foppa I, Kohmer N, et al. High-Frequency Self-Testing by Schoolteachers for SARS-CoV-2 Using a Rapid Antigen Test: Results of the Safe School Hesse Study. Dtsch Arztebl Int. 2021;118(14):252–3. pmid:34114556.
62. Wanat M, Logan M, Hirst J, Vicary C, Lee JJ, Perera R, et al. Perceptions on undertaking regular asymptomatic self-testing for COVID-19 using lateral flow tests: A qualitative study of university students and staff. BMJ Open. 2021. pmid:34475190.
63. Prazuck T, Gravier A, Pires-Roteira D, Theillay A, Pallay S, Colin M, et al. Evaluation of a new “all in one” SARS-CoV-2 antigen-detecting rapid diagnostic test and self-test: Diagnostic performance and usability on child and adult population. J Med Virol. 2022. PPR400123. pmid:35474460.
64. Denford S, Martin A, Love N, Ready D, Oliver I, Amlôt R, et al. Engagement with daily testing instead of self-isolating in contacts of confirmed cases of SARS-CoV-2: A qualitative analysis. Front Public Health. 2021. PPR348017. pmid:34414160.
65. Coker MO, Subramanian G, Davidow A, Fredericks-Younger J, Gennaro ML, Fine DH, et al. Impact of DHCWs’ Safety Perception on Vaccine Acceptance and Adoption of Risk Mitigation Strategies. JDR Clin Trans Res. 2022:23800844211071111. Epub 20220222. pmid:35191352.
66. Hajek A, Nedjad M, Kretzler B, König HH. [Use of and Attitudes toward Tests for the Detection of SARS-CoV-2 and Corresponding Antibodies: Results of a Nationally Representative Survey in Late Summer 2021]. Gesundheitswesen. 2023;85(1):26–35. Epub 20220909. pmid:36084943.
67. Fishman J, Bien-Gund CH, Bisson GP, Baik Y. COVID-19 Self-Testing Preferences Linked to Political Perspectives: Social Determinants in the U.S. Pandemic. Am J Prev Med. 2022. Epub 20221103. pmid:36411144.
68. Wu F, Yuan Y, Li Y, Yin D, Lang B, Zhao Y, et al. The acceptance of SARS-CoV-2 rapid antigen self-testing: A cross-sectional study in China. J Med Virol. 2023;95(1):e28227. Epub 20221025. pmid:36241424.
69. Jairoun AA, Al Hemyari SS, Abdulla NM, Shahwan M, Bilal FHJ, Al-Tamimi SK, et al. Acceptability and Willingness of UAE Residents to Use OTC Vending Machines to Deliver Self-Testing Kits for COVID-19 and the Implications. J Multidiscip Healthc. 2022;15:1759–70. pmid:36039076.
70. Wachinger J, Schirmer M, Tauber N, McMahon SA, Denkinger CM. Experiences with opt-in, at-home screening for SARS-CoV-2 at a primary school in Germany: An implementation study. BMJ Paediatr Open. 2021;5(1). pmid:34697600.
71. Dallera G, Alaa A, El-Osta A, Kreindler J, Harris M. Evaluating the feasibility and acceptability of a safety protocol to mitigate SARS-CoV-2 transmission risks when participating in full-capacity live mass events: a cross-sectional survey and interview-based study. BMJ Open. 2022;12(12):e063838. Epub 20221223. pmid:36564106.
72. Woloshin S, Dewitt B, Krishnamurti T, Fischhoff B. Assessing How Consumers Interpret and Act on Results From At-Home COVID-19 Self-test Kits: A Randomized Clinical Trial. JAMA Intern Med. 2022. pmid:35099501.
73. Bien-Gund C, Dugosh K, Acri T, Brady K, Thirumurthy H, Fishman J, et al. Factors Associated With US Public Motivation to Use and Distribute COVID-19 Self-tests. JAMA Netw Open. 2021;4(1):e2034001. Epub 20210104. pmid:33471114.
74. D’Agostino EM, Corbie G, Kibbe WA, Hornik CP, Richmond A, Dunston A, et al. Increasing access and uptake of SARS-CoV-2 at-home tests using a community-engaged approach. Prev Med Rep. 2022;29:101967. Epub 20220830. pmid:36061814.
75. Mouliou DS, Pantazopoulos I, Gourgoulianis KI. Societal Criticism towards COVID-19: Assessing the Theory of Self-Diagnosis Contrasted to Medical Diagnosis. Diagnostics (Basel). 2021;11(10). Epub 20210927. pmid:34679475.
76. Thomas C, Shilton S, Thomas C, Iye CM, Martínez-Pérez G. COVID-19 self-testing, a way to “live side by side with the coronavirus”: results from a qualitative study in Indonesia. Res Sq. 2022. PPR449862. pmid:36962512.
77. Willeit P, Bernar B, Zurl C, Al-Rawi M, Berghold A, Bernhard D, et al. Sensitivity and specificity of the antigen-based anterior nasal self-testing programme for detecting SARS-CoV-2 infection in schools, Austria, March 2021. Euro Surveill. 2021;26(34). pmid:34448449.
78. Nwaozuru U, Obiezu-Umeh C, Diallo H, Graham D, Whembolua GL, Bourgeau MJ, et al. Perceptions of COVID-19 self-testing and recommendations for implementation and scale-up among Black/African Americans: implications for the COVID-19 STEP project. BMC Public Health. 2022;22(1). pmid:35725400.
79. Rader B, Gertz A, Iuliano AD, Gilmer M, Wronski L, Astley CM, et al. Use of At-Home COVID-19 Tests—United States, August 23, 2021–March 12, 2022. MMWR Morb Mortal Wkly Rep. 2022;71(13):489–94. Epub 20220401. pmid:35358168.
80. Goggolidou P, Hodges-Mameletzis I, Purewal S, Karakoula A, Warr T. Self-Testing as an Invaluable Tool in Fighting the COVID-19 Pandemic. J Prim Care Community Health. 2021;12:21501327211047782. pmid:34583571.
81. Tulloch JSP, Micocci M, Buckle P, Lawrenson K, Kierkegaard P, McLister A, et al. Enhanced lateral flow testing strategies in care homes are associated with poor adherence and were insufficient to prevent COVID-19 outbreaks: results from a mixed methods implementation study. Age Ageing. 2021;50(6):1868–75. pmid:34272866.
82. Undelikwo V, Shilton S, Folayan MO, Alaba O, Reipold EI, Martínez-Pérez G. COVID-19 self-testing in Nigeria: Stakeholders’ opinions and perspective on its value for case detection. medRxiv. 2022. PPR447979.
83. Herbert C, Shi Q, Kheterpal V, Nowak C, Suvarna T, Durnan B, et al. Use of a Digital Assistant to Report COVID-19 Rapid Antigen Self-test Results to Health Departments in 6 US Communities. JAMA Netw Open. 2022;5(8):e2228885. Epub 20220801. pmid:36018589.
84. Mina MJ, Peto TE, García-Fiñana M, Semple MG, Buchan IE. Clarifying the evidence on SARS-CoV-2 antigen rapid tests in public health responses to COVID-19. Lancet. 2021;397(10283):1425–7. pmid:33609444.
85. Performance of Rapid Antigen Tests to Detect Symptomatic and Asymptomatic SARS-CoV-2 Infection. Ann Intern Med. 2023;176(7):975–82. pmid:37399548.
86. FDA. At-Home COVID-19 Antigen Tests-Take Steps to Reduce Your Risk of False Negative Results: FDA Safety Communication. 2022 [cited 2023]. https://www.fda.gov/medical-devices/safety-communications/home-covid-19-antigen-tests-take-steps-reduce-your-risk-false-negative-results-fda-safety.
87. McGuire M, de Waal A, Karellis A, Janssen R, Engel N, Sampath R, et al. HIV self-testing with digital supports as the new paradigm: A systematic review of global evidence (2010–2021). eClinicalMedicine. 2021;39. pmid:34430835.
88. Ritchey MD, Rosenblum HG, Del Guercio K, Humbard M, Santos S, Hall J, et al. COVID-19 Self-Test Data: Challenges and Opportunities—United States, October 31, 2021–June 11, 2022. MMWR Morb Mortal Wkly Rep. 2022;71(32):1005–10. Epub 20220812. pmid:35951486.
89. LLC N. COVID-19 Diagnostic Self-testing Using Virtual Point-of-care: ClinicalTrials.gov; 2020. https://clinicaltrials.gov/ct2/show/NCT04348864.
90. Gross R (University of Pennsylvania). COVID-19 Self-Testing Through Rapid Network Distribution (C-STRAND): ClinicalTrials.gov; 2021. https://clinicaltrials.gov/ct2/show/NCT04797858.
91. Zahavi M, Rohana H, Azrad M, Shinberg B, Peretz A. Rapid SARS-CoV-2 Detection Using the Lucira Check It COVID-19 Test Kit. Diagnostics (Basel). 2022;12(8). Epub 20220803. pmid:36010227.