Analytical sensitivity and specificity of four point of care rapid antigen tests for SARS-CoV-2
Expires: September 13, 2024
Brad Karon, M.D., Ph.D.
Professor of Laboratory Medicine and Pathology
Chair, Division of Clinical Core Laboratory Services
Mayo Clinic, Rochester, Minnesota
I am Brad Karon, and I am the chair of the Division of Clinical Core Laboratory Services at the Mayo Clinic in Rochester, Minnesota. Last year, I was asked to lead a group in the Mayo Clinic Enterprise evaluating rapid testing options for the novel coronavirus, SARS-CoV-2, including both molecular and antigen tests. Today I'm going to talk about the results of a study we recently published comparing rapid antigen tests for the novel coronavirus.
I have no conflicts of interest to disclose related to this presentation.
So laboratory testing for the novel coronavirus, SARS-CoV-2, has taken a central role in the response to the COVID-19 pandemic. Central laboratory nucleic acid amplification tests, primarily reverse transcription quantitative polymerase chain reaction, or RT-qPCR, are now widely available and have been the primary diagnostic tool used in the United States. However, RT-qPCR testing has limitations: it requires often costly instrumentation, has higher reagent costs, and takes a minimum of several hours to return results. As the vaccination program has rolled out and surges continue among selected populations and geographies, there has been more focus on the use of decentralized methods such as rapid antigen tests. The role of rapid antigen testing for the novel coronavirus remains controversial for several reasons, among them conflicting data on the clinical sensitivity of these assays. Independent studies have suggested that the clinical sensitivity of antigen tests varies from 30% to 50% in asymptomatic individuals and 50% to 80% in symptomatic patients, with some studies showing much higher or lower clinical sensitivity. One of the primary reasons for our recent study of antigen tests was to understand whether differing analytical sensitivity of rapid antigen tests may explain some of the variability in the results of these clinical studies. We therefore developed a study to compare the analytical sensitivity and specificity of four rapid antigen tests for the novel coronavirus.
There are a number of challenges to designing a study to compare point of care antigen tests. As a clinical chemist, the most obvious challenge I noted was the lack of a reference method for the detection of antigen to the novel coronavirus. We are trying to compare the sensitivity and specificity of tests without any reasonable definition of what constitutes a true positive or true negative sample or patient. Like most previous studies of antigen tests, we started by identifying samples that were positive or negative by RT-qPCR. We then tested those samples by four point of care antigen methods, but we also took advantage of an internally developed, highly sensitive mass spectrometric antigen test to further define a subset of RT-qPCR-positive, antigen-positive samples for comparison. We also used digital droplet PCR as a second, more sensitive method for demonstrating the presence of RNA in true positive samples.
Another challenge to directly comparing rapid antigen tests is that the intended sample type, a nasal swab placed directly into a vendor-specific extraction buffer or media, cannot be used to compare the performance between multiple antigen tests. Like many other studies, we used residual samples from RT-qPCR testing, in our case residual phosphate buffered saline samples, and validated a dilution protocol to show that the PBS samples diluted into a vendor-specific extraction buffer recovered the intended results for each of the tests we studied.
Another limitation to performing studies comparing point of care antigen tests is the common use of the crossing point (Cp) or cycle threshold (Ct) to make assumptions about the viral load of positive patients or samples. Many studies have reported two sets of sensitivity numbers: one for the overall sensitivity of an antigen test compared to RT-qPCR, and another for the subset of samples with a Cp or Ct value below some cutoff thought to represent either an infectious sample or a sample with a higher viral load. Several groups have recently pointed out one limitation to this approach: Cp and Ct values are not standardized between different nucleic acid amplification tests (NAATs), so a Cp from one test is not equivalent to a Cp or Ct from another. Another limitation, as important as or perhaps more important than this one and not widely recognized, is that the estimation of viral load from the Cp or Ct value is subject to large variability depending upon the PCR efficiency of individual samples. Many previous studies have used RNA standards to measure the average viral load at various Cp or Ct values, then assumed that individual samples at a given Cp or Ct value would have that viral load. In a few slides I will show you the variability in estimated viral load this creates. We instead used digital droplet PCR, or ddPCR, to measure viral load in individual samples. ddPCR provides a quantitative measured viral load that is independent of standards or calibrators and is not affected by the PCR efficiency of individual samples; it has been demonstrated to be 4- to 20-fold more precise than RT-qPCR for measuring viral load.
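The efficiency problem described above can be illustrated with a small sketch. Under the usual exponential model, each PCR cycle multiplies the template by the amplification efficiency, so two samples with the same Ct but different per-sample efficiencies imply very different viral loads. All numbers below are illustrative, not from the study:

```python
def viral_load_from_ct(ct, ct_ref=20.0, load_ref=1e6, efficiency=2.0):
    """Estimate copies/mL from a Ct value, assuming exponential
    amplification: each cycle multiplies the template by `efficiency`.
    The reference point (Ct 20 at 1e6 copies/mL) is hypothetical."""
    return load_ref * efficiency ** (ct_ref - ct)

# Same Ct of 30, ten cycles past the reference point:
perfect = viral_load_from_ct(30, efficiency=2.0)  # ~977 copies/mL
lossy = viral_load_from_ct(30, efficiency=1.8)    # ~2,801 copies/mL
print(perfect, lossy, lossy / perfect)
```

Even this modest efficiency difference (1.8 vs. 2.0) produces a nearly threefold difference in estimated viral load at the same Ct, which is why calibrator-based averages cannot be applied safely to individual samples.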
We recently completed a study of four point of care rapid antigen tests for the novel coronavirus. We used two lateral flow antigen tests: one interpreted with a digital reader and one read visually. We also used two different fluorescence immunoassay antigen tests, or FIAs. We called it the "4 way" study because we compared four rapid antigen tests, and because we used four different methods (RT-qPCR, a mass spectrometric antigen test, digital droplet PCR, and the point of care antigen tests) to obtain results on samples that had been previously tested by RT-qPCR. We used over 350 residual phosphate buffered saline, or PBS, samples that had been tested by RT-qPCR: 150 samples had tested negative by RT-qPCR, and 200 samples were positive by RT-qPCR and were selected to fall into specific Cp value ranges.
The 150 samples negative by both RT-qPCR and ddPCR were used to assess rapid antigen specificity. All rapid antigen tests were conducted by diluting previously tested PBS samples into vendor-specific extraction buffer. Two technologists performed testing for the digital reader lateral flow assay and the two fluorescence immunoassays, while a third technologist performed testing for the visually read lateral flow test so that its results would not be biased by the results of the other tests. Lateral flow test A and the two fluorescence immunoassay methods all had 100% specificity. There were four false positive results for lateral flow method B, resulting in a specificity of 97.3%. Consistent with other studies, the rapid antigen methods had very good specificity, or negative percent agreement, with RT-qPCR.
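As a quick sketch of the arithmetic behind these specificity figures, using the counts quoted above:

```python
def specificity(true_negatives, total_negatives):
    """Negative percent agreement with RT-qPCR, as a percentage."""
    return 100.0 * true_negatives / total_negatives

# 150 RT-qPCR/ddPCR-negative samples in the panel.
# Lateral flow A and both FIAs: zero false positives.
print(specificity(150, 150))            # 100.0
# Lateral flow B: 4 false positives out of 150 negatives.
print(round(specificity(150 - 4, 150), 1))  # 97.3
```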
Slide 9 shows results for the overall sensitivity of the point of care antigen methods, as well as sensitivity broken into brackets of measured viral load. Overall, lateral flow antigen test A had a lower sensitivity, 66.5%, compared to lateral flow test B at 83.2%. The two fluorescence immunoassays demonstrated 80% to 90% sensitivity in this sample set, while the mass spectrometric test was the most sensitive, detecting 90.9% of RT-qPCR-positive samples overall. Looking within brackets of measured viral load, all antigen tests worked very well when the measured viral load was over 1.5 million copies/mL, and all tests also performed well when the measured viral load was between 500,000 and 1.5 million copies/mL. Surprisingly, the two lateral flow methods showed a significant difference in analytical sensitivity when the viral load was between 50,000 and 500,000 copies/mL. The two lateral flow tests detected 10% or fewer of samples with a viral load of less than 50,000 copies/mL, compared to fluorescence immunoassay B, which detected just under half of those samples.
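The bracketed analysis above amounts to grouping RT-qPCR-positive samples by measured viral load and computing sensitivity within each bin. A minimal sketch using the bracket edges from the talk; the sample tuples in the demo are purely hypothetical:

```python
from bisect import bisect_left
from collections import defaultdict

# Bracket edges from the talk, in copies/mL.
EDGES = [50_000, 500_000, 1_500_000]
LABELS = ["<50k", "50k-500k", "500k-1.5M", ">1.5M"]

def bracket(viral_load):
    """Map a ddPCR-measured viral load to its bracket label."""
    return LABELS[bisect_left(EDGES, viral_load)]

def sensitivity_by_bracket(samples):
    """samples: iterable of (viral_load_copies_per_mL, detected_bool).
    Returns the fraction of samples detected within each bracket."""
    hits, totals = defaultdict(int), defaultdict(int)
    for load, detected in samples:
        b = bracket(load)
        totals[b] += 1
        hits[b] += int(detected)
    return {b: hits[b] / totals[b] for b in totals}

# Hypothetical results for one antigen test, for illustration only.
demo = [(30_000, False), (30_000, True), (800_000, True), (2_000_000, True)]
print(sensitivity_by_bracket(demo))
```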
On slide 10 I show another way to look at the sensitivity data, considering only the samples that were positive by RT-qPCR, digital droplet PCR, and one or more of the antigen tests. This allows us to ask the question: if antigen is present in a sample, how well do the various point of care antigen tests detect it? Looking at the data overall, the mass spectrometric antigen test was the most sensitive, detecting 99% of samples with measurable antigen and RNA, though fluorescence immunoassay B was close behind, detecting 96.1% of all samples with antigen present and measurable RNA. The difference in sensitivity among samples with less than 50,000 copies/mL, shown in the last column of this graph, is even more striking: lateral flow method A detected only 13% of these samples, compared to 78.3% for fluorescence immunoassay B.
Slide 11 shows a scatterplot of samples with undetected or detected antigen for each antigen method, as a function of measured viral load by ddPCR. Lateral flow test A detected only four samples with viral loads under 200,000 copies/mL, and many samples with viral loads over 100,000 copies/mL went undetected by this method. Lateral flow test B was able to detect more samples in the range of 10,000 to 100,000 copies/mL than lateral flow test A, while fluorescence immunoassay A was able to detect approximately half of the samples in this range. Fluorescence immunoassay B was the only rapid antigen test able to detect the majority of RT-qPCR-positive specimens with viral loads between 10,000 and 100,000 copies/mL.
Finally, our study design allowed us to examine the relationship between the crossing point, or Cp (also commonly referred to as the cycle threshold, or Ct, for some assays), and viral load measured by digital droplet PCR. This plot shows all samples with ddPCR results that did not saturate the ddPCR method and therefore had a measured viral load. The viral load of samples clearly decreases as a function of crossing point, though there is great variability in measured viral load at any given crossing point. Most notably, however, the relationship between Cp and viral load flattens out at a crossing point of around 33. There were many samples with a crossing point of 35, the highest Cp value assigned for this particular RT-qPCR method, and among those samples the viral load varied from 378 to over 1 million copies/mL.
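For context on why ddPCR yields a viral load independent of calibrators and per-sample efficiency: the reaction is partitioned into thousands of droplets, and a Poisson correction is applied to the fraction of droplets that show no amplification. A sketch of that calculation; the droplet count and droplet volume below are typical assumed values, not figures from the study:

```python
import math

def ddpcr_copies_per_ul(positive_droplets, total_droplets, droplet_nl=0.85):
    """Poisson-corrected ddPCR quantification. The fraction of negative
    droplets gives the mean copies per droplet directly, with no need for
    standards or calibrators. Droplet volume of ~0.85 nL is an assumption
    typical of droplet-based systems."""
    negative_fraction = (total_droplets - positive_droplets) / total_droplets
    copies_per_droplet = -math.log(negative_fraction)
    return copies_per_droplet / (droplet_nl * 1e-3)  # copies per microliter

# Example: 5,000 of 20,000 droplets positive.
print(round(ddpcr_copies_per_ul(5_000, 20_000), 1))
```

Because the readout is a droplet count rather than an amplification curve, the estimate does not depend on how efficiently any individual droplet amplified, which is the property that makes ddPCR the better reference for per-sample viral load here.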
In conclusion, we studied four rapid antigen tests for the novel coronavirus and found that analytical sensitivity varied between methods, mainly for samples with less than 500,000 copies/mL RNA. We studied only four rapid antigen methods, so we cannot be sure whether the observed performance represents upper and lower bounds of analytical sensitivity for these tests, or whether antigen tests exist that are significantly more or less sensitive. As has been demonstrated for other viral antigen tests, lateral flow methods are analytically less sensitive than fluorescence immunoassays. Somewhat surprisingly, we also saw a significant difference in sensitivity between the two lateral flow methods we studied. When antigen is detectable in samples with SARS-CoV-2 RNA also present, more sensitive methods can detect antigen even when the viral load is less than 50,000 copies/mL.
In contrast, less sensitive methods will fail to detect antigen in many samples with less than 500,000 copies/mL RNA. Our study puts some boundaries around the analytical sensitivity of antigen tests and highlights the kind of samples that may show differences in antigen results based upon the performance characteristics of individual antigen tests.
Finally, the variable relationship between the crossing point or cycle threshold and viral load as measured by more precise methods like ddPCR, limits the use of the crossing point or cycle threshold to make assumptions about the viral load of individual samples tested by RT-qPCR.
I would like to thank you for your time and attention. Our study is now published online in the journal Clinical Chemistry, and I have added the reference here for your convenience.