Why school Covid-19 screening programs are producing a vast majority of false positives
CDC and other public health authorities are recommending widespread Covid-19 screening for schools with almost no acknowledgement of the high likelihood that the vast majority of positive results will be false positives
I have written a number of essays and papers (with co-authors Blaine Williams and Daniel Howard on the linked paper) recently about what I’m calling the “false positive catastrophe.” This is a well-known epidemiological issue: when disease prevalence is low and asymptomatic people are tested, the large majority of positive test results will be false positives — even if tests are relatively accurate.
For example, FDA warned in a November 2020 letter that up to 96% of all positive test results could be false positives at low disease prevalence.
Similarly, UK government officials warned in internal email discussions that up to 98% of the rapid test results rolled out widely for screening in that country could be false positives.
Why? It’s counter-intuitive, but the mechanism is simple: a test produces false positives at the same rate whether disease prevalence is high or low, so as prevalence falls, that fixed trickle of false positives starts to swamp the shrinking number of true positives.
Harvard Medical School professor and epidemiologist Westyn Branch-Elliman recently wrote about this phenomenon in U.S. News and World Report. She and her coauthors described how, at the roughly 0.1% background prevalence we might see in schools this summer and fall, a 95% specific test would make 71 out of every 72 test positives false positives.
You read that right: the professors write that in this scenario just one of every 72 positive results would be a true positive. Here’s the summary of their argument:
[T]he probability of COVID-19 in asymptomatic students attending in-person learning was consistently low — less than 0.5% — even before widespread vaccination. Using 0.5 as a (very) generous overestimate and a close-to-perfect (99% specific) diagnostic test, that means for every one true positive test, three will be false-positive. The true specificity of some polymerase chain reaction (PCR) tests is probably closer to 95% (in other words, still very good, but not quite so close to perfect). This more realistic estimate increases the proportion of false-positives test results even more — up to 14 false-positives for every real case of COVID-19 identified by the screening program. As case rates continue to decline, the ratio of real cases to false-positives only gets worse (and worse). Assuming a rate of 1 in 1,000 or 0.1% and a nearly perfect test, there are 14 false-positive tests for every real case found by a screening testing program, and 71 if we use the more realistic estimate of 95% specificity.
Here’s the basic math behind how this works. It rests on the well-known epidemiological equation for the probability that a test result is correct at a given disease prevalence: this is the “positive predictive value” (PPV) for positive tests and the “negative predictive value” (NPV) for negative tests.
If we have a test that is 99% specific for the Covid-19 virus (meaning it correctly returns a negative result for 99% of the people who do not have the disease), then about 1% of uninfected people tested will get a false positive by random chance. In a population of 100 people being randomly tested, we’ll get roughly one false positive.
If the actual disease prevalence were 10%, we’d have one false positive and ten true positives (assuming the test catches every real case): a ratio of one false positive to ten true positives, which is an acceptable ratio.
But what if the disease prevalence is only 1%? Now we have one false positive for every true positive, so half of all positives are false. That’s not helpful, since we have a coin-flip chance of isolating and quarantining the wrong person, not to mention causing unnecessary fear and panic.
What if the disease prevalence is only 0.1%? Now in a population of 1,000 people being randomly tested we’d have just one true positive but ten false positives. We now have a false positive ratio of 10 to 1.
Applying the same formula with a 95% specific test and a 0.1% disease prevalence yields roughly the 71-out-of-72 false positive proportion that Branch-Elliman and her colleagues wrote about.
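The scenarios walked through above can be sketched in a few lines of code. This is a minimal illustration, assuming for simplicity that the test catches every real case (100% sensitivity); real tests miss some infections, which shrinks the true positives and pushes the false positive share even higher, toward figures like the 71-of-72 cited above.

```python
def positive_predictive_value(prevalence, specificity, sensitivity=1.0):
    """Probability that a positive result is a true positive (Bayes' rule)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# The scenarios walked through above:
for prevalence, specificity in [(0.10, 0.99), (0.01, 0.99), (0.001, 0.99), (0.001, 0.95)]:
    share_false = 1 - positive_predictive_value(prevalence, specificity)
    print(f"prevalence {prevalence:.1%}, specificity {specificity:.0%}: "
          f"{share_false:.0%} of positives are false")
```

At 1% prevalence this reproduces the coin-flip result (about 50% of positives false), and at 0.1% prevalence with 95% specificity about 98% of positives are false under the perfect-sensitivity assumption; factoring in imperfect sensitivity brings the ratio into the 71-of-72 neighborhood.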
How is this reflected in the real world? Is this all theoretical or actually happening?
Well, it’s actually happening. Screening testing (testing of asymptomatic people in schools, universities, sports teams, workplaces, etc.) has been done since almost the start of the pandemic. But as we’ve been trying to get “back to normal,” and eradicate the disease, screening testing has been ramping up dramatically around the country.
The U.S. Congress approved $12 billion in additional funding for the Department of Health and Human Services (HHS) to expand testing and mitigation around the country, with a strong focus on screening. This funding was approved in the spring of 2021 but was only deployed around the country starting in early summer. As summer progressed, testing rates went up and, at least in part proportionately, reported ‘cases’ went up too.
This is because a ‘case’ has been defined in this pandemic, almost for the first time in history, as a positive PCR test alone, with no symptoms required at all.
CDC has issued guidance for screening programs in schools — with almost no mention of the risk of false positives. Their guidance, updated as recently as Aug. 6, 2021, states only this with respect to the risk of false positives:
Testing in low-prevalence settings might produce false positive results, but testing can provide an important prevention strategy and safety net to support in-person education.
This is, however, an absurd and misleading statement, because in the very same document CDC describes as “low” prevalence a case rate of 0–9 new cases per 100,000 people per week, which is at most about 0.01% disease prevalence on a weekly basis. Even their “high” prevalence case of 100 or more cases per 100,000 per week translates to only 0.1% prevalence.
Even looking at CDC’s “high” prevalence scenario of just 0.1%, if we assume an extremely generous 99% test specificity for the antigen tests that are used for most screening programs, we get a whopping 94% false positive test results. If we use the more realistic 95% test specificity figure that Branch-Elliman and her colleagues used, we get a catastrophic 99% false positive rate.
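To put CDC’s own bands in head-count terms, here is a hypothetical school screening 1,000 students in the “high” band (100 cases per 100,000 per week, i.e. 0.1% prevalence). The sketch assumes 100% sensitivity; since antigen tests miss a substantial fraction of real cases, the true-positive count shrinks further in practice, which is how the false positive share reaches figures like the 94% and 99% above.

```python
def screening_counts(n_students, prevalence, specificity, sensitivity=1.0):
    """Expected true and false positives from one round of screening."""
    infected = n_students * prevalence
    true_pos = infected * sensitivity
    false_pos = (n_students - infected) * (1 - specificity)
    return true_pos, false_pos

# CDC's "high" band: 100 cases per 100,000 per week, i.e. 0.1% prevalence
for specificity in (0.99, 0.95):
    tp, fp = screening_counts(1_000, 100 / 100_000, specificity)
    print(f"specificity {specificity:.0%}: ~{tp:.0f} true vs ~{fp:.0f} false positives")
```

Even with the generous assumptions, one round of screening flags roughly ten false positives (99% specificity) or fifty (95% specificity) for every one real case.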
This is why I call it the “false positive catastrophe”: once you account honestly for these details, the false positive rates become staggering.
It should be obvious that any testing program returning 90% or more false positives is worse than useless: it is deeply misleading and damaging to efforts to actually control the virus and return to some semblance of normal life. It is a massive part of why this pandemic is still with us, since false positive results are, in many settings, swamping the true positives.
So what are we to do? First, don’t run screening programs at anything less than 10% disease prevalence. And this should not be confused with positivity rates: disease prevalence is not the same as the positivity rate, and it must be measured through random or sample testing of the broader population. Second, all surveillance and screening programs must (yes, must) include PCR verification of initial test results. Better yet, use more reliable verification such as live culture calibration or genetic sequencing.
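A 10% threshold can be sanity-checked by inverting the PPV formula: solve for the prevalence at which a chosen share of positives would be real. This is a sketch with illustrative specificity values, again assuming perfect sensitivity; the helper name is hypothetical.

```python
def min_prevalence_for_ppv(target_ppv, specificity, sensitivity=1.0):
    """Smallest prevalence at which PPV reaches target_ppv.

    Derived by solving sens*p / (sens*p + (1 - spec)*(1 - p)) = target_ppv for p.
    """
    fpr = 1 - specificity  # false positive rate among the uninfected
    return target_ppv * fpr / (sensitivity * (1 - target_ppv) + target_ppv * fpr)

# Prevalence needed before 9 out of 10 positives are real:
print(f"99% specific test: {min_prevalence_for_ppv(0.9, 0.99):.1%}")  # ~8.3%
print(f"95% specific test: {min_prevalence_for_ppv(0.9, 0.95):.1%}")  # ~31.0%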
It seems to be a highly counter-intuitive result that even relatively accurate tests will lead to high numbers of false positives at low disease prevalence. But it is absolutely crucial, if we are to end this pandemic, that at the very least our policymakers understand this dynamic.