The “false positive catastrophe” that results from widespread Covid-19 testing

Tam Hunt
Jul 26, 2021

Widespread testing of asymptomatic people, known as screening or surveillance testing, leads to high numbers of false positives even with accurate tests; and because the various Covid-19 tests are not very accurate, the result is a catastrophe of false positive test results. This affects not only case numbers, which are in most cases defined simply by a positive test result, but also hospitalization and Covid-19 death figures, because those categories are also defined in most cases simply by a positive test result.

[This is a shorter and more accessible version of our academic paper on “The False Positive Paradox” available here]

It’s well-known that widespread testing of people with a low probability of having the disease at issue will lead to high levels of false positives. This is known as the “false positive paradox.” It’s a paradox because even quite accurate tests can lead to high levels of false positives when used widely in a population with low actual prevalence. The level of false positives has become so significant that I now prefer to call it the “false positive catastrophe,” because it is such an important part of what is going on.

The FDA warned in a November 2020 letter that up to 96% of all positive results in screening programs could be false positives at low disease prevalence: “At 0.1% [active disease] prevalence … 96 out of 100 positive results would be false positives.”

Similarly, UK government officials warned in internal email discussions that up to 98% of positive results from the rapid tests rolled out widely for screening in that country could be false positives.

Why? It’s counter-intuitive, but the reason is this: a test produces false positives at the same rate whether disease prevalence is high or low, so as prevalence goes down the true positives decline while the false positives stay roughly constant and begin to swamp the true positives. So even very accurate tests can result in a vast majority of false positives at low disease prevalence.
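For readers who want to check this arithmetic themselves, here is a minimal sketch in Python. The 99%/99% sensitivity and specificity figures are illustrative assumptions for a “very accurate” test, not measurements of any particular Covid-19 test:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' rule: the fraction of positive results that are true positives."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# The same hypothetical test (99% sensitive, 99% specific) at falling prevalence:
for prevalence in (0.10, 0.01, 0.001):
    ppv = positive_predictive_value(prevalence, sensitivity=0.99, specificity=0.99)
    print(f"prevalence {prevalence:.1%}: {ppv:.0%} of positives are real, "
          f"{1 - ppv:.0%} are false positives")
```

At 10% prevalence roughly nine of ten positives are real; at 1% it is a coin flip; at 0.1% roughly nine of ten positives are false, the same ballpark as the FDA warning quoted above (the exact fraction depends on the sensitivity and specificity one assumes).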

And even during the summer 2021 Delta spike in cases in the U.S. and elsewhere, disease prevalence was still quite low (under 1% of the population as a whole), well within the range where the false positive paradox applies.

As the federal Department of Health and Human Services rolls out the $12 billion approved by Congress in the spring of 2021 for expanded Covid-19 testing programs, with testing mostly focused on asymptomatic people, we are now seeing this false positive paradox in action.

A big part of this large amount of new funding is focused on testing in schools. But a July 2021 analysis by a trio of doctors with expertise in epidemiology, including Westyn Branch-Elliman at Harvard Medical School as lead author, warned against widespread testing of school students in the fall of 2021 — specifically because of the certainty that such testing will yield a large majority of false positives. They state:

Put simply, no test is perfect. There are errors in which a test is positive but there is no disease (false positives), and in which a test is negative even when the person has the disease (false negatives). When case rates are low, the majority — and sometimes even the vast majority — of positive test results are false-positives.

They add more detail:

[Various studies] across the state and across the country have shown us that the probability of COVID-19 in asymptomatic students attending in-person learning was consistently low — less than 0.5% — even before widespread vaccination. Using 0.5 as a (very) generous overestimate and a close-to-perfect (99% specific) diagnostic test, that means for every one true positive test, three will be false-positive. The true specificity of some polymerase chain reaction (PCR) tests is probably closer to 95% (in other words, still very good, but not quite so close to perfect). This more realistic estimate increases the proportion of false-positives test results even more — up to 14 false-positives for every real case of COVID-19 identified by the screening program. As case rates continue to decline, the ratio of real cases to false-positives only gets worse (and worse). Assuming a rate of 1 in 1,000 or 0.1% and a nearly perfect test, there are 14 false-positive tests for every real case found by a screening testing program, and 71 if we use the more realistic estimate of 95% specificity.

So, to summarize: these doctors are warning that, under their more realistic assumptions (0.1% prevalence and 95% specificity), Covid-19 screening in schools would yield 71 false positives out of every 72 positive test results, that is, just one true positive out of 72. It’s not hard to see how that may be labeled a false positive catastrophe, as it leads to renewed fear, panic-based responses, and schools being shut again or reopenings delayed.
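Those ratios are easy to reproduce. The excerpt doesn’t state the sensitivity the authors assumed; a value of roughly 70%, a commonly cited figure for screening asymptomatic people, makes the numbers line up, so that is the assumption used in this sketch:

```python
def false_positives_per_true_positive(prevalence, sensitivity, specificity):
    """How many false positives a screening program produces for each true positive."""
    false_positives = (1 - prevalence) * (1 - specificity)
    true_positives = prevalence * sensitivity
    return false_positives / true_positives

ASSUMED_SENSITIVITY = 0.70  # not stated in the excerpt; assumed here for illustration

scenarios = [
    ("0.5% prevalence, 99% specificity", 0.005, 0.99),
    ("0.5% prevalence, 95% specificity", 0.005, 0.95),
    ("0.1% prevalence, 99% specificity", 0.001, 0.99),
    ("0.1% prevalence, 95% specificity", 0.001, 0.95),
]
for label, prevalence, specificity in scenarios:
    ratio = false_positives_per_true_positive(prevalence, ASSUMED_SENSITIVITY, specificity)
    print(f"{label}: ~{ratio:.0f} false positives per true positive")
```

Under that assumed sensitivity, this reproduces the roughly 3, 14, 14 and 71 false positives per true positive quoted above.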

Now that we have good data about society-wide background prevalence of Covid-19 and test accuracy, we can extend the same argument made against school testing and reasonably conclude that we have also seen a false positive catastrophe at the societal level over the last year and a half, with well over 90% of positive Covid-19 test results very likely being false positives. This is because much of the Covid-19 testing since the early days of the pandemic has been screening or surveillance testing, which by definition does not consider symptoms before a test is performed.

This false positive issue is not a new problem; it has been highlighted as a problem in past outbreaks. CDC’s 2004 guidance from the original SARS outbreak, for example, stated: “To decrease the possibility of a false-positive result, testing should be limited to patients with a high index of suspicion for having SARS-CoV disease [i.e. having symptoms or contact with someone who has had the disease].”

WHO and CDC did, however, recommend widespread testing of asymptomatic people early in the Covid-19 pandemic. CDC revised its guidance in August of 2020 to recommend against routine testing of asymptomatic contacts, only to reverse course again after public and expert pushback; clearly, there was a lot of internal debate about this important issue within these agencies.

CDC’s most recent (March 2021) guidance does again recommend screening testing of asymptomatics, despite the widely known issues with this policy. This guidance states: “Rapid, point-of-care serial screening can identify asymptomatic cases and help interrupt SARS-CoV-2 transmission. This is especially important when community risk or transmission levels are substantial or high.”

WHO once again changed its guidance in June of 2021 to recommend against testing asymptomatics except for higher risk people like health care workers. In December 2021 the Canadian province of Ontario followed suit. As of January 2022 the US, however, continues to recommend testing of asymptomatics.

The prudent policy in this situation is, quite clearly, to not test asymptomatics because such testing leads to extremely high levels of false positives even with highly accurate tests. As I’ve described in various other essays, the available PCR and antigen tests are not close to being highly accurate.

Inaccurate tests combined with widespread testing of asymptomatics can lead to catastrophically high levels of false positives.

For example, in the U.S. we used to screen widely for prostate cancer and breast cancer, under the common-sense notion that it’s good to catch these illnesses early and “nip them in the bud.” In practice, however, the result has been an extremely high level of false positives for both types of cancer, due to the false positive paradox. Consequently, the American Medical Association and most other groups have stopped recommending widespread screening for these cancers.

How does the math work?

Here are the details on why testing asymptomatics in the Covid-19 pandemic is such a bad idea: even a test with a very high 99% accuracy rate, used to screen asymptomatic populations with a low background rate of actual infection, will yield high levels of false positives. And the background rate of actual infection, even during “spikes,” has always been relatively low. For example, Baden et al., 2020, found a 0.6% background rate of positive test results among the 30,420 participants in the Moderna vaccine clinical trial, so assuming 1%, as I do in the results in Figure 1, is a generous assumption.

An essay in The Guardian newspaper by two mathematicians explains this issue well. The argument is based on Bayesian probability, which sounds complex but is actually pretty simple:

Imagine you undergo a test for a rare disease. The test is amazingly accurate: if you have the disease, it will correctly say so 99% of the time; if you don’t have the disease, it will correctly say so 99% of the time.

But the disease in question is very rare; just one person in every 10,000 has it. This is known as your “prior probability”: the background rate in the population.

So now imagine you test 1 million people. There are 100 people who have the disease: your test correctly identifies 99 of them. And there are 999,900 people who don’t: your test correctly identifies 989,901 of them.

But that means that your test, despite giving the right answer in 99% of cases, has told 9,999 people that they have the disease, when in fact they don’t. So if you get a positive result, in this case, your chance of actually having the disease is 99 in 10,098, or just under 1%. If you took this test entirely at face value, then you’d be scaring a lot of people, and sending them for intrusive, potentially dangerous medical procedures, on the back of a misdiagnosis.

Without knowing the prior probability, you don’t know how likely it is that a result is false or true. If the disease was not so rare — if, say, 1% of people had it — your results would be totally different. Then you’d have 9,900 false positives, but also 9,990 true positives. So if you had a positive result, it would be more than 50% likely to be true.
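The first scenario in the excerpt is easy to verify with a few lines of arithmetic, using exactly the numbers the authors state (one million people tested, a 1-in-10,000 prevalence, and 99% sensitivity and specificity):

```python
population = 1_000_000
prevalence = 1 / 10_000
sensitivity = specificity = 0.99

infected = population * prevalence                              # 100 people
true_positives = infected * sensitivity                         # 99
false_positives = (population - infected) * (1 - specificity)   # 9,999

ppv = true_positives / (true_positives + false_positives)
print(f"{true_positives:.0f} true positives vs {false_positives:.0f} false positives")
print(f"chance a positive result is real: {ppv:.2%}")  # about 0.98%, i.e. 'just under 1%'
```

In other words, 99 out of 10,098 positives are real, just under 1%, exactly as the excerpt says.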

Figure 1 below is based on the British Medical Journal (BMJ)’s Covid-19 test accuracy interactive calculator (go ahead, play with it yourself; it’s fun! And very educational).

In populating the three cells in the calculator (at the top of the image) I’ve assumed a 1% background prevalence of active infection, which, as mentioned above, is an extremely generous assumption for the level of active infection. (Antibody testing of populations for past infection is a different kind of testing: it is designed to pick up people who have had the infection at any point in the past, whereas PCR and antigen tests capture a particular snapshot in time.)

I’ve also assumed 58% sensitivity and 99% specificity, which are the findings of a recent metastudy combining 64 published studies of antigen test accuracy.

We get fully 50% false positives in this scenario (1/2 positives are false positives) — even with a 99% specificity test. And zero false negatives.

Figure 1. False positives are 50% (1/2) even with a 99% specificity test if the active infection level in the population is 1%.

50% is the same as chance. In other words, this 99% specificity test can in this scenario do no better than a coin flip. So testing in this scenario is NOT warranted because data that is no better than a coin flip is not data — it’s random chance.

However, it gets worse, much worse. PCR tests and antigen tests actually have nowhere near a 99% specificity level in practice, for various reasons. The peer-reviewed publication Lee 2020, “Testing for SARS-CoV-2 in cellular components by routine nested RT-PCR followed by DNA sequencing,” performed a detailed analysis of the CDC PCR test, which was widely used in the first months of the pandemic, and found it had 70% specificity (i.e. 30% false positives among the uninfected) and 80% sensitivity (20% false negatives among the infected). Other studies, such as Gubbay et al. 2021 and Rahman et al. 2020, which also used the true gold standard of sequencing viral test samples to verify PCR results, found similarly low specificity for PCR tests: Gubbay et al. found only 60% specificity and Rahman et al. found a higher but still very poor 92% specificity.

If we use a 1% background prevalence, 80% sensitivity, and 70% specificity in the BMJ calculator, we get a catastrophic 30 out of 31 false positives. In other words, just one out of 31 positive test results is actually a real positive. And, again, we get zero false negatives.

Figure 2. Using Lee 2020 findings regarding CDC’s PCR test inaccuracies with testing of asymptomatic people in a population with active Covid-19 infection level of 1%.
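For readers who prefer to check the arithmetic behind both figures directly, here is a minimal sketch using the prevalence, sensitivity and specificity values quoted above. It reports expected counts per 100 people tested (the interactive calculator displays whole people, which is presumably why small counts appear in its output as 0 or 1):

```python
def expected_counts_per_100(prevalence, sensitivity, specificity, n=100):
    """Expected test outcomes among n people screened (not rounded to whole people)."""
    infected = n * prevalence
    healthy = n - infected
    return {
        "true_positives": infected * sensitivity,
        "false_negatives": infected * (1 - sensitivity),
        "false_positives": healthy * (1 - specificity),
        "true_negatives": healthy * specificity,
    }

# Figure 1 scenario: 1% prevalence, 58% sensitivity, 99% specificity (antigen metastudy)
# Figure 2 scenario: 1% prevalence, 80% sensitivity, 70% specificity (Lee 2020, CDC PCR test)
for label, sensitivity, specificity in [("Figure 1", 0.58, 0.99), ("Figure 2", 0.80, 0.70)]:
    counts = expected_counts_per_100(0.01, sensitivity, specificity)
    fp, tp = counts["false_positives"], counts["true_positives"]
    print(f"{label}: {tp:.2f} true vs {fp:.2f} false positives per 100 tested "
          f"({fp / (fp + tp):.0%} of positives are false)")
```

Rounded to whole people per 100 tested, these expected counts give the roughly one-in-two split shown in Figure 1 and the roughly 30-of-31 split shown in Figure 2; the unrounded proportion in the Figure 1 scenario comes out somewhat above 50% because the 58% sensitivity also misses some true positives.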

This is a large part of why there have been so many allegedly asymptomatic carriers of the virus: 1) a “case” was defined as anyone who tested positive; and 2) with highly inaccurate tests and widespread testing of asymptomatics, the large majority of “cases” are in fact false positives.

As noted at the outset, this affects not only case numbers, which are in most cases defined simply by a positive test result, but also hospitalization and Covid-19 death figures, because those categories are also defined in most cases simply by a positive test result. A retrospective analysis of these data, in light of the understanding that most positive test results have been false positives, would result in a large reduction in hospitalization and death figures.

It is hard to overstate the importance of this understanding, both for the current pandemic and for the next one, so that we can avoid responding with massive over-reaction.

So what are we to do? Two recommendations are obvious: 1) drastically reduce or eliminate screening and surveillance programs that focus on asymptomatic people; and 2) in any screening or surveillance programs that continue, always require a second test (which should be PCR, not antigen) to verify a positive result before concluding that the person has in fact tested positive. These two solutions will go far toward mitigating this false positive catastrophe.
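The effect of the second recommendation is easy to quantify: a confirmatory test is applied only to people who have already screened positive, so it starts from a much higher prior probability and its positive result is far more believable. A minimal sketch, assuming the two tests’ errors are independent and using illustrative accuracy figures for the confirmatory PCR (the 80%/98% numbers are assumptions for the example, not measured values):

```python
def ppv(prior, sensitivity, specificity):
    """Probability of true infection given a positive result (Bayes' rule)."""
    true_positive = prior * sensitivity
    false_positive = (1 - prior) * (1 - specificity)
    return true_positive / (true_positive + false_positive)

prevalence = 0.01                              # 1% background prevalence, as above
ppv_screen = ppv(prevalence, 0.58, 0.99)       # antigen screening test (metastudy figures)
ppv_confirmed = ppv(ppv_screen, 0.80, 0.98)    # assumed confirmatory PCR, screen-positives only
print(f"screening alone: {ppv_screen:.0%} chance a positive is real")
print(f"with a confirmatory second test: {ppv_confirmed:.0%}")
```

In practice the two tests’ errors will not be fully independent, so the real-world gain would be smaller, but the direction of the effect is clear.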
