False positives galore: Covid-19 levels rise in kids in August — to a still very low 0.03%
Despite media coverage about increasing threats to children and adolescents from Covid-19 (coverage that will probably soon lead to vaccine mandates for kids as well as adults), new data from CDC show an extremely low level of Covid-19 cases among kids in the U.S., at 0.03%, even during the spike in cases this August.
CDC released data in early September showing that despite a significant increase in Covid-19 cases in 0–17 year-olds in the U.S. in August of 2021, the average disease prevalence was still extremely low at just 0.03%.
The report, in CDC’s Morbidity and Mortality Weekly Report (MMWR), states:
COVID-19 incidence among persons aged 0–4, 5–11, and 12–17 years during August 2020–August 2021 peaked in January 2021 at 21.2, 30.1, and 51.7 cases per 100,000 persons, respectively (Figure 1). Incidence declined in June 2021 to a low of 1.7, 1.9, and 2.9, respectively, across the three age groups; however, incidence in August 2021 among the three age groups reached 16.2, 28.5, and 32.7 per 100,000 persons, respectively.
The average of 16.2, 28.5, and 32.7 is 25.8 cases per 100,000 persons, which works out to about 0.03%.
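For reference, the arithmetic behind that 0.03% figure is simple to check (a quick calculation, not new data):

```python
# Mean of the three August 2021 incidence figures from the MMWR report
# (cases per 100,000 persons), expressed as a percentage of the population.
rates_per_100k = [16.2, 28.5, 32.7]
avg = sum(rates_per_100k) / len(rates_per_100k)  # 25.8 per 100,000
pct = avg / 100_000 * 100                        # about 0.026%, i.e. ~0.03%
print(f"{avg:.1f} per 100,000 = {pct:.3f}%")
```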
These data are based on CDC’s “case-based surveillance system,” which aggregates data from various states. A case is defined solely as a positive PCR or antigen test, with no consideration of whether symptoms are present. Since about half of all cases are asymptomatic and never become symptomatic, the actual rate of Covid-19 disease (as opposed to simply a positive test result) is likely even lower than this 0.03% figure.
This is important for at least two reasons: 1) it shows that the actual risk to kids is extremely low even during the Delta surge (and for much of the past year, case levels were about a quarter of the August 2021 level); 2) it shows that the false-positive catastrophe I’ve written about in a number of previous blogs and articles is indeed happening as school Covid-19 screening programs ramp up.
For example, FDA warned in a November 2020 letter that up to 96% of all positive test results could be false positives at low disease prevalence; “low” in their example was 0.1%, roughly three times higher than the level we have apparently seen among kids even during this recent August Delta spike.
Similarly, UK government officials warned in internal email discussions that up to 98% of the rapid test results rolled out widely for screening in that country could be false positives, at low disease prevalence.
Why? It’s counterintuitive, but the reason is this: a test produces false positives at a fixed rate regardless of disease prevalence, so as prevalence falls, the false positives begin to swamp the true positives.
Harvard Medical School professor and epidemiologist Westyn Branch-Elliman recently wrote about this phenomenon in U.S. News and World Report. She and her coauthors described how, at a disease prevalence of 0.1% and with a 95% specific test, 71 out of every 72 positive results would likely be false positives.
You read that right: the authors calculate that just one of every 72 positive results in this scenario would be a true positive. Here’s the summary of their argument:
[T]he probability of COVID-19 in asymptomatic students attending in-person learning was consistently low — less than 0.5% — even before widespread vaccination. Using 0.5 as a (very) generous overestimate and a close-to-perfect (99% specific) diagnostic test, that means for every one true positive test, three will be false-positive. The true specificity of some polymerase chain reaction (PCR) tests is probably closer to 95% (in other words, still very good, but not quite so close to perfect). This more realistic estimate increases the proportion of false-positives test results even more — up to 14 false-positives for every real case of COVID-19 identified by the screening program. As case rates continue to decline, the ratio of real cases to false-positives only gets worse (and worse). Assuming a rate of 1 in 1,000 or 0.1% and a nearly perfect test, there are 14 false-positive tests for every real case found by a screening testing program, and 71 if we use the more realistic estimate of 95% specificity.
Here’s the basic math behind how this works. It rests on the standard epidemiological formula for calculating the chance that a test result is correct at a given disease prevalence: the “positive predictive value” (PPV) for positive tests and the “negative predictive value” (NPV) for negative tests.
If we have a test that is 99% specific for the Covid-19 virus, meaning it correctly returns a negative result for 99% of people who don’t have the virus, then by definition 1% of uninfected people will test falsely positive. In a population of 100 uninfected people being tested, we’d expect about one false positive.
If the actual disease prevalence were 10%, then out of 100 people tested we’d have roughly one false positive and ten true positives, about a 1-to-10 ratio and an acceptable share of false positives.
But what if the disease prevalence is only 1%? Now we have roughly one false positive for every true positive, meaning half of all positive results are false. That’s not helpful, since we have a coin-flip chance of isolating and quarantining the wrong person, not to mention causing unnecessary fear and panic.
What if the disease prevalence is only 0.1%? Now, in a population of 1,000 people being tested, we’d have just one true positive but about ten false positives: a false-positive-to-true-positive ratio of 10 to 1.
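The three scenarios above can be sketched in a few lines of Python (a minimal illustration that, like the examples above, assumes a perfectly sensitive test):

```python
def positive_predictive_value(prevalence, specificity, sensitivity=1.0):
    """Fraction of positive results that are true positives (PPV)."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A 99%-specific test at the three prevalence levels discussed above:
for prevalence in (0.10, 0.01, 0.001):
    ppv = positive_predictive_value(prevalence, specificity=0.99)
    print(f"prevalence {prevalence:.1%}: "
          f"{ppv:.0%} of positives are true, {1 - ppv:.0%} are false")
```

At 10% prevalence, only about 8% of positives are false; at 1%, roughly half; at 0.1%, about 91% — the 10-to-1 ratio described above.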
Applying this same formula to a 95% specific test at 0.1% disease prevalence yields roughly 50 false positives for every true positive if the test catches every real case; allowing for imperfect sensitivity, as Branch-Elliman and her colleagues evidently do, pushes the ratio toward the 71 out of 72 they describe.
But based on the data at the beginning of this essay, which suggest a disease prevalence of only about 0.03%, the potential for large numbers of false positives is even greater.
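A rough sketch of what that implies, assuming a 95% specific test (the perfect-sensitivity assumption here is mine, made for simplicity; lower sensitivity only makes the ratio worse):

```python
def false_per_true_positive(prevalence, specificity, sensitivity=1.0):
    """False positives per true positive in a randomly screened population."""
    false_pos = (1 - prevalence) * (1 - specificity)
    true_pos = prevalence * sensitivity
    return false_pos / true_pos

# 95%-specific test at 0.1% prevalence: about 50 false positives per
# true positive under perfect sensitivity.
print(round(false_per_true_positive(0.001, 0.95)))

# At the ~0.03% prevalence suggested by the CDC data, the ratio
# roughly triples.
print(round(false_per_true_positive(0.0003, 0.95)))
```

Under these assumptions, a screening program at 0.03% prevalence would produce on the order of 167 false positives for every real case it finds.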