Interpreting Medical Tests
Adding to all of the uncertainty about Covid-19, there is an issue with the accuracy of the tests. The short version:
Do not trust a negative PCR test when actively infected - they can miss a lot of cases.
Do not trust a positive antibody test up here - it may be a false positive, and even if a true positive, we do not yet know if that means you cannot catch it again. . .
Diagnosing Current Infection
Our PCR tests for current infection are troubled by false negatives. In addition to that study from the Annals of Internal Medicine, another from the NEJM discusses a number of trials showing a false negative rate ranging from 2% to 29%.
Tests will all be negative on the day of exposure.
The false negative rate falls to 38% when symptoms begin (usually day 4 after exposure).
Three days after symptom onset, the false negative rate is still 20%.
This means a negative test does NOT mean for sure that you are negative. That is why we may tell you to remain isolated and to continue monitoring your oxygen even after a test comes back negative, and why, even if you are asymptomatic, you may not be completely safe visiting a vulnerable person just because your test is negative.
Confirming Past Infection
The test to assess for past Covid-19 infection (at least 14 days after onset) is the blood test for antibodies. There is a very accurate test available, but we will still have false positives in our area because we have not been hit so hard yet.
The other catch is that we do not yet know whether evidence of past infection makes you safe from re-exposure, even if it is a true positive.
The interpretation of positive and negative test results depends not just on the accuracy of the test itself, but also on the amount of illness in the population being tested.
The numbers I like to know are the positive predictive value (PPV) and negative predictive value (NPV), that is:
PPV = the percentage of positive tests that are true positives, which tells me: Should I trust this test? Should I treat based on the results? Or should I be skeptical and try to confirm some other way before treating?
NPV = the percentage of people with negative test results who actually do not have the condition, which tells me: Am I safe treating this person as though they do not have the condition, or should I still be a little worried?
Unfortunately, even a really good test can still have misleading results if the condition you are looking for is rare.
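Both values follow directly from a test's sensitivity, specificity, and the prevalence of the condition, via Bayes' theorem. Here is a minimal sketch in Python (the function names are my own, not from any library):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Fraction of positive tests that are true positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Fraction of negative tests that are true negatives."""
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)
```

For example, a 90%-sensitive, 90%-specific test at 60% prevalence gives `ppv(0.9, 0.9, 0.6)` of about 0.93.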
Simple Example
I will start with a simple example. Imagine a pretty accurate test:
Sensitivity = 90%, so the test comes out positive 90% of the time when someone has the condition, and is only a false negative 10% of the time when they have it.
Specificity = 90%, so the test comes out negative 90% of the time when the disease is absent, and is only a false positive 10% of the time.
In a population where the condition is common (60% of people have it), this looks like a great test:
| | Condition present | Condition absent | Total |
|---|---|---|---|
| Test positive | 54 | 4 | 58 |
| Test negative | 6 | 36 | 42 |
| Totals | 60 | 40 | 100 |
Thus, of the 58 total positive tests, 54 people have the condition and 4 do not, which means that if any given person tests positive, there is a 54/58 = 93% chance it is a true positive (PPV = 93.1%).
If someone tests negative, 36/42 = 85.7% of those negative tests will be true negatives (NPV = 85.7%).
But let's look at that same decent test in a population where the condition is more rare (only 10% of the population have the condition):
| | Condition present | Condition absent | Total |
|---|---|---|---|
| Test positive | 9 | 9 | 18 |
| Test negative | 1 | 81 | 82 |
| Totals | 10 | 90 | 100 |
Now, only half of those with a positive test are actually true positives (PPV = 9/18 = 50%).
So now I want a second test to be sure - I would not want to treat for something the person did not have 50% of the time!
On the other hand, I really trust a negative test: The NPV = 98.8%.
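The two tables above can be reproduced by simple counting. Here is a sketch that tallies a 100-person cohort (the helper name is mine):

```python
def two_by_two(n, prevalence, sensitivity, specificity):
    """Return (true_pos, false_pos, false_neg, true_neg) counts for a cohort of n people."""
    sick = n * prevalence
    well = n - sick
    tp = sick * sensitivity   # sick people correctly flagged positive
    fn = sick - tp            # sick people the test misses
    tn = well * specificity   # well people correctly flagged negative
    fp = well - tn            # well people wrongly flagged positive
    return tp, fp, fn, tn

# Common condition: 60 of 100 people have it (first table)
tp, fp, fn, tn = two_by_two(100, 0.60, 0.90, 0.90)
print(round(tp), round(fp), round(fn), round(tn))   # 54 4 6 36
print(f"PPV = {tp / (tp + fp):.1%}")                # PPV = 93.1%
print(f"NPV = {tn / (tn + fn):.1%}")                # NPV = 85.7%

# Rare condition: 10 of 100 people have it (second table)
tp, fp, fn, tn = two_by_two(100, 0.10, 0.90, 0.90)
print(f"PPV = {tp / (tp + fp):.1%}")                # PPV = 50.0%
print(f"NPV = {tn / (tn + fn):.1%}")                # NPV = 98.8%
```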
This is why conditions like HIV, Hepatitis C, and Lyme disease have a series of two tests: a screening ELISA, and then a viral load, Western blot, or other confirmatory test to help separate the false positives from the true positives.
Covid-19 Antibody Testing
Now let's take Covid-19 serologic testing.
Is past infection protective?
First of all, we do not yet know for sure that if you have had this virus (SARS-CoV-2) you are safe from being reinfected, but we think that is likely to be the case, at least for the short term (a year or two). I will update this when more is known.
Is the test a good one?
Next, there are a bunch of bogus tests out there, which are wildly inaccurate, but our practice is not ordering those. We are ordering the Abbott test, which the company has shown to have a sensitivity of 100% and a specificity of 99.6%. (For these calculations I am using 99.9% sensitivity and 99.5% specificity.)
What is the incidence of the condition?
High Incidence
In a place like New York, where it is believed that up to 20% of people had been infected as of early May 2020, this is a very trustworthy test:

| | Condition present | Condition absent | Total |
|---|---|---|---|
| Test positive | 200 | 4 | 204 |
| Test negative | 0 | 796 | 796 |
| Totals | 200 | 800 | 1000 |
Thus, 98% of those with a positive test really were previously infected (PPV = 98%), and those who test negative really were not infected (NPV = 100%).
Low Incidence
Unfortunately for the test's accuracy (though fortunately for those of us living here), if you use that same test on a random sample of people from Humboldt, where we think only 1.5% of people may have been infected, or possibly even fewer:

| | Condition present | Condition absent | Total |
|---|---|---|---|
| Test positive | 15 | 5 | 20 |
| Test negative | 0 | 980 | 980 |
| Totals | 15 | 985 | 1000 |
Now the PPV = 75%, meaning one fourth of the positive tests are actually false positives! So can someone with a positive test go volunteer on a Covid ward without PPE? No! It would not be safe to make decisions based on this test, given the stakes: one out of four people might walk out thinking they are immune when they are actually still susceptible.
If only 0.5% of us have been infected, the PPV drops to 50%.
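Running the numbers used above (99.9% sensitivity and 99.5% specificity for the antibody test, with the prevalence figures assumed in the text) shows how the PPV collapses as prevalence falls. A minimal sketch:

```python
def ppv(sensitivity, specificity, prevalence):
    """Probability that a positive result is a true positive (Bayes' theorem)."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

# Same antibody test, three different populations
for prevalence in (0.20, 0.015, 0.005):
    print(f"prevalence {prevalence:.1%} -> PPV {ppv(0.999, 0.995, prevalence):.0%}")
# prevalence 20.0% -> PPV 98%
# prevalence 1.5% -> PPV 75%
# prevalence 0.5% -> PPV 50%
```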
Factor in your illness and exposure history (personal pre-test probability)
What if the person being tested had the most severe flu of their life in early March 2020? Well, then I think they are more likely to be one of the true positives than the person who is just wondering if they were one of the asymptomatic infected people. So even if you are in an area where the incidence is low, if you personally had a suggestive illness, or say were living with someone who was sick and tested positive, then you are in a subpopulation with a higher incidence, maybe more like New York, and the test is more trustworthy again.
If you are a statistics geek like me, you can play with these numbers at https://micncltools.shinyapps.io/TestAccuracy/ - a tool created by Joy Allen, Sara Graziadio and Michael Power.
A Shiny Tool to explore prevalence, sensitivity, and specificity on Tp, Fp, Fn, and Tn. NIHR Diagnostic Evidence Co-operative Newcastle, July 2017.