Antibody testing has accelerated in the United States in recent weeks: In one prominent study involving some 3,000 New Yorkers, for example, roughly 14 percent of state residents were estimated to have been exposed to the virus — and about 1 in 5 in New York City. Some proponents of such tests say they could pave the way for “immunity passports,” documents identifying people who have previously been infected and may now be immune to the virus. They might allow people to rejoin the workforce, or eat in a restaurant.

Immunity passports are controversial for many reasons: We don’t know the extent to which exposure to the novel coronavirus protects against future infection, for example, and a passport system raises important issues involving medical privacy. (To prevent potential cheating, testing would have to be officially verified and, most likely, stored in a centralized database.) Passports could create perverse incentives for people in precarious economic circumstances to deliberately catch the disease so that — if they recover — they can return to normal life. They could rend social cohesion by splitting the population into groups with greater and lesser rights.

But there’s an even more basic problem with immunity passports: They might fail entirely because of false positive test results. When it comes to PCR testing for the active virus, false negatives get most of the attention, because they open the door to the disease being spread by people who wrongly think they don’t have it. But in the context of immunity passports, false positives are especially pernicious: People would stop social distancing yet would continue to be at risk of infection. In its haste to provide access to tests, the Food and Drug Administration apparently approved the sale of some very inaccurate tests. But even those that appear on the surface to be very accurate could prove highly problematic in practical use.

Consider a test produced by a company called Cellex, which reportedly falsely classifies only 4 out of 100 negative samples as positives. A natural conclusion, given the reported level of accuracy, would be that for every 1,000 people who test positive — who are deemed to have been infected at some point and thus immune to the virus — on average only 40 tests will be inaccurate. Unfortunately, that seemingly obvious conclusion would be wrong — for reasons first identified in the 18th century by the statistician and Presbyterian minister Thomas Bayes.

Bayes’s theorem reveals that, when you are testing for a condition that is rare, the share of positive results that are false can be much greater than the test’s headline error rate suggests.

To grasp why, imagine testing members of a secluded tribe that had never come into contact with outsiders — and so could not possibly have been infected by the coronavirus. If we gave antibody tests that were “96 percent accurate” to all members of the tribe, a small number would receive a positive test result. Whenever this happened, we could be 100 percent sure it was a false positive.

In our society, however, we don’t know exactly how many people have truly been infected with the coronavirus. Since New York is an outlier, let’s take the results from a recent study set in Santa Clara County, and assume the answer is on the high end of the range researchers found there: 4 percent. Then we start screening Americans at random — or perhaps we screen everyone.

In that situation, out of every 1,000 people tested, we would expect about 40 people (4 percent of 1,000) to test as true positives for viral antibodies, because they really do have a history of infection. But we would also expect some of the remaining 960 — although never infected — to generate positives, too. In fact, we would expect 4 percent of those 960 people, or close to 40, to falsely test positive for viral antibodies. As a result, our original 1,000 people would produce approximately 80 positive test results in total, and roughly half of them would be false.
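The arithmetic above can be checked directly. Here is a short sketch, using the figures assumed in the text (4 percent true prevalence and a 4 percent false positive rate); the variable names are mine:

```python
# Worked example from the text: 1,000 people tested, assuming
# 4% were truly infected and the test wrongly flags 4% of the rest.
tested = 1000
prevalence = 0.04             # assumed share with a real past infection
false_positive_rate = 0.04    # share of never-infected who test positive

true_positives = tested * prevalence                               # 40
false_positives = tested * (1 - prevalence) * false_positive_rate  # 38.4
total_positives = true_positives + false_positives                 # 78.4

share_false = false_positives / total_positives
print(round(true_positives), round(false_positives), round(share_false, 2))
# prints: 40 38 0.49 — roughly half of all positives are false
```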

There are reasons to believe the Santa Clara study overestimated the true prevalence of antibodies. If so, then the problem would be greater still, with the same test. At 1 percent prevalence, we would get 10 true positive results (1 percent of 1,000) and 40 false positive results (4 percent of 990). In that situation, fully 80 percent of all positive test results would in fact be false positives.
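The pattern across the tribe, the 1 percent scenario and the 4 percent scenario can be captured in one small helper. This is a sketch, not anything from the article: the function name is mine, and for simplicity it assumes the test catches every true past infection.

```python
def false_positive_share(prevalence, false_positive_rate=0.04):
    """Fraction of positive results that are false, per Bayes's theorem.
    Assumes (simplification) the test never misses a true past infection."""
    true_pos = prevalence
    false_pos = (1 - prevalence) * false_positive_rate
    return false_pos / (true_pos + false_pos)

# Prevalence 0 (the secluded tribe): every positive is false.
# Prevalence 1 percent: about 80% of positives are false, as in the text.
# Prevalence 4 percent: roughly half are false.
for p in (0.0, 0.01, 0.04):
    print(p, round(false_positive_share(p), 2))
```

The sweep makes the key point concrete: the rarer the condition, the larger the share of positives that are false, even though the test itself never changes.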

The consequences of these false positives could be disastrous, if they were the key to passports that allowed people to reenter the workforce, setting off a fresh wave of infections.

Is there any way out of the false-positive cul-de-sac? The lesson of Bayes is not to discard test results, but to make use of all available information. A positive test result for a member of a secluded tribe, or from a population with rates known to be very low, can be assumed to be a false positive. Conversely, when we test people who we know had exposure to an infected person, or who showed active symptoms, the true prevalence of the virus among those being tested is higher; the rate of false positives will be lower. By focusing on specific populations in the United States, we can make best use of antibody tests.

The United Kingdom appears set on encouraging its citizens to use home testing kits, a process likely to lead to a high false-positive rate. Unaware of Bayes’s insight, consumers will interpret a positive test result as a near-certain indication that they are immune.

Could testing people twice, possibly using different tests, fix the problem? Unfortunately, we can’t assume so, because a second test might also return a false positive for the same reason as the first — for example, because it picked up a response to a past infection with a coronavirus other than the one that causes covid-19.

It would be too dismissive to say immunity passports could never work, but — even setting aside the ethical and legal issues — they are far from ready for prime time. A much more accurate test than those we have now (say, with 99.5 percent accuracy) might yield acceptably low levels of false positives, especially as the fraction of infected people in the population rises. Even then, however, skilled public health experts would have to make statistical adjustments for the populations being examined.

Bayes’s 18th-century math may seem abstract and confusing, but grasping its implications could steer us away from profound health-policy errors. Failing to do calculations properly could lead us to draw the wrong inferences about immunity — and enact policies that could cost 21st-century lives.