Stop Getting So Excited About ‘Preliminary’ Findings
The results are shocking: Covid-19 infection rates have been subject to a “vast undercount,” by a factor of 50 or more. A pair of studies based on antibody tests in California had just been written up and posted to a preprint server, the first one going up last Friday—and while the results were shaky and preliminary, they still made headline news. The research methods were “dubious,” one infectious disease expert told WIRED, but he was glad to see research of this kind coming out. “Just to see the first bit of data is quite exciting,” he said.
Dubious, yet still quite exciting: We’ve seen this play before. In recent weeks, other deeply flawed—but also meaningful, important, quite exciting—findings about pandemic topics have made their way into the papers. Consider last week’s reporting on a preprint out of New York University, suggesting a link between obesity and severity of Covid-19. Leora Horwitz, a senior author on the study, “cautioned that the findings were preliminary,” according to a write-up in The New York Times, and “noted that some of the data was still incomplete.” Then, as if by magic, in the very next paragraph: “Dr. Horwitz said the implications for patient care were clear.” Preliminary findings, incomplete data, clear implications: Science!
Our evidence is never perfect, and we will always need to make decisions before our understanding of things is complete, especially when people are dying by the thousands every day, from a virus no one’s ever seen before. But a crude idea—if not an ideology—has taken hold amid the panic of this crisis: that any data, however shoddy, is better than none. “It’s not perfect,” admitted one author of the California antibody research, Stanford University’s John Ioannidis, “but it’s the best science can do.” That’s dangerous and wrong. As Ioannidis himself has previously shown, it can be risky to lower the bar for science in normal times. Right now, it may be even worse. It’s vital that we gather knowledge as quickly as we can, in the face of the pandemic—but sacrificing scientific standards won’t do anything to accelerate that process. If anything, it will slow it down.
The inclination to trade off rigor for the sake of speed appears at every level of research. Scientists are rushing to understand the virus and its spread, and to search for ways we might treat, prevent, or control it. Administrators too have brushed away the rules that might impede this research program, signing off on testing tools without the normal vetting process. Physicians are testing candidate drugs in uncontrolled studies, often many at a time, and under “compassionate use” protocols that are meant to be a last resort for critically ill patients who have no other options. And then the “exciting” data from this hasty work are coming out on preprint servers, and being taken up by doctors, reporters, and the public without ever undergoing peer review.
What, exactly, have we gained from this mad abandonment of careful scientific practice? It’s too soon to say for sure, but it’s quite possible that some results from slipshod studies will pan out in the end, just as some higher-quality studies may end up being wrong. In the meantime, though, the downsides are obvious. As careless work proliferates, it seeds journals, preprint servers, and internet rumor mills with unreliable data, making it more difficult for everyone to sort facts from wishful thinking. At the fringes, this can lead people to take up unproven and potentially dangerous treatments—like the Arizona man who died in March after attempting to self-medicate with chloroquine. But the biggest harms are those that spread into the mainstream of our health care system and research. Half-baked studies don’t just produce misleading results—they also steal attention and precious resources from projects that have a real chance of producing actionable information.