Imprecise Findings in COVID-19 Drug Trials Could Steer Clinicians Away From Innovative Treatments

In a new working paper, IPR's Charles Manski says this could negatively affect patient outcomes

As the COVID-19 pandemic progresses, researchers are reporting findings of randomized trials comparing standard care with care augmented by experimental drugs. The trials have small sample sizes, so estimates of treatment effects are statistically imprecise.

In a June 8 IPR working paper, also published by the National Bureau of Economic Research (NBER), IPR economist Charles Manski and co-author Aleksey Tetenov of the University of Geneva argue that the manner in which medical research articles present findings of trials assessing COVID-19 drugs may inappropriately give the impression that new treatments are not effective.

In “Statistical Decision Properties of Imprecise Trials Assessing COVID-19 Drugs,” the authors note that, given this imprecision, clinicians reading research articles may find it difficult to decide when to treat patients with experimental drugs.

A conventional practice when comparing standard care and an innovation is to choose the innovation only if the estimated treatment effect is positive and statistically significant.
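
To make that rule concrete, here is a minimal Python sketch, assuming hypothetical binary patient outcomes (1 = recovered, 0 = not) and the usual 5% significance threshold; the function name significance_rule and the data it expects are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy import stats

def significance_rule(outcomes_standard, outcomes_new, alpha=0.05):
    """Adopt the innovation only if its estimated effect is positive and significant."""
    effect = np.mean(outcomes_new) - np.mean(outcomes_standard)
    # Welch two-sample t-test comparing the two trial arms
    _, p_value = stats.ttest_ind(outcomes_new, outcomes_standard, equal_var=False)
    if effect > 0 and p_value < alpha:
        return "new treatment"
    # Whenever significance is not reached, the rule defaults to the status quo
    return "standard care"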

“The requirement for conventional statistical significance creates a status quo bias in favor of what is called ‘standard care’ and against innovative treatments,” said Manski, who is Board of Trustees Professor in Economics. “My co-author and I think this is a serious problem that may affect treatment of patients.”

Hypothesis tests are commonly used to choose treatments. The authors instead evaluate decision criteria using the concept of “near-optimality,” which jointly considers the probability and magnitude of decision errors, and propose it as a way to analyze findings of trials comparing COVID-19 treatments. They recommend that clinical decisions be guided by positive trial results even when those results are not statistically significant, and be re-evaluated as new evidence emerges.

“An appealing decision criterion from this perspective is the ‘empirical success’ rule, which chooses the treatment with the highest observed average patient outcome in the trial,” Manski said. “Considering the design of recent and ongoing COVID-19 trials, we show that the ‘empirical success’ rule yields treatment results that are much closer to optimal than those generated by prevailing decision criteria based on hypothesis tests.” The authors show that clinicians could make good decisions based on relatively small trials if they use the empirical success rule instead of hypothesis testing.
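
As an illustration only, the sketch below implements the ‘empirical success’ rule and applies it to simulated data from a small, imprecise trial; the recovery rates and sample sizes are hypothetical assumptions, not figures from the paper.

import numpy as np

def empirical_success_rule(outcomes_standard, outcomes_new):
    """Choose whichever arm had the higher observed average patient outcome."""
    if np.mean(outcomes_new) > np.mean(outcomes_standard):
        return "new treatment"
    return "standard care"

# Hypothetical small trial with binary outcomes (1 = recovered, 0 = not)
rng = np.random.default_rng(0)
standard = rng.binomial(1, 0.50, size=40)   # assumed 50% recovery under standard care
new_drug = rng.binomial(1, 0.60, size=40)   # assumed 60% recovery under the new drug
print(empirical_success_rule(standard, new_drug))
# With arms this small, the significance-based rule sketched above would often
# keep standard care even when the new drug's sample average is higher.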

Manski and Tetenov add that, from a clinical perspective, one might argue it is reasonable to place the burden of proof on an innovation when standard care is known to yield good patient outcomes. According to the authors, this argument has little appeal in the COVID-19 setting, where care is evolving rapidly in response to an emergency.

A limitation of this paper, the authors acknowledge, is that it only considers treatment choice using data from one trial. In practice, a clinician may learn the findings of multiple trials and may also be informed by observational data. The concept of near-optimality is well-defined in these more complex settings, but methods for practical application are yet to be developed.

Read the full paper here.

Charles F. Manski is Board of Trustees Professor of Economics and an IPR fellow.

Photo credit: Pexels

Published: June 8, 2020.