While many studies have found more favorable results for drugs in clinical trials funded by their manufacturers, those studies typically compare one set of manufacturer-funded trials with a separate set of trials funded in other ways, said Tamar Oostrom, assistant professor of economics at The Ohio State University, who conducted this new research.
This new study is the first to make an apples-to-apples comparison.
“I compared different clinical trials in which the exact same pairs of drugs are compared for their effectiveness – the only substantial difference being who funded the study,” Oostrom said.
Oostrom called her finding the “sponsorship effect.”
“There was this dramatic difference – removing the sponsorship effect would reduce the difference in efficacy between a sponsored drug and other drugs in the trial by about 50%,” she said.
“I wasn’t surprised that I found an effect. But I was surprised by the size of the effect,” she said.
The study was published recently in the Journal of Political Economy.
The data in the paper included all available double-blind randomized controlled trials (RCTs) of either antidepressants or antipsychotics. Oostrom focused on these drug classes because of data availability and their huge market size in the United States.
Double-blind randomized controlled trials are referred to as the “gold standard” for studying the effectiveness of drugs because they eliminate much of the bias found in other study designs.
In her initial analysis, Oostrom focused on 509 published clinical trials.
Oostrom gave an example of one of the drugs she studied: the antidepressant Effexor, introduced in 1993 by Wyeth Pharmaceuticals.
Over 15 years, Wyeth compared the effectiveness of Effexor with the drug Prozac. In 12 of the 14 trials funded solely by Wyeth, Effexor was found to be more effective than Prozac.
But only one of three trials with different funding found Effexor to be more effective than Prozac.
“Each of these trials is a double-blind RCT comparing the exact same two molecules and examining the same standard outcomes,” she said. “But the manufacturer’s trials were much more favorable for their drug.”
So how could that be?
One possibility is that the trials are designed or conducted differently and therefore produce different results. Oostrom tested for that by examining trial characteristics, including the length of the trial, the drug’s dosage and total enrollment, as well as the average age, gender and baseline severity of symptoms of the enrolled patients.
Controlling for all these factors did not have a major impact on the sponsorship effect, she found.
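To make that check concrete, here is a minimal sketch in Python of what controlling for trial characteristics looks like. The data, variable names, effect sizes and controls below are invented for illustration; they are assumptions, not Oostrom’s actual data or specification.

```python
# Hypothetical sketch: does controlling for trial characteristics shrink the
# sponsorship effect? The data are simulated; the point is only to show how the
# sponsorship coefficient is compared with and without trial-level controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # number of head-to-head trials (made-up)

trials = pd.DataFrame({
    "sponsored": rng.integers(0, 2, n),        # 1 if funded by the drug's maker
    "weeks": rng.integers(6, 13, n),           # trial length
    "dose_mg": rng.normal(150, 30, n),         # average dose
    "enrollment": rng.integers(50, 400, n),
    "mean_age": rng.normal(42, 5, n),
    "share_female": rng.uniform(0.4, 0.7, n),
    "baseline_severity": rng.normal(25, 3, n),
})

# Simulated outcome: the measured efficacy advantage of the drug of interest,
# with a built-in "sponsorship effect" of 0.15 standard deviations that is
# unrelated to the trial design variables.
trials["efficacy_gap"] = (
    0.15 * trials["sponsored"]
    + 0.01 * (trials["baseline_severity"] - 25)
    + rng.normal(0, 0.3, n)
)

no_controls = smf.ols("efficacy_gap ~ sponsored", data=trials).fit()
with_controls = smf.ols(
    "efficacy_gap ~ sponsored + weeks + dose_mg + enrollment"
    " + mean_age + share_female + baseline_severity",
    data=trials,
).fit()

# If trial design explained the gap, the sponsorship coefficient would shrink
# once the controls are added; per the study, it barely moves.
print("sponsorship coefficient, no controls:  ",
      round(no_controls.params["sponsored"], 3))
print("sponsorship coefficient, with controls:",
      round(with_controls.params["sponsored"], 3))
```

In this kind of comparison, a sponsorship coefficient that stays roughly the same after adding controls is evidence that trial design differences are not what drives the gap.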
Something else did matter, though: what is known as publication bias. After scientists conduct a trial, they can submit their papers to scientific journals for review. If the papers are accepted, they are published and can be cited in the drug review and approval process. But many trials are never published.
In this research, Oostrom was able to identify 77 trials that were conducted but never published in scientific journals. Adding these unpublished trials to the analysis changed the results.
“Trials funded by manufacturers in which their drug appears more effective are more likely to be published. That connection between outcomes and publication doesn’t appear to happen as much when there are other funders,” Oostrom said.
In her analysis, she found that adding just one unpublished trial for each drug comparison reduced the sponsorship effect by 20%.
“The addition of unpublished trials reduces the effect of sponsorship, and most of the sponsorship effect can be explained by publication bias,” she said.
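A toy simulation can make the mechanism concrete. In the sketch below, two drugs are assumed to be equally effective, and the only thing that differs is which noisy trial results get published; the publication probabilities and all other numbers are invented for illustration and are not from the paper.

```python
# Toy simulation of publication bias (illustrative assumptions, not Oostrom's data).
# Two drugs are equally effective; sponsored trials are assumed to be less likely
# to be published when the sponsor's drug looks worse. Selective publication alone
# then creates an apparent sponsorship effect that disappears once unpublished
# trials are included.
import numpy as np

rng = np.random.default_rng(42)
n_trials = 20_000

sponsored = rng.random(n_trials) < 0.5                  # half are manufacturer-funded
observed = rng.normal(0.0, 0.3, n_trials)               # measured advantage; true effect is zero

# Assumed publication rule: unfavorable sponsored trials are published 30% of the
# time; all other trials are published 80% of the time.
favorable = observed > 0
pub_prob = np.where(sponsored & ~favorable, 0.3, 0.8)
published = rng.random(n_trials) < pub_prob

def sponsorship_gap(mask):
    """Average measured advantage in sponsored minus non-sponsored trials."""
    return observed[mask & sponsored].mean() - observed[mask & ~sponsored].mean()

print("apparent sponsorship effect, published trials only:",
      round(sponsorship_gap(published), 3))
print("apparent sponsorship effect, all trials:           ",
      round(sponsorship_gap(np.ones(n_trials, dtype=bool)), 3))
```

Under these assumptions, the gap computed from published trials alone is positive even though the drugs are identical, and it vanishes when the unpublished trials are added back in, which is the logic behind including unpublished trials in the analysis.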
There is one major policy that has helped reduce the problem of publication bias in the past two decades – preregistration, she said.
Preregistration requires researchers to register their trials as a condition of publication or funding. These requirements often include reporting results, which can increase the likelihood that even studies that aren’t favorable to the target drug see the light of day.
Oostrom found that the sponsorship effect has declined since 2005, when preregistration began to be required for some trials and other transparency and publication norms began changing.
But preregistration is not a cure-all. Even with current preregistration requirements, only one-quarter of all preregistered trials report results.
And it doesn’t fix what has gone on in the past.
“Most existing antidepressant and antipsychotic drugs were approved before these requirements, so even with preregistration, there is a stock of existing drugs potentially based on biased evidence,” she said.
The study was supported by the National Institute on Aging and the National Science Foundation.