Despite Law, Fewer Than One In Eight Completed Studies Of Drugs And Biologics Are Reported On Time On ClinicalTrials.gov
Author Affiliations
Michael R. Law (email@example.com), the corresponding author, is an assistant professor in the Centre for Health Services and Policy Research at the University of British Columbia, in Vancouver.
Yuko Kawasumi is a research associate in the Department of Anesthesiology, Pharmacology, and Therapeutics at the University of British Columbia.
Steven G. Morgan is an associate professor and associate director of the Centre for Health Services and Policy Research at the University of British Columbia.
Clinical trial registries are public databases created to prospectively document the methods and measures of prescription drug studies and retrospectively collect a summary of results. In 2007 the US government began requiring that researchers register certain studies and report the results on ClinicalTrials.gov, a public database of federally and privately supported trials conducted in the United States and abroad. We found that although the mandate briefly increased trial registrations, 39 percent of trials were still registered late after the mandate’s deadline, and only 12 percent of completed studies reported results within a year, as required by the mandate. This result is important because there is evidence of selective reporting even among registered trials. Furthermore, we found that trials funded by industry were more than three times as likely to report results as were trials funded by the National Institutes of Health. Thus, additional enforcement may be required to ensure disclosure of all trial results, leading to a better understanding of drug safety and efficacy. Congress should also reconsider the three-year delay in reporting results for products that have been approved by the Food and Drug Administration and are in use by patients.
Clinical trials of prescription drugs are an important foundation for the evidence-based regulation, coverage, and prescribing of prescription drugs. However, the selective reporting of results by sponsors and some journals’ publication bias toward positive results simultaneously limit and skew the information that trials provide.1
Clinical trial registries—public repositories for prospectively documenting study characteristics and retrospectively reporting results—have been created to help address such problems.2 The registries can improve the balance and completeness of information for clinical and policy decision making, provided that participation in them is not subject to the same biases as the publication of trials in journals. One way to achieve this is to mandate and enforce registration and reporting of results.
Transparency about trial design and results has clear benefits. Published studies, for example, often report different primary outcomes than those stated in the original protocols.3 Trial registration and results reporting ensure that the original study outcomes can be verified and that all outcome and safety data are released. Furthermore, many studies remain unpublished,4 even though data from them have the potential to elucidate drug efficacy and safety problems.2 For example, a meta-analysis that examined unpublished efficacy data on antidepressants found that published studies substantially overstated efficacy.5
A complete registry of trials and results would make unpublished studies available in a public forum, which would benefit patients, clinicians, and policy makers by providing valuable insights into the safety and efficacy of medicines. Currently the largest trial registry is ClinicalTrials.gov, which is run by the US National Institutes of Health (NIH) and contains information on more than 100,000 different trials.
The database was originally made public in February 2000 to improve patient access to information on clinical trials for rare and serious conditions. Following its opening, two entities—the International Committee of Medical Journal Editors and the federal government—enacted policies mandating that drug trials be registered with a public database such as ClinicalTrials.gov. These policies affected Phase II (efficacy trials), Phase III (randomized controlled trials of a drug’s effectiveness versus that of a comparator), and Phase IV trials (postmarketing studies done on approved drugs)—but not Phase I trials (safety and tolerability studies in healthy volunteers).
First, in July 2005 the International Committee of Medical Journal Editors made early registration—that is, registration prior to patient enrollment—a prerequisite for publication for all studies started after that date.2 Ongoing trials had to be registered by September 13, 2005. Research has documented an increase in rates of registration with ClinicalTrials.gov following the enactment of this mandate.6,7
And second, the September 2007 Food and Drug Administration Amendments Act mandated that either the study sponsor or the principal investigator register with ClinicalTrials.gov all Phase II and higher drug and biologic trials that meet either or both of two criteria: the trial has one or more study sites located in the United States; and the trial is being conducted under a US investigational new drug application.
The act also mandated that trials that were already in progress be registered by December 2007 on ClinicalTrials.gov.8 And it required that researchers post a summary of basic results—including outcome measures and adverse events—within a year of the completion of data collection or within thirty days after the Food and Drug Administration first approved the drug.8
Failure to comply can result in civil penalties of up to $10,000 per day if the violations remain uncorrected thirty days after being cited by the Food and Drug Administration. However, the act permits researchers to delay reporting results for up to three years for trials conducted before drugs are initially approved, and for studies of unapproved clinical indications for currently approved drugs.9
The overall value of clinical trial registries for the clinical and scientific communities depends on how comprehensively the registries document completed and ongoing studies, particularly those that will never be published. However, researchers have yet to quantify the impact of the US mandate on the number of study registrations or the timeliness of registrations and results reporting.
We studied the longer-term impact of the federal mandate on the registration and reporting of results for drug and biological trials in ClinicalTrials.gov. We found that the mandate increased the number of trials registered with ClinicalTrials.gov. But we also found that despite the mandates, 39 percent of trials were registered late, and only 12 percent of completed studies reported results within one year.
Study Data And Methods
To analyze the impact of the federal mandate on trial registration and results reporting, we used data on studies that were registered with ClinicalTrials.gov from September 1999 to May 2010. From ClinicalTrials.gov, we downloaded the complete study registration records as of July 2, 2010, in XML format and imported these data into the statistical analysis software SAS, version 9.1, using the SAS XML Mapper. We also determined whether and when results had been posted, from HTML copies of the study records. After compilation, we checked to see if the overall number of studies imported and the overall number of studies reporting results matched the numbers reported on ClinicalTrials.gov.
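The authors imported the records into SAS using the SAS XML Mapper; the same extraction step can be sketched in Python. The element names below (`nct_id`, `overall_status`, and so on) are illustrative assumptions modeled on the public ClinicalTrials.gov record format, not a definitive schema:

```python
# Sketch: parse a ClinicalTrials.gov-style registration record.
# Element names here are illustrative assumptions, not the exact schema.
import xml.etree.ElementTree as ET

SAMPLE = """
<clinical_study>
  <nct_id>NCT00000000</nct_id>
  <phase>Phase 3</phase>
  <overall_status>Completed</overall_status>
  <start_date>March 2007</start_date>
  <enrollment>250</enrollment>
</clinical_study>
"""

def parse_record(xml_text):
    """Extract the registration fields used in the analysis."""
    root = ET.fromstring(xml_text)
    get = lambda tag: (root.findtext(tag) or "").strip()
    return {
        "nct_id": get("nct_id"),
        "phase": get("phase"),
        "status": get("overall_status"),
        "start_date": get("start_date"),
        "enrollment": int(get("enrollment") or 0),
    }

record = parse_record(SAMPLE)
```

A full pipeline would loop this parser over every downloaded record and tally the totals against the counts ClinicalTrials.gov reports, as the authors did as a consistency check.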
From the electronic records, we compiled the registration date, start month, primary completion month, overall status (for example, completed or active), results reporting date, study phase, number of patients studied, trial funding source, and type of intervention being studied (drug or biologic).
We studied Phase II or higher drug or biologic trials because these trials were subject to both mandates. Trials listing more than one trial phase were assigned to the higher category listed. We categorized trial registration timing as “early” or “late” using two variables: the registration date and the trial’s start month. Because the federal mandate required registration within twenty-one days of patient enrollment, we classified studies as “early” if they were first received by ClinicalTrials.gov within the first twenty-one days of the month following their reported start month. We classified studies as “late” if they were received after that deadline.
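Under these definitions, the classification reduces to a date comparison. A minimal sketch, assuming (as in the registry) that start dates are reported at month precision:

```python
# Sketch of the early/late classification: a trial is "early" if its
# registration date falls within the first 21 days of the month that
# follows its reported start month.
from datetime import date

def registration_deadline(start_year, start_month):
    """Last day on which a registration still counts as 'early'."""
    # First day of the month after the start month...
    if start_month == 12:
        next_month = date(start_year + 1, 1, 1)
    else:
        next_month = date(start_year, start_month + 1, 1)
    # ...and "within the first twenty-one days" means on or before the 21st.
    return date(next_month.year, next_month.month, 21)

def classify(start_year, start_month, registered_on):
    early = registered_on <= registration_deadline(start_year, start_month)
    return "early" if early else "late"

classify(2008, 3, date(2008, 4, 15))  # registered within 21 days of April -> "early"
```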
To investigate the impact of the federal mandate on trial registrations, we used an interrupted time-series analysis to determine the number of trials registered monthly with ClinicalTrials.gov and the numbers that were registered early.10 We excluded short-term “spikes” associated with the enactment of each mandate (and of the registry launch) by not including registrations during the two months before and after those events.
We used generalized-least-squares models to assess changes in both the level and the trend in the number of trials registered between three time periods: “pre–journal mandate,” from December 1999 to April 2005; “pre–federal mandate,” from December 2005 to September 2007; and “post–federal mandate,” from March 2008 to May 2010. We tested the models for the presence of significant autocorrelation at one to four months and at one year, but we found none.10
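In a segmented regression of this kind, each mandate contributes a level-change term and a trend-change term. A minimal sketch of one design-matrix row, with illustrative break months, ignoring the generalized-least-squares error structure and the excluded spike months:

```python
# Sketch: design-matrix rows for a two-break interrupted time series.
# Columns: baseline level, baseline trend, then a level change and a
# trend change at each break. Break indices are illustrative.
def its_row(t, break1, break2):
    after1 = 1 if t >= break1 else 0
    after2 = 1 if t >= break2 else 0
    return [
        1,                      # baseline level (intercept)
        t,                      # baseline monthly trend
        after1,                 # level change at first mandate
        after1 * (t - break1),  # trend change after first mandate
        after2,                 # level change at second mandate
        after2 * (t - break2),  # trend change after second mandate
    ]

# One row per month of the study window (indices are hypothetical).
X = [its_row(t, break1=65, break2=99) for t in range(126)]
```

Fitting this matrix against the monthly registration counts yields the level and trend estimates reported in the Results section; the authors additionally tested for autocorrelated errors.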
To examine the rate of results reporting, we used statistical models of “time to reporting” for all trials that reported an actual—as opposed to an anticipated—completion between October 2008 and May 2010. We chose October 2008 because the NIH launched the ClinicalTrials.gov system for reporting results in September 2008.
For each completed trial, we studied the length of time between the end of the completion month and the date that results were reported. To model the factors associated with reporting results, we used a Cox proportional hazards model—a statistical model that investigated the relationship between the characteristics of trials and the time they took to report results—and included four groups of covariates. The four groups were as follows: the study phase (III and IV, versus II); enrollment (101–500 patients and more than 500 patients, versus up to 100 patients); whether or not at least one listed study location was in the United States; and indicators for different funding sources (industry, clinical research network, US federal agency except for the NIH, and other, versus the NIH).
Completed studies that had not reported results within our study period were included in our analysis, and their follow-up time was limited to the end of the study period (May 31, 2010), at which point they were considered to have not reported (they were “censored,” in terms of our survival analysis). Because studies that lacked enrollment data did not report results, we excluded them from our analysis.
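The censoring rule can be sketched as a small function that returns each trial’s observed follow-up time and an event indicator; the dates in the usage lines are illustrative:

```python
# Sketch of the time-to-reporting outcome with right-censoring at the
# end of the study window (May 31, 2010).
from datetime import date

CUTOFF = date(2010, 5, 31)

def follow_up(completion_date, report_date=None):
    """Return (days_observed, event): event=1 means results were reported
    within the window; event=0 means the trial is censored at the cutoff."""
    if report_date is not None and report_date <= CUTOFF:
        return (report_date - completion_date).days, 1
    return (CUTOFF - completion_date).days, 0

follow_up(date(2009, 1, 31), date(2009, 9, 30))  # reported: event = 1
follow_up(date(2009, 1, 31))                     # unreported: censored at cutoff
```

Pairs of this form are exactly the input a Cox proportional hazards fit expects, alongside the covariates (phase, enrollment, US site, funding source) listed above.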
We note several limitations to our analysis. First, ClinicalTrials.gov contains only self-reported data, so dates and trial status might not have been accurately updated or might have changed after we extracted our data. However, any missing data on completed studies would serve only to lower our estimate of the proportion of studies having reported results.
Second, we used the “start month”—defined as the date that enrollment in the protocol begins—in ClinicalTrials.gov to determine whether studies were registered early or late. The federal mandate requires that studies register within twenty-one days after the first patient is enrolled. We may have incorrectly classified studies in which these dates differed because of delays in recruiting the first patient.
Finally, we could not assess whether researchers posted results in a selective manner—for instance, disproportionately reporting positive findings. This was because we did not have any data on whether or not unreported results reflected poorly on the drugs under study.
Overall, 40,343 Phase II–IV drug and biologic trials were registered with ClinicalTrials.gov. Of these, 4,516 (11.2 percent) focused on biologics. Exhibit 1 shows the phase, funding, and other characteristics of these trials. Start dates were missing for 1,749 trials (4.7 percent), so we could not include them in our analysis of early registrations.
Registration Of Studies
Monthly trial registrations with ClinicalTrials.gov spiked to 3,474, from the previous monthly average of 78, when the mandate of the International Committee of Medical Journal Editors took effect in September 2005 (Exhibit 2). After that but before the federal mandate took effect, there were, on average, 372.35 more trials registered each month than before the journal mandate (95% confidence interval: 338.61, 406.09).
Another spike of 755 registrations occurred in December 2007, when the federal mandate took effect (Exhibit 2). The average monthly number of registrations rose by a further 129.60 between December 2007 and March 2008, the start of the post–federal mandate period (95% confidence interval: 86.22, 172.98). However, the number of registrations dropped by 5.38 trials in each month thereafter (95% confidence interval: −8.06, −2.70). Thus, by January 2010 the monthly number of registrations had fallen to the level seen before the federal mandate took effect.
Over the entire study period, 47.1 percent of the studies were registered early—within twenty-one days of their start month—and 52.9 percent were registered late (Exhibit 1). The highest monthly share of early registrations was 70.4 percent, in October 2009. Not surprisingly, there was a significant spike in the number of late registrations when the federal mandate took effect, in December 2007. Following that date, 39 percent of trials were registered late.
Our interrupted time-series model indicated that the average number of early registrations per month increased by 37.96 immediately after the federal mandate took effect (95% confidence interval: 15.26, 60.76). However, the trend then decreased, with 2.17 fewer registrations in each subsequent month (95% confidence interval: −3.58, −0.76). This later decrease essentially reversed the immediate increase by the end of our study period.
Posting Study Results
Exhibit 3 shows the characteristics of registered studies reported as being completed after September 2008, stratified by whether or not results had been reported by May 31, 2010. Results had been posted on ClinicalTrials.gov for 337 (7.6 percent) of the 4,455 trials. Notably, 9.5 percent of industry-funded studies had reported results—a rate three times higher than for studies with any other type of funding.
Exhibit 4 shows the cumulative proportion of trials that had reported results within a given number of days after study completion. One year after reporting completion, only 12.0 percent of all trials (Exhibit 4) and only 14.1 percent of Phase IV trials (data not shown) had reported results. The federal government requires that all trials with a US study site or studies conducted under a US investigational new drug application report results within one year, unless granted an exemption. Unfortunately, we could not determine which studies fit these criteria, because ClinicalTrials.gov did not publicly report information on exemptions when we collected our data.
Several characteristics of trials were associated with a higher probability of results reporting in our multivariate Cox proportional hazards analysis. Studies funded wholly or partly by industry had a threefold higher hazard ratio for reporting than studies funded only by the NIH (hazard ratio: 3.00; 95% confidence interval: 1.42, 6.33). Phase III trials (hazard ratio: 2.00; 95% confidence interval: 1.50, 2.63) and Phase IV trials (hazard ratio: 2.25; 95% confidence interval: 1.66, 3.04) were more likely to report results than Phase II trials. And studies that had 101–500 enrollees (hazard ratio: 1.29; 95% confidence interval: 1.00, 1.67) or more than 500 enrollees (hazard ratio: 1.42; 95% confidence interval: 1.03, 1.95) were more likely to report results than trials with 100 or fewer enrollees.
A complete registry of all clinical trials and their results would enable more comprehensive and less biased studies and meta-analyses of the clinical efficacy, off-label use, and safety of pharmaceuticals. We found that at least in the short term, the federal mandate increased the number of clinical trials registered on ClinicalTrials.gov. We also found, however, that the current rates of results reporting mean that fewer than one in eight trials have posted results within one year. This result is important because there is evidence of selective reporting even among registered trials.11 Furthermore, we found that trials funded by industry were more than three times as likely to report results as were trials funded by the NIH.
We share previously expressed concerns that publication requirements are not sufficient for comprehensively enforcing the timely and complete registration of clinical trials.12 Our findings show that the increases in registration as a result of the requirements by the International Committee of Medical Journal Editors have persisted over the long term. However, the additional short-term impact of the federal mandate indicates that many studies went unregistered despite the journal editors’ requirement. Yet as trials increasingly move overseas, national mandates such as that of the Food and Drug Administration have less force.13
Unfortunately, we have little sense of how many trials remain unregistered because it is impossible to determine the number currently under way. Without this denominator, there is no way to know whether declining registrations in recent years are the result of studies’ going unregistered. Another possible explanation for this drop is that fewer drugs are reaching the trial stage. However, industry-reported research and development spending has continued to grow in recent years, which suggests that the denominator probably continues to be high.14
Failure To Report Results
The current low rate of results reporting on ClinicalTrials.gov, particularly for studies not funded by industry, is troublesome and may affect the overall usefulness of the registry in expanding clinical knowledge in a timely manner. As outlined above, results reporting of trials investigating unapproved uses can be delayed for up to three years.9
In our study, only 14 percent of Phase IV trials—postmarketing studies of drugs that have already been approved—had posted results within one year. These drugs are currently marketed and available for off-label use. Thus, the information that these trials provide could be important in determining both drug safety and efficacy. For instance, a recent meta-analysis linking rosiglitazone (Avandia) to an increased risk of myocardial infarction relied on a clinical trial registry for results from twenty-six of the forty studies the researchers examined.15 These registry data were not published or otherwise available in public documents.
Given this experience and the recent safety problems that have appeared after approval for many drugs, Congress should reconsider the three-year delay in reporting results for products that have been approved by the Food and Drug Administration and are in use by patients. Public disclosure in a registry is vital for at least two reasons. First, even if trials are registered, many studies remain unpublished—particularly those funded by industry.16 And second, there is evidence that the primary and secondary outcomes reported in journal articles are often different from those initially registered.17 Because of these concerns, public disclosure of all studies and all of their outcomes would make trial reporting both more balanced and more credible.
There are numerous possible reasons why many studies continue to be registered late and do not report results in public registries such as ClinicalTrials.gov. Both academic and industry researchers may be unaware of existing requirements for registration and reporting.18 There may also be difficulties involved in completing the necessary data analysis within the specified time frame.
In addition, investigators from both academe and industry have expressed a range of reservations about trial registration and results reporting.19–21 These reservations include concerns about the confidentiality of proprietary information, competitiveness between companies and research groups, the extra work involved for researchers, and the possibility that disclosure could jeopardize their chances of publication.19–21
Ways To Increase Registration And Results Reporting
Our results indicate that industry-sponsored studies are much more likely than others to report results. This may reflect the capacity of industry researchers to meet posting requirements, the fact that they—unlike academic researchers—are paid to post results, or the strong incentive that manufacturers have to publicize positive results about their products. In any event, the overall low rate of results reporting suggests that the incentives in place to encourage early trial registration and results reporting might not be strong enough to achieve the desired result.
Three potential ways to ensure completeness of trial registrations and results reporting are stricter enforcement of the current penalties for not reporting results; using Institutional Review Boards and informed consent documents as points of leverage to encourage registration and reporting; and making early registration and the posting of results requirements for publication in medical journals.
It is unknown whether the current penalties for not registering and reporting results are severe enough, given the sales at stake for manufacturers.22 Fines for failing to submit clinical trial information are on the order of $10,000 per day, according to the federal mandate.23 But manufacturers must weigh those penalties against possible risks to revenue streams that are often orders of magnitude larger.
However, as of February 2010 no fines had been levied for noncompliance with the federal mandate (Jarilyn Dupont, director of regulatory policy, Food and Drug Administration, personal communication, March 18, 2010). Thus, it is unclear whether the threat of higher fines is necessary, or whether enforcement of the current penalties would be sufficient. Our findings also suggest that regulatory authorities should apply stronger pressure to nonindustry researchers. This could include enforcing the current federal mandate provision that allows the authorities to withhold NIH grant funding from researchers who fail to comply with other mandate provisions.
To ensure the comprehensive registration of trials, the greatest point of leverage might be not with funders but with investigators, who could also be required by Institutional Review Boards to register their trials early.12,24 A proposed Food and Drug Administration regulation requiring that informed-consent documents contain trial registration information would be a step in this direction. That change would force sponsors to register trials prior to receiving ethics approval and, as a consequence, prior to enrolling patients.25
Increasing results reporting is also important because many studies will never be published.26 For that reason, it is unlikely that journals’ publication requirements could achieve this goal. Instead, regulators should consider requiring more timely reporting of results from all drug trials, including those concerning unapproved uses. They should also make use of current enforcement mechanisms and possibly introduce stronger incentives in the future—such as the public disclosure of infractions and the use of larger fines—if the rate of results reporting remains low.
Finally, it is also troubling that a large portion of trials continue to register late. Journals that still publish the results of late-registered trials should instead refuse to consider for publication future papers about trials that did not register early. To encourage further results reporting, journals should also insist that the results for studies they publish be entered into a clinical trial registry at the earliest opportunity. To ensure that this takes place, journals must be clear that public disclosure in a registry does not constitute prior publication, which many journals prohibit.
Overall, the federal mandate increased the registration of drug trials on ClinicalTrials.gov, at least for a short period of time. Additional mechanisms may be required to ensure the completeness of trial registration and results reporting. Achieving that goal would improve the comprehensiveness and balance of our knowledge base about drugs, leading to more informed prescribing by physicians and improved patient safety.
The results of this study were presented at the AcademyHealth Annual Research Meeting in Boston, Massachusetts, June 27, 2010. This study was supported in part by funds from the University of British Columbia Centre for Health Services and Policy Research. Michael Law received salary support through a New Investigator Award from the Canadian Institutes of Health Research and an Early Career Scholar Award from the Peter Wall Institute for Advanced Studies. The authors thank Lucy Cheng for her assistance.
ABOUT THE AUTHORS: MICHAEL R. LAW, YUKO KAWASUMI & STEVEN G. MORGAN
In this month’s Health Affairs, Michael Law and coauthors examine the impact of a federal law requiring researchers to register certain studies and report the results on ClinicalTrials.gov, a public database of federally and privately supported clinical trials conducted in the United States and abroad. When the authors studied the mandate’s impact, they found an initial increase in trial registrations but a problem with late registration and underreporting of results. Only 12 percent of studies reported results within one year, as required by law.
The authors were motivated to conduct the research by the episode that resulted in the withdrawal of the diabetes drug Avandia from the US market. The definitive analysis of the drug, a meta-analysis published in the New England Journal of Medicine, relied on a trial information registry for the results of a number of trials of the drug.
“[We] wondered how far we’d come in terms of information disclosure on clinical trials for drugs since then,” says Law, and that led to the authors’ inquiry into how well legal requirements for timely reporting on trial results were being met.
Law is an assistant professor in the Centre for Health Services and Policy Research at the University of British Columbia. His research focuses on pharmaceutical outcomes and policy research, including evaluating the impact of medication adherence, the value of newer drugs, and both drug coverage changes and direct-to-consumer advertising.
Law has been published in leading medical journals and has received both national and international awards, including a career award from the Canadian Institutes of Health Research. He holds a doctorate in health policy from Harvard University and completed a postdoctoral fellowship at Harvard Medical School, where he trained in research methods and statistics.
Yuko Kawasumi is a research associate in the Department of Anesthesiology, Pharmacology, and Therapeutics at the University of British Columbia. Her primary research interest is investigating the quality of prescription medication use through administrative claims databases.
Kawasumi holds a doctorate in epidemiology from the Department of Epidemiology, Biostatistics, and Occupational Health at McGill University. She was a postdoctoral research fellow at the University of British Columbia Centre for Health Services and Policy Research.
Steven Morgan is an associate professor of health policy and associate director of the Centre for Health Services and Policy Research at the University of British Columbia. An expert in pharmaceutical policy, Morgan is a recipient of career awards from the Canadian Institutes of Health Research and the Michael Smith Foundation for Health Research.
Morgan earned a master’s degree in economics from Queen’s University and a doctorate, also in economics, from the University of British Columbia. He was a 2001–02 Canadian Associate Harkness Fellow in Health Care Policy, while a postdoctoral fellow in health economics at the Centre for Health Services and Policy Research.