BMJ clinical trial report misleading

The authors provide inaccurate and misleading information about industry compliance with legal and ethical principles, which appears to stem from a fundamental misunderstanding of those requirements.

Jocelyn Ulrich
November 16, 2015


Biopharmaceutical companies are committed to enhancing the transparency of their research in a responsible manner. To demonstrate their commitment, in July 2013 PhRMA joined with the European Federation of Pharmaceutical Industries and Associations (EFPIA) in adopting joint Principles for Responsible Clinical Trial Data Sharing.

PhRMA supports independent efforts to measure compliance with disclosure obligations, provided they are accurate, balanced, and rigorous. One such example is a study published last March in the New England Journal of Medicine examining compliance with the reporting of clinical trial results for trials registered on ClinicalTrials.gov. The study found that industry-supported trials had the highest rate of results reporting, outpacing NIH trials and trials funded by other government or academic institutions.

Unfortunately, the study recently published in BMJ Open, entitled "Clinical trial registration, reporting, publication and FDAAA compliance: a cross-sectional analysis and ranking of new drugs approved by the FDA in 2012," does not meet these standards. On the contrary, the authors provide inaccurate and misleading information about industry compliance with legal and ethical principles, which appears to stem from a fundamental misunderstanding of those requirements. Following are several areas in which the Miller study is contributing to misinformation about the clinical trial process and data sharing.

First, the authors assert that there are conflicting understandings of the reporting requirements added by the FDA Amendments Act (FDAAA) of 2007: do they apply only to "controlled" clinical studies or more broadly to "interventional" studies? (pp. 3-4). But there is no dispute. The plain language of the statute defines an "applicable drug clinical trial" as one that is "controlled," and guidance from the National Institutes of Health (NIH), the federal agency charged with implementing the law, confirms that the statute means what it says and applies only to "controlled" clinical trials.

Second, the FDAAA reporting requirements clearly exclude Phase I studies and allow companies to delay the submission of results by certifying that an applicable trial meets one of two allowable conditions. First, sponsors may defer posting results for trials involving products not yet approved by the FDA; in this situation, sponsors have until 30 days after the drug or device is approved, licensed, or cleared by the FDA for an initial use. Second, FDAAA allows a delay when sponsors have completed a trial intended to support a new use of an already approved product. Although the authors assert (once again without support) that there is disagreement about the role of certificates of delay, the FDAAA requirements are quite clear, and there is no known dispute here either.

Third, the authors imply that the Federal Policy for the Protection of Human Subjects, or the "Common Rule," establishes an ethical standard that all clinical trials should be publicly disclosed in order to "contribute to generalizable knowledge." But the Common Rule says no such thing: if it did, there would have been no need for Congress to enact FDAAA, which promotes a framework for responsible clinical trial data sharing that respects and supports innovation. In addition, the Common Rule was established in 1991 to apply directly to research funded by the federal government; industry-sponsored studies fall under the jurisdiction of the FDA. So, although the ethics policies in the Common Rule have implications for biopharmaceutical companies, industry-sponsored research is not directly subject to the Common Rule.

Finally, in their analysis the authors exhibit a fundamental misunderstanding of the applicable legal requirements, which taints their entire study and its conclusions. For example, the authors assert that only 67% of the reviewed studies were reported in a "timely" manner according to FDAAA standards. But even a cursory review of the studies in question shows that a significant number of those the authors claim were late or undisclosed were, in fact, submitted to NIH within 30 days of initial drug approval (per the certification process discussed above), which is within the applicable statutory deadline. It is therefore highly misleading for the authors to allege that these studies were not publicly disclosed, or were disclosed late, when they were in fact posted to ClinicalTrials.gov on time. Additionally, the authors' analysis does not clearly distinguish between studies whose disclosure is legally required under the statute and those they imply are "ethically" required, further confusing the facts.

PhRMA and its members agree with the authors that enhancing public health through responsible clinical trial data sharing is critically important. However, it is not in the best interest of public health to spread misinformation and create distrust. PhRMA intends to conduct a more in-depth analysis of the Miller study in the coming weeks, and looks forward to continuing to advance information about clinical trial data sharing that is rigorous, balanced, and accurate.
