October 5, 2012

Missing Data Said to Threaten Trial Integrity


By John Gever, Senior Editor, MedPage Today

Published: October 04, 2012

Reviewed by Zalman S. Agus, MD; Emeritus Professor, Perelman School of Medicine at the University of Pennsylvania and Dorothy Caputo, MA, BSN, RN, Nurse Planner

Losing track of clinical trial participants prior to study completion is a common and serious problem in medical research that needs to be addressed, according to a National Research Council panel.

"The assumption that analysis methods can compensate for such missing data are not justified, so aspects of trial design that limit the likelihood of missing data should be an important objective," panel members led by Roderick J. Little, PhD, of the University of Michigan, wrote in the Oct. 4 issue of the New England Journal of Medicine.

Common methods such as "last observation carried forward" (LOCF), used to estimate outcomes in trial participants whose real outcomes are unknown, rely on assumptions that are frequently false, the group argued.
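
To see what LOCF actually does, consider the following sketch in Python with the pandas library. The data and variable names here are made up for illustration and are not drawn from the panel's report:

    import numpy as np
    import pandas as pd

    # Hypothetical longitudinal trial data: one row per participant per visit.
    # NaN marks a visit that never happened because the participant dropped out.
    df = pd.DataFrame({
        "participant": ["A", "A", "A", "B", "B", "B"],
        "visit":       [1, 2, 3, 1, 2, 3],
        "score":       [10.0, 12.0, 14.0, 9.0, 11.0, np.nan],  # B left after visit 2
    })

    # Last observation carried forward: within each participant, replace a
    # missing value with that participant's most recent observed value.
    df["score_locf"] = df.groupby("participant")["score"].ffill()

    print(df)

Participant B's missing visit-3 score becomes 11.0, which quietly assumes B's outcome would have stayed flat after dropout. If scores in this trial tend to improve, or worsen, over time, that assumption fails in exactly the way the panel warns about.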

Instead, Little and colleagues contended, trials should be designed more carefully to limit the incidence of missing trial data.

The panel began its work after the FDA asked the National Research Council in 2008 for advice on the question, as part of the regulator's plan to develop guidance for drug and device companies on clinical trial design.

Little and colleagues wrote a report in 2010, but because the issue remains relevant, they provided a new summary with policy recommendations in the current NEJM issue.

Discontinuation of study treatment because of adverse effects, lack of efficacy, or other factors is one of the most common reasons data go missing, the researchers noted.

Frequently, study investigators fill the resulting data gaps with estimates of how the dropouts would have performed had they stayed in the trial.

Although that approach can have merit, the estimates "all require unverifiable assumptions," panel members wrote, and "there is no foolproof way to analyze data in the face of substantial amounts of missing data."

Instead of trying to replace missing data with estimates, a better approach would be to keep data from going missing in the first place. Little and colleagues recommended that study investigators work harder and smarter to retain participants in trials, including those who stop taking study treatments.

"The consensus of the panel was that in many studies, the benefits of collecting outcomes after participants have discontinued treatment outweigh the costs," they wrote.

Little and colleagues offered eight suggestions for designing trials to minimize missing data, and eight more related to the conduct of such trials.

Their ideas for trial design included the following:

  • Target populations not well served by current treatments
  • Begin studies with all participants receiving the active treatment, with only those tolerating and adhering to it going on to the randomized phase
  • Allow flexible treatment regimens to maximize efficacy and safety
  • Consider designs in which the study treatment is added to an existing active treatment
  • Keep follow-up periods short
  • Allow use of rescue medications
  • In long-term efficacy studies, use a randomized withdrawal design in which patients remaining on active treatment are re-randomized to stay on it or switch to placebo
  • Consider use of rescue medications or treatment discontinuation as an outcome measure

The group's suggestions for trial conduct were as follows:

  • Select trial investigators with an established history of good participant retention
  • Set targets for missing data and monitor them during the trial
  • Provide incentives (which may be financial) to investigators and participants for completeness of data collection
  • Minimize burdens on trial participants with respect to convenience and other factors
  • Provide continued access to effective treatments prior to regulatory approval
  • Emphasize importance of complete data collection to investigators, trial staff, and participants
  • Assess participants for the likelihood of dropping out, and seek to alleviate factors that may impel them to drop out
  • Keep participant contact information up to date

In a separate paper appearing in the same issue of NEJM, a group of statistical reviewers for the journal, led by James H. Ware, PhD, of the Harvard School of Public Health in Boston (and including one of the NRC panel members), described a review of the NEJM's own policies on missing data.

Ware and colleagues promised that they would be more hard-nosed about manuscripts that use statistical methods to compensate for missing data.

For some methods, the group indicated, "we will require justifications that the assumptions required for the validity of those methods are reasonable."

And, "when the missing data are sufficiently extensive to raise questions about the robustness of the results to unverifiable assumptions, we may require authors to conduct sensitivity analyses."

Ware and colleagues added that they would expect plans for dealing with missing data to be included in trial protocols posted on internationally recognized clinical trial databases.

Primary source: New England Journal of Medicine
Source reference:
Little R, et al "The prevention and treatment of missing data in clinical trials" N Engl J Med 2012; 367: 1355-1360.

Additional source: New England Journal of Medicine
Source reference:
Ware J, et al "Missing data" N Engl J Med 2012; 367: 1353-1354.

