Using Implementation Science-Guided Pilot Studies to Assess and Improve the Informativeness of Clinical Trials
and Marc A. Kowalkowski, PhD2

1Department of Internal Medicine, Atrium Health's Carolinas Medical Center, Charlotte, NC, USA; 2Center for Outcomes Research and Evaluation, Atrium Health, Charlotte, NC, USA.
J Gen Intern Med DOI: 10.1007/s11606-020-06220-3 © Society of General Internal Medicine 2020
Randomized controlled trials (RCTs) are the gold standard for generating evidence on the effectiveness of healthcare interventions. Unfortunately, RCTs are frequently uninformative in terms of providing results that patients, clinicians, researchers, or policymakers can confidently apply as the basis for clinical decision-making in the real world.1 Safeguards against uninformative research begin early in study development. Thus, well-conceived pilot studies can play a critical role in the conduct of high-quality clinical trials.2 We propose using implementation science—the study of how to adopt best practices into real-world settings—as a natural framework for pre-RCT pilot studies, viewing these pilot studies as a critical opportunity to improve the informativeness of RCTs.
WHY DO WE NEED RIGOROUS PILOT TRIALS?
The burden of uninformative RCTs is substantial. Money, time, and participants' efforts are wasted when research is conducted without sufficient attention to the contextual factors necessary for applying study results. In some cases, these factors are related to standard RCT quality criteria (e.g., CONSORT). In other cases, however, a study's lack of informativeness stems from inadequate consideration of broader factors. Zarin et al. posited five necessary conditions for a trial to be informative: (1) the study hypothesis must address an important and unresolved question; (2) the study must be designed to provide meaningful evidence related to this question; (3) the study must be feasible; (4) the study must be conducted and analyzed in a scientifically valid manner; and (5) the study must report methods and results accurately, completely, and promptly.3

Received March 10, 2020; Revised July 1, 2020; Accepted September 3, 2020
Unfortunately, many contemporary trials fail these necessary conditions. One overt example is a multicenter randomized trial assessing the impact of pre-hospital antibiotics for sepsis administered by emergency medical services (EMS) personnel.4 The trial found no mortality difference, but application of the results is limited by randomization violations—some EMS personnel "purposefully opened the envelopes until they found an envelope instructing randomization to the intervention group." The motivation for this violation of study procedures was attributed to "overenthusiasm of EMS personnel wanting to treat as many patients as possible with antibiotics." Plausibly, pre-trial identification of these beliefs about the acceptability of withholding treatment from study patients could have prompted a responsive approach that might have preserved the fidelity of randomization. Even trials with careful attention to internal validity may provide less meaningful results