Unlock Trial Insights: How to Broaden Research Impact Beyond the Lab
Bridging the Gap Between Clinical Trials and Real-World Populations for Better Health Outcomes
Randomized controlled trials (RCTs) are a cornerstone of causal inference: randomization allows researchers to estimate the causal effects of interventions free of confounding. However, findings that apply only to the experimental sample are of limited use. External validity, the extent to which results generalize to broader populations, is therefore crucial for wider scientific progress, and addressing external validity challenges is the focus of ongoing research and collaborative efforts.
In Fall 2023, a workshop at the Institute for Computational and Experimental Research in Mathematics (ICERM) gathered experts from social science, medicine, public health, statistics, computer science, and education to discuss obstacles in applying experimental results to diverse populations. This article consolidates ongoing efforts, highlights methodological overlaps across fields, reviews generalizability and transportability in light of the workshop discussions, and identifies remaining obstacles, suggesting directions for future research. In doing so, we aim to improve understanding of how causal effects can be generalized and transported, encourage collaboration across disciplines, and provide valuable insights for those refining and applying causal inference methods.
This article discusses three main themes from the workshop: (i) how to assess the external validity of trials, (ii) how to consider external validity beyond intention-to-treat effects, and (iii) how to use machine learning to improve external validity. We place these advancements in the context of current research. After discussing these themes, the article concludes by identifying five key areas for future research, outlining both open problems and opportunities, and listing all talk titles and speakers in the Appendix.
Validity of Identifying Assumptions Under Covariate Shifts
When aiming to produce externally valid causal effect estimates, a primary source of bias is a difference in the distribution of treatment effect moderators between the experimental sample and the target population. Treatment effect moderators are covariates over which the treatment effect varies, i.e., covariates that describe how the treatment differentially affects individuals. Researchers must therefore adjust for the covariate shift in the underlying moderators between the experimental sample and the target population to obtain a valid causal effect estimate for the target population of interest. Two common identifying assumptions leveraged in practice are (1) selection on observables (also referred to as conditional exchangeability) and (2) positivity of trial participation.
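To make this concrete, the sketch below simulates the covariate-shift adjustment described above using inverse odds of participation weighting, one standard estimator that leverages both assumptions. The simulated data, model choices, and effect sizes are all illustrative assumptions, not examples from the workshop.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trial, n_target = 2000, 2000

# Moderator X: the trial over-samples low-X individuals relative to
# the target population (a covariate shift in a treatment effect moderator).
x_trial = rng.normal(-0.5, 1.0, n_trial)
x_target = rng.normal(0.5, 1.0, n_target)

# Randomized treatment in the trial; the true effect is moderated by X
# (tau(X) = 1 + X), so the target-population average effect is about 1.5.
a = rng.integers(0, 2, n_trial)
y = a * (1.0 + x_trial) + rng.normal(0.0, 0.5, n_trial)

# Selection on observables: model trial participation given X.
# Positivity requires the fitted probabilities to stay away from 0 and 1.
X = np.concatenate([x_trial, x_target]).reshape(-1, 1)
s = np.concatenate([np.ones(n_trial), np.zeros(n_target)])
p = LogisticRegression().fit(X, s).predict_proba(x_trial.reshape(-1, 1))[:, 1]

# Weight trial units by the inverse odds of participation, which
# reweights the trial's X distribution to match the target's.
w = (1.0 - p) / p

naive = y[a == 1].mean() - y[a == 0].mean()                 # close to 0.5
transported = (np.average(y[a == 1], weights=w[a == 1])
               - np.average(y[a == 0], weights=w[a == 0]))  # close to 1.5
```

The unweighted trial contrast recovers the effect for the trial's own covariate distribution, while the weighted contrast shifts it toward the target population's distribution of the moderator.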
- Understanding Treatment Effect Heterogeneity: Selection on observables and positivity of trial participation rely on researchers knowing which covariates moderate the treatment effect of the intervention of interest. This matters both at the design stage of an experiment and for post-hoc adjustments.
- Developing Methods for Researchers to Incorporate Substantive Knowledge: Incorporating substantive knowledge into research methodologies is crucial for formulating effective policy recommendations, especially when utilizing existing experimental data in new domains.
- Assessing Validity of Identifying Assumptions: Sensitivity analyses allow researchers to assess the robustness of results to violations of the underlying assumptions.
Future Directions and Challenges
Our paper summarizes recent advancements in causal methods for generalizability and transportability, as presented at the ICERM workshop. The workshop focused on three key areas: (i) evaluating the robustness of foundational assumptions, (ii) extending the scope of causal analysis beyond traditional intention-to-treat effects, and (iii) incorporating machine learning to enhance the applicability of findings across various contexts. The workshop discussion also highlighted several challenges in bridging the research-to-practice gap, as well as promising directions for future research. We provide a few examples of key discussion points from the workshop.