
An application of the Causal Roadmap in two safety monitoring case studies: Causal inference and outcome prediction using electronic health record data

Published online by Cambridge University Press:  21 September 2023

Brian D. Williamson*
Affiliation:
Biostatistics Division, Kaiser Permanente Washington Health Research Institute, Seattle, WA, USA
Richard Wyss
Affiliation:
Division of Pharmacoepidemiology and Pharmacoeconomics, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, USA
Elizabeth A. Stuart
Affiliation:
Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
Lauren E. Dang
Affiliation:
Department of Biostatistics, University of California, Berkeley, CA, USA
Andrew N. Mertens
Affiliation:
Department of Biostatistics, University of California, Berkeley, CA, USA
Romain S. Neugebauer
Affiliation:
Division of Research, Kaiser Permanente Northern California, Oakland, CA, USA
Andrew Wilson
Affiliation:
Parexel International, Durham, NC, USA
Susan Gruber
Affiliation:
TL Revolution, Cambridge, MA, USA
*
Corresponding author: B. D. Williamson, PhD; Email: brian.d.williamson@kp.org

Abstract

Background:

Real-world data, such as administrative claims and electronic health records, are increasingly used for safety monitoring and to help guide regulatory decision-making. In these settings, it is important to document analytic decisions transparently and objectively to assess and ensure that analyses meet their intended goals.

Methods:

The Causal Roadmap is an established framework that guides and documents analytic decisions at each step of the analytic pipeline, helping investigators generate high-quality real-world evidence.

Results:

In this paper, we illustrate the utility of the Causal Roadmap using two case studies previously led by workgroups sponsored by the Sentinel Initiative – a program for actively monitoring the safety of regulated medical products. Each case study focuses on a different aspect of the analytic pipeline for drug safety monitoring. The first case study shows how the Causal Roadmap encourages transparency, reproducibility, and objective decision-making for causal analyses. The second case study highlights how this framework can guide analytic decisions beyond inference on causal parameters, improving outcome ascertainment in clinical phenotyping.

Conclusion:

These examples provide a structured framework for implementing the Causal Roadmap in safety surveillance and guide transparent, reproducible, and objective analysis.

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of The Association for Clinical and Translational Science

Figure 1. Directed acyclic graph (DAG) showing a simplified causal model for the case example comparing the effect of initiating nonselective nonsteroidal anti-inflammatory drugs (NSAIDs) vs opioids on acute kidney injury (AKI). In this causal model, A represents a binary treatment, W represents a high-dimensional set of confounders and/or proxies for confounders, Y represents a binary outcome, and C is a censoring indicator denoting whether the outcome was observed (C = 1 indicates the outcome was observed). We make the simplifying assumption that censoring is nondifferential across treatment groups, so that the only source of bias for the effect of treatment (A) on the outcome (Y) is confounding by W.
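The data-generating structure described in this caption can be illustrated with a minimal simulation. The sketch below is a hypothetical toy version of the DAG (all coefficients, dimensions, and the censoring rate are illustrative assumptions, not values from the study): W confounds the A–Y relationship, and censoring C is nondifferential with respect to treatment.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# W: a small stand-in for the high-dimensional confounder set
# (3 covariates here; the real study uses many more)
W = rng.normal(size=(n, 3))

# A: binary treatment whose probability depends on W (confounding)
p_a = 1 / (1 + np.exp(-(W @ [0.5, -0.3, 0.2])))
A = rng.binomial(1, p_a)

# Y: binary outcome depending on both treatment A and confounders W
p_y = 1 / (1 + np.exp(-(-2 + 0.4 * A + W @ [0.6, 0.1, -0.2])))
Y = rng.binomial(1, p_y)

# C: censoring indicator, nondifferential across treatment groups
# (C = 1 means the outcome is observed, matching the caption)
C = rng.binomial(1, 0.9, size=n)
observed = C == 1

# Crude (unadjusted) risk difference among observed subjects;
# it is biased by the confounding through W
crude_rd = Y[observed & (A == 1)].mean() - Y[observed & (A == 0)].mean()
print(f"crude risk difference: {crude_rd:.3f}")
```

Because W raises both the probability of treatment and the probability of the outcome in this toy setup, the crude risk difference overstates the treatment effect, which is exactly the bias the caption attributes to confounding by W.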


Figure 2. Outcome-blind simulation results for study 1.


Figure 3. Estimated effects and 95% confidence intervals from a sensitivity analysis (intervals in gray) showing how departures from the exchangeability assumptions (no unmeasured confounding, no informative censoring, etc.) would affect the estimated risk difference and its 95% confidence interval (interval in black).
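The kind of display described in this caption can be sketched by shifting a point estimate and its confidence interval across a range of hypothetical bias terms. The numbers below are purely illustrative assumptions (not results from the paper): a made-up risk difference and standard error, translated under a grid of bias values representing departures from exchangeability.

```python
import numpy as np

# Hypothetical observed risk difference and standard error
# (illustrative values only, not from the study)
rd, se = 0.02, 0.005

# Grid of hypothetical bias terms capturing departures from
# exchangeability (e.g., unmeasured confounding, informative censoring)
biases = np.linspace(-0.02, 0.02, 5)

for bias in biases:
    shifted = rd + bias
    lo, hi = shifted - 1.96 * se, shifted + 1.96 * se
    print(f"bias={bias:+.3f}  RD={shifted:+.3f}  95% CI=({lo:+.3f}, {hi:+.3f})")
```

Plotting the shifted (gray) intervals alongside the original (black) interval shows at what magnitude of bias the qualitative conclusion, such as the interval excluding zero, would change.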


Figure 4. An example directed acyclic graph (DAG) showing a possible causal model relating baseline covariates, W; an exposure of interest, A; an outcome of interest, Y; and variables captured during interim or diagnostic medical encounters.