Uses and pitfalls with AI for decision support - harmful self-fulfilling prophecies

WEON masterclass 2024 - AI-based prediction models in healthcare: from development to implementation

Department of Data Science Methods, Julius Center, University Medical Center Utrecht

2024-05-30

Uses of AI in health care

AI may have many uses in health care

Use AI to make health care

more efficient or easier

  • administration / documentation
  • translation

better: change decisions

  • diagnosis (e.g. skin cancer from imaging)
  • prognosis (e.g. survival given medical image)
  • treatment effect (e.g. genetic biomarker)


Tip

Whereas treatment effect estimation is typically regarded as a causal task requiring causal methods (e.g. randomized controlled trials), prognosis models are often advertised for making treatment decisions.

The in-between: using prediction models for (medical) decision making


Using prediction models for decision making is often thought of as a good idea

For example:

  1. give chemotherapy to cancer patients with high predicted risk of recurrence
  2. give statins to patients with a high risk of a heart attack

TRIPOD+AI on prediction models (Collins et al. 2024)

“Their primary use is to support clinical decision making, such as … initiate treatment or lifestyle changes.”

This may lead to harm when:

  1. the treatments patients received during training / validation of the (AI) prediction model are ignored
  2. measures of predictive accuracy alone are considered sufficient evidence for safe deployment

When accurate prediction models yield harmful self-fulfilling prophecies

Building models for decision support without regard for the historic treatment policy is a bad idea

The question is not “is my model accurate before / after deployment?”,

but “did deploying the model improve patient outcomes?”
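This gap can be illustrated with a minimal simulation (all numbers and the data-generating process are hypothetical): treatment helps severe patients under the historic policy, and a policy change that uses high predicted risk to withhold treatment worsens outcomes, even though the model was accurate before deployment.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical data-generating process: severity x raises the risk of a
# bad outcome, and treatment lowers that risk for severe patients.
x = rng.binomial(1, 0.5, n)  # 1 = severe

def outcome_rate(p_treat_severe):
    """Population rate of bad outcomes when severe patients are treated
    with probability p_treat_severe (non-severe patients: 0.2)."""
    t = rng.binomial(1, np.where(x == 1, p_treat_severe, 0.2))
    return rng.binomial(1, 0.1 + 0.5 * x - 0.4 * x * t).mean()

pre = outcome_rate(0.8)   # historic policy: severe patients usually treated
post = outcome_rate(0.1)  # new policy: high predicted risk used to withhold treatment
print(f"bad-outcome rate before: {pre:.3f}, after deployment: {post:.3f}")
```

The model's pre-deployment predictions were accurate, yet acting on them made outcomes worse: accuracy under the historic policy says nothing about the value of the policy change.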

Treatment-naive prediction models

\[E[Y|X] = E_{t \sim \pi_0(X)}\left[ E[Y|X,T=t] \right]\]
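The identity above, that a treatment-naive prediction marginalizes over the historic treatment policy \(\pi_0\), can be checked numerically. A minimal sketch with a hypothetical data-generating process where \(\pi_0\) treats severe patients with probability 0.8:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: severity x, historic policy pi_0 treats severe
# patients more often, and treatment helps severe patients.
x = rng.binomial(1, 0.5, n)              # 1 = severe
p_treat = np.where(x == 1, 0.8, 0.2)     # historic policy pi_0(X)
t = rng.binomial(1, p_treat)
y = rng.binomial(1, 0.1 + 0.5 * x - 0.4 * x * t)

# Treatment-naive model target E[Y | X=1]: an average over pi_0 ...
naive_severe = y[x == 1].mean()
# ... which decomposes as E_{t ~ pi_0(X)}[ E[Y | X=1, T=t] ]
decomposed = (0.8 * y[(x == 1) & (t == 1)].mean()
              + 0.2 * y[(x == 1) & (t == 0)].mean())
print(naive_severe, decomposed)  # approximately equal
```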

Treatment-naive prediction models

Results from “When accurate prediction models yield harmful self-fulfilling prophecies” (van Amsterdam et al. 2024)

  1. good or bad discrimination post deployment may be a sign of a harmful or a beneficial policy change
  2. models that are perfectly calibrated both before and after deployment cannot have changed treatment decisions, and are therefore not useful for decision making

Is this obvious?

Bigger data does not protect against harmful prediction models

More flexible models do not protect against harmful prediction models

Gap between prediction accuracy and value for decision making

What to do?

  1. Evaluate policy change (cluster randomized controlled trial)
  2. Build models that are likely to have value for decision making

Building and validating models for decision support

Deploying a model is an intervention that changes the way treatment decisions are made

How do we learn about the effect of an intervention?

With a randomized experiment

  • for using a decision support model, the unit of intervention is usually the doctor
  • randomly assign doctors to have access to the model or not
  • measure differences in treatment decisions and patient outcomes
  • this is called a cluster RCT
  • if using the model improves outcomes, deploy it
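The steps above can be sketched as a minimal simulation (the doctor counts, baseline outcome rate, and 5-point benefit of model access are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
n_doctors, patients_per_doctor = 40, 500

# Hypothetical cluster RCT: randomize doctors (the unit of intervention)
# to have access to the model (arm=1) or not (arm=0).
arm = rng.permutation(np.repeat([0, 1], n_doctors // 2))

# Assumed effect: model access raises each doctor's good-outcome rate by 5 points.
doctor_effect = rng.normal(0, 0.02, n_doctors)   # between-doctor variation
p_good = np.clip(0.70 + 0.05 * arm + doctor_effect, 0, 1)

# Analyze at the cluster level: per-doctor proportion of good outcomes,
# then compare the two arms.
cluster_means = rng.binomial(patients_per_doctor, p_good) / patients_per_doctor
diff = cluster_means[arm == 1].mean() - cluster_means[arm == 0].mean()
print(f"estimated effect of model access on good outcomes: {diff:.3f}")
```

Analyzing per-doctor summaries respects the unit of randomization; pooling patients as if they were independently randomized would understate the uncertainty.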

Using cluster RCTs to evaluate models for decision making is not a new idea (Cooper et al. 1997)

“As one possibility, suppose that a trial is performed in which clinicians are randomized either to have or not to have access to such a decision aid in making decisions about where to treat patients who present with pneumonia.”

What we don’t learn

was the model predicting anything sensible?

So build treatment-naive prediction models and trial them for decision support?

Not a good idea

  • baking a cake without a recipe
  • hoping it turns into something nice
  • not pleasant for the people who have to taste the result of the experiment
    • (i.e. patients may have side-effects / die)

We should build models that are likely to be valuable for decision making

  • Build models that predict expected outcomes under hypothetical interventions (prediction-under-intervention models)
  • doctor / patient can pick the treatment with best expected outcomes, depending on patient’s values
  • whereas treatment-naive prediction models average out over the historic treatment policy, prediction-under-intervention models let the user select among treatment options

Hilden and Habbema on prognosis (Hilden and Habbema 1987)

“Prognosis cannot be divorced from contemplated medical action, nor from action to be taken by the patient in response to prognostication.”

  • prediction-under-intervention is not a new idea, but language and methods on causality have come a long way since (Hilden and Habbema 1987).

Estimand for prediction-under-intervention models

What is the estimand?

  • prediction: \(E[Y|X]\)
  • treatment effect: \(E[Y|\text{do}(T=1)] - E[Y|\text{do}(T=0)]\)
  • prediction-under-intervention: \(E[Y|\text{do}(T=t),X]\)
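A minimal sketch of estimating \(E[Y|\text{do}(T=t),X]\) from simulated observational data, assuming \(X\) contains all confounders (so the do-expression equals the conditional \(E[Y|T=t,X]\), here estimated by stratified means; the data-generating process is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical confounded observational data: severity x drives both the
# historic treatment assignment and the outcome; treatment lowers risk.
x = rng.binomial(1, 0.5, n)
t = rng.binomial(1, np.where(x == 1, 0.8, 0.2))
y = rng.binomial(1, 0.1 + 0.5 * x - 0.4 * x * t)

def predict_under_intervention(x_val, t_val):
    """Estimate E[Y | do(T=t_val), X=x_val], which equals the stratified
    mean E[Y | T=t_val, X=x_val] under no unmeasured confounding."""
    mask = (x == x_val) & (t == t_val)
    return y[mask].mean()

for t_val in (0, 1):
    print(f"severe patient, do(T={t_val}):",
          round(predict_under_intervention(1, t_val), 3))
```

A doctor and patient can compare the two predictions and choose the treatment with the better expected outcome, rather than receiving a single risk averaged over the historic policy.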

[Figure: using treatment-naive prediction models for decision support]

[Figure: prediction-under-intervention]

More on prediction-under-intervention models

development:

  • ideally estimated from RCTs, but these are often too small or don’t measure the right data
  • alternatively can use observational data and causal inference methods
    • this approach relies on strong assumptions especially regarding confounding
  • but likely a better recipe than treatment-naive models

evaluation:

  • prediction accuracy can be tested in RCTs, or in observational data with specialized methods accounting for confounding (e.g. Keogh and van Geloven 2024)
  • a new policy can be evaluated in historic RCTs (e.g. Karmali et al. 2018)
  • ultimate test is cluster RCT

Take-aways

  • when developing or evaluating (AI) prediction models for medical decisions, think about
    • what is the effect of using this model on medical decisions?
    • what is the effect of this policy change on patient outcomes?
  • deploying models for decision support is an intervention and should be evaluated as such
  • prediction-under-intervention models have a foreseeable effect on patient outcomes when used for decision making

From algorithms to action: improving patient care requires causality (van Amsterdam et al. 2024)

When accurate prediction models yield harmful self-fulfilling prophecies (van Amsterdam et al. 2024)

New summer school: Introduction to Causal Inference and Causal Data Science

Learn more about causal data science

  • Dates: 5 Aug. - 9 Aug. 2024
  • Location: Utrecht
  • Instructors:
    • Oisin Ryan
    • Bas Penning-de Vries
    • Wouter van Amsterdam
  • Sign up still possible

References

Collins, Gary S., Karel G. M. Moons, Paula Dhiman, Richard D. Riley, Andrew L. Beam, Ben Van Calster, Marzyeh Ghassemi, et al. 2024. “TRIPOD+AI Statement: Updated Guidance for Reporting Clinical Prediction Models That Use Regression or Machine Learning Methods.” BMJ 385 (April): e078378. https://doi.org/10.1136/bmj-2023-078378.
Cooper, Gregory F., Constantin F. Aliferis, Richard Ambrosino, John Aronis, Bruce G. Buchanan, Richard Caruana, Michael J. Fine, et al. 1997. “An Evaluation of Machine-Learning Methods for Predicting Pneumonia Mortality.” Artificial Intelligence in Medicine 9 (2): 107–38. https://doi.org/10.1016/S0933-3657(96)00367-3.
Hilden, Jørgen, and J. Dik F. Habbema. 1987. “Prognosis in Medicine: An Analysis of Its Meaning and Rôles.” Theoretical Medicine 8 (3): 349–65. https://doi.org/10.1007/BF00489469.
Karmali, Kunal N., Donald M. Lloyd-Jones, Joep van der Leeuw, David C. Goff Jr, Salim Yusuf, Alberto Zanchetti, Paul Glasziou, et al. 2018. “Blood Pressure-Lowering Treatment Strategies Based on Cardiovascular Risk Versus Blood Pressure: A Meta-Analysis of Individual Participant Data.” PLOS Medicine 15 (3): e1002538. https://doi.org/10.1371/journal.pmed.1002538.
Keogh, Ruth H., and Nan van Geloven. 2024. “Prediction Under Interventions: Evaluation of Counterfactual Performance Using Longitudinal Observational Data.” January 10, 2024. https://doi.org/10.48550/arXiv.2304.10005.