Setting: Gathering of Patient Safety & AI Leaders

Several Pascal leaders attended and presented at a patient safety AI conference, “Improving Patient Safety with Artificial Intelligence and Advanced Analytics”. One of the papers distributed in advance was a review article co-authored by Bates et al. entitled “The potential of artificial intelligence to improve patient safety: a scoping review.”

The Paper: Approach

The authors conducted a literature review to summarize the potential of AI to improve patient safety across eight domains:

  • healthcare-associated infections
  • adverse drug events
  • venous thromboembolism
  • surgical complications
  • pressure ulcers
  • falls
  • decompensation
  • diagnostic errors

The co-authors reviewed 392 studies. The most common data sources reported across these studies were vital sign monitoring, wearables, pressure sensors, and computer vision.

The Paper: Findings

Bates et al. conclude: “Overall, AI has great potential to improve the safety of care.” They posit that the “attractive early” adverse event targets include ADEs, decompensation, and diagnostic errors. Further, “transparent population-based datasets” – such as EHR and claims data, as well as novel data from sensors, wearables, and broader determinants of health – will be essential to building “robust and equitable” models. Finally, realizing the effectiveness of AI will demand analytic implementation that requires “organizations to develop, support, and iterate clinician, team, and system workflows for continued patient safety improvements.”

The Paper: Pascal Comments

Pascal was impressed with this review and offers the following comments, both to raise additional factors not identified by the paper and to consider Pascal’s own offering in light of the paper’s findings, discussion, and conclusions:

  1. AE Outcomes. The paper does not contemplate the inclusion of clinically validated adverse event outcomes derived from health IT data (AE Outcomes). This is not surprising: historically, even elite researchers and predictive model developers with access to the best data sources have been relegated to claims data and, more recently, EHR data that is limited by being coarse-grained (e.g. mortality and morbidity data). Even if a model builder finds some adverse event outcomes in the EHR – or, in most cases, “proxy” outcomes that are sufficiently robust – they do not have access to AE Outcomes organized by adverse event categories and subcategories that traverse the full range of all causes of harm. Pascal submits that it is optimal, if not essential, for an efficient, effective, and equitable adverse event predictive model that a model purporting to predict outcome X be trained on outcome X data. Training on proxy outcome data is a second-best solution – and no longer the only alternative. Today, Pascal holds a very large data set of clinically validated adverse event outcomes derived from EHR data – the largest worldwide – and our rate of generation of these curated outcomes is increasing materially year by year.
  2. Clinical Adjudication & Validation. The paper acknowledges what every trained and experienced clinical leader in patient safety knows: clinical adjudication and validation are required to generate a research-, regulatory-, and real-world-grade adverse event outcome that will be credible in a clinical operating environment. Table 3 confirms, “A variety of automated approaches have been effective at identifying patients likely to have experienced an ADE, but typically clinical adjudication is still required.” (emphasis added) Given that ADEs make up a sizable portion of all causes of harm, and given that clinical adjudication is necessary in many other cases to validate that patient harm resulted from the care provided rather than from the disease from which the patient is suffering, Pascal submits that the only way to achieve operationally efficient and clinically effective identification and reduction of adverse events is a system-wide capability for clinically adjudicating and validating adverse events – a capability Pascal has demonstrated at very large health system scale, with CFO-grade financial ROI exceeding 3x per year.
  3. Expected Impact. In its abstract, the paper suggests that the “greatest impact” of AI is expected (i) where “current strategies are not effective” and (ii) where “integration and complex analysis of novel, unstructured data are necessary to make accurate predictions; this applies specifically to adverse drug events, decompensation, and diagnostic errors.”

First, Pascal submits that current strategies have been measurably effective only where there has been substantial investment in measurement over the last 20 years, namely in the HAIs and HACs that constitute a very small fraction of harm. Landmark studies (Classen et al., Landrigan et al., Sammer et al.) have shown that the scope of harm is far broader than the categories on which regulatory compliance-driven attention and national campaigns addressing particular harms (e.g. CLABSIs, CAUTIs) have focused. Most health systems don’t systematically measure all-cause harm with an evidence-based approach, so they lack knowledge of the true scope of preventable patient harm. Pascal would therefore suggest that current strategies have been measurably ineffective broadly, meaning that the utility and applicability of AI is far wider across patient safety than the paper suggests. Measuring patient safety broadly across all causes of harm should be an imperative – first, to determine how effective current strategies really are.

Second, the implication that AI will be less helpful for adverse event outcomes where structured data is available underestimates current limitations that will affect the scientific validity, clinical credibility, and consequent field adoption of efforts to reduce patient harm where more structured data exists. Specifically, even when structured data is available, it is not normalized. This point is missed by health system executives who announce with great fanfare a common EHR across inpatient and outpatient settings, overlooking that a common EHR does not necessarily solve the longstanding semantic interoperability problem. And even where it does, AE Outcomes and AI models using these data benefit most when normalized data and outcomes are available across health systems, as Pascal VPS generates and holds. To stop patient safety from playing “whack-a-mole” with what Dr. Don Berwick calls “the particles of harm” – even if we’re using the most advanced technology on the planet to do so – we must adopt a new patient safety strategy: health systems should proactively seek to identify (and reduce) all causes of harm, all the time, across every single patient – using all of the data available, both structured and unstructured, and clinically validating outcomes.
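The semantic interoperability problem described above can be made concrete with a minimal sketch. The local lab codes, site names, and mapping below are hypothetical and purely illustrative; in practice, normalization maps each site’s local codes to a shared terminology such as LOINC so that cross-system models see one concept rather than several unrelated features.

```python
# Minimal sketch: two facilities on the "same" EHR may still encode the
# same lab test differently. All codes and values here are hypothetical.
local_results = [
    {"site": "A", "code": "GLU-SER", "value": 182},  # site A's local code
    {"site": "B", "code": "LAB1557", "value": 95},   # site B's local code
]

# A shared terminology map is what normalization supplies; without it,
# a cross-site model treats these as two unrelated features.
to_standard = {"GLU-SER": "glucose_serum", "LAB1557": "glucose_serum"}

normalized = [{**r, "concept": to_standard[r["code"]]} for r in local_results]

# After normalization, both records resolve to a single shared concept.
assert len({r["concept"] for r in normalized}) == 1
```

The same principle applies to medications, diagnoses, and device data: a common EHR platform standardizes the container, but only deliberate terminology mapping standardizes the meaning.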

The Paper: Final Thoughts on Patient Safety and AI

For many reasons it seems clear that AI promises improvements in patient safety unfathomable even 10 years ago. When Pascal – the first in the field to do so – developed an all-cause harm predictive model over a decade ago by training an ensemble machine learning model with techniques such as boosting, bagging, and random forests (achieving a c-statistic of 0.9), some of the leading clinicians first exposed to this method dismissed the initiative as crazy.
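The general shape of such an approach can be sketched as follows. This is a toy illustration only – it uses synthetic data and scikit-learn defaults, not Pascal’s actual model or data – but it shows how boosting, bagging, and a random forest can be combined into one ensemble and evaluated by c-statistic (equivalent to the area under the ROC curve).

```python
# Illustrative sketch: combine bagging, boosting, and a random forest
# into a soft-voting ensemble, then score it with the c-statistic (AUC).
# Synthetic, imbalanced data stands in for real adverse-event labels.
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# ~10% positive class, mimicking the rarity of adverse events.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("bag", BaggingClassifier(random_state=0)),
                ("boost", GradientBoostingClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft")  # average the predicted probabilities
ensemble.fit(X_train, y_train)

# The c-statistic equals the area under the ROC curve.
c_stat = roc_auc_score(y_test, ensemble.predict_proba(X_test)[:, 1])
print(f"c-statistic: {c_stat:.2f}")
```

A c-statistic of 0.5 is no better than chance and 1.0 is perfect discrimination, which is why a value of 0.9 on an all-cause harm model was a striking result at the time.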

A decade later, the conventional wisdom has flipped. It would be crazy not to explore how to apply AI to patient safety. Preventable harm is pervasive and injures or kills patients every day.

Pascal’s view is that the key to achieving improved outcomes with AI-assisted patient safety is to start with AE Outcomes which:

  1. Enable health systems to measure adverse events comprehensively, continuously, and with clinical validation;
  2. Train analytics models – using machine learning, AI, and other advanced technologies – on those AE Outcomes; and
  3. Use subsequent AE Outcomes to measure and validate the clinical effectiveness of those AI technologies in predicting and supporting improvement – likely one of the most significant contributions that AE Outcomes will make in the journey to AI-assisted patient safety.