No Magic Bullets

The common theme across these misconceptions about successfully operationalizing trigger-based patient safety and risk management is this: while clinical triggers with maximum “signal-to-noise” improvement are foundational, success also requires a comprehensive system and a new operating model.

In short, there is no “magic bullet” to buy and no single “tool” to bolt onto a traditional patient safety program that will achieve the level of safety that patients and their families deserve.

Why should this be a surprise? The conventional wisdom has held for decades that care delivery is highly complex. It is. So for a problem as fundamental [e.g., “First, do no harm”], as pervasive [e.g., HHS: more than 25% of patients harmed], and as complex as patient harm, why would the solution be simple? It is not.

The following are key misconceptions that arise even among highly experienced clinical leaders seeking to implement trigger-based outcomes improvement.

Misconception #1:  Automating the Global Trigger Tool is the key to outcomes-based patient safety.

The Global Trigger Tool (GTT) is, in fact, a “tool” that enables a delivery system to generate a rate of patient harm. While the GTT enabled many organizations for the first time to calculate a defect rate – a metric that should be required in any high-risk industry – it was highly retrospective and did not provide actionable data for use in management.
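
To make that output concrete, a minimal sketch of the kinds of harm rates a GTT-style review is typically used to report appears below. The figures are purely hypothetical, and the variable names are ours rather than anything prescribed by the tool.

```python
# Minimal sketch of the harm rates a GTT-style record review typically yields.
# All figures are hypothetical; only the arithmetic is the point.

adverse_events = 42           # adverse events found on record review
admissions_reviewed = 300     # patient records sampled
patient_days = 1_500          # total patient-days across the sample
admissions_with_ae = 31       # admissions with at least one adverse event

ae_per_1000_patient_days = adverse_events / patient_days * 1_000
ae_per_100_admissions = adverse_events / admissions_reviewed * 100
pct_admissions_with_ae = admissions_with_ae / admissions_reviewed * 100

print(f"AEs per 1,000 patient-days: {ae_per_1000_patient_days:.1f}")   # 28.0
print(f"AEs per 100 admissions:     {ae_per_100_admissions:.1f}")      # 14.0
print(f"% admissions with an AE:    {pct_admissions_with_ae:.1f}%")    # 10.3
```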

Consequently, the GTT remained a back-office tactical tool that provided measurement but, lacking the necessary component of actionability, offered little prospect of achieving value.

Therefore, for those health systems that attempted to address the GTT’s shortcomings with automation, simply automating a measurement tool proved inadequate.

What is needed, alongside and integrated with measurement, is management. Pascal’s Virtual Patient Safety (VPS) solution enables that.

Misconception #2:  Maximizing the positive predictive value of triggers is the key to outcomes-based patient safety.

Many sophisticated academic clinicians have understandably sought to break the problem down, attempting to solve one particular harm at a time. As a result, they have naturally focused attention on a single trigger – which on its own is often noisy – with the intention of maximizing the positive predictive value (PPV) of that trigger. Faring little better, some clinical teams have chosen a handful of triggers aligned with a handful of clinical objectives – with similar results.

Consequently, some efforts have spent months or even years trying to improve the PPV of a trigger, only to despair that the trigger’s output is not worth the investment of time and energy required to produce it.
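
To see why a single noisy trigger frustrates reviewers, consider PPV in its simplest form: the share of trigger fires that turn out to be real adverse events. The sketch below uses hypothetical numbers of our own choosing.

```python
# Hypothetical illustration of a single trigger's positive predictive value (PPV):
# the fraction of trigger fires confirmed as adverse events on chart review.

trigger_fires = 200       # times the trigger alerted during the review period
confirmed_events = 30     # fires confirmed as adverse events by reviewers

ppv = confirmed_events / trigger_fires
print(f"PPV = {ppv:.2f}")  # 0.15 -> 85% of review effort is spent on noise
```

Even doubling that PPV would still leave most of the review effort spent on false positives, which is why trigger-by-trigger tuning so often disappoints.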

What is needed is a broad, consistent set of all-cause harm triggers that are clinically effective, operationally efficient, and optimized with “signal-to-noise” enhancement technology built from years of R&D and iterative testing grounded in deep clinical and informatics expertise.

Pascal’s Virtual Patient Safety (VPS) solution offers that.

Misconception #3:  Predicting harm is optimal and would solve the patient safety problem.

Given the emergence of EHR data liquidity, a notion that has been trendy over the last half-dozen years is that predicting patient harm would enable timely intervention to avoid that harm.

However, this approach also faces questions that yield unsatisfactory answers:

  • If fine-grained, clinically validated adverse event outcomes derived from real-time EHR data (AE Outcomes) are not available, how will a machine learning, AI, or advanced analytic model be trained?
    • Answer:  With proxy data, which are unsatisfactory; indeed, one must train a model on X outcomes data to predict X outcomes.
  • Without AE Outcomes, how will we know if an ML-AI model is clinically effective?
    • Answer:  We won’t; without ground-truth outcomes there is nothing to validate the model against (see the brief sketch following this list).
  • Even if we can predict patient harm X with a 0.99 c-statistic [highly accurate] and then successfully intervene, is that the most efficient use of resources?
    • Answer:  No.  The more efficient use of resources is to measure AE Outcomes, identify underlying common causes, and improve the process so that the predictor never needs to fire.
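
As a brief, hypothetical illustration of the second question above: a c-statistic (the area under the ROC curve) can only be computed against ground-truth outcome labels, i.e., the AE Outcomes themselves.

```python
# Hypothetical sketch: scoring a harm-prediction model's c-statistic
# (area under the ROC curve) requires ground-truth AE Outcome labels.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1, 1, 0, 1]                    # confirmed AE Outcomes
y_score = [0.2, 0.5, 0.8, 0.1, 0.9, 0.4, 0.3, 0.7]   # model-predicted risk

print(f"c-statistic: {roc_auc_score(y_true, y_score):.2f}")
# Without the y_true labels -- the AE Outcomes -- there is nothing to score against.
```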

Therefore, while Pascal sees predicting harm as augmenting mature patient safety and risk management programs, it is no substitute for first measuring and managing AE Outcomes. Indeed, health systems cannot and will not predict their way to performance.  Retrospective, concurrent, and prospective data and insight are all needed.