Why monitor something you're not going to do anything about?

This study of a medical alarm rollout is really interesting:

When rolling out new cardiac telemetry monitoring equipment in 2008 to all adult inpatient clinical units at Boston Medical Center (BMC), a Telemetry Task Force (TTF) was convened to develop standards for patient monitoring. The TTF was a multidisciplinary team drawing from senior management, cardiologists, physicians, nurse practitioners, nursing directors, clinical instructors, and a quality and patient safety specialist.

BMC's cardiac telemetry monitoring equipment provides configurable limit alarms (we know this as "thresholding"), with four alarm levels: message, advisory, warning, crisis. These alarms can be either visual or auditory.

As part of the rollout, TTF members observed nursing staff responding to alarms from equipment configured with factory default settings. The TTF members observed that alarms were frequently ignored by nursing staff, but for a good reason - the alarms would self-reset and stop firing.

To frame this behaviour from an operations perspective, this is like a Nagios check crossing a threshold and firing a CRITICAL alert, the on-call team member receiving the alert, sitting on it for a few minutes, and the alert recovering all by itself.
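As a rough sketch of that dynamic (the metric, thresholds, and samples below are invented for illustration, not taken from the study or from any particular Nagios configuration):

```python
# Hypothetical sketch: a threshold check that fires and then "self-resets",
# the operations analogue of the self-resetting telemetry alarms above.
# Thresholds and metric samples are made up for illustration.

WARNING_THRESHOLD = 80   # e.g. % CPU
CRITICAL_THRESHOLD = 95

def check(value):
    """Classify a sample the way a Nagios-style limit alarm would."""
    if value >= CRITICAL_THRESHOLD:
        return "CRITICAL"
    if value >= WARNING_THRESHOLD:
        return "WARNING"
    return "OK"

# A transient spike: the check goes CRITICAL, then recovers on its own
# before anyone has acted on the page.
samples = [70, 85, 97, 96, 88, 72]
for minute, value in enumerate(samples):
    print(f"t+{minute}m  value={value:3d}  state={check(value)}")
```

From the on-call person's point of view, acting on the page immediately or sitting on it for a few minutes produces exactly the same outcome, which is how the "just ack it" habit takes hold.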

When the nursing staff were questioned about this behaviour, they reported that more often than not the alarms self-reset, and answering every alarm pulled them away from looking after patients.

A pattern I have witnessed with disturbing frequency during after-incident and post-mortem reviews is a discussion that goes something like this:

"It says here that this CPU alert had been going off for three hours."

"Yeah, we ack'd it."

"GAH! But then the service went down an hour later!"

"Yeah, but that alert goes off all the time. We get thousands of them."

"But this one was important. Make sure this doesn't happen again!"

Then the operations team goes into some high-alert mode for a few weeks, until either someone realizes the whole thing is a silly waste of time, or some other crisis comes along and diverts everyone's attention elsewhere.

If that is going to continue to be the approach, then this study suggests that you might as well stop monitoring that stuff. The false positives are a waste of everyone's time, and the alerts that actually might matter aren't being caught anyway.

The more interesting part comes a bit later.

The key takeaway the authors of the article make clear is this:

Review of actual alarm data, as well as observations regarding how nursing staff interact with cardiac monitor alarms, is necessary to craft meaningful quality alarm initiatives for decreasing the burden of audible alarms and clinical alarm fatigue.

Regardless of whether you think any of the methods employed above make sense in the field of operations, it's difficult to argue against collecting and analysing alerting data.
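The analysis doesn't need to be sophisticated to be useful. As one possible starting point (the log format and outcome labels below are my own invention, not from any specific monitoring tool), a simple tally of how often each alert self-recovers versus how often someone actually had to act makes the alarm-fatigue conversation concrete:

```python
# Hypothetical sketch of analysing alert history: for each check, count how
# often an alert recovered on its own versus how often a human intervened.
# The log format (check name, outcome) is invented for illustration.
from collections import Counter

alert_log = [
    ("cpu_load",   "self_recovered"),
    ("cpu_load",   "self_recovered"),
    ("cpu_load",   "acted_on"),
    ("disk_space", "acted_on"),
    ("cpu_load",   "self_recovered"),
]

counts = Counter(alert_log)
checks = {check for check, _ in alert_log}

for check in sorted(checks):
    recovered = counts[(check, "self_recovered")]
    acted_on = counts[(check, "acted_on")]
    total = recovered + acted_on
    print(f"{check}: {total} alerts, "
          f"{recovered / total:.0%} self-recovered")
```

A check that self-recovers most of the time is a strong candidate for a higher threshold, a longer delay before paging, or no page at all.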

This option seems like a better alternative to simply declaring monitoring bankruptcy, but only if teams are willing to actually look at, and do something intelligent with, all the data that is being collected.
