For Change's Sake

12 min read December 30, 2017 at 6:45am

I periodically get asked if I'm aware of the evidence surrounding routinely changing intravenous catheters (aka cannulae aka "drips") vs leaving them in until they need changing because they start to get sore or red or otherwise stop working (known as changing them "when clinically indicated"). I find that it's usually in the context of someone having read about these guidelines and wanting to change their practice away from doing routine resites.

I think it's important, so I interrupted my holidays briefly to send a mini-essay as an email back about it, which I'm editing to put here, so I can reference it in future.

Evidence in Infection Control

It's important to recognise that there's not really very much evidence for what we do in infection control - and certainly not in the sense that cardiologists think about evidence, with randomised, controlled trials of 20,000 people. Lots of advice is based on understanding some microbiology, some human behaviour, and some aspects of building design and even industrial process control (it's actually part of what I like about infection control).

These same cardiologists, as well as the surgical teams and (I've no doubt) the anaesthetists, have had someone from infection control rouse on them for wearing their scrubs in the coffee shop. (Actually, anaesthetists may not have experienced this, as they don't drink cafeteria coffee.) There's no good-quality, clinical-outcome-focused evidence that I'm aware of that it reduces surgical site infections. There's a little bit of data that it reduces bacterial counts in the operating theatre, but what does this mean for patients?

Similarly, Bare Below the Elbows has a very limited evidence base in preventing infection. A couple of abstracts at meetings in the USA (where they wear white coats - you really should stop that, America), again looking at bacterial counts on sleeve cuffs rather than hard patient outcome data, are hardly compelling.

"Ah, but biological plausibility!" I hear some of you shout at your monitor. For those not familiar, it's kind of a medical tarting-up of the precautionary principle which says "basic science says this makes sense in terms of improving outcomes, so maybe we should do it, even in the absence of robust clinical evidence".

I'll come back to that in a bit.

Risks of Intravenous Catheters

Indwelling medical devices are major causes of healthcare-associated infection (HAI). Indwelling devices include urinary catheters, central lines, surgical drains and various other things that you have done to you while in hospital (and sometimes outside of it), but IVCs are by far the most commonly used.

One of the most serious sorts of HAI is a bloodstream infection - bacteria in your blood, which is normally sterile. Unsurprisingly, having a bit of plastic connecting the outside world with your bloodstream - which gets handled multiple times per day so people can administer things into your blood through it - is a major factor in the risk of bloodstream infections.

The classic work on catheter-related infections is by Dennis Maki, which suggests a risk of about 1/1000 IVCs developing bloodstream infections.

The bloodstream infections I'm most involved with managing are the ones due to Staphylococcus aureus.

Scanning EM of Staph aureus
Scanning electron micrograph of Staphylococcus aureus - CDC Public Health Image Library, via Wikimedia Commons, public domain.

Staph aureus lives on the skin of somewhere between a third and half of people, and can make up part of the billions of bacteria living on your skin. Unlike most of these "skin flora", though, it can cause serious human disease. Australia has a national surveillance program for Staph aureus bloodstream infection (ASSOP). The most recent data found a mortality rate of 15-20%, and the largest peer-reviewed Australian series from 2009 found similar figures. Of hospital-onset Staphylococcus aureus bloodstream infections (SABSIs from here), 1/3 were due to indwelling devices. ASSOP wasn't able to ascertain causality, but it's safe to say that a greater proportion of healthcare-associated than hospital-onset infections were device related (this is complicated, and the post is already too long).

I'm not able to easily find contemporary data on the cost to the health system of a SABSI, but it has been estimated overseas at €16,500, and between $US 10,000-25,000. I vaguely remember an Australian paper quoting about $20,000 per episode but can't find it at the moment. These costs include a mandatory 2 weeks (or more) of intravenous antibiotics, a PICC line through which to administer them (itself a risk of infection), an echocardiogram +/- TOE (which requires sedation, with the attendant risks), and then all the follow-on costs of this care.

So why change them?

If you leave an IVC in for too long, it will get inflamed (red, sore, swollen). This is called phlebitis (inflammation of the vein), and is not serious. Much of the pathology is inflammatory (ie: your body's response to having a plastic straw in your vein) rather than infection from the skin flora. It can be treated simply with a warm compress, with anti-inflammatories (if safe) or by pretty much ignoring it. There is little data to suggest that phlebitis increases the risk of bloodstream infection.

In a small proportion of cases, infection can develop - this is known as cellulitis (infection of the skin and skin structures). This, too, is rarely serious, and can be treated with a few days of oral antibiotics. Cellulitis probably increases the risk of BSI a little, but again, hard data is scant.

So the rationale is that by routinely changing the IVCs, you will get them before the phlebitis develops, and hopefully prevent local infection, which may reduce your risk of bloodstream infection a little. This is based on work on phlebitis (again by Dennis Maki) in the 1990s, so is hardly contemporary evidence.

Note that a Victorian series found that 2/3 of healthcare-associated infections with onset <48hrs were also device related, so clearly insertion technique is important in the development of infection, as well as just length of device dwell time.

Why not change them?

So having established there's biological plausibility (gasp!) for changing them, why would you not want to?

Poking fewer holes in patients' arms is obviously a nice thing to aim for. In some ways, it surprises me a little that Australia is leading this research, given we've not (yet) moved to the patient-experience-above-all model of healthcare quality, which would aim to not annoy patients by routinely replacing their IVCs (although "this is what we do to try to stop them from getting infected" wouldn't have been a difficult sell, I wouldn't have thought).

Given that 70-80% of admitted patients have an IVC, there are significant cost savings associated with not routinely replacing them (about $7 per replacement, factoring in staff time as well as materials), and obviously this frees up clinical staff for other tasks. But you can replace a lot of $7 IVCs for the cost of a single SABSI (see above), so you'd want to be pretty confident that leaving them in doesn't increase the risk of serious infection.

What do the guidelines say?

Because the evidence around this is pretty weak (more on that shortly), the guidelines are suitably wishy-washy. The UK NICE guidelines[pdf] don't specifically recommend against changing IVCs, but nor do they recommend routinely changing them. The American CDC 2011 guidelines[pdf] do still recommend routine replacement, but note that the "clinically indicated" change is an unresolved issue.

Well, what about the evidence?

Well, thankfully*, there's a Cochrane review of the trials in this area. [NB: This link seems to have a redirect error for me; I'm not sure if it's my setup or a problem with Wiley's library]. It's a meta-analysis of seven trials, four of which had as their primary authors two of the Cochrane review group. I have some concerns about this - unsurprisingly, the Cochrane conclusions mirror those of the primary trials pretty closely.

One of the authors of the review (and of some of the meta-analysed papers) is Professor Claire Rickard, from Griffith Uni, who is the Principal Director of the Alliance for Vascular Access Teaching and Research (passes the academic acronym test - AVATAR). You can see from this page that the results of the research are spun primarily as cost savings and improved patient comfort, and:

that leaving a functioning catheter in place until it was no longer required made absolutely no difference to the onset of infection or other complications

So let's unpick that a little.

The primary endpoints analysed in the Cochrane review are:

  • Catheter-related BSI
  • Thrombophlebitis
  • All-cause BSI
  • Cost

and the secondaries are:

  • Infiltration/tissuing
  • Other device failure
  • Number of resites
  • Local infection
  • Pain
  • Patient satisfaction
  • Mortality

This all seems reasonable so far.

The meta-analysis looked at 7 trials, a total of 4895 patients. The Cochrane evidence statement:

"Evidence quality high for most outcomes but was downgraded to moderate for the outcome catheter-related bloodstream infection… due to wide confidence intervals which created a high level of uncertainty around the effect estimate. CRBSI was assessed in five trials"

As I've already mentioned, I consider CRBSI to be the most clinically important endpoint, and already there's a statement that the evidence isn't optimal (and it's only reported in 5 of the 7 trials analysed; four of which were by the meta's authors). The event rates were 1/2365 vs 2/2441:

Forest plot of CRBSI results

from Webster et al, Cochrane Datab Syst Reviews, 2015

So that's a logarithmic scale on the Forest plot. The 95% confidence interval is 0.08-4.68 - ie: so broad that no conclusions can be drawn. So if the event rate was 3/4806, or about 1/1600 (note that this is pretty close to the figure from Maki of 1/1000), how many people in the trial would you need to have enough statistical power to find an event-rate difference between the groups?
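To see just how uninformative those event counts are, you can run a crude risk ratio calculation on the raw numbers yourself. This is a minimal sketch using a single-table Wald interval on the log scale; the Cochrane review pools trials with Mantel-Haenszel weighting, so its published interval (0.08-4.68) won't match this crude version exactly, but the "conclusion-free zone" is just as obvious.

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio with a rough 95% Wald confidence interval on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    # standard error of log(RR) for a single 2x2 table
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# CRBSI events pooled across the trials: 1/2365 (clinically indicated)
# vs 2/2441 (routine change)
rr, lo, hi = risk_ratio_ci(1, 2365, 2, 2441)
```

With so few events, the interval spans roughly two orders of magnitude and straddles 1 comfortably - which is exactly why no conclusion about safety can be drawn from it.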

I'm not a biostatistician, but my back-of-the-envelope attempt at the sample size suggests you'd need 7100 (80% power), 8871 (90%) or 10500 (95%) enrolled patients to detect a meaningful difference between the groups for an outcome this rare. This means probably at least twice as many patients as were included in the entire meta-analysis.
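For the similarly non-biostatistician reader, here's what such a back-of-the-envelope calculation looks like, using the standard normal-approximation formula for comparing two proportions. The rates plugged in below are hypothetical illustrations, not the exact assumptions behind my figures above - the answer swings wildly depending on the baseline rate and effect size you assume, which is why back-of-envelope numbers vary so much.

```python
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients needed per arm to detect event rates p1 vs p2."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_b = z.inv_cdf(power)          # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2) + 1

# e.g. to detect a halving of a 1/1600 event rate (hypothetical effect size):
baseline = 1 / 1600
n = n_per_group(baseline, baseline / 2)
```

Whatever assumptions you pick, the answer for an event this rare comes out vastly larger than the 4895 patients in the meta-analysis.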

Thankfully, Prof Rickard's group is leading an expansion of her previous randomised controlled trials - the (also law of acronym compliant) One Million Global PIVCs. This will hopefully report in the next year or two, which may provide some better answers to this very important question.

Unsurprisingly, the evidence supports that you get fewer tissued cannulae if you routinely resite them, and that changing when clinically indicated means fewer cannulae used, and lower cost.


Here's the forest plot for phlebitis:

Forest plot of phlebitis results

from Webster et al, Cochrane Datab Syst Reviews, 2015

As an aside, the BSI results were described in the text as "a 39% reduction in BSI rates, but the confidence intervals were wide" while this one was "no significant difference" - despite the interval being much narrower.

Even leaving aside the interpretation, however, my primary reason for disagreeing with the findings is that

Phlebitis is not a clinically important safety measure.

Most of the time it's not an infection. It very likely doesn't increase your risk of bloodstream infection (on the very scant evidence we have). It is easily treated with cheap drugs for a couple of days, and warm compresses.


The trials that have been done to date (and this includes the meta-analysis) haven't yet shown to my satisfaction that moving away from 3-day change is safe from the point of view of Staph bloodstream infections. I make no apology for wanting to prevent them; they make up a substantial chunk of my workload, and they are a very bad thing to have.


Precaution about the precautionary principle

I struggle to understand why these recommendations have been embraced as they have - the Australian College of Nursing has put them first on their list of recommendations for the Choosing Wisely campaign. Infection control is, as I've mentioned above, sadly devoid of evidence, and we have a long history of recommending things on the basis of biologic plausibility, inference from basic science, and (occasionally) the last refuge of the scoundrel, "common sense".

Why then, when actual evidence comes along (hooray!), do people race to uncritically embrace it, without stopping to read beyond the abstract or the headline, and ask the questions "does this trial apply to my patients?" and "is it asking the right question?". Saving the health system money and time is great and all, but if ever as infection control practitioners we were going to strongly apply the precautionary principle, it should be about SABSIs instead of the interminable discussions I see people have on the infection control mailing lists about the risks associated with dust from files that podiatrists use to grind corns off people's feet (this is a true story).

Secondary benefits

My other take on this issue is that I want staff thinking about IVCs. I'm sure there are a number of times people say "we need to change this IVC", and the response is "well, let's just take it out and put them on tablets". The best way of preventing IVC infection is to not have IVCs, and by putting a system in place whereby people are at least going to think about the IVC every 72-96 hours, hopefully some of them can come out. Human nature also being what it is, this acts as a "speed bump", so patients hopefully won't be left with IVs in for days and days without anyone looking at them. The RCTs all had trial staff carefully inspecting cannula sites for phlebitis. Non-trained, non-trial staff (who, remember, are wanting to adopt non-routine resites because they're too busy) aren't going to pay the same degree of attention to the drips, and removing one way of getting people to look at them at least once every four days seems a bad idea to me.

Anecdata in medicine

Individual patient stories have a powerful anchoring effect on people's clinical practice, and have been responsible for a lot of delay in implementing sensible policies. This is natural, as clinicians are generally patient-focused. Hopefully, the fact that I've just written a couple of thousand words on the topic will let you think that I'm being illustrative, rather than anchored.


Entry in a patient's chart from when I was an advanced trainee, by the night ward call RMO. Our hospital's policy was for 72hr change, and this was quite a few years back (2010), before the meta-analysis was even published.

"Called by nursing staff to routinely resite IVC. There is evidence [reference to this paper - an RCT of 362 patients] that this practice is unnecessary, therefore I won't resite the IVC"

I was seeing the patient because she'd developed a Staphylococcus aureus bloodstream infection.

As far as we could tell, from that IVC (there was some cellulitis, and it was dwell day 6).

She died.


Don't be that guy.


Image credit: Laboring Mommy, by Bart Heird, via Flickr, CC-BY-NC-ND-2.0