Wednesday 22 April 2015

Rational regulation - where I get touchy about no-touch taps

Let me start with a quotation from easily the most useful medical textbook I have ever read - one that has actually helped me save lives:
(Speaking of operating in a developing country) 'In an emergency you may even have to operate by the light of a hurricane lantern. The light will attract insects, and these will fall into the wound, but even so they are unlikely to influence the patient's recovery.'

The point being made here depends upon the concept of marginal risk reduction. Using surgery as the example, there are a number of ways you progressively reduce the risk to the patient.
Firstly, choose your cases carefully: not opening an abdomen that doesn't need opening is always going to be safer, and of all the potential cases for laparotomy, 40% may not need surgery.
Pre-op, have a checklist: a further 15% risk reduction, say.
Use a good anaesthetic technique: 15% more.
Operate with sterile equipment: another 15% risk reduction.
Use good surgical technique, handling tissues gently: another 10%.
Sterile gloves: 2%. An experienced assistant: 2%. And so on.
Each addition gives a marginal risk reduction which is subject to the law of diminishing returns.
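To see why the returns diminish, note that each measure can only remove a share of whatever risk is still left after the measures before it. Here is a minimal sketch in Python, using the illustrative percentages above (rough figures for the sake of argument, not measured data):

```python
# Each measure removes a fraction of the risk that REMAINS after the
# previous ones, so the absolute gain shrinks with every addition.
# The percentages are the illustrative figures from the text, not data.
measures = [
    ("Careful case selection", 0.40),
    ("Pre-op checklist",       0.15),
    ("Good anaesthesia",       0.15),
    ("Sterile equipment",      0.15),
    ("Gentle tissue handling", 0.10),
    ("Sterile gloves",         0.02),
    ("Experienced assistant",  0.02),
]

residual = 1.0  # baseline risk, normalised to 1
for name, fraction in measures:
    gain = residual * fraction   # absolute risk removed by this step
    residual -= gain
    print(f"{name:24s} removes {gain:.3f}, residual risk {residual:.3f}")
```

By the time you reach sterile gloves, the absolute gain is under 0.01 of the original risk; everything added after that is operating at a very thin margin indeed.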
So what if a moth falls in the wound? (Desirable if it doesn't, of course, and believe me, fly screens are much more useful in Africa than here.) It has only a minor impact on outcome when all the rest is in place.
Using single-use instruments will reduce risk further, but by nothing like as much as sterility. Similarly, longer scrub techniques, newer antiseptics, laminar air flow, hotter water, plastic floors, de-cluttering and no-touch taps will all help, but in ordinary primary care minor procedures not detectably so when the rest is in place.
It is at these margins, when already high levels of safety are in operation, that I start to question the regulatory regimes. For new builds or major renovations, no problem; but when the margin of potential further benefit is small, the risk of generating harm by perturbing an existing system becomes significant. If the process of implementing a regulation carries a greater risk than the potential marginal risk reduction it could bring, it is stupid to implement the regulation.

Let me relate the story of my tap.
At 5am on 28/4/2014 I was called to the surgery by our cleaners: one of four new automated taps, changed only to meet CQC regulations and installed three days earlier, had flooded my consulting room, the corridor and part of a neighbouring room with over 30 litres of water, rendering it smelly and unusable for over a week. We had to employ a specialist cleaning company to make my room habitable again. Prior to this, my single-lever mono basin mixer tap had functioned for over 20 years (actually 7,390 days) without flooding, Legionnaires' disease or any attributable infection in well over 1,000 minor procedures carried out in my office. The introduction of a new tap would appear to be at least 615 times (7,390 / [3 × 4], comparing one incident in 12 tap-days against none in 7,390) more likely to cause an incident that damages patient care than the old tap in usual operation! As one of our lovely, wiser-than-CQC cleaners put it, 'If it ain't broke...'

In order to be sure of overall patient benefit from changing a system, the ongoing risk to patients from the old system must be greater than the risk to patients from the new system plus the risk to patients from the process of change-over. However, from our experience, it would seem that the maximum absolute risk reduction achievable by changing taps (which equals the absolute risk to patient care from the old taps) is likely well over 600 times smaller than the risk attributable to the process of changing the taps.
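That decision rule is simple enough to write down. A sketch, using the post's own crude figures (one flood in 12 tap-days of the new installation versus none in 7,390 days of the old tap; treating the old tap as having had 'at most one' incident is what gives the 'at least' in the ratio):

```python
# Change a system only when the ongoing risk of the old system exceeds
# the ongoing risk of the new system plus the risk of the change-over.
def change_is_justified(risk_old, risk_new, risk_changeover):
    return risk_old > risk_new + risk_changeover

old_days = 7_390       # old tap: no incidents in 7,390 days
change_days = 3 * 4    # new taps: one flood in 12 tap-days

old_rate_bound = 1 / old_days     # upper bound: at most one incident
changeover_rate = 1 / change_days

print(changeover_rate / old_rate_bound)   # ~615.8: the change-over was
                                          # at least ~616x riskier per tap-day
print(change_is_justified(risk_old=old_rate_bound,
                          risk_new=0.0,   # even crediting the new taps
                          risk_changeover=changeover_rate))  # -> False
```

Even under the most generous assumption, that the new taps carry zero ongoing risk, the change fails the rule on these numbers.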
Should CQC have required that we change taps only at the end of their operational lives (i.e. when we have to incur the risk of change anyway)? I wondered whether CQC had risk-assessed the implementation of its regulations by balancing achievable marginal risk reduction against the risk of system change. I asked Prof Steve Field at the RCGP Conference 2014, but I don't think he really 'got' the question (possibly I didn't ask it very clearly), so I still don't know. I found nothing on the CQC website about this.

Now maybe we were just exceptionally unlucky in suffering the only incident in the installation of at least 2,400 (4 × 600) taps, or maybe my old tap was actually far more dangerous than direct observation suggests, but both seem very unlikely.

This may seem a trivial example, but if it is generalisable there is a real chance that systematic damage is being done up and down the country in CQC's name. Tick-box regulation ('Have you got this new super tap? Y/N') is simple, but at best it is simplistically naive and, if it has not accounted for marginal risk properly, at worst it is negligent.

However, the risks are not confined to direct service delivery; there are opportunity costs and unquantifiable risks (increasingly real as the GP recruitment crisis unfolds), such as the negative effect over-regulation might have on morale. We had to spend practice resources on these changes, and as the marginal risk reduction is tiny, so is the benefit/cost ratio, which may have been far smaller than that of, say, spending the money on increased staff. To spend on improperly assessed regulatory requirements is to waste NHS resources.

I would be grateful if CQC could reassure us that it routinely assesses the risk to patient care resulting from the process of implementation of its regulatory requirements and only insists on implementation where the overall risk is shown to be lower. 

Monday 6 April 2015

Taking exception

Over recent years there has been a growing trend to use the UK General Practitioner (GP) Quality and Outcomes Framework* (QoF) exception reporting rates as a quality measure or standard. For example, in Leicester, one entry requirement for the Primary Care Diabetes (Enhanced) Service (a scheme to reward more specialised community diabetes care) is:
The specification requires the provider(s) to (show):
  • Evidence of QoF low exception reporting (less than 10%)
In the annual quality review of our practice, the overall exception reporting rate is routinely reported as a quality measure and compared with the Clinical Commissioning Group (CCG) average, but with no explanation of what this is actually supposed to indicate.
This marks an unwelcome development as I believe it betrays a misunderstanding of the topic. This blog is about why.

For the uninitiated, some explanation of the system is necessary. QoF payment is based on points scored in a range of disease areas for achieving certain quality standards. The number of points available for each area, such as how well blood pressure (BP) is controlled, is fixed. Points are earned across a defined percentage range of eligible patients: the actual score is calculated from where the year-end proportion of eligible patients meeting the audit criterion falls within that range. So if 10 points are available for a disease area and the scoring range is 50-90%, and 70% of eligible patients meet the audit criterion, say of good BP control, 5 points (10 × (70 − 50)/(90 − 50)) are scored.
So for the above QoF area a practice scores nothing if it does less than half the work and nothing more if it meets the QoF criterion in over 90% of those potentially eligible.
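The scoring rule is easy to capture in a few lines. A sketch only (in Python), matching the simplified description above; real QoF indicators have their own thresholds and rounding conventions not modelled here:

```python
def qof_points(achieved, eligible, points_available, lower, upper):
    """Points for one indicator: linear within the lower-upper scoring
    range, clamped at zero below it and at the maximum above it."""
    achievement = 100 * achieved / eligible
    fraction = (achievement - lower) / (upper - lower)
    return points_available * min(max(fraction, 0.0), 1.0)

# The worked example above: 70% achievement, 50-90% range, 10 points
print(qof_points(achieved=70, eligible=100, points_available=10,
                 lower=50, upper=90))   # -> 5.0
```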

So where does exception reporting come in? The key is in the phrase 'eligible patient'. Individual patients who might be suitable for assessment of care in each QoF disease area can be deemed ineligible and exception reported. There are various valid reasons for this. For example, a person whose poorly controlled BP does not meet the required target value may also be suffering from a terminal cancer, in which case worrying about their degree of BP control is hardly relevant; such a person could be exception reported as 'unsuitable'. Someone else may simply refuse to be followed up, making it impossible to meet the care standard; they can be exception reported as 'informed dissent'. These patients are then not counted when it comes to assessing points.
The rationale for this is to level the playing field between practices, since the number of patients in these groups will vary year on year and with the population served, and to avoid incentivising inappropriate treatment.

So why the fuss about exception reports above?

Suppose you have two practices, each with 100 patients who are potentially eligible for a particular QoF indicator. Just before the year end one has met the criterion in 83 patients and the other in 89. Both are a little short of the maximum 90% target. In the first practice there are 8 patients who could be legitimately excepted, an 8% exception reporting rate. The practice excepts them, achieving 83/92 or 90.2%, and thus gets the maximum number of points available for that area.
In the second, they except two, a 2% exception reporting rate, achieving 89/98 or 90.8%. The rub is this: both get the same maximum QoF points, but the second practice has treated 6 more patients to target. So you can see why having a low exception reporting rate might be seen as a 'good thing'. But does a low rate actually indicate a better quality of care?
Suppose that in the two practices the maximum number of patients who could reasonably be excepted was 8 and 2 respectively. In this case the first practice failed to treat to target 9 (= 92 − 83) of the patients it could have treated, as did the second (= 98 − 89). So despite a fourfold variation in exception reporting rates they under-treat the same number of people. (The second has treated 6 more people to target, but only because it was easier to do so; exception reported patients are frequently more complex or less compliant. So it is perfectly reasonable that both get maximum QoF points in this area, despite the variation in exception reporting rate.)
But suppose instead that both practices could have excepted 8 patients. Then the second practice is performing better; it just didn't need to except as many to get maximum points, because once you have achieved maximum points there is no point in taking exception reporting further. The problem is you cannot know this just by looking at exception reporting rates; you need to know the number who could have been exception reported, and this latter figure is never assessed in QoF.
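Replaying the two-practice example numerically makes this mechanical (the figures are those used above, against the earlier illustrative 10-point, 50-90% indicator):

```python
def points(achieved_pct, available=10, lower=50, upper=90):
    frac = (achieved_pct - lower) / (upper - lower)
    return available * min(max(frac, 0.0), 1.0)

# (treated to target, excepted) out of 100 potentially eligible patients
for label, treated, excepted in [("First practice", 83, 8),
                                 ("Second practice", 89, 2)]:
    eligible = 100 - excepted    # exceptions shrink the denominator
    pct = 100 * treated / eligible
    print(f"{label}: {treated}/{eligible} = {pct:.1f}%, "
          f"{excepted}% exceptions -> {points(pct):.1f} points, "
          f"{eligible - treated} eligible patients not treated to target")
```

Both lines show the maximum 10.0 points and 9 eligible patients not treated to target, despite the fourfold difference in exception reporting rate; nothing in the output distinguishes the two scenarios discussed above.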

The point is that both practices have at least met their contractual obligations to the same degree, but one might be over-performing if its exception reporting rate could have been higher. So, on their own, exception reporting rates tell you nothing about quality of care.

One might argue that, despite this, downward pressure on exception reporting will help keep coverage rates up. However, it cannot be assumed that this is necessarily a good thing, as it may result in over-treatment, which itself carries hazards of which we are increasingly aware.

A better strategy to keep treatment rates high would be to raise the top of the QoF target range, and this is what has happened as QoF has gone on. But even this may not be wise, given that the dis-benefits are bound to increase and, as the figures above show, the potential improvements are marginal.
In our example, suppose you had to hit 95% to get maximum points. Our first practice would indeed have to boost performance from 83 to 88 patients, as it could except no more; our second, if it could except more, could except 7 to achieve the 95% target. Its exception reporting rate would jump more than threefold to 7% in response to the change in the top of the target range from 90% to 95%, but its quality of care is no different; it is just making real its over-performance under the old 90% target. On its own, exception reporting cannot be used meaningfully to compare practices, or to assess a single practice over time when the target range or audit criteria have changed.
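The effect of raising the top of the range can be checked the same way; a small search (again with the post's illustrative numbers) finds the minimum exceptions each practice would need to reach 95%:

```python
# Minimum exceptions needed to reach a target, out of 100 potentially
# eligible patients (post's figures: the first practice treats 83 and
# can except at most 8; the second treats 89).
def min_exceptions(treated, target_pct, pool=100):
    for excepted in range(pool - treated + 1):
        if 100 * treated / (pool - excepted) >= target_pct:
            return excepted
    return None

print(min_exceptions(83, 95))  # -> 13: beyond the first practice's 8,
                               #    so it must treat more (88 of 92)
print(min_exceptions(89, 95))  # -> 7: the second's rate jumps from 2%
                               #    to 7% with no change in care given
```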

So the message is: exception reporting is not a meaningful quality measure unless you also know the maximum potential exception reporting rate for each indicator in each practice. So please, Area Teams and CCGs, stop using it as one!

Maybe the worry is that some practices over-exception report merely to hit QoF targets, and that patients for whom a care standard is appropriate are being denied it by being wrongly exception reported. If this is happening, it is a probity issue, not a quality-of-care one. CCGs: by all means query outliers in exception reporting rates and do some post-payment verification. All practices should be recording reasons against the exception codes to justify them. But please drop the uninterpretable exception reporting rates from your quality dashboards and service specifications.

* A quality incentive scheme, still responsible for a significant but shrinking proportion of GP remuneration, in which payments are made according to the points scored.