"The NHS will last as long as there are folk left with the faith to fight for it"
Aneurin Bevan

Saturday 4 August 2012

Ranking

How do you rank one hospital trust against another?

It is not easy. For a start there are lots of general criteria you can use, like clinical quality, financial probity and patient experience. It is difficult to combine, say, a financial ranking with a clinical ranking: which is more important, and how do you weight them? Even within a single criterion there are lots of sub-criteria. For example, in clinical quality you could include things like standardised mortality (HSMR), waiting times and hospital acquired infections (HAIs), but how do you combine them?
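
To see why the weighting matters, here is a small sketch in Python with two made-up trusts and made-up scores (out of 100, higher is better): change the weights and the "best" trust changes.

# Illustrative only: two made-up trusts with made-up scores out of 100
# (higher is better). The combined ranking depends entirely on the weights.
trusts = {
    "Trust A": {"clinical": 80, "financial": 40},
    "Trust B": {"clinical": 60, "financial": 75},
}

def combined_score(scores, w_clinical, w_financial):
    return w_clinical * scores["clinical"] + w_financial * scores["financial"]

for w_clin, w_fin in [(0.7, 0.3), (0.3, 0.7)]:
    ranked = sorted(trusts, key=lambda t: combined_score(trusts[t], w_clin, w_fin),
                    reverse=True)
    print("weights", w_clin, w_fin, "-> best is", ranked[0])

Neither answer is "right": the ranking is an artefact of the weights you happen to choose.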

Dr Foster attempted to create a ranking in 2009. To do this they used 16 indicators, some of which were numeric values (like the HSMR) and others were logical (a question to which there is a yes or no answer). The numeric values had different scales: some were percentages, others were whole numbers (for example, the number of HAIs). In some cases large numbers were "good", in other cases large numbers were "bad". All of this meant that Dr Foster had to manipulate these indicators before they could be combined. They scaled the numeric values so that all "good" values were negative (the more negative, the better) and all "bad" values were positive (the larger, the worse). (Their methodology says that high is "bad".) They also had to scale the values so that the highest value for one indicator was comparable with the highest for another (for example, a high HSMR may be 120 while a high HAI rate may be 10, so these two values would have to be scaled). Since outliers (exceptionally high or low values) would distort this scaling, they had to be excluded and outlier trusts given the new maximum (or minimum) value. Dr Foster also had to find a way to turn a good "yes" and a bad "no" (and vice versa) into a number. There was a lot of manipulation of these indicators.
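
As an illustration of the sort of manipulation involved, here is a sketch of the general technique in Python. The clipping bounds, the scaling onto a -1 to +1 range and the yes/no values are all invented for illustration; they are not Dr Foster's actual formulae.

# Sketch only: putting mixed indicators onto one common "high is bad" scale.
# The bounds, scaling and yes/no values are invented, not Dr Foster's.

def clip_outliers(values, low, high):
    # Cap exceptionally high or low values so they do not distort the scaling.
    return [min(max(v, low), high) for v in values]

def scale(values, higher_is_bad=True):
    # Rescale so the best value maps to -1 ("good") and the worst to +1 ("bad").
    lo, hi = min(values), max(values)
    scaled = [2 * (v - lo) / (hi - lo) - 1 for v in values]
    return scaled if higher_is_bad else [-s for s in scaled]

# An HSMR-like indicator: higher is worse, outliers capped at 70-130.
hsmr = scale(clip_outliers([85, 102, 150, 96], low=70, high=130))

# A yes/no indicator where "yes" is good: a good "yes" becomes -1, a bad "no" becomes +1.
audit = [-1.0 if answer == "yes" else 1.0 for answer in ["yes", "no", "yes", "yes"]]

print(hsmr)    # [-1.0, -0.24..., 1.0, -0.51...] - all on the same scale
print(audit)   # [-1.0, 1.0, -1.0, -1.0] - now comparable with the numeric indicators

Only once every indicator is on a common scale like this can you even start to combine them.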

Even after converting all the indicators into numbers of approximately the same scale, it was still difficult to combine them, because you have to decide how important one indicator is compared to another. For example, how important is the mortality rate compared to the HAI rate? So Dr Foster had to use some kind of weighting. The way they did this was through fuzzy logic using Bayesian ranking (look it up). With 16 different indicators, some of which can take a wide range of values, you would expect the final ranking to be unique, but the Dr Foster technique managed to give some hospital trusts the same value (and the same rank). In later years Dr Foster omitted the total "safety score" and the rank position, and instead chose to band trusts. This is a much better technique, but it prevents people from identifying the "best" trust, and so, although it is a fairer way to grade trusts, it is not a good way to get attention from the Press (who love rankings).
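
Banding itself is trivial once you have a combined score. Here is a sketch in Python (the scores and the number of bands are made up, and this is not Dr Foster's actual banding method): sort the trusts by score, then group them into broad bands rather than giving each an exact rank.

# Sketch only: band trusts into a few broad groups instead of publishing an
# exact rank. Scores and band count are invented, not Dr Foster's.
scores = {"Trust A": -3.2, "Trust B": 1.4, "Trust C": 0.2,
          "Trust D": -0.9, "Trust E": 2.7, "Trust F": -1.8}   # more negative = better

n_bands = 3
ordered = sorted(scores, key=scores.get)        # best (most negative) first
per_band = -(-len(ordered) // n_bands)          # ceiling division
bands = {trust: position // per_band + 1 for position, trust in enumerate(ordered)}

for trust in sorted(bands):
    print(trust, "is in band", bands[trust])    # band 1 is best, band 3 is worst

Every trust in the same band is treated the same, so there is no "number one" for a headline.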

It is very difficult to combine different values to get a ranking. However, Eoin Clarke has attempted to do this in a hamfisted way: he has produced his rank of the 143 Foundation Trusts according to their risk of "bankruptcy and dissolution". It is nonsense, and I left a comment to explain why his methodology is nonsense; he has chosen not to publish the comment, so instead I will explain it here. Monitor uses a finance rating of 1 to 5 and a red-amber-green (RAG) rating for governance. It uses two different scoring schemes precisely because it is difficult to compare finance and governance, yet Clarke does exactly that. He converts both the finance numeric rating and the governance RAG rating into scores between 0 and 10 and then adds them together. This means that he gives finance and governance equal weighting: does that make any sense? For example, breaching the 4 hour A&E target, or making more private income than the private patient income cap, are both regarded by Monitor as "breaches of authorisation" and will result in a lower governance rating. But are these breaches as bad as ending the year with a £50k deficit? Or ending the year with a £5m deficit? Apples and oranges. Clarke also does not take into account the way that Monitor has created its ratings. For example, a trust could make a surplus and meet its cost improvement plan (CIP), but if it has a historical debt (regardless of how quickly it is paying off this debt) it cannot get a 5 finance rating.
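
To see what equal weighting does, here is a sketch in Python of the kind of arithmetic Clarke's method implies. The exact 0 to 10 conversions below are my guesses, not his published figures, but the flaw does not depend on them.

# Sketch only: Clarke-style scoring. Convert Monitor's finance rating (1-5) and
# governance RAG rating onto 0-10 scales, then add them with equal weight.
# The conversions below are my guesses, not Clarke's published mapping.

def finance_score(rating):
    # Monitor finance rating: 1 (worst) to 5 (best), mapped onto 0-10.
    return (rating - 1) * 2.5

def governance_score(rag):
    # Monitor governance rating: red (worst), amber, green (best), mapped onto 0-10.
    return {"red": 0.0, "amber": 5.0, "green": 10.0}[rag]

def clarke_style_score(finance_rating, governance_rag):
    return finance_score(finance_rating) + governance_score(governance_rag)

# A trust whose finance rating is capped at 3 by an old DH loan and whose
# governance is amber after a single missed A&E target:
print(clarke_style_score(3, "amber"))   # 10.0

# A trust with genuinely weaker finances but no breaches of authorisation:
print(clarke_style_score(2, "green"))   # 12.5 - it "beats" the stronger trust

Under this (guessed) scheme a single breach of authorisation costs as much as a two-point drop in the finance rating, and a financially strong trust with one blemish ends up below a weaker one. That is exactly the situation of the trust I describe next.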

One of the trusts that Clarke lists in his worst 30 trusts is a trust I know well. This trust generates a surplus every year and has done so since 2007. It meets its CIP every year. It is in the top 15 of trusts according to the Reference Cost Index (ie it is an efficient trust). But since this trust took out a DH loan in 2005, it cannot have a 5 finance rating and the maximum it can get is a 3 (which it has, and Clarke's scoring regards this as middling). The trust has also missed its 4 hour A&E target (not regularly, but once is enough) and hence its governance rating is not green. Using Clarke's ranking this trust is in danger of bankruptcy and dissolution, but it is the strongest trust in its local health economy, and is nowhere close to bankruptcy.

Clarke's ranking is pretty much meaningless, so ignore it.
