The UK government promised a “drive towards an equal response to mental and physical health” in England as part of a five-year plan. Two years later, there is little sign that any progress has been made. Calls to improve mental health services peaked this month when former health secretaries spanning the past 20 years wrote an open letter criticising the government for offering “warm words” but no action.
There is a consensus that more funding should reach mental health care. But what should be funded, and exactly how? From April 2017, payments to adult mental health services must be linked to the quality and outcomes of the care provided. National guidance published by NHS England and NHS Improvement claims that this will improve care, “ensuring value for money and the best use of limited resources”. But there is worrying evidence that linking payment to performance might have little impact and, at worst, actually harm services.
How will payment for performance work?
The money flows are complex. Here is a picture showing key parts of the system.
At the top is the Treasury, which determines how much money health care receives, alongside all other public services. The Treasury does not directly decide how much goes to mental health, however; it receives advice from further down the hierarchy and uses it to set a total covering all areas of health.
The money flows on to the Department of Health, then NHS England, then Clinical Commissioning Groups, which are distributed across the country. These groups may choose to commission an NHS Foundation Trust to provide services, or a commercial company such as Virgin Care, which recently won a £700m contract.
Payment for performance will operate at this final stage, between commissioner and provider, and the details will be agreed locally between them. National guidance on how to implement the approach suggests that the chosen targets should be achievable yet stretching; should be informed by clinicians and people with experience of mental health problems; should avoid creating an adversarial relationship between commissioners and providers; and should be used for the “reinforcement of positive behaviour”.
Oxford Health NHS Foundation Trust is given as an example in the guidance. A fifth of its income will be linked to performance, which will include ensuring that people “improve their level of functioning”, as assessed by two measures.
One is the Mental Health Recovery Star, which tracks the progress of people who use mental health services on dimensions such as their ability to manage their mental health and their feelings of hopefulness. The measure is completed jointly by people who use mental health services and the staff providing their care (such as psychiatrists, psychologists or nurses).
The other is a checklist rated only by staff, used to track changes in symptoms such as depression and self-injury. The service has also promised its commissioners that it will ensure people live longer.
Does payment for performance improve services?
A recent systematic review of research found no evidence of impact when payment was linked to health outcomes, such as how long people live – which makes Oxford Health’s choice of outcomes puzzling. There was a small benefit when payment was linked to what services actually did, for example providing cancer screening or recording whether someone smokes, as these actions are much easier for services to control than the eventual consequences of care.
Given the national advice to involve people who use mental health services in decisions about which outcomes to measure, the choice of the Recovery Star is also curious. Recovery in the Bin, an increasingly influential group of people who use mental health services, singled out the measure as “redundant, unhelpful, and blunt”, and suggested an alternative that focuses more on the social causes of mental distress, which outcome measures often ignore.
Attaching high-stakes targets to measures tends to mean that the measures stop measuring what they are supposed to measure, because people cheat to hit the targets. The effect is so common that it has a name: Goodhart’s law. For example, ambulance services had a target of reaching patients within eight minutes for life-threatening emergencies. This led a third of services to fiddle their response times towards the target.
There are various subtle ways to cheat outcome measures in mental health, such as not bothering people who drop out of services with questionnaires to complete. People who drop out are less likely to have benefited from treatment, so excluding their answers from data analyses will flatter a service’s apparent outcomes. Given the complexity of people’s experiences and predicaments, reducing them to scores on questionnaires can feel absurd, so it might be easy to justify this kind of gaming if it results in more funding that could improve the care provided. Gaming seems especially easy for measures completed only by staff, who are under pressure from management to tick the right boxes.
Outcome measures have an important role to play in understanding and improving the care people receive, and should be tracked as part of care. But linking them to payment risks demoralising staff and making the measures meaningless. This seems a dangerous path to take given the state mental health services are in. A better solution might lie further upstream, at the Treasury, when it decides how much money is available for mental health.