First do no harm. It’s a basic tenet of medicine. When intervening in people’s lives – even with good intentions – we need to check whether we are doing them any damage. But sadly, this key principle from the medical profession has not been taken to heart by charities.
The third sector has become a common provider of social and health services in high-income countries, from children’s services to offender rehabilitation, but we know very little about how charitable organisations evaluate their activities.
While pressure on charities from government and funders to demonstrate their impact and performance has increased – particularly in light of recent scandals such as the one that hit Oxfam – research suggests that many organisations struggle to evaluate their activities.
My colleague Anders Malthe Bach-Mortensen and I recently undertook a systematic review of the studies in this area, to try and understand why organisations struggle so much to provide good quality evidence of the effects of their work.
A classic example of harm caused by a social intervention is the Cambridge-Somerville Youth Study, which began in 1935 and aimed to prevent delinquency in young boys through a system of what we would now think of as “wrap-around” care. It was a soundly designed, randomised trial which lasted for five years, during which time nearly 300 young boys took part in a programme that included fortnightly social work visits, tutoring, medical treatment, psychiatric help, summer camps, and other community activities.
But, when the boys were followed up 30 years later, it was found that the intervention made no difference to delinquency and that those who took part were actually more likely to have been arrested for crimes or be in receipt of psychological care. Likewise, the programme had no positive effects on health. The importance of this kind of rigorous, long-term evaluation was established – but has still not been widely taken up.
The hunt for evidence
Some recent initiatives are starting to try to address these problems. To help third sector organisations to meaningfully measure their own impact and to implement programmes and policies that are proven to work, many have turned to “evidence hubs” – though their take-up is not yet widespread. Initiatives such as the What Works Network, Project Oracle and Blueprints for Healthy Youth Development each produce databases of interventions and programmes that are “effective” both within and outside the third sector.
There have been efforts by think tanks, such as New Philanthropy Capital and campaigners such as Giving Evidence, to persuade charities to be much more open and transparent in sharing the results of the evaluations that they do. More formally, there are “data labs”, in which organisations share data about interventions and outcomes.
Meanwhile, data is also changing. The government is leading moves towards open data and the use of administrative datasets for research. In the third sector, the main initiative so far has been the opening up of data from the Charity Commission register, as well as information about government procurement.
One example is Britain’s Ministry of Justice, which enables charities to contribute data detailing the characteristics of the people in the criminal justice system with whom they have worked, the nature of the intervention, and the outcome. By linking cases to administrative data held by the Ministry of Justice, and making comparisons between a charity’s data and that of a matched and anonymised sample of offenders, it’s possible to determine how effective its activities might be.
These resources could help service providers – whether private, public or third sector – to adopt and implement programmes that are supported by sound scientific evidence with the highest potential for effectiveness.
Barriers to doing more
Our review of 24 studies, mostly in the health and social services sector, found several key barriers in the way of third sector organisations evaluating their work. Lack of financial resources was the biggest limitation, followed by the lack of technical capability and evaluation literacy which many organisations face. The studies also identified challenges in deciding upon and measuring appropriate outcomes – such as whether to focus on a particular health or welfare indicator.
Our review also found that many third sector organisations fail to consider issues surrounding effective implementation of their programmes, such as using a manual (a common blindspot) and understanding “true” intervention effects. For example, some parenting programmes have ten sessions. Understanding whether the mothers or fathers need to attend all of them is important for fully weighing up costs and benefits – and therefore for knowing what works, for whom, and in what setting.
But we also found that charitable organisations were more likely to evaluate their work when they were adequately supported and had an organisational culture that valued evaluation. Critically, motivation is a key factor if an organisation is to accept the importance of measuring outcomes.