I’ve been working at Charities Evaluation Services (CES; now part of NCVO) for almost 16 years, helping charities to assess their outcomes and impact – or assessing these for them as part of independent evaluations. When I started, I frequently had to argue the case that assessing any kind of change was important at all; ‘We know we are having an effect, why do we need to monitor it?’ was a not-uncommon complaint. Thankfully, many within the voluntary sector now realise that – while no one wants charities to become quasi-researchers – they do need to collect at least some basic information on the changes they create. Without this it is almost impossible for organisations to make well-informed decisions about how to allocate limited resources to achieve maximum impact for the people they serve.
I’m really proud of the changes CES has helped bring about in the sector since it was set up in 1990, and excited that, although many charities are still starting on their impact journey, others are collecting robust data that they use to make decisions that ultimately improve the lives of vulnerable people at the frontline. Many are also exploring more sophisticated methodologies, especially those that might help them attain the Holy Grail – showing evidence of their impact. ‘Impact’ means many things to many people, but is increasingly understood to be about the causal attribution of long-term change: assessing the extent to which change can be attributed to what we did.
Are RCTs right for the voluntary sector?
Randomised controlled trials (RCTs) are a form of experimental method that the UK voluntary sector is itself currently experimenting with for assessing impact. For example, the Big Lottery is funding a demonstration programme for youth offending projects that is being evaluated using experimental methods. In London, Project Oracle, an ‘evidence hub’ for children and young people’s projects, puts a high value on experimental methods as part of its evidence standards.
The use of RCTs holds exciting possibilities for our sector in its quest to show effectiveness. There is an ethical imperative to assess outcomes and impact – we have a duty of care to do the best we can for the vulnerable people we work with and certainly to do no harm – and we must welcome new tools in our evaluation armoury. However, RCTs are derived from the medical sciences. Their application in the field of social intervention is not without problems, as many commentators have noted. There is also doubt as to the practical applicability of RCTs to much of the voluntary sector, beyond just cost and complexity. For example, you usually need a very fixed, static intervention with a high throughput of users; both requirements may rule out a lot of voluntary organisations. And the nature of the methodology – which usually involves comparison against a group of users who don’t receive the intervention under study – may mean denying an intervention to people in need. This is unpalatable in many services, especially those working with people in crisis.
Do RCTs generate truly useful information?
As an evaluator, my interest is in creating information that can be used by charities and their funders. Creating evidence of the effectiveness of a type of intervention is vital within a programmatic evaluation, looking across multiple services within a linked programme of work. But for individual charities, the evidence created by RCTs alone may not be useful, or at least not sufficient. For example, an RCT may well not report in time: RCTs take a long time to set up and complete, and most organisations want information well within their funded period, as well as a regular drip-feed of evaluation data to help them improve their work as they go along. Also, while RCTs may be able to tell you conclusively that something does work, they often fail to tell you why it works – necessary information if you wish to replicate the intervention. If we still have to do our ‘normal’ monitoring and evaluation alongside RCTs, their use becomes less attractive, especially when our funds for monitoring and evaluation are usually (sadly) limited.
My new article, ‘Randomised controlled trials – gold standard or fool’s gold? The role of experimental methods in voluntary sector impact assessment’, discusses some of these issues in more depth. It also sets the scene for a forthcoming second article from CES on the potential of qualitative approaches to assessing impact, which may be more palatable for many in the voluntary sector. Watch this space.