Join the club; impact assessment is a common topic these days. Of course, there is still confusion about what we mean by impact, and for many, impact is simply synonymous with outcomes. Many others, however, define impact as longer-term change that is explicitly and causally attributed to an intervention.
Demonstrating cause and effect is increasingly expected
Although some voluntary organisations have yet to get to first base with assessing change, much of the UK voluntary sector has moved on. Regardless of how you define impact, it is no longer sufficient simply to say ‘we did some stuff and some changes happened’. We need to see some kind of sensible link between the two. For some, in particular government, the ‘gold standard’ of rigorous impact assessment has been taken to involve experimental methodologies.
The right paradigm?
But this type of approach may simply be the wrong paradigm for many social interventions. Experimental methods tend to work best for interventions where there is a clear primary cause and effect and where results are easily identifiable and measurable. For voluntary sector organisations, whose often complex services overlap with those of other providers, comparison groups may be less suitable. My recent article, Randomised controlled trials – gold standard or fool’s gold?, discussed the possibilities for using randomised controlled trials, one form of experimental methodology, within our sector.
The qualitative alternative
And there is an alternative to carrying out social experiments. In her new article, Dr Jean Ellis, brilliant Charities Evaluation Services (CES) associate and erstwhile CES colleague, discusses qualitative approaches to assessing impact (PDF, 120kb). Qualitative methodologies can also offer rigorous and sophisticated ways to assess impact. Some of the newer approaches, for example Qualitative Comparative Analysis, are quite complicated and I won’t try to explain them here for fear of embarrassing myself. But qualitative methods can also offer a more practical, and sometimes more useful and appropriate, option for the voluntary sector. These may focus on identifying what contributes to change and how, rather than scientifically attributing change.
What voluntary organisations can do
At the very least, an organisation that is already collecting good-quality outcomes data can try to place that data in the context in which the outcomes were achieved. This might involve a basic theory of change approach: identifying the outputs and processes that led to the outcomes, and the factors in the external environment that affected them.
Going further, voluntary organisations can take steps to test the causal links between the intervention and the outcomes they think it achieves, in order to make their evidence more credible. For example, they can look for supporting or disconfirming evidence and carry out more in-depth analysis to explain how the change came about.
In sum, I think there is a lot the UK voluntary sector can do to improve evaluation practice regarding causal attribution without necessarily reaching for the pot of experimental methodologies. Have a look at Dr Ellis’ excellent article and see what you think.
More from CES soon
In September, watch out for the third in CES’ series of summer papers.