Jurgen Grotz is research manager at the Institute for Volunteering Research, where he has worked for three years. He has more than 25 years' experience as a social policy researcher and project manager, with particular interests in community action, disability issues, volunteering, participation and engagement.
So, why do I hate impact measurement? Well, truth be told, I only hate it when it doesn’t follow good practice.
Literally, and I do not mean that figuratively, I cannot get through a week at work without someone asking me how to prove the impact of an activity, project or programme. Given that I am the research manager of the Institute for Volunteering Research, that shouldn't come as a complete surprise or, for that matter, annoy me. Generally speaking, it doesn't, because I like to know what difference volunteering makes. 'Course I do.
However, my bugbear is that I come across too many examples of impact measurement that are proposed for the wrong reasons, that try to prove assumptions that simply cannot be proved, or that are carried out without any intention of acting on the results.
On those occasions I am not just irritated, I feel insulted, not least on behalf of the hard-pressed practitioners and service users who, while providing vital services to the community, have given their time and honest responses to such ill-conceived impact measurements.

Want to measure real impact? Join us at Evolve 2015

I have been invited, for my sins, to chair a workshop at NCVO's Evolve conference, 'Measuring impact is a waste of time'. The panel members, Fazilet Hadi (RNIB), Sally Cupitt (CES) and Sarah Mistry (BOND), are simply excellent.
The call for good practice in impact measurement is of course not new. In 2013, seventeen organisations directly contributed to the Code of Good Impact Practice as part of the Inspiring Impact initiative, and before that our colleagues at Charities Evaluation Services had been producing good practice guidance on outcome and impact work for several decades. Yet I still bemoan the lack of application of the principles of the Code of Good Impact Practice, in particular:
Lack of purpose (Principle 2)
If you don't know what you want to change, you cannot effectively measure what difference you make. It is that simple. In some of the theory of change graphics I have come across (that's if projects have gone to the effort of producing one at all), you couldn't point to a purpose if you had a magic wand. Unfortunately, in those graphics it becomes brutally obvious that there is no logical link between what is being measured and the purpose of the organisation or the activity.
Wrong horse for right course or vice versa (Principle 4)
There are times when the resources allocated to an assessment are so woefully inadequate that it would be better not to start in the first place. In other instances, the nature of the project, or severe time restrictions on the assessment, limits how confident any findings can be, so much so that sticking with a few reasonable output measures, and perhaps a couple of outcome measures, would be far more suitable. And sometimes a sensible question is simply asked at the wrong time.
No determination to act (Principle 7)
As a researcher, I believe the only ethical reason to undertake impact measurement, or research for that matter, is to learn from it: to make changes where improvement is possible and to share sound new knowledge widely. I really do hate it when assessment activities are a mere adjunct, where neither funder nor recipient has any intention of changing anything as a result of the findings.
So, thinking about it, that's what I might put to the panel members at the workshop. I will ask them whether it resonates with their experience and what they think can be done about it. And maybe they know of good examples, so that my next blog post can be titled: three reasons why I love impact measurement.