Evaluation is one of many factors affecting decision making
I am, as a rule, a relentless optimist. After almost two decades of involvement in evaluation and impact, I still truly believe that our clients use our evaluations to help them make stronger decisions about their work, not just as shiny reports to present to their funders. And indeed, most of them do.
However, optimism is different from naivety, and I am well aware that decision making is seldom truly evidence-based. Evaluation and impact assessment is simply one of many factors that affect how those in authority make decisions.
Over the last year we have heard a lot about post-truth politics. Appeals to emotion have proved incredibly powerful, regardless of whether the fact checkers agree with the claims being made. Independent experts have been treated with increasing suspicion, portrayed as part of an out-of-touch elite – or worse, part of the establishment’s tactics to conceal the ‘truth’ and maintain the status quo.
What does this move away from experts mean for evaluation? Are evaluators an endangered species in a post-truth world?
Evaluation is still vital
Evaluation is still needed – perhaps more than ever. Whatever the current climate, there will always be an audience for well-collected, thoughtfully analysed and constructively presented evidence that enables people to make better decisions. But current debates may reinforce several issues with evaluation that we have known about for a while.
Evaluation needs to be participatory
Post-truth politics, even more so than before, sounds the death knell for the idea that an independent evaluator can sit in her ivory tower looking down in judgement at the subjects of her evaluation. This has never been NCVO CES’ approach to evaluation (no one ever gave me an ivory tower, anyhow) and it certainly cannot be in the future.
In a post-truth era we should also continue to involve people in the process of evaluation as much as possible, so that it’s relevant to the community it’s intended to help. Without compromising independence, we often, for example, co-create an evaluation’s design or recommendations. What better way to decide what to measure, or how to move forwards on an issue, than to involve someone with first-hand experience?
Evaluation must continue to manage different versions of the truth
As evaluators of outcomes and impact in the voluntary sector it is our job to seek to understand people’s experiences and to consider what change has, or hasn’t, happened, and why. This reality is often complex and messy, usually involving a range of perspectives, and sometimes producing conflicting findings. The evaluator’s responsibility is to accurately reflect this in their evaluation, and to give voice to all stakeholders and their views on what constitutes ‘fact’, including the views of groups that are seldom heard.
Evaluation must be creative and engaging
Some people will always like quantitative approaches, loaded with statistics and charts. But these benefit from being presented alongside more qualitative methods such as case studies. More than ever, it is these qualitative approaches that really convince people of the success of a project, or of the need to take action, because they have greater emotional impact.
Simply producing a final report is no longer enough. We need a range of ways to share data, from formal reports to cartoons and podcasts, so that we can engage a range of people. We should start by considering who our evaluation audiences are and what concerns they have, rather than defaulting to more formal or traditional methods.
So, if the full potential of impact-focused evaluation is to be unleashed, it needs to come off the page and out of the office, and fully engage with the people and the issues it is intended to serve. In this way, evaluation and evidence can be tools for good, helping us to sort between truths and post-truths and to make better-informed decisions that help create positive change in the world.