Published on 19 December 2022

Over the last year, and alongside wider organisational work on anti-racism and decolonisation, the Global MEL Team at Christian Aid has been doing a lot of thinking about decolonising our evaluation practice. We spoke to 71 colleagues and peers, both internal and external, based in 15 countries, to reflect on what we might call ‘conventional’ evaluation practice in the sector. We asked ourselves how, why and for whom our current practice might be problematic, and what areas we might need to address in order to reflect a more decolonial approach to evaluation.

Overwhelmingly, colleagues agreed that the way most of us are commissioning, designing and carrying out evaluations is problematic and in conflict with our decolonial ethic. We have come to recognise that how we evaluate our work is a key area which has been shaped by colonial history, structures and understandings of knowledge and power.

As a result, Christian Aid recognises that we need to forge a new and intentional way forward in which people experiencing poverty and marginalisation define for themselves how interventions are perceived, assessed and valued. We need evaluation questions and criteria that are less rigid and externally designed, and evaluation spaces that are more open, reflective and collaborative.

Over the next few paragraphs, we’ll try to give a flavour of the conversations, ending with the practical recommendations we’ll be taking forward as we redesign our evaluation policy and guidance.  

So firstly, why did our colleagues and peers conclude that current ‘conventional’ evaluation practice is so flawed when we think with a decolonial mindset?  

1. Externally driven: Evaluations are, for the most part, externally driven and designed in response to donor requirements or outside criteria, rather than being locally conceived and led. This means that, even when participatory methodologies are used for data collection, project stakeholders only participate after the assessment criteria have been set, and their involvement is therefore limited to answering pre-defined questions. As one focus group participant said:

'Communities participate in giving us the data - but they don't decide if the project is a success. They are subjects of evaluation study, but do not define what makes it a good project.'

This goes against decolonial practice, prioritising external criteria over the priorities of project participants. 

2. Evaluation ‘experts’: Common practice is to employ an external consultant to lead our evaluations: an ‘expert’ chosen for their academic qualifications and professional experience, and often somebody from outside the project country or region.

We (perhaps unconsciously) assume their knowledge counts for more, and we discount the knowledge of those who have built expertise through their lived experiences. Scholars and practitioners explain this as colonisation of the mind, in which Eurocolonial ways of doing, being and knowing about the world are seen as superior to others.

3. Which knowledge is valued: Eurocolonial understandings of how we generate knowledge and what type of knowledge counts continue to take priority in how we carry out evaluations.

Generally, evaluators use scientific and technical methods to collect data from people and communities involved in an intervention, analyse the data based on their (often limited) knowledge of the context and draw conclusions in line with pre-determined criteria.

The problem is that these methods exclude other ways of seeing and understanding the world, such as indigenous practices of generating and sharing knowledge, which are equally valid forms of knowledge. 

4. Evaluation products: The universalisation of Eurocolonial academic practice continues in how evaluations are presented and used – mostly as technical reports in English focused on upwards accountability, prioritising compliance with external requirements over the knowledge needs and rights of people who were involved in the project, or who stood to benefit from it.

This underlines the significant power imbalances in current evaluation practice. As one interviewee reminded us: 'The bottom line is: who are evaluations for?'

We would argue that there's a need to rebalance the focus towards people experiencing poverty and injustice.  

Image: Community members in Chikwawa district, Malawi, walking to film a scene during a pilot project earlier this year, using Participatory Video and Most Significant Change to reflect on our response to Cyclone Ana, together with our partner CICOD.

Now that we've explained how current evaluation practices are reproducing colonial structures and logics, what can we do as Christian Aid to move towards a decolonial approach to evaluation?  

  • Let’s be intentional about shifting power and design a set of values or statement of intentions for evaluation practice, and then establish these as a non-negotiable first step for every evaluation. We will learn from others on this, such as the African Evaluation Association and Washington Evaluators, who have already developed principles.  
  • We will broaden our understanding of what counts as knowledge and evidence, challenging our assumptions around rigour and robust data and proactively seeking other forms of knowledge and ways of knowing. We will prioritise lived experiences and learn from efforts to indigenise evaluation, such as the Made in Africa Evaluation concept.    
  • We will rethink the evaluation cycle, promoting a greater role for our civil society partners and involving project participants in evaluation design. We will try to open space for co-creation of evaluation criteria, where project participants explore openly what they see as important and how they understand change and will take care to avoid tokenistic participation.  
  • We will broaden our understanding of expertise, proactively seeking evaluators with lived experience of the intervention context. We will also build in reflexive praxis as part of every evaluation, making efforts to understand and counteract unequal power relations at every level.  
  • We will pilot and promote decolonial methodologies for data collection and analysis, such as photo, audio and video recordings which have been produced (or at least guided) by project participants.  
  • We will seek opportunities to pilot decolonial approaches to evaluation with like-minded funders and will share results and learning with peers and donors to influence practice.  
  • For ‘conventional’ evaluations (such as those required by institutional grants) we will increase budgets and time allocated to opening and closing feedback loops and will specifically include adequate budget for appropriate language use at all stages, including for non-literate project stakeholders.   
  • Finally, we will rethink evaluation products to better represent the voices of the primary project stakeholders, and to reflect their information and learning needs. We will pilot oral, pictorial and video reporting, and will proactively seek ideas from our partners, project stakeholders and other good practice in the sector.   

Download the full paper to learn more about this.

Jennifer Backhouse is a Global Monitoring, Evaluation and Learning Advisor at Christian Aid. She has a keen interest in improving systems and processes to enable greater inclusion, and place more power in the hands of key stakeholders.