
How can we ensure the results of research and evaluation projects come full circle, benefiting all stakeholders?

Updated: Nov 4


In recent years, the importance of including community voices and establishing feedback loops in research and evaluation has gained widespread recognition among development scholars and agencies. Thought leaders like Chambers (2007) and Patton (2008), along with numerous development agencies, have advocated for participatory approaches that actively engage stakeholders and prioritise learning from the ground up.


At ImpactLoop, we aim to take this commitment a step further. By incorporating a looped-learning approach in our research and evaluations, we strive to create an ongoing learning journey with stakeholders. This approach not only generates knowledge but also drives transformational change. Our name, ImpactLoop, reflects our mission: to ensure that insights from research come full circle, providing meaningful benefits for everyone involved.

However, despite these aspirations, we frequently encounter systemic and sectoral barriers that make implementing a true “looped-learning” approach challenging. These obstacles limit the reach and effectiveness of our evaluations, restricting the potential for all stakeholders to benefit from the findings.


In this blog post, we explore a pressing question: How can we ensure that the results of research and evaluation projects truly come full circle, benefiting all stakeholders?

We begin by discussing (i) the background of information inequality in development projects and evaluations, then examine (ii) the growing shift towards participatory approaches. Finally, we address (iii) the persistent bottlenecks we’ve observed and (iv) suggest actionable steps to enhance progress.


This post is also an invitation for broader dialogue. We don’t claim to have all the answers; instead, we believe that only through collective knowledge generation can we move closer to answering this critical question.


Background: The Problem of Information Inequality


The issue of information inequality is deeply rooted in the history of development work. Information inequality arises when there is an imbalance in who controls and has access to data and knowledge. In development contexts, this often manifests as researchers and external evaluators holding the majority of the information, while the communities they study are left with little access to the results and findings. This imbalance can reinforce existing power dynamics, where those who are meant to benefit from development initiatives are excluded from the processes that could empower them.


Scholars like Robert Chambers have long argued that conventional development practices often marginalise the voices of the poor and the powerless, leading to interventions that are misaligned with the needs and realities of those they aim to serve (Chambers, 1997; 2007). Anthropologists have likewise shown how development practice can overlook local realities and exclude the political and historical context (e.g. Ferguson, 1990; Li, 2007).


This "top-down" approach can result in the extraction of data without reciprocal benefits, leaving communities with little ownership over the knowledge generated. Additionally, Koch (2024) discusses how foreign aid, despite its intentions, can sometimes lead to unintended consequences that exacerbate information asymmetries and disempower local communities.


The Shift Towards Participatory Approaches


In response to the challenges of information inequality, there has been a significant shift towards participatory approaches in development and evaluation (see for instance Cornwall (2002)). Participatory approaches emphasise collaboration: working alongside communities throughout the development process, from needs assessment to implementation, evaluation, and knowledge sharing, ensuring that both development institutions and communities share information and expertise. This participatory approach also found its way into the monitoring and evaluation sector, in the form of participatory evaluations (see for instance: Participatory evaluation | Better Evaluation) and participatory impact assessments (Chambers, 2010).


Cognizant of ongoing challenges (see next section) and the need to shift away from traditional evaluation methods centred solely on data collection and reporting, many researchers have also developed new evaluation methods, such as utilisation-focused evaluation (UFE). Developed by Michael Quinn Patton, UFE is based on the principle that an evaluation should be judged on its usefulness to its intended users (see: Utilisation-focused evaluation | Better Evaluation).


Similarly, others, including ourselves, see the need for something that we dub "looped learning", borrowing the term from organisational learning approaches (see for instance Argyris, 1977). Looped learning can enhance evaluations by fostering a culture of continuous improvement and adaptation. This iterative learning process allows for the refinement of evaluation methodologies, strengthening data collection and analysis, and ultimately leading to more relevant, credible, and useful evaluation findings (Argyris, 1977). However, it is crucial to acknowledge that this approach, while promising, requires continuous exploration of diverse and effective strategies to maximise impact.


Persistent Systemic and Sectoral Challenges


The shift towards participatory approaches has been hindered by several persistent challenges: 


  • Power Dynamics: The entrenched power dynamics between external evaluators and local communities often hinder meaningful participation. Evaluators may unintentionally dominate the process, side-lining community voices (Cooke & Kothari, 2001).


  • Resource Constraints: Effective participatory processes require time, resources, and training, which are often in short supply in development projects. Without adequate investment in these areas, participatory approaches can become tokenistic rather than transformative (Cornwall, 2007).


  • Institutional Resistance: There is often resistance within development institutions to adopt new approaches that challenge traditional hierarchies of knowledge. Overcoming this requires not only technical changes but also shifts in institutional culture (Mosse, 2005).


  • Communication and Understanding Issues: Evaluation teams often encounter challenges in establishing and maintaining effective communication channels with participants (Ryan et al., 1998). Despite the widespread availability of mobile phones, some communication barriers remain. These challenges can include language differences, varying levels of literacy, and cultural nuances that complicate the sharing of information. Additionally, while obtaining informed consent is now standard practice, ensuring that participants fully understand the implications of the research—including how the results will be used and disseminated—remains a challenge. Informed consent requires that participants comprehend research processes, which are often laden with jargon and may be explained through unclear channels, further complicating their ability to engage fully (Molyneux et al., 2005).


As a result of these challenges (and perhaps many other reasons; see next section), we continue to observe:


  • (Continued) Limited Participation in the Evaluation Process: Project participants often still have limited opportunities for meaningful input into the design, data collection, and interpretation of results. This lack of participation can undermine the credibility and utility of evaluation findings (Cornwall, 2007). Without input from those who know the on-the-ground realities, evaluations may fail to address the root causes of issues and overlook unintended consequences (Koch, 2024). Furthermore, a growing number of evaluation studies does not necessarily mean that knowledge and learning among the relevant stakeholders are growing; it may instead create an overload of information that is difficult to navigate (Reinertsen et al., 2022).


  • (Continued) Limited Sharing of Results: While participatory approaches are a step forward, Brett (2003) highlights a crucial remaining challenge: effectively bridging the gap between learning and sharing back. According to ethical guidelines for research, sharing findings with participants is crucial to reciprocating their contributions (Freire, 2007; American Anthropological Association, 2011; Friedlander et al., 2021). Despite this, the sharing of results is often limited. This can prevent communities from benefiting from the knowledge generated, reinforcing the information gap between researchers and participants.


  • (Continued) Limited Input in Project Design: When external experts and institutions design programs without sufficient input from local communities, an information gap often emerges between the project designers and the intended beneficiaries. This gap can result in a lack of transparency and communication, leading to feelings of alienation and distrust among the communities involved (Cooke & Kothari, 2001). Such issues frequently go unnoticed during evaluations. The exclusion of local participants in the project design phase can also hinder efforts to uncover the true impact of the project on the communities. Brett (2003) highlights the importance of including community members in the design process to build trust and ensure the evaluation captures the full scope of the project's impact.


Our Experience


Despite our intention to shift towards a looped learning mechanism, we encountered multiple bottlenecks in sharing research and evaluation findings with project participants:


  • Limited scope in evaluation design: Terms of reference for evaluations often leave limited space, time, and resources to factor in participant feedback loops and the sharing of results (in an easy-to-understand and condensed manner).


  • Tight Timelines: Carrying out an evaluation often prioritises immediate implementation over revisiting the evaluation plan and sharing findings, due to tight timelines and contract dates.


  • Insufficient Feedback Loops: A significant hurdle lies in effectively capturing and incorporating participant feedback after information sharing; while disseminating evaluation findings is crucial, ensuring that beneficiaries can provide input and influence subsequent actions is equally important. This again requires time and planning to establish clear feedback mechanisms and demonstrate a genuine commitment to acting upon received feedback.


  • Communication: Evaluation outputs are often complex and inaccessible to beneficiaries with varying levels of education, which calls for visual approaches to communicating findings. Yet this again takes time and resources (which need to be planned into the evaluation design and contract).


  • Logistical hurdles: While digital channels are expanding, reaching beneficiaries, especially in remote areas, remains a challenge. Contacting participants post-evaluation can also be difficult, as people move or change their phone numbers.


  • Data accessibility gap: Sensitive data cannot always be shared publicly due to confidentiality agreements.


Potential Solutions? 


Based on our learning, experiences, and challenges, as well as a review of the literature, we identified some potential interventions that can be undertaken to expand the learning loop:


Building Capacities

  • Investing in the capacity of local communities to engage meaningfully in research processes is crucial. This involves not only technical training but also fostering an environment where community voices are valued and heard (Eade, 1997).


Planning and Collaboration

  • Establish defined information-sharing structures and milestones in the work plan.

  • Involve participants from the outset, validating research questions and incorporating their contributions.

  • Partner with clients to understand the importance of knowledge sharing and collaboratively map out the project value chain.

  • Plan communication channels with participants, and sequence data collection strategies (e.g. have findings from qualitative data influence the design of the survey questionnaire).

  • Map out potential impact pathways for project participants in a Theory of Change model.

  • Aim to understand which outcomes are most important to participants and communities (not just donors).

  • Aim to understand, from the outset, the unintended consequences of development projects, such as spillover effects and negative backlash (Koch, 2024).


Interpreting and Validating Findings 

  • Validation and learning workshops: Creating robust feedback loops where research findings are shared with communities in accessible and actionable formats is essential. This not only closes the information loop but also empowers communities to take ownership of the knowledge generated. We recommend holding validation and learning workshops with participants (and different stakeholder groups) after data collection, keeping the following in mind:

    • Recognise different stakeholder audiences and adjust communication methods based on their needs (e.g. summaries and visual aids).

    • Offer information in various formats (hard copies, summaries, visual presentations, video clips); collaborate with graphic designers to create visually appealing materials, including in child-friendly and visually self-explanatory formats.


Dissemination of Results

  • Tailoring Communication to the Audience: When sharing research findings, it's important to tailor the communication style and format to the audience. This may involve translating technical language into local dialects, using visual aids, or organising community meetings. We recommend using easy-to-understand formats, such as infographics, videos, and WhatsApp messages, for local communities and project participants. Utilising digital tools like webinars and blog posts can also help overcome geographical barriers and expand reach to other stakeholders.


  • Community-Led Dissemination: Identify local points of contact (e.g. figures of authority) to disseminate information through local networks, improving accessibility. Encouraging communities to lead the dissemination of research findings can enhance their engagement and ownership of the data, and can help in spreading the knowledge more widely within the community and beyond.



Invitation to Dialogue

As we continue to explore these issues, we recognise that there is no one-size-fits-all solution. We invite our readers to join us in this dialogue, sharing their experiences, insights, and ideas on how we can collectively ensure that the results of research and evaluation projects come full circle, benefiting all stakeholders involved.

References


American Anthropological Association. (2011). Principles of archaeological ethics. [Online] Available at ethics.aaanet.org (accessed May 31, 2024).


Argyris, C. (1977). Double loop learning in organisations. Harvard Business Review, 55(5), 115-125.


Brett, E. A. (2003). Participation and accountability in development management. The Journal of Development Studies, 40(2), 1-29.


Cooke, B., & Kothari, U. (Eds.). (2001). Participation: The new tyranny? Zed Books.


Cornwall, A. (2002). Making spaces, changing places: situating participation in development. Working Paper Series, 170. Brighton: IDS.


Cornwall, A. (2007). Unpacking 'Participation': Models, Meanings and Practices. Community Development Journal, 42(3), 269-283.


Chambers, R. (1997). Whose Reality Counts? Putting the First Last. London: Intermediate Technology Publications.


Chambers, R. (2007). From PRA to PLA and Pluralism: Practice and Theory. Brighton: Institute of Development Studies.


Chambers, R. (2010). A Revolution Whose Time Has Come? The Win-Win of Quantitative Participatory Approaches and Methods. IDS Bulletin, 41(6), 45-55.


Ferguson, J. (1990). The Anti-Politics Machine: Development, Depoliticization, and Bureaucratic Power in Lesotho. Cambridge University Press.


Freire, P. (2007). Pedagogy of the Oppressed. Continuum.


Friedlander, S., Rabb, M., Tangoren, C., Aker, J., Alan, S., & Udry, C. (2021). Sharing Research Results with Participants: An Ethical Discussion. Center for Global Development. https://www.cgdev.org/blog/sharing-research-results-participants-ethical-discussion


Koch, D.J. (2024). Foreign Aid and Its Unintended Consequences. Taylor & Francis.


Li, T.M. (2007). The Will to Improve: Governmentality, Development, and the Practice of Politics. Duke University Press.


Reinertsen, H., Bjørkdahl, K., & McNeill, D. (2022). Accountability versus learning in aid evaluation: A practice-oriented exploration of persistent dilemmas. Evaluation, 28(3), 356-378.


Ryan, K., Greene, J., Lincoln, Y., Mathison, S., & Mertens, D. (1998). Advantages and challenges of using inclusive evaluation approaches in evaluation practice. American Journal of Evaluation, 19(1), 101-122. https://doi.org/10.1177/109821409801900111

