Amber Peterman and Nikola Balvin, UNICEF Office of Research—Innocenti, November 2016
In international development, research is never purely an academic exercise. Its purpose, ultimately, is to provide knowledge that can be used to improve the lives of poor and vulnerable populations. Yet, despite increased focus on understanding the influence that research has on development policy and practice, the methodology for measuring that influence remains in its infancy.
The process of evaluating research impact is challenging. True attribution is rarely possible, and even assumptions around contribution need to be scrutinized to avoid bias and inflated perceptions of influence. Some of the methods used for assessing research impact – for example, bibliometrics or “most significant change” – have been around for some time, while newer approaches continue to emerge, including the Research Contribution Framework, SenseMaker®, AltMetrics and Social Network Analysis.
Applying a case study approach, UNICEF, FAO and partners recently published From Evidence to Action: The Story of Cash Transfers and Impact Evaluation in Sub-Saharan Africa. The book focuses on eight evaluations of government-run social protection programs in African countries, conducted under the Transfer Project (Ghana, Ethiopia, Kenya, Lesotho, Malawi, South Africa, Zambia and Zimbabwe). The special focus of the volume is on how evidence from evaluations was used to catalyze policy and programme change in the social protection arena. Co-authored by researchers, policymakers (including government officials), donors and implementers, each country chapter offers an honest and convincing account of how change happened.
While recognizing variation across countries, the book concludes that 10 years of evidence from the Transfer Project (Chapter 2) contributed to:
- Building the overall credibility of an emerging social protection sector;
- Strengthening the case for social protection as an investment tool, and addressing public perceptions and misconceptions;
- Supporting learning around program design and implementation to inform program improvements; and
- Shaping policy discussions and informing regional social protection agendas.
For those of you who have less time to dig into the 350-page volume, here are six key lessons from the editors.
Lesson 1: Make sure evaluations are linked to national policy priorities: One of the simplest ways we can ensure results are relevant for policy is to evaluate ‘real’ programs, which are of interest to stakeholders. In the Transfer Project, this was accomplished by focusing on government-led programming which had gained enough momentum to be squarely on the political agenda. Further, evaluations were commissioned at key moments, when research findings could feed into policy decisions regarding the design, expansion and funding of programs. Because government stakeholders were involved from the outset, key questions of national interest could be included in the evaluation, avoiding the pitfall of answering questions solely driven by academic novelty.
Lesson 2: Stronger relationships lead to improved policy linkages: Research teams that included both national and international professionals, with quantitative and qualitative expertise, and that were able to interact with and respond to diverse sets of stakeholders, increased national ownership and involvement in the research process. Ultimately this led to trust in, and credibility of, the research teams as well as the evaluation results. Consequently, when adverse or unexpected findings were uncovered, they were more likely to be accepted, critically discussed, and acted upon, rather than “swept under the rug.” For example, in Ghana when irregular payments led to decreased program effectiveness (Chapter 7), or in Zimbabwe when lack of harmonization across targeted transfers led to decreased benefits for households (Chapter 10), there was commitment to find solutions, instead of finger pointing.
Lesson 3: Diversify research products over the evaluation timeline: A common critique of impact evaluations is that once results are delivered, it is too late to make ‘course corrections’ or inform program scale-up, due to the lengthy time between evaluation and publication of results. This was addressed by conducting targeting and baseline analyses, rapid assessments, qualitative work, simulation of local economy impacts, and other products to inform decisions in a timely manner. These analyses complemented the end-of-program impact results to feed into quick ‘policy wins’ and program change along the evaluation timeline. In Kenya, targeting analysis led to a revision of the targeting formula, better accounting for regional and livelihood differences (Chapter 6), and rapid assessments in Lesotho led to a responsive adjustment from a flat transfer to one that varied by household size (Chapter 11). Each country has numerous examples.
Lesson 4: Don’t overlook the importance of packaging evidence: Another important component of translating evidence into policy change is the framing and presentation of actionable messages through diverse media platforms at key policy junctures. Use of products easily accessible to a non-research audience, such as policy briefs, oral presentations, fact sheets, and advocacy videos (among others), improves links to diverse stakeholders. For example, the Government of Ghana released a series of branded policy briefs utilizing evaluation evidence, which were heavily used in national and regional fora (Chapter 7). Crafting messages to mitigate myths and perceptions that were not backed by evidence also contributed to creating an enabling policy environment. Messaging helped position transfers as investments that create economic multiplier effects, as opposed to costs that promote dependency.
Lesson 5: Create regional learning communities: Although evaluations were nationally focused, the combination of evaluations undertaken by the consortium of actors under the Transfer Project contributed to a regional learning culture with its own formal and informal information exchange mechanisms. Annual Transfer Project Workshops assisted with cross-country learning and awareness raising among non-evaluation country stakeholders in the region and beyond. Combining evaluation findings from countries at different stages of evaluation and program maturity, to examine both commonalities and divergences, led to a rich regional learning agenda and facilitated an enabling policy environment around social protection.
Lesson 6: Build local capacity: Many evaluations work with national research firms and institutes, and build local capacity by training local enumerators or publishing with local academics. The Transfer Project takes this a step further by establishing ongoing research partnerships (e.g. with the African Economic Research Consortium), conducting evaluation training for networks of Ph.D. students, and running a fellowship programme which encourages promising early-career African researchers to collaborate with Transfer Project researchers on joint publications, among other activities. Building capacity is not only good practice, but also contributes greatly to ownership and lasting influence of findings at a national level.
The authors and editors of From Evidence to Action acknowledge that many decisions come down to politics, or are taken based on influences which are outside the control of research or stakeholder teams. In these instances, and in cases where research findings point out implementation or other challenges, the book still offers important learning that we can glean for future programming (Chapter 14).
UNICEF and others who conduct research in international development should strive to use the evidence in a meaningful way. The challenges associated with this outlook are numerous, and include developing and refining sound methods for assessing research impact, as well as using the acquired lessons to maximize the influence of research in the future. Congratulations to the Transfer Project book team for raising the bar on research uptake for meaningful change in the lives of poor and vulnerable children and households around the globe!
The Transfer Project book “From Evidence to Action” was published by Oxford University Press and edited by Benjamin Davis (FAO), Sudhanshu Handa (UNC, former UNICEF Innocenti), Nicola Hypher (Save the Children UK), Natalia Winder Rossi (FAO), Paul Winters (IFAD) and Jennifer Yablonski (UNICEF), and includes contributions from over 80 authors. For more information on launch events and to download the book, see the Transfer Project website.
This blog was originally posted on UNICEF Connect, Evidence for Action: https://blogs.unicef.org/evidence-for-action/making-research-count-turning-evidence-into-action-from-transfer-project/
It has also been re-posted on Better Evaluation: http://betterevaluation.org/en/blog/making-research-count-evidence-into-action