Nikola Balvin (UNICEF Office of Research-Innocenti), October 2015
What do snakes, flat batteries, limited privacy, and identifying a suitable cut-off point have in common? As I recently observed, they are some of the many challenges that can occur when conducting an impact evaluation in a remote village.
On a recent trip to Ghana, we observed baseline data collection for an evaluation of the Ghana Livelihood Empowerment Against Poverty (LEAP) 1000 cash transfer programme. The programme is administered by the Government of Ghana with technical support from UNICEF and targets households with women who are pregnant or have children under the age of 12 months. The impact evaluation is taking place in five programme districts and has a target sample size of 2,500 households: half from the treatment group and half from the comparison group. Because it wasn’t possible to randomly assign participants to a control group and carry out a Randomized Controlled Trial (RCT), the evaluation uses another rigorous approach called Regression Discontinuity Design (RDD; see page 7 of Brief 8 for a description). The results will inform the Ghanaian government of changes in families’ lives caused by cash transfers and inform future delivery of similar programmes.
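The core idea behind the cut-off in a Regression Discontinuity Design can be sketched in a few lines of code. This is a minimal, purely illustrative sketch: the household IDs, scores, and the cut-off value of 50 are hypothetical, not the actual LEAP 1000 targeting data or threshold.

```python
# Illustrative sketch of RDD group assignment around a cut-off score.
# All scores and the cut-off value below are hypothetical.

def assign_groups(households, cutoff):
    """Split households into treatment and comparison groups
    based on an eligibility score relative to the cut-off.

    Households scoring at or below the cut-off are eligible for the
    programme (treatment); those scoring above it form the comparison
    group. Comparing households just either side of the cut-off is what
    lets an RDD estimate the programme's impact without randomization.
    """
    treatment = [h for h in households if h["score"] <= cutoff]
    comparison = [h for h in households if h["score"] > cutoff]
    return treatment, comparison

# Hypothetical example: four households with eligibility scores
households = [
    {"id": "HH-001", "score": 42.0},
    {"id": "HH-002", "score": 55.5},
    {"id": "HH-003", "score": 61.2},
    {"id": "HH-004", "score": 48.9},
]

treatment, comparison = assign_groups(households, cutoff=50.0)
print(len(treatment), len(comparison))  # prints: 2 2
```

In the actual evaluation, of course, the government set the cut-off after targeting was complete, and the two groups were then sampled for baseline interviews.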
There’s a demand within UNICEF, but also in the broader international development community, to share good research practices and lessons learned. This is why the Office of Research-Innocenti is developing a new series of methodological briefs, Impact Evaluation in the Field. The series will go beyond textbook advice to document practical challenges and innovative solutions from the Transfer Project and other UNICEF-supported impact evaluations, often carried out in under-resourced development contexts.
My observations of the work in the Upper East Region of Ghana highlighted that despite some difficulties, technological innovations certainly made the work much easier and more efficient. Using wireless tablets during the targeting phase (when women were informed of the programme and registered to determine eligibility) and laptops during the evaluation interviews meant that data could be uploaded and communicated to analysts almost instantly. This allowed for ongoing quality control of the data and monitoring of how much more needed to be collected. The communities where the interviews took place were in remote areas and households were often inaccessible by vehicles. The enumerators relied on locals to show them around and take them to the right household. To increase the likelihood of finding the same participants when they come back to do endline interviews, the enumerators used a GPS device to note the dwelling’s location coordinates. Mobile technology was also extremely important for keeping data collection teams connected to each other during the long days among the maize fields.
The challenges were many and varied. Some – like the electronic scales not reading participants’ weight correctly because of the uneven ground and dirt surfaces, laptop batteries going flat, and the occasional snake dropping from a tree – were relatively easy to deal with. But others were much more complex and required an informed consideration of the context, review of international standards and updating of study protocols.
The value of working with local institutions was very clear and I was impressed with the enumerators’ professionalism as they administered the survey questions and protocols, ensuring participants’ privacy and skipping sensitive topics when a private space was interrupted by a visiting neighbour or a curious relative. They were fluent in several of the local dialects and able to translate the survey questions on the spot or secure alternative translators in the community when needed. They also understood the culture well, allowing them to read the social dynamics and ask questions in a way that made sense in the local context – sometimes accompanied by vivid examples and body language.
Another challenge revolved around managing delays – during targeting, baseline data collection and even the first payment – and ensuring that the time needed for one stage did not eat into another. Targeting had to be fully completed before the government could confirm the cut-off score, or threshold, that identifies eligible beneficiaries; only after this decision could the treatment and comparison groups for the impact evaluation be formed and the research begin. As a result, baseline data collection was squeezed between the government’s targeting of eligible households and registration for the first payment, and the enumerators had to be trained, ready to go, and able to work quickly and efficiently to complete this phase.
Perhaps the most valuable lesson learned from my visit was shared by the Chief of Social Policy, Sarah Hague, who stressed the importance of working closely with the government and providing technical support. It is through this close relationship, built over many years, that social protection work supported by UNICEF Ghana is making a positive impact on the lives of Ghanaian women and children, while also providing important global lessons learned.
After only a few days in the field, I found so much to think about and share. Imagine the richness and value of UNICEF colleagues and partners sharing their knowledge and experience gained over many years in the Impact Evaluation in the Field methodological series. The first briefs in this new series are expected to come out at the end of 2015.
Nikola Balvin is a Knowledge Management Specialist in the UNICEF Office of Research-Innocenti. The author would like to thank Tia Palermo, Richard de Groot, Sarah Hague, Jonathan Nasonaa Zakaria, Daisy Demirag, Maxwell Yiryele Kuunyem, and the enumerators from ISSER for the helpful discussions that contributed to the writing of this blog.
This blog was originally posted on UNICEF Connect: https://blogs.unicef.org/blog/doing-impact-evaluation-in-a-remote-region-of-Ghana/