Assessment of evidence use (including through partnerships) remains minimal, but systematic approaches to tracking policy impact over time are on the rise, as shown by 3ie’s evidence use and impact measurement approach (2020) and CEGA’s retrospective review (Fishman and Christiano 2020). Building on existing tools, routine diagnostic processes of evidence use within governments, conducted either internally or externally, could help countries, donors, and researchers measure their commitment to institutionalizing evidence and adjust which areas and capacities to prioritize accordingly.
As the Africa Evidence Network (2021b) calls for in its Manifesto on Capacity Development for Evidence-Informed Decision Making in Africa, capacity strengthening should center on identifying and building on existing capabilities, local talent, and local knowledge, while challenging the power dynamics that perpetuate transactional and wholly ineffective approaches (Green 2017). Given that capacity is unevenly distributed among research institutions, the International Centre for Evaluation and Development’s ALL-IN program calls for researchers at African institutions to take the lead in defining agricultural research priorities and in managing large and complex awards.
Other examples of promising partnership initiatives to strengthen government capacity and interest in generating, interpreting, translating, and acting on data include Twende Mbele, a peer learning network of African monitoring and evaluation offices; Utafiti Sera, a program that develops sector-specific “houses” to review and appraise evidence; the International Network for Government Science Advice, a forum for developing approaches to using evidence in policy; the Transfer Project, a multi-country cash transfer research network that invests in long-term relationships with government officials and begins by identifying the most relevant research question and fitting methods to it; and the African Institute for Development Policy (AFIDEP)’s parliamentary engagement in Malawi. Capacity building through research cocreation also facilitates buy-in on the evaluation design, results, and recommendations, thereby reinforcing the culture of evidence use.
One institutional function ripe for transformation is procurement. External donors can help governments and other institutions that fund evaluations hire high-quality, locally immersed technical talent by establishing prequalified groups and easy matchmaking programs, among other levers. The partnerships community of practice could systematically solicit and socialize better procurement practices designed to support governments and funders in selecting partners and funding high-quality evidence generation.
Program implementers that contract evaluators to assess their own projects may introduce conflicts of interest, but such tradeoffs may be warranted depending on the goals at hand (e.g., iterative learning versus accountability). Multilateral and bilateral funders should make available model procurement documents and contracts for hiring evaluation agencies, templates of evaluation designs, and survey instruments, which local evaluation firms and government agencies committed to undertaking more impact evaluations could access. The structure could be similar to the World Bank’s Public-Private Infrastructure Advisory Facility and build on DIME’s existing efforts. Housing these resources within a multilateral institution would help ensure that government leaders can access them.
Finally, philanthropic funders and other interested donors should commit to mobilizing and pooling additional resources through a demand-driven fund dedicated to supporting governments in articulating evidence agendas that they can then carry forward and implement in collaboration with policy-oriented researchers.