How context matters

Development Policy · 01 Nov 2011 · Thomas de Hoop

There is currently too little understanding of how context matters for development effectiveness. As a result, few donors manage to move beyond rhetoric and continue to implement blueprint approaches for development effectiveness. Developing policies that combine knowledge of development effectiveness with knowledge of context requires a serious effort, and one that urgently needs to be made.

The current misspending of development money partly results from a lack of information about what works in development. The last decade has nonetheless witnessed a large increase in the knowledge base of what works in particular contexts. The use of rigorous impact evaluations, “studies which address what difference an intervention made, i.e. tackle attribution”, has played an important role in this trend. Both randomised controlled trials and quasi-experimental evaluations can be considered rigorous impact evaluations. When beneficiaries are randomly selected, the average impact of a development program can be estimated by comparing the mean outcome of the program’s beneficiaries with the mean outcome of a group of non-beneficiaries. In quasi-experimental evaluations, impacts can be measured by comparing beneficiaries with a non-randomly selected comparison group and adjusting for differences between the groups with statistical techniques such as matching or regression adjustment.
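The difference-in-means logic behind a randomised evaluation can be sketched in a few lines. The snippet below uses entirely made-up enrolment rates (80% with a cash transfer, 70% without) purely to illustrate the calculation; the numbers are hypothetical and do not come from any actual evaluation.

```python
import random

random.seed(42)

# Hypothetical simulation: school enrolment (1 = enrolled, 0 = not) for a
# randomly assigned cash-transfer program. All rates are invented.
n = 10_000
treated = [1 if random.random() < 0.80 else 0 for _ in range(n)]  # beneficiaries
control = [1 if random.random() < 0.70 else 0 for _ in range(n)]  # non-beneficiaries

# With random assignment, the two groups are comparable on average,
# so the estimated average impact is simply the difference in means.
ate = sum(treated) / n - sum(control) / n
print(f"Estimated average impact on enrolment: {ate:.3f}")
```

With random assignment no further adjustment is needed; in a quasi-experimental design the same comparison would only be credible after correcting for pre-existing differences between the two groups.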

Knowledge about the impact of development programs derived from impact evaluations can be a very useful tool for policy makers. For example, the Mexican government decided to scale up the conditional cash transfer program PROGRESA only after it became known that the program had a large positive effect on school enrolment. Partly because of these proven positive effects, the program has continued to receive support from successive governments. Similar successful programs have been developed in Brazil, Colombia, Honduras and Nicaragua, among others.

The lack of knowledge about the relationship between context and development effectiveness complicates the decisions of policy makers responsible for distributing scarce resources. What should a policy maker in Kenya do upon hearing that a conditional cash transfer program in Brazil is highly effective in stimulating school enrolment? The answer remains unclear unless we learn more about the relationship between the program’s effectiveness and contextual characteristics. A focus on transparency, as agreed in the Paris agenda, is only the first step towards the effective use of knowledge about development effectiveness in policy circles.

So what can be done to increase the knowledge base? First, there remains a need for investment in knowledge of what works in particular contexts. The incentives of aid agencies need to be adjusted so that it is in their best interest to be transparent and to find out what works in development. Second, impact evaluations of development programs should be replicated in different contexts. When impact evaluations demonstrate the effectiveness of a specific program in both Brazil and Malawi, policy makers in Kenya will be more inclined to use the results than when the evaluation was conducted only in Brazil.

However, resources available for impact evaluations are scarce, and not every evaluation can be replicated in a different setting. Therefore, every research project should aim to make its context explicit. For example, recent research indicates that conditional cash transfer and women’s empowerment programs are more effective in communities with liberal gender norms, and that programs aiming at technology adoption are more effective in areas with dense networks. This type of research shows how context matters for development effectiveness. However, researchers are only slowly moving towards making context more explicit, and policy makers are not yet sufficiently engaged in this type of research.

In Busan, policy makers should move beyond stating the fairly obvious but vague notion that context matters. Instead, we should make context explicit by encouraging the strategic use of impact evaluations. When we expect development programs to work differently in areas with different social and cultural norms, different types of networks and different ethnic compositions, impact evaluations should be implemented across areas with those different characteristics. The learning potential of impact evaluations improves tremendously when they are implemented in areas where one can distinguish between the contextual characteristics expected to be related to aid effectiveness. If both policy makers and researchers take this message seriously, the returns to transparency of results could increase substantially.