News

Lawrence Haddad: Grading research for development

Development Policy | 06 Sep 2009 | Lawrence Haddad

I just returned from Coleraine and the Development Studies Association Annual Conference. Lots of interesting papers and presentations: Charles Gore on new global paradigms (knowledge-dominated); Santosh Mehrotra from the Indian Planning Commission on the impacts of the downturn on India’s growth (not too bad) and poverty (not clear, but likely not good); Mayra Buvinic from the World Bank on what to do to protect women in the downturn (not too many new interventions, it seemed to me, mainly intensification of existing ones); the new Director of Research at DFID, Chris Whitty, on a quality-graded research evidence base; and DFID DG Andrew Steer reporting on the new DFID White Paper. There were many interesting papers in the parallel sessions attended by the 200 participants, including some good IDS sessions on the impacts of the downturn and the implications for development policy. Catch some of the clips on this website.

Chris Whitty’s session generated the most heat, and perhaps a little light. Chris shares a concern of mine: research is not fulfilling its potential to reduce poverty. It’s hard to prove this. But we do know that an accelerating amount of research is being generated, much of it funded by DFID, and that the sheer volume makes it very tough to keep up with. Just think how hard it is to stay on top of developments in one’s own field, then imagine how much harder it is for generalists in decision-making positions, whether in policy or frontline roles. So how do we systematically organise the material around questions and contexts, separating the careful from the not so careful, and then communicate that in an accessible way? Outside the development social sciences this is fairly routine: there is the Cochrane database and the Campbell Collaboration. Inside the development social sciences it is not unknown (see this example from the World Bank), but it is fairly rare.

The debate at the session revolved, it seemed to me, around which research questions one applies such a mechanism to and what that mechanism looks like, especially who does the grading. On the first issue, which questions, one needs questions that decision makers want answers to and that lend themselves to comparisons across contexts. One example: when does conditionality on participant behaviour improve social protection programmes and when not? Even this question is challenging to build an evidence base for (what qualifies as a social protection programme? what does “improve” mean?), but other questions, such as “can pro-poor growth be pro-environment?”, will be more difficult, and more open-ended questions, such as “how do politics shape the use of knowledge?”, even more so, and potentially counterproductive to even attempt. On the second issue, grading, it would be good to have peers reviewing, but perhaps in an open, wiki-style way. I will keep you posted on this debate as it plays out.