Alumni Profile – Cormac Mangan
In the first of a series of reports from our Alumni, Cormac Mangan (WIP Class of 2010) talks about his experiences in Ghana after completing the Washington Ireland Program.
Controlled Randomness in Development
Following a summer working in the ivory tower of development at the World Bank (thanks to WIP), I was given the opportunity to gain a new perspective on the subject by helping to manage a development economics research project on the ground in Ghana.
In the northern regions of Ghana, where I am currently based, thousands of people die each year from malaria. It’s a popular development issue with which we are all familiar: we have seen Bill Gates and Bono speak about the loss of life and suffering caused by such an easily treatable disease. The drugs to treat this endemic illness successfully have been developed and proven in clinical trials. Well-documented and audited clinical trials are now the gold standard in medicine, and nothing less is accepted by medical journals. Such procedures have been remarkably successful in distinguishing bogus placebos from effective treatments and have no doubt saved millions of lives. We rightly insist on such standards because of the direct human impact of medicine on its subjects; but why not apply the same rigorous evaluation standards to social and economic programs and policies, which may well have an equally significant impact on human beings?
The organisation for which I currently work, Innovations for Poverty Action (IPA), in close partnership with the Jameel Poverty Action Lab at MIT, has a simple, if radical, goal: to use the same scientific, clinical approach as is used in medicine to overhaul development aid, ensuring that more is spent on programs proven to make a difference. At first glance, it seems intuitive that aid programs make a positive difference; however, a long-running ideological debate has taken place between aid groups, who tout their work as ‘indispensable’, and their critics, who condemn them as misdirected, inefficient and even counterproductive. The truth, as is so often the case, probably lies somewhere in between.
Randomised Controlled Trials (RCTs) have recently emerged in development economics as an expensive but effective method of program evaluation. RCTs have helped answer many open questions and debunk many assumptions. One prominent issue addressed was how best to ensure school attendance in Kenya: supplies, support, incentives, food? RCTs have now established that one of the most effective and best-value programs is to de-worm the schoolchildren. It’s not the most glamorous of results, but we now know a relatively simple, cheap and effective way to raise school attendance in Kenya, as well as, of course, hugely improving the general health of the children.
The feature which gives RCTs their immense explanatory power is randomised application. While all subjects are monitored, only some are provided with the treatment. This helps avoid the classic problem with the evaluation of aid programs: it is too often impossible to separate cause and effect. Randomisation isolates effects and thereby measures the degree to which a given treatment causes them. For example, to evaluate WIP we would need to recruit 60 extremely bright young leaders, all selected in exactly the same way. But rather than subject all of them to the treatment we want to investigate (the WIP program), we would randomly select 30 to partake in the program and closely track the progress of all 60 students. After the observation period we would analyse the results, attributing the difference between the two groups to WIP. Of course, this is a gross simplification, and there are unresolved problems with the design: “success” within WIP is extremely difficult to define and is a subjective measure. But, nonetheless, perhaps Bryan will let me experiment on next year’s class.
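For readers who like to see the mechanics, the hypothetical WIP evaluation above can be sketched in a few lines of code. Everything here is invented for illustration (the participant scores, the assumed 5-point effect of the program); the point is simply the logic of random assignment followed by a comparison of group means.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# 60 hypothetical participants, randomly split into two groups of 30.
participants = list(range(60))
random.shuffle(participants)
treatment, control = participants[:30], participants[30:]

# Simulated "success" scores: we assume, purely for illustration,
# that the program adds about 5 points on top of a noisy baseline.
def outcome(treated):
    base = random.gauss(50, 10)
    return base + (5 if treated else 0)

treat_scores = [outcome(True) for _ in treatment]
control_scores = [outcome(False) for _ in control]

# Because assignment was random, the difference in group means
# estimates the causal effect of the treatment.
effect = statistics.mean(treat_scores) - statistics.mean(control_scores)
print(f"Estimated treatment effect: {effect:.1f}")
```

The crucial design choice is that the shuffle, not any characteristic of the participants, decides who is treated; that is what lets the difference in means be read as cause rather than mere correlation.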
The particular project I’m working on investigates ways to lift the ultra-poor out of poverty. Initially it may seem like a straightforward question, but even such a basic question of development has yet to be scrutinised effectively. Is it savings, regular income, assets, agricultural capabilities and knowledge, financial literacy, social integration, health support, education, or some combination of these? We aim to isolate the effect of each and identify exactly how best to support those who have the least. Only then can we confidently tackle poverty in the most deprived communities.
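Isolating the effect of each component means randomising households across several treatment arms, not just one. A minimal sketch of such an assignment, with entirely invented arm names and household counts, might look like this:

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

# Illustrative intervention arms only; the real study's arms may differ.
arms = ["control", "asset transfer", "savings support", "full package"]

# 200 hypothetical households, shuffled before assignment.
households = [f"HH-{i:03d}" for i in range(200)]
random.shuffle(households)

# Deal households out evenly across the arms, like cards, so each
# component's effect can later be compared against the control group.
assignment = {arm: households[i::len(arms)] for i, arm in enumerate(arms)}

for arm, group in assignment.items():
    print(arm, len(group))  # 50 households per arm
```

Comparing each arm against the control group separates, say, the effect of an asset transfer alone from the effect of the full package.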
Every day we initiate hundreds of treatments on ourselves and our children in the form of government policies that have immense impacts but are rarely subject to any effective testing. Policy evaluations can often be ‘spun’ to suit the vested interests of a particular entity. Politicians are not rewarded for admitting policy failures, as their careers depend on promoting ‘successful’ policies; projects are willed to succeed by almost every stakeholder, so policy evaluations conveniently reflect expectations. This stifles experimentation and prolongs ineffective projects. RCTs provide an expensive but trustworthy method of simply establishing what works and what does not. The use of RCTs is slowly growing beyond development, into educational and social policy programs. Hopefully, in future we will have the same knowledge of the effectiveness of our education policies as we have of our Calpol. If we can prioritise programs and eliminate those with minimal or no impact, we can ensure that the millions spent each year are employed more effectively and actually help those who need it.