Both Owen Barder and Matt have posted about aid transparency recently. It’s a field I know well: I work on aid management and transparency for recipient country Governments, and have spent a lot of time creating access to data, working out how best to categorize and store it in different places, and how to use all of this information usefully. Owen and Matt have both made a number of good points about why transparency can be important, and why it needs to be approached from a perspective that builds upwards from the recipient country’s information needs. Their posts and the discussions beneath them are high quality, so I’m going to limit my observations to a few points which I think have largely escaped the discussion.
I’m resolutely in favour of increased transparency of aid operations from beginning to end of each aid process: commitments made by donors, disbursements and predictability thereof, expenditures made by project implementers and physical implementation. All of this is great. However, it is subject to a few conditions.
1) We need far more disaggregation than it might first seem
Presenting data on aid flows at the project level is not actually sufficiently disaggregated for most recipient country needs. One of the centrally important uses of good data on aid is in budgeting. At present, budgets in Sub-Saharan Africa reflect only a small portion of aid, which means that Parliamentarians vote to distribute funding using incomplete information, usually with sub-optimal results. For aid data to be useful in budgeting, it needs to be disaggregated below the project level. Many projects have a Government and an NGO component, only the former of which should be reported on budget (the latter should be reported elsewhere, as the budget is an audited document of funding the Government has control of). Projects also often have components for different sectors or Ministries, which need to be reported under different Votes. And if a project has both loan and grant components, these again need to be reported separately. This may seem overly burdensome, but it isn’t necessarily. In Malawi, in pre-IATI days, I helped establish a system that collected this level of data from virtually every donor active in the country. It transformed our budget, which became far more comprehensive, and also allowed us to report effectively on NGO expenditure for the first time; both developments were acknowledged and praised by external evaluators in our PEFA (Public Expenditure and Financial Accountability) Assessment.
This need for disaggregation means that data is usually best collected at the country level, where the fine details of project implementation are known, and where staff of donor and project teams can break down the commitments, disbursements and expenditures based on who they go to. My experience of comparing OECD-DAC data to that collected domestically is that the latter is a far better fit for how aid is actually implemented, while the former may be more accurate in terms of what leaves the donor country.
2) Interpretation of data should not be taken for granted
This is one area where I sharply diverge from Owen’s opinions. He argues that we should not be producing analysis, but simply opening data up to the public and allowing civil society to interrogate it as they will.
I have absolutely no problem with letting civil society look through the data as they wish, but am very sceptical as to the capacity of CSOs in developing countries to do so. I’ll give an example from Malawi again. We produced an annual report on our aid portfolio, and actively distributed this to all our donors, all of the major CSOs (including their umbrella organisation) and made it available for free from our website. We made it clear we would provide all the raw data used for the report to anyone who asked, and we even provided the personal e-mail addresses of the people who would do so. In the three years I remained in Malawi after the publication of this original report, not a single Malawian civil society organisation asked for any of the data we had.
The analyses of predictability of aid and aid dependence we produced were moderately technical, and CSOs did not have the capacity to produce anything more complex. Simply making data available does not mean that regular people can make head or tail of it. As an analogy, it’s like publishing vast swathes of technical data on a medical phenomenon, such as cancer survival rates under different kinds of treatment. Anyone without a good understanding of what all of the data means and how it can be interpreted will be lost in all this information; worse, they could draw enormously misleading conclusions from it. Many (including myself) would not bother looking at the data, knowing they were unqualified to make a meaningful interpretation of it.
By all means, publish the data to allow people to interrogate it themselves. But also provide high-quality, penetrating analysis in a simple, easily understood form to support their understanding. They can always go beyond that if they wish.
3) Transparency is not a result in itself
A while back, I wrote a piece for Change.Org which makes this point extensively. It’s worth quoting the most relevant section in full:
… we need to make sure that the lines of accountability are open – that when people complain or make suggestions about aid, these are considered and acted upon when necessary. In donor countries, this means making sure that when we question the effectiveness of the aid we provide, the Minister in charge has to respond. If our aid agencies are in favour of transparency, they need to back this up with accountability – and this should mean a regular forum which gives NGOs and voters the chance to grill the Minister or administrator and his or her advisors on their performance.
In recipient countries, both civil society and the Government need to be helped to use the data available to work out how far the aid received, in total and from each country, deviates from their needs, and this again needs to be backed by a real form of accountability – and this is the hardest part of all.
The point here is that the major constraining factor in making aid, indeed Government, work better is not only information, but the feedback mechanisms that make accountability work. In many developing countries, knowledge that the Government or donors are inept is almost useless, because there is no way of acting upon this knowledge: elections are unclean or confounded by other determinants of voting patterns (such as ethnicity or religion), or the country is effectively a dictatorship. What’s more, in many countries there is no real culture of political accountability.
Owen disagrees with me on this one, suggesting that if we publish the data, citizens of developing countries will find a way to make accountability happen. I think this is unfoundedly optimistic. People will not rebel over aid money, because it’s additive. Historically, they have rebelled over tax and over economic cataclysm (which may be related to donors, as in Zambia, but almost never to aid). And developing countries abound with far more direct and visible problems over which people haven’t managed to force accountability. In Tanzania, for example, two weeks after the second set of explosions caused by negligent management of weapons silos killed more than twenty people and destroyed the homes of many more, not a single person in any official capacity has been sacked, has resigned or even faces an inquiry. Transparency is patently not enough to force results; to say this does not diminish its importance, though.
None of these arguments is directed against transparency or against the IATI. Rather, they demonstrate that there is far more we need to do before we can consider the work the IATI aspires to anywhere near complete.