Who is Sam Childers?

He goes by many names, Reverend Sam and the “Machine Gun Preacher” amongst them. If you haven’t heard much from Sam Childers, you will soon. To date he’s been featured in a few mainstream publications, but most of his exposure has come from forays into Christian media outlets and cross-country speaking tours of churches. In 2009 he published his memoir, Another Man’s War. But Childers is about to become much better known: his life story is being made into a movie titled Machine Gun Preacher. It hits the big screen this September, starring Gerard Butler (300) and directed by Marc Forster (Monster’s Ball, Quantum of Solace).

Why should you care? If you’re concerned about Africa (especially the newly independent South Sudan), neutrality and humanitarianism, or how small charities sometimes make it big on dubious stories, Childers is a scary character. By his own admission Sam Childers is a Christian and a savior to hundreds of children, as well as a small-time arms-dealer and a killer. And, as far as I can tell, he’s a self-aggrandizing liar who chronically exaggerates his own stories and has been denounced by many, including the rebel group of which he claimed to be a commander.

It’s hard to get to the bottom of much of Childers’ story. I first heard of him months ago and have been scouring the web, but the trail is still pretty thin. On the one hand there’s a ton of copy written about him – but almost all of it originates with Childers’ own storytelling. I think there are a number of good reasons we should be skeptical.

The short version of his coming-to-the-big-screen story is this: Childers used to be a drug-dealing gang member who loved motorcycles almost as much as he craved women, drugs, and violence – especially violence. He fell in love with his wife after they met through a drug deal, and she convinced him to turn his life around. Sam found Jesus, got involved with the church, and went to Africa. There he encountered Joseph Kony’s Lord’s Resistance Army and its use of child soldiers. He found his calling leading armed rescue missions to free enslaved children in northern Uganda and southern Sudan. Now that his life story is being made into a movie -- a goal Childers has long sought -- his ministry will only grow stronger and save more children.

His website, MachineGunPreacher.org, makes no apologies about his violent tactics. Here’s one of the banners that adorns the front page:

What you see now is a slickly-polished presentation, but it hasn’t always been that way. Childers’ story has grown over time, apparently aided by a PR firm, sympathetic media, and a quest to be ever more sensational. My gut reaction is that he’s making much of it up – and I’ll present evidence that shows at least some of his claims are likely falsehoods. We can choose to believe that Childers’ claims are true, in which case he is dangerous, or that they’re false and he’s untrustworthy. The reality is probably that he’s a bit of both.

This is part 1 of a longer article on Childers. Continue reading part 2 here, or you can read the whole series as one long article.

Best practices of ranking aid best practices

Aid Watch has a post up by Claudia Williamson (a post-doc at DRI) about the "Best and Worst of Official Aid 2011". As Claudia summarizes, their paper looks at "five dimensions of agency ‘best practices’: aid transparency, minimal overhead costs, aid specialization, delivery to more effective channels, and selectivity of recipient countries based on poverty and good government" and calculates an overall agency score. Williamson notes that the "scores only reflect the above practices; they are NOT a measure of whether the agency’s aid is effective at achieving good results." Very true -- but I think this can be easily overlooked. In their paper Easterly and Williamson say:

We acknowledge that there is no direct evidence that our indirect measures necessarily map into improved impact of aid on the intended beneficiaries. We will also point out specific occasions where the relationship between our measures and desirable outcomes could be non-monotonic or ambiguous.

But still, grouping these things together into a single index may obscure more than it enlightens. Transparency seems more of an unambiguous good, whereas overhead percentages are less so. Some other criticisms from the comments section that I'd like to highlight include one from someone named Bula:

DfID scores high and USAID scores low because they have fundamentally different missions. I doubt anyone at USAID or State would attempt to say with a straight face that AID is anything other than a public diplomacy tool. DfID as a stand alone ministry has made a serious effort in all of the areas you’ve measured because it’s mission aligns more closely to ‘doing development’ and less with ‘public diplomacy’. Seems to be common sense.

And a comment from Tom that starts with a quote from the Aid Watch post:

“These scores only reflect the above practices; they are NOT a measure of whether the agency’s aid is effective at achieving good results.”

Seriously? How can you possibly give an aid agency a grade based solely on criteria that have no necessary relationship with aid effectiveness? It is your HYPOTHESIS that transparency, overhead, etc, significantly affect the quality of aid, but without looking at actual effeciveness that hypothesis is completely unproven. An A or an F means absolutely nothing in this context. Without looking at what the agency does with the aid (i.e. is it effective), why should we care whether an aid agency has low or high overhead? To take another example, an aid agency could be the least transparent but achieve the best results; which matters more, your ideological view of how an agency “should” function, or that they achieve results? In my mind it’s the ends that matter, and we should then determine what the best means are to achieve that result. You approach it with an a priori belief that those factors are the most important, and therefore risk having ideology overrule effectiveness. Isn’t that criticism the foundation of this blog and Dr. Easterly’s work more generally?

Terence at Waylaid Dialectic has three specific criticisms worth reading and then ends with this:

I can see the appeal, and utility of such indices, and the longitudinal data in this one are interesting, but still think the limitations outweigh the merits, at least in the way they’re used here. It’s an interesting paper but ultimately more about heat than light.

I'm not convinced the limitations outweigh the merits, but there are certainly problems. One is that the results quickly get condensed to "Britain, Japan and Germany do pretty well and the U.S. doesn’t."

Another problem is that without having some measure of aid effectiveness, it seems that this combined metric may be misleading -- analogous to a process indicator in a program evaluation. In that analogy, Program A might procure twice as many bednets as Program B, but that doesn't mean it's necessarily better, and for that you'd need to look at the impact on health outcomes. Maybe more nets is better. Or maybe the program that procures fewer bednets distributes them more intelligently and has a stronger impact. In the absence of data on health outcomes, is the process indicator useful or misleading? Well, it depends. If there's a strong correlation (or even a good reason to believe) that the process and impact indicators go together, then it's probably better than nothing. But if some of the aid best practices lead to better aid effectiveness, and some don't, then it's at best not very useful, and at worst will prompt agencies to move in the wrong direction.
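
To make the analogy concrete, here is a minimal sketch in Python; the two programs and every number in it are invented purely for illustration, not drawn from any real evaluation. It just shows how a process indicator (nets procured) and an impact estimate (cases averted) can rank the same two programs in opposite orders.

```python
# Purely hypothetical illustration: a process indicator vs. an impact estimate.
# name: (bednets procured, cases averted per 1,000 nets distributed)
programs = {
    "Program A": (200_000, 2),  # procures many nets but distributes them poorly
    "Program B": (100_000, 8),  # procures fewer nets but targets them well
}

def rank(metric):
    """Return program names ordered best-to-worst by the given metric."""
    return sorted(programs, key=metric, reverse=True)

# Ranking by the process indicator alone (nets procured):
by_process = rank(lambda name: programs[name][0])

# Ranking by estimated impact (nets procured * cases averted per 1,000 nets):
by_impact = rank(lambda name: programs[name][0] * programs[name][1] / 1000)

print("By nets procured:", by_process)  # Program A looks better
print("By cases averted:", by_impact)   # Program B is actually better
```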

As Easterly and Williamson note in their paper, they're merely looking at whether aid agencies do what aid agencies say should be their best practices. However, without a better idea of the correlation between those aid practices and outcomes for the people who are supposed to benefit from the programs, it's really hard to say whether this metric is (using Terence's words) "more heat than light."

It's a Catch-22: without information on the correlation between best aid practices and real aid effectiveness it's hard to say whether the best aid practices "process indicator" is enlightening or obfuscating, but if we had that data on actual aid effectiveness we would be looking at that rather than best practices in the first place.

The Tea Test

If you haven't been following it, there's currently a lot of controversy swirling around Greg Mortenson, co-author of Three Cups of Tea and co-founder of the Central Asia Institute. On Sunday, 60 Minutes aired accusations that Mortenson fabricated the 'creation myth' of the organization, a story about being kidnapped by the Taliban, and more. The blog Good Intentions Are Not Enough is compiling posts related to the emerging scandal, and the list is growing fast. If you haven't read it already, Jon Krakauer's mini-book, Three Cups of Deceit: How Greg Mortenson, Humanitarian Hero, Lost His Way, is really worth the read. Completely engrossing. It's a free download at byliner.com until April 20. It's about 90 pages, and Krakauer has obviously been researching it for a while -- in fact, my guess is that Krakauer turned 60 Minutes onto the story, rather than vice versa, which would help explain why he was featured so heavily in their piece. In the TV interview Krakauer quotes several former employees saying quite unflattering things about how CAI is run, so it's good to see that he gets many of those people on record in his ebook.

A few disclaimers: I think it's worth pointing out that a) as a one-time supporter and donor to CAI, Krakauer arguably has an axe to grind, b) several of Krakauer's previous books (Into Thin Air, Into the Wild, and Under the Banner of Heaven) have had sections disputed factually, though to me Into the Wild is the only case where he seems to have actually gotten things wrong, and c) I'm a big fan of him as a writer and thus am possibly a bit predisposed to believe him. Admitting my biases up front like a good epidemiologist.

That said, it sounds like CAI has been very poorly led. Krakauer's book levels many damning claims about Mortenson and CAI's financial management that, while less emotionally shocking than the exaggerations about the 'creation myth,' should be much more troubling. CAI and Mortenson's responses to the accusations aired on 60 Minutes have so far seemed superficial, and I think it's safe to say that they will not come out of this looking squeaky clean.

I believe this episode raises two broader questions for the nonprofit community.

First, Krakauer chronicles a string of board members, employees, and consultants who came in, were shocked by how things were done and/or discovered discrepancies, and ended up leaving or resigning in protest. This section (pages 50-51) jumped out at me:

After Mortenson refused to comply with CFO Debbie Raynor's repeated requests to provide documentation for overseas programs, Raynor contacted Ghulam Parvi (the Pakistan program manager) directly, instructing him to provide her with documentation. For two or three months Parvi complied - until Mortenson found out what was going on and ordered Parvi to stop. Raynor resigned.

In 2007, Mortenson hired an accomplished consultant to periodically fly to Central Asia to supervise projects. When he discovered irregularities and shared them with Mortenson, Mortenson took no action to rectify the misconduct. In 2010, the consultant quit in frustration.

In September 2007, CAI hired a highly motivated, uncommonly capable woman to manage its international programs. Quickly, she demonstrated initiative and other leadership skills the Institute sorely needed. She had exceptional rapport with Pakistani women and girls. In 2008, she unearthed serious issues in Baltistan that contradicted what Mortenson had been reporting. After she told Mortenson about these problems, she assumed he would want her to address them. Instead, as she prepared to return to Pakistan in 2009, Mortenson ordered her to stay away from Baltistan. Disillusioned, she resigned in June 2010.

Seriously -- if this has been going on for so long, how on earth is it just coming out now? Evidently a nationally known organization can have nearly its entire board resign and multiple employees quit, and it doesn't make the news until years later? Some of this (I'm speculating here) likely results from a hesitance on the part of those former employees to speak ill of CAI, whether because they still believed in its mission or because they were worried about coming across as sour grapes. Were they speaking out and nobody listened, or is there simply no good way to raise red flags about a nonprofit organization?

Second, while most organizations aren't guilty of fraud -- we hope -- there's at least one other take-away here. Another excerpt that jumped out at me:

On June 13, 2010, Parvi convened a meeting in Skardu to discuss Three Cups of Tea. Some thirty community leaders from throughout Baltistan participated, and most of them were outraged by the excerpts Parvi translated for them. Sheikh Muhammad Raza—chairman of the education committee at a refugee camp in Gultori village, where CAI has built a primary school for girls—angrily proposed charging Mortenson with the crime of fomenting sectarian unrest, and urged the District Administration to ban Mortenson and his books from Baltistan.

Based on Krakauer's footnotes, Parvi may be one of his less reliable sources, but this idea -- that the people portrayed in the book were outraged when it was translated for them because of how misleading it is -- comes up several times. Yes, fabricating stories is really bad. But how many other things do nonprofits say in their advertising that would be uncomfortable or downright offensive if you translated it for (and/or showed the accompanying pictures to) the recipients or beneficiaries of their services?

I propose a simple way to check this impulse -- to write about people as if they are victims or powerless -- and in honor of Three Cups of Tea, I call it the "Tea Test":

Step One: read the website content, blog posts, or email appeal you just got from your charity of choice. Or, if you work for a nonprofit organization, read your own stuff.

Step Two: imagine arriving in the recipient city or village, with a translated copy of that text. Would you be uncomfortable reading that website or blog or email to the people you met? Would it require tortured explanations, or would it instantly make sense and leave them feeling dignified?

That's it: if Step Two didn't make you cringe, then you passed the Tea Test. If it made you uncomfortable, made them feel ashamed, or got you attacked -- re-draft your copy and try again. Or find another organization to support.

I think there are many organizations that pass the Tea Test, but probably many more that fail. These organizations don't necessarily share all the faults of CAI as laid out by Krakauer and others, but they wouldn't fare much better in this situation, because they say something for one audience that was never intended to get back to the others.

I hope the idea of the Tea Test -- reading a translated copy of that material to the people it's describing -- will be helpful for donors and nonprofiteers alike. As a former online fundraiser I know I've broken this rule, and as a donor I've found things appealing that I probably should have reacted strongly against. I'm going to try to do better.

Update: I've posted a slightly revised (and I hope easier to remember) version of the Tea Test on a permanent page here.

Modelling Stillbirth

William Easterly and Laura Freschi go after "Inception Statistics" in the latest post on AidWatch. They criticize -- in typically hyperbolic style, with bonus points for the pun in the title -- both the estimates of stillbirth and their coverage in the news media. I left a comment on their blog outlining my thoughts but thought I'd re-post them here with a little more explanation. Here's what I said:

Thanks for this post (it’s always helpful to look at quality of estimates critically) but I think the direction of your criticism needs to be clarified. Which of the following are you upset about (choose all that apply)?

a) The fact that the researchers used models at all? I don’t know the researchers personally, but I would imagine that they are concerned with data quality in general and would have much preferred to have reliable data from all the countries they work with. But in the absence of that data (and while working towards it) isn’t it helpful to have the best possible estimates on which to set global health policy, while acknowledging their limitations? Based on the available data, is there a better way to estimate these, or do you think we’d be better off without them (in which case stillbirth might be getting even less attention)?

b) A misrepresentation of their data as something other than a model? If so, could you please specify where you think that mistake occurred — to me it seems like they present it in the literature as what it is and nothing more.

c) The coverage of these data in the media? On that I basically agree. It’s helpful to have critical viewpoints on articles where there is legitimate disagreement.

I get the impression your main beef is with (c), in which case I agree that press reports should be more skeptical. But I think calling the data “made up” goes too far, too. Yes, it’d be nice to have pristine data for everything, but in the meantime we should try for the best possible estimates because we need something on which to base policy decisions. Along those lines, I think this commentary by Neff Walker (full disclosure: my advisor) in the same issue is worthwhile. Walker asks these five questions – noting areas where the estimates need improvement:

  • “Do the estimates include time trends, and are they geographically specific?” (because these allow you to crosscheck numbers for credibility)
  • “Are modelled results compared with previous estimates and differences explained?”
  • “Is there a logical and causal relation between the predictor and outcome variables in the model?”
  • “Do the reported measures of uncertainty around modelled estimates show the amount and quality of available data?”
  • “How different are the settings from which the datasets used to develop the model were drawn from those to which the model is applied?” (here Walker says further work is needed)

I'll admit to being in over my head in evaluating these particular models. As Easterly and Freschi note, "the number of people who actually understand these statistical techniques well enough to judge whether a certain model has produced a good estimate or a bunch of garbage is very, very small." Very true. But in the absence of better data, we need models on which to base decisions -- if not we're basing our decisions on uninformed guesswork, rather than informed guesswork.

I think the criticism of media coverage is valid. Even if these models are the best ever they should still be reported as good estimates at best. But when Easterly calls the data "made up" I think the hyperbole is counterproductive. There's an incredibly wide spectrum of data quality, from completely pulled-out-of-the-navel to comprehensive data from a perfectly-functioning vital registration system. We should recognize that the data we work with aren't perfect. And there probably is a cut-off point at which estimates are based on so many models-within-models that they are hurtful rather than helpful in making informed decisions. But are these particular estimates at that point? I would need to see a much more robust criticism than AidWatch has provided so far to be convinced that these estimates aren't helpful in setting priorities.

GlobalHealthLearning.org

USAID evidently offers a number of short online courses on global health, including quite a few related to PEPFAR. I just registered but haven't tried these out yet -- if you have, please let me know what you think in the comments. They're available at www.globalhealthlearning.org. From an email:

We are pleased to announce the launch of six new eLearning courses on the U.S. Agency for International Development’s Global Health eLearning Center (www.globalhealthlearning.org):

  • Healthy Businesses: Familiarizes learners with strategies to design and deliver activities to ensure that commercial for-profit health care providers have the business, operational, and financial capacity to sustainably provide essential health services.
  • Male Circumcision: Policy and Programming: Provides learners with an overview of scientific evidence of male circumcision’s (MC's) protective effect against HIV transmission, the acceptability and safety of MC, challenges to MC program implementation, and policy and program guidance.

PEPFAR-related eLearning courses:

  • Data Use for Program Managers: Provides learners with a systematic approach to planning for the use of data, specifically within the field of HIV/AIDS.
  • Economic Evaluation Basics: Gives learners a basic understanding of the common methods used to conduct an economic evaluation and the role of economic evaluations in policy and program decision-making in the field of international public health.
  • Geographic Approaches to Global Health: Acquaints learners with spatial data and the use of such data to enhance the decision-making process for health program implementation in limited resource settings.
  • PEPFAR Next Generation Indicators Guidance: Allows learners to gain a better understanding about the newest version of the NGI Reference Guide and how the information contained in the guide can be used to report progress of PEPFAR programs within national monitoring and evaluation frameworks.

Thank you very much for your interest in and support of the Global Health eLearning Center!

h/t Kriti

How much can farming improve people's health?

The Economist opines on agriculture and micronutrient deficiencies:

Farming ought to be especially good for nutrition. If farmers provide a varied diet to local markets, people seem more likely to eat well. Agricultural growth is one of the best ways to generate income for the poorest, who need the most help buying nutritious food. And in many countries women do most of the farm work. They also have most influence on children’s health. Profitable farming, women’s income and child nutrition should therefore go together. In theory a rise in farm output should boost nutrition by more than a comparable rise in general economic well-being, measured by GDP.

In practice it is another story. A paper* written for the Delhi meeting shows that an increase in agricultural value-added per worker from $200 to $500 a year is associated with a fall in the share of the undernourished population from about 35% to just over 20%. That is not bad. But it is no better than what happens when GDP per head grows by the same amount. So agriculture seems no better at cutting malnutrition than growth in general.

Another paper† confirms this. Agricultural growth reduces the proportion of underweight children, whereas non-agricultural growth does not. But when it comes to stunting (children who do not grow as tall as they should), it is the other way round: GDP growth produces the benefit; agriculture does not. As a way to cut malnutrition, farming seems nothing special.

Why not? Partly because many people in poor countries buy, not grow, their food—especially the higher-value, more nutritious kinds, such as meat and vegetables. So extra income is what counts. Agriculture helps, but not, it seems, by enough.

How to talk about countries

Brendan Rigby, writing at WhyDev.org, has these useful tips for how to talk about countries and poverty and whatnot while avoiding terms like "Western" and "developing":

  • Qualify what you mean
  • Avoid generalisations altogether (highly recommended)
  • Use more discrete and established categories, such as Least Developed Countries (LDCs), or Low Income & Middle Income Countries, which have set criteria
  • Reference legitimate and recognised benchmarks such as the UNDP’s Human Development Index or the World Bank’s poverty benchmark (these have their own methodology problems)
  • Examine development issues and challenges of individual communities and countries in the context of regional geography, history and relations rather than losing countries within references to regions and continents. There is a big difference between ‘poverty in Africa’ and ‘poverty in Angola’ or ‘poverty in South Africa’.

Good rules to follow. I'm generally OK with using "low and middle income countries," except that I'm not sure "income" should be the standard by which everything is defined. I wish there were a benchmark that took into account human development, but was uncontroversial (ha!) and thus accepted by all, and then we could easily classify nations (and these naming conventions are, after all, useful shorthands) by that index without worrying about accuracy or offense. Until we get to that point, I think using clearly defined measures of income and qualifying what we mean is the best way forward when generalizing -- when that's necessary or helpful at all. Which is at least sometimes, and maybe often.

"Small Changes, Big Results"

The Boston Review has a whole new set of articles on the movement of development economics towards randomized trials. The main article is Small Changes, Big Results: Behavioral Economics at Work in Poor Countries and the companion and criticism articles are here. They're all worth reading, of course. I found them through Chris Blattman's new post "Behavioral Economics and Randomized Trials: Trumpeted, Attacked, and Parried." I want to re-state a point I made in the comments there, because I think it's worth re-wording to get it right. It's this: I often see the new randomized trials in economics compared to clinical trials in the medical literature. There are many parallels to be sure, but the medical literature is huge, and there's really only one subset of it that offers better parallels.

Within global health research there are a slew of large (and not so large), randomized (and other rigorous designs), controlled (placebo or not) trials that are done in "field" or "community" settings. The distinction is that clinical trials usually draw their study populations from a hospital or other clinical setting and their results are thus only generalizable to the broader population (external validity) to the extent that the clinical population is representative of the whole population; while community trials are designed to draw from everyone in a given community.

Because these trials draw their subjects from whole communities -- and they're often cluster-randomized so that whole villages or clinic catchment areas are the unit that's randomized, rather than individuals -- they are typically larger, more expensive, more complicated and pose distinctive analytical and ethical problems. There's also often room for nesting smaller studies within the big trials, because the big trials are already recruiting large numbers of people meeting certain criteria and there are always other questions that can be answered using a subset of that same population. [All this is fresh on my mind since I just finished a class called "Design and Conduct of Community Trials," which is taught by several Hopkins faculty who run very large field trials in Nepal, India, and Bangladesh.]
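
For readers who haven't run into the design before, here is a minimal sketch in Python of what cluster randomization looks like in practice; the village names and the cluster count are hypothetical. The point is simply that the village, not the individual, is the unit being randomized.

```python
import random

# Hypothetical cluster-randomized assignment: everyone in a treatment village
# receives the intervention; everyone in a control village does not.
random.seed(42)  # make the assignment reproducible

villages = [f"Village {i:02d}" for i in range(1, 21)]  # 20 hypothetical clusters

shuffled = random.sample(villages, k=len(villages))     # shuffle the clusters
treatment = sorted(shuffled[: len(villages) // 2])      # first half -> treatment
control = sorted(shuffled[len(villages) // 2:])         # second half -> control

print("Treatment clusters:", treatment)
print("Control clusters:  ", control)

# Because outcomes within a village are correlated, the analysis must account
# for clustering, which is one reason these trials need larger sample sizes
# than individually randomized designs.
```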

Blattman is right to argue for registration of experimental trials in economics research, as is done with medical studies. (For nerdy kicks, you can browse registered trials at ISRCTN.) But many of the problems he quotes Eran Bendavid describing in economics trials--"Our interventions and populations vary with every trial, often in obscure and undocumented ways"--can also be true of community trials in health.

Likewise, these trials -- which often take years and hundreds of thousands of dollars to run -- often yield a lot of knowledge about the process of how things are done. Essential elements include doing good preliminary studies (such as validating your instruments), having continuous qualitative feedback on how the study is going, and gathering extra data on "process" questions so you'll know why something worked or not, and not just whether it did (a lot of this is addressed in Blattman's "Impact Evaluation 2.0" talk). I think the best parallels for what that research should look like in practice will be found in the big community trials of health interventions in the developing world, rather than in clinical trials in US and European hospitals.

A formula for informed global health commentary

Here's a formula for intelligent conversation on pretty much anything in public health:

"[Method/Project/Tactic/Strategy X] is an awesome idea, and we need more of [X], but it can be challenging to do well because of problems with education / technology / resources, etc."

Now you know the secret. When you hear about Technology Y or Strategy Z, you can sound like a global health expert too.

I think this formulaic quality is one reason why there are fewer really good global health blogs than there are in some other fields. There are good ones -- Karen Grepin and Alanna Shaikh for starters -- and I can't quantify the shortfall, but there do seem to be more good blogs on economic development and aid work in general than global health in particular. (There are a lot of organizational blogs, of course, but they tend to be more self-promotional, and thus less interesting to a more critical reader.)

One possible reason is that the arguments in global health tend to be about the best way to do things, such as the best mix of resources or the right tactic for fighting a particular disease like malaria, rather than what we should be doing in the first place.

The truth is that a lot of the things we want to do in global health are inherently good. Vaccinating more children = good. Stopping disease outbreaks = good. More trained health care workers = good. More funding for [insert favorite disease] = good. And so on. Disagreements typically arise because advocates of these different approaches are sometimes pulling from the same pot of resources, but it's hard to argue that any single tactic or disease or organization should be getting less money.

Contrast that with the broader debates in development. Bill Easterly recently argued that "We don't know how to solve global poverty and that's a good thing." There's just so much still up for debate. Which leaves a lot more room for interesting commentary and argument than amongst global health experts. As a final example, I'll offer this Lancet article by several of my professors: "Can the world afford to save the lives of 6 million children each year?" (for the record, they answer "yes"). From their abstract:

"the lives of 6 million children could be saved each year if 23 proven interventions were universally available in the 42 countries responsible for 90% of child deaths in 2000."

Evaluation in education (and elsewhere)

Jim Manzi has some fascinating thoughts on evaluating teachers at the American Scene. Some summary outtakes:

1. Remember that the real goal of an evaluation system is not evaluation. The goal of an employee evaluation system is to help the organization achieve an outcome....

2. You need a scorecard, not a score. There is almost never one number that can adequately summarize the performance of complex tasks like teaching that are executed as part of a collective enterprise....

3. All scorecards are temporary expedients. Beyond this, no list of metrics can usually adequately summarize performance, either....

4. Effective employee evaluation is not fully separable from effective management

When you zoom out to a certain point, all complex systems in need of reform start to look alike, because they all combine social, political, economic, and technical challenges, and the complexity, irrationality, and implacability of human behavior rears its ugly head at each step of the process. The debates about tactics and strategy and evaluation for reforming American education or US aid policy or improving health systems or fostering economic development start to blend together, so that Manzi's conclusions sound oddly familiar:

So where does this leave us? Without silver bullets.

Organizational reform is usually difficult because there is no one, simple root cause, other than at the level of gauzy abstraction. We are faced with a bowl of spaghetti of seemingly inextricably interlinked problems. Improving schools is difficult, long-term scut work. Market pressures are, in my view, essential. But, as I’ve tried to argue elsewhere at length, I doubt that simply “voucherizing” schools is a realistic strategy...

Read the rest of his conclusions here.

Why World Vision should change, but won't

Note: I've edited the original title of this post to tone it down a bit. World Vision has recently come under fire for their plan to send 100,000 NFL t-shirts printed with the losing Super Bowl team's logo to the developing world. This gifts-in-kind strategy was criticized by many bloggers -- good summaries are at More Altitude and Good Intentions are Not Enough. Saundra S. of Good Intentions also explained why she thinks there hasn't been as much reaction as you might expect in the aid blogosphere:

So why does Jason, who did not know any better, get a barrage of criticism. Yet World Vision, with decades of experience, does not? Is it because aid workers think that the World Vision gifts-in-kind is a better program? No, that’s not what I’m hearing behind the scenes. Is it because World Vision handled their initial response to the criticism better? That’s probably a small part of it, I think Jason’s original vlog stirred up people’s ire. But it’s only a small part of the silence. Is it because we are all sick to death of talking about the problems with donated goods? That’s likely a small part of it too. I, for one, am so tired of this issue that I’d love to never have to write about it again.

But in the end, the biggest reason for the silence is aid industry pressure. I’ve heard from a few aid workers that they can’t write - and some can’t even tweet – about the topic because they either work for World Vision or they work for another nonprofit that partners with World Vision. Even people that don’t work for a nonprofit are feeling pressure. One independent blogger told of receiving emails from friends that work at World Vision imploring them not to blog about the issue.

While I was one of the critical commentators on the original World Vision blog post about the NFL shirt strategy, I haven't written about it yet here, and I feel compelled by Saundra S.'s post to do so. [Disclosure: I've never worked for World Vision even in my consulting work and -- since I'm writing this -- probably never will, so my knowledge of the situation is gleaned solely from the recent controversy.]

And now World Vision has posted a long response to reader criticisms, albeit without actually linking to any of those criticisms -- bad netiquette if you ask me. Saundra S. responds to the World Vision post with this:

Easy claims to make, but can you back them up with documentation? Especially since other non-profits of similar size and mission - Oxfam, Save the Children, American Red Cross, Plan USA - claim very little as gifts-in-kind on their financial statements. So how is it that World Vision needs even more than the quarter of a billion dollars worth of gifts-in-kind each year to run their programs? To be believed, you will need to back up your claims with documentation including: needs assessments, a market analysis of what is available in the local markets and the impact on the market of donated goods (staff requests do not equal a market analysis), an independent evaluation of both the NFL donations (after 15 years you should have done at least one evaluation) and an independent evaluation of your entire gifts-in-kind portfolio. You should also share the math behind how World Vision determined that the NFL shirts had a Fair Market Value - on the date of donation - of approximately $20 each. And this doesn't even begin to hit on the issues with World Vision's marketing campaigns around GIK. Why keep perpetuating the Whites in Shining Armor image.

So to summarize Saundra S.'s remaining questions:

1. Can WV actually show that they rigorously assess the needs of the communities they work in for gifts-in-kind (GIK), especially beyond just "our staff requested them"?

2. Why does WV use a much larger share of GIK than other similarly sized nonprofits?

3. Has WV tried to really evaluate the results of this program? (If not, that's ridiculous after 15 years.)

4. How did WV calculate the 'fair market value' for these shirts? (This one has an impact on how honestly WV is marketing itself and its efficiency.)

Other commenters at the WV response (rgailey33 and "Bill Westerly") raise further questions:

5. Does WV know / care where the shirts come from and how their production impacts people?

6. Rather than apparently depending on big partners like the NFL to help spread the word about what WV is doing and, yes, drive more potential donors to WV's website (not in itself a bad thing), shouldn't they be doing more to help partners like the NFL -- and the public they can reach -- realize that t-shirts aren't a solution to global poverty? After all, wouldn't it be much more productive to include the NFL in a discussion of how to reform the global clothing and merchandise industries to be less exploitative?

7. WV must have spent a lot of money shipping these things... isn't there something better they could do with all that money? And expanding on that:

Opportunity cost, opportunity cost, opportunity cost. The primary reason I'm critical of World Vision is that there are so many things they could be doing instead!

For a second, let's assume that GIK doesn't have any negative or positive effects -- let's pretend it has absolutely no impact whatsoever. (In fact, this may be a decently good approximation of reality.) Even then, WV would have to account for how much they spent on the programs. How much did WV spend on staff time, administrative costs like facilities, and field research by their local partners coordinating donations with the NFL and other corporate groups? On receiving, sorting, shipping, paying import taxes, and distributing their gifts-in-kind? If they've distributed 375,000 shirts over the last few years, and done all of the background research they describe as being necessary to be sensitive to local needs... I'm sure it's an awful lot of money, surely in the millions.

Amy at World Vision is right that their response will likely dispel some criticism, but not all. But that's not because we critics are a particularly cantankerous bunch -- we just think they could be doing better. Her response shows that, at least in one sense, they are a lot better than Jason of the 1 Million Shirts fiasco, if they're spreading the shirts out and doing local research on needs -- but those things are more about minimizing potential harm than they are about maximizing impact. In short, World Vision's defense seems to be "hey, what we're doing isn't that bad" when really they should be saying "you know what? there are lots of things we could be doing instead of this that would have much greater impact." So in another way World Vision is much worse than Jason: they have enough experts on these things to know that this sort of program has very little likelihood of pulling anyone out of poverty, they know there are better things they could be doing with the same money, and they still do it.

To get to why I think that's the case, let's go back to WV's response to the GIK controversy. From Amy:

At the same time, I’ll also let you know that, among our staff, there is a great deal of agreement with some of the criticisms that have been posted here and elsewhere in the blogosphere.  In my conversations, I’ve heard overwhelming agreement that product distribution done poorly and in isolation from other development work is, in fact, bad aid.  To be sure, no one at World Vision believes that a tee shirt, in and of itself, is going to improve living conditions and opportunities in developing communities. In addition, World Vision doesn’t claim that GIK work alone is sustainable.  In fact, no aid tactic, in and of itself, is sustainable.  But if used as a tool in good development work, GIK can facilitate good, sustainable development.

There are obviously a lot of well-intentioned and smart people at World Vision, and from this it sounds like there are differences of opinion as to the value of GIK aid. One charitable way of looking at the situation is to assume that employees at WV who doubt the program's impact justify its use as a marketing tool -- but if that's the case they should classify it as a marketing expense, not a programmatic one. I imagine the doubts run deeper, but it's pretty hard for someone at any but the most senior of levels to greatly change things from inside the organization, because it's simply too ingrained in how WV works. Clusters of jobs at WV are probably devoted to tasks related to this part of their work: managing corporate partnerships, coordinating the logistics of the donations, and coordinating their distribution.

One small hope is that this controversy is giving cover to some of those internal critics, as the bad publicity associated with it may negate the positive marketing value they normally get from GIK programs. Maybe a public shaming is just what is needed?

[I really hope I get to respond to this post in 6 months or a year and say that I was wrong, that World Vision has eliminated the NFL program and greatly reduced their share of GIK programs... but I'm not holding my breath.]

Microfinance Miscellany

I had a conversation yesterday with a PhD student friend (also in international health) about the evaluation of microcredit programs. I was trying to summarize -- off the top of my head, never a good idea! -- recent findings, and wasn't able to communicate much. But I did note that like many aid and development programs, you get a pretty rosy picture when you're using case studies or cherry-picked before-and-after evaluations without comparison groups. So I was trying to describe what it looks like to do rigorous impact evaluations that account for the selection biases you get if you're just comparing people who self-select for taking out loans versus controls. After that discussion, I was quite happy to come across this new resource on David Roodman's blog: yesterday DFID released a literature review of microfinance impacts in Africa.

On a related note, Innovations for Poverty Action hosted a conference on microfinance evaluation last October, and many of the presentations and papers presented are available here. The "What Are We Learning About Impacts?" section includes presentations given by Abhijit Banerjee (PDF) and Dean Karlan (PDF) of Yale. Worth reading.

Haiti: Constancy and Change

A friend of mine recently moved to Haiti to work for a local organization. I've never been to Haiti, and as with many places to which I have yet to travel, it's difficult for me to picture the reality on the ground, especially when I know how much the places I've traveled to have differed from media reports and books I've read. While my friend and I were talking about Haiti, I mentioned that it would be interesting if I could email some questions and post the answers here on my blog. While I don't think any of the sentiments below are that controversial, I hope this will be a continuing series where I can ask questions and get frank answers (and share them with my readers), so we decided to keep it anonymous. I'll call my friend "F" here. Please let me know (in the comments or by email) if you have any questions you'd like me to relay to F for follow-up posts.

Brett: Can you tell me a little about how long you've been in Haiti, how long you lived there in the past, and what you're doing now (in a vague sense)?

F: I spent nearly a year in Haiti in 2005-06. I always knew I'd be back some time, and after the earthquake on January 12th, 2010, I regretted that I hadn't returned sooner. I finally arrived back a few weeks ago, to take up a new position with the same organization I worked for five years ago.

Brett: How have things changed since the last time you were there? Did you have a lot of expectations about how things would be post-earthquake, and if so, how does the reality compare to what you were expecting?

F: Of course it's very sad to see so many landmarks in Port-au-Prince reduced to rubble, and what used to be great public spaces packed full of thousands and thousands of people living under tents and blue tarpaulins. Walking around the city is a little creepy: I'll wander down streets I know well, and find that a house or church I used to pass every day is gone.

But I've also been surprised by how much hasn't changed. The same fruit vendor I used to buy from five years ago still sits on the same street corner with her basket of oranges - even though the grocery store behind her has completely vanished. From my first morning back in the office, catching up with old friends and co-workers, it was as if I'd never left. Knowing how Haiti had switched from being a developing country to being (in international NGO terms) a humanitarian emergency, I think I was expecting to see some kind of fundamental change in the way things happen here. In reality, while the problems are perhaps more urgent now, the way of life is just the same as before.

Brett: What's the latest on cholera? Is everyone incredibly concerned, or is it just one crisis among many?

F: I think people see cholera as yet another disaster in a terrible year for the country. It's very sad that cholera seems most probably to have been brought here by the UN "assistance" force (which was already almost-universally reviled among Haitian people). However, I have to say I've been genuinely impressed with the speed and effectiveness of the response by the government and NGOs. I'm as cynical as anyone else about how little there is to show for years and years of public health efforts by international NGOs in Haiti: but this time, they seem to have got it more or less right. I arrived only two weeks after the outbreak started, and already by then everybody I met knew exactly what the steps for prevention were. I see people living in even the most basic conditions being meticulously careful about washing their hands and chlorinating their water.

Last week I was visiting a rural community, and I met a woman who was using water from an irrigation channel to wash her pots and pans. My colleagues, and also the local woman who was showing us around, were furious, telling her in no uncertain terms that her children will die of cholera if she continues doing that. But three months ago, it would have been completely normal.

Brett: What do you think I'm missing about Haiti from reading the news and the occasional blog?

F: Wow, where to start? I don't think that the journalistic staples of tent cities, cholera, rock-throwing demonstrators, and heroic Americans battling against poverty give you much idea of what life in Haiti is really like. Perhaps what would most surprise an outsider is just how normal life here is most of the time. For example, Haiti was again in the international headlines with post-election protests in December. It's true that most people stayed at home for a couple of days while the situation was tense. But on the third day things started quietening down - and by the fourth day, the merchants were back on the streets, children were again hurrying to school in their little checkered uniforms, and the morning traffic jams were as bad as ever. Haitian people have seen a lot of political upheaval and many natural disasters over the years, they've seen international attention come and go, and life has carried on throughout.

There's a fascinating story waiting to be told about the social and economic effects of the 2010 earthquake. Almost every newspaper article I read about Haiti starts by describing it as the poorest country in the western hemisphere. That's true - but the situation is far more complex than that. This country has a lot of very poor people, but also quite a number of reasonably wealthy people too, and some super-rich. (Port-au-Prince has long had a Porsche dealership, believe it or not.) Before the earthquake, the level of inequality in Haiti was even higher than Brazil. Of course the earthquake was indiscriminate: it hit rich and poor alike, destroying the National Palace and the Montana Hotel as well as tens of thousands of single-room block-and-tin-roof houses. But this destruction of houses (combined with an enormous influx of foreigners, who all need a place to stay) has meant a huge increase in the price of accommodation, and a boom for landlords whose property was not damaged. My landlady is frantically adding extensions to our apartment building: that means she's employing a dozen or so construction workers, which is great. Some jobs are being created, but at the same time inflation is soaring. Then there's the complication of the massive internal migrations caused by the earthquake. I don't think anyone really knows what all this means for the long term, but it would be great to see some informed analysis.

Most of all, while there's a lot that's going wrong in Haiti, I wish the media would sometimes mention some of the great things about the country: the lively kompa music which surrounds you constantly in the street, the colorful, expressive language, the way Haitian people are so scrupulously polite and courteous (even among the urban youth, or more so than you'd expect), and the way they have such a strong sense of identity and of their proud history. Coming back has also made me realise how I had missed the Haitian sense of humor. When I get on a bus in the city and ask the people next to me how they're doing, I sometimes get a response of "lamizè ap kraze nou": "we're crushed by misery" - that seems to be the sort of thing people expect foreigners want to hear. But then more often than not, before we've gone a hundred yards down the road, my neighbors are laughing and joking with me - often teasing me about my terrible Creole. People here are certainly resilient: even after all the troubles and tragedy of the last 12 months, they are still able to find reasons to be cheerful.

Gates and Media Funding

You may or may not have heard of this controversy: the Gates Foundation -- a huge funding source in global health -- has been paying various media sources to ramp up their coverage of global health and development issues. It seems to me that various voices in global health have tended to respond to this as you might expect them to, based on their more general reactions to the Gates Foundation. If you like most of what Gates does, you probably see this as a boon, since global health and development (especially if you exclude disaster/aid stories) aren't the hottest issues in the media landscape. If you're skeptical of the typical Gates Foundation solutions (technological fixes, for example) then you might think this is more problematic.

I started off writing some lengthy thoughts on this, and realized Tom Paulson at Humanosphere has already said some of what I want to say. So I'll quote from him a bit, and then finish with a few more of my own thoughts. First, here is an interview Paulson did with Kate James, head of communications at the Gates Foundation. An excerpt:

Q Why does the Gates Foundation fund media?

Kate James: It’s driven by our recognition of the changing media landscape. We’ve seen this big drop-off in the amount of coverage of global health and development issues. Even before that, there was a problem with a lack of quality, in-depth reporting on many of these issues so we don’t see this as being internally driven by any agenda on our part. We’re responding to a need.

Q Isn’t there a risk that by paying media to do these stories the Gates Foundation’s agenda will be favored, drowning out the dissenting voices and critics of your agenda?

KJ: When we establish these partnerships, everyone is very clear that there is total editorial independence. How these organizations choose to cover issues is completely up to them.

The most recent wave of controversy seems to stem from Gates funding going to an ABC documentary on global health that featured clips of Bill and Melinda Gates, among other things. Paulson writes about that as well. Reacting to a segment on Guatemala, Paulson writes:

For example, many would argue that part of the reason for Guatemala’s problem with malnutrition and poverty stems from a long history of inequitable international trade policies and American political interference (as well as corporate influence) in Central America.

The Gates Foundation steers clear of such hot-button political issues and we’ll see if ABC News does as well. Another example of a potential “blind spot” is the Seattle philanthropy’s tendency to favor technological solutions — such as vaccines or fortified foods — as opposed to messier issues involving governance, industry and economics.

A few additional thoughts:

Would this fly in another industry? Can you imagine a Citibank-financed investigative series on the financial industry? That's probably a bad example for several reasons, including the Citibank-Gates comparison and the fact that the financial industry is not underreported. I'm having a hard time thinking of a comparable example: an industry that doesn't get much news coverage, where a big actor funded the media -- if you can think of an example, please let me know.

Obviously this induces a bias in the coverage. To say otherwise is pretty much indefensible to me. Think of it this way: if Noam Chomsky had a multi-billion dollar foundation that gave grants to the media to increase news coverage of international development, but did not have specific editorial control, would that not still bias the resulting coverage? Would an organization a) get those grants if it were not already likely to cover the subject with at least a gentle, overall bias towards Chomsky's point of view, or b) continue to get grants for new projects if it widely ridiculed Chomsky's approach? It doesn't have to be Chomsky -- take your pick of someone with clearly identifiable positions on international issues, and you get the same picture. Do the communications staffers at the Gates Foundation need to personally review the story lines for this sort of bias to creep in? Of course not.

Which matters more: the bias or the increased coverage? For now I lean towards increased coverage, but this is up for debate. It's really important that the funding be disclosed (as I understand it has been). It would also be nice if there was enough public demand for coverage of international development that the media covered it in all its complexity and difficulty and nuance without needing support from a foundation, but that's not the world we live in for now. And maybe the funded coverage will ultimately result in more discussion of the structural and systemic roots of international inequality, rather than just "quick fixes."

[Other thoughts on Gates and media funding by Paul Fortner, the Chronicle of Philanthropy, and (older) LA Times.]

Randomizing in the USA

The NYTimes posted this article about a randomized trial in New York City:

It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without.

Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.

Dean Karlan at Innovations for Poverty Action responds:

It always amazes me when people think resources are unlimited. Why is "scarce resource" such a hard concept to understand?

I think two of the most important points here are that a) there weren't enough resources for everyone to get the services anyway, so they're just changing the decision-making process for who gets the service from first-come-first-served (presumably) to randomized, and b) studies like this can be ethical when there is reasonable doubt about whether a program actually helps or not. If it were firmly established that the program is beneficial, then it's unethical to test it, which is why you can't keep testing a proven drug against placebo.
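
To make point (a) concrete, here is a minimal sketch in Python; the household IDs and the number of slots are hypothetical. The same fixed number of households is served either way; the lottery just changes who gets the slots and, as a side effect, creates a comparison group.

```python
import random

# Hypothetical scarce-resource allocation: 100 eligible households, funding for 50.
applicants = [f"household_{i:03d}" for i in range(100)]
slots = 50

random.seed(0)  # reproducible lottery

# First-come-first-served: whoever applied earliest fills the slots.
fcfs_recipients = applicants[:slots]

# Randomized allocation: a lottery among all eligible applicants.
lottery_recipients = set(random.sample(applicants, k=slots))
lottery_controls = [a for a in applicants if a not in lottery_recipients]

# Either way exactly `slots` households are served; only the lottery leaves
# behind a control group that is statistically comparable to the treated group.
assert len(fcfs_recipients) == len(lottery_recipients) == slots
```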

However, this is good food for thought for those who are interested in doing randomized trials of development initiatives in other countries. It shows how individuals react to being treated as "test subjects" here in the US -- and why should we expect people in other countries to feel differently? That said, a lot of randomized trials don't get this sort of pushback. I'm not familiar with this program beyond what I read in this article, but it's possible that more could have been done to communicate the purpose of the trial to the community, activists, and the media.

There are some interesting questions raised in the IPA blog comments as well.

Afraid

Here are two semi-related articles: one by William Easterly about how aid to Ethiopia is propping up an oppressive regime, and another by Rory Carroll on the pernicious but well-intentioned effects of aid tourism in Haiti. Basically, it's really hard to do things right, because international aid and development are not simple. Good intentions are not enough. You can mess up by funneling all your money through a central regime, or by having an uncoordinated, paternalistic mess.

A couple confessions. First, I'm a former "aid tourist." In high school and college I went on short-term trips to Mexico, Guyana, and Zambia (and slightly different experiences elsewhere). My church youth group went to Torreon, Mexico and helped build a church (problematize that). In Guyana and Zambia I was part of medical groups that ostensibly aimed to improve the health of the local people; in hindsight neither project could have possibly had any lasting effects on health, and likely fostered dependency.

Second, I'm an aspiring public health / development professional, and I'm afraid. I don't want to be the short-term, uncoordinated, reinventing-the-wheel, well-intentioned aid vacationer -- and I think given my education (and the experience I hope to continually gain) I'm more likely to avoid at least some of those shortcomings. But I'm scared that my work might prop up nasty regimes, or satiate a bloated aid industry that justifies its projects to sustain itself, or give me the false impression of doing good while actually doing harm.

I think the first step to doing better is being afraid of these things, but I'm still learning where to go from here.