Smartphones on the cheap

Here's a quick digression from global health that I thought might be interesting to tech-minded folks. nsnippets, a fascinating link blog (found via Blattman), has a post called "China's 65 dollar smartphones" that caught my attention, because I (sort of) have one of these phones. That post highlights a Technology Review piece, "Here's where they make China's cheap Android smartphones." And here's more on even cheaper phones.

Before moving to Ethiopia I was stuck in a T-Mobile contract that was poor value for money, with a glitchy phone. Since I'm only back in the US for about 5 months finishing my last semester of grad school, I resolved to get an unlocked phone that I could use in the US or abroad, on whatever network I liked, and at a grad student price. I bought one on Amazon from "China Global Inc.," shipped by some third party directly from China. The exact model isn't available anymore, but you can find similar phones by searching on Amazon for "Unlocked Quad Band Dual Sim Android 4.0 OS." It gets some incredible double-take reactions because it looks almost exactly like an iPhone from the front, but on the back it has the Android logo and just says "Smartphone":

It cost just $135, and I use a $30/month prepaid plan (also T-Mobile) with 100 minutes of talk (which is about right for my usage), unlimited text, and unlimited data -- and I'm not locked in at all. My first-year cost for this Android smartphone: $495. If you buy an iPhone 5 on Verizon your annual costs are, depending on your contract, in the $920 to $1400 range! I'm sure for some the differences between what I have and a brand new iPhone 5 with 4G (my phone is 3G) are worth $500-1000 annually, but it works for texting, email, search, Twitter, music, games, and so forth -- everything I want.
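
The arithmetic is simple enough to spell out in a few lines of Python, using only the prices quoted above (the Verizon figures are just the quoted range, not a full pricing model):

```python
# First-year cost: one-time phone price plus 12 months of service.
# Prices are the ones quoted above; this is illustration, not a pricing model.
def first_year_cost(phone_price, monthly_plan):
    return phone_price + 12 * monthly_plan

unlocked = first_year_cost(135, 30)    # $135 phone + $30/month prepaid = $495
verizon_low, verizon_high = 920, 1400  # quoted annual range for an iPhone 5

print(f"Unlocked Android, year one: ${unlocked}")
print(f"Savings vs iPhone 5 on Verizon: ${verizon_low - unlocked} to ${verizon_high - unlocked}")
```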

I can't imagine that everyone with the latest smartphone actually 'needs' it -- in the sense that, if they knew there were good alternatives, they would still think the difference is worth the cost. American phone plans are generally incredibly overpriced, leaving you stuck in a cycle of buying premium products -- which are nice -- but ironically being locked into keeping them until they're well past premium. I think what's happening is that as long as most of your friends have high-priced phones with expensive contracts, that's the norm and the price seems less absurd.

This beautiful graphic is not really that useful

This beautiful infographic from the excellent blog Information is Beautiful has been making the rounds. You can see a bigger version here, and it's worth poking around for a bit. The creators take all deaths from the 20th century (drawing from several sources) and represent their relative contribution with circles:

I appreciate their footnote that says the graphic has "some inevitable double-counting, broad estimation and ball-park figures." That's certainly true, but the inevitably approximate nature of these numbers isn't my beef.

The problem is that I don't think raw numbers of deaths tell us very much, and they can actually be quite misleading. Someone who saw only this infographic might well end up less well-informed than if they hadn't seen it. Looking at the red circles you get the impression that non-communicable and infectious diseases were roughly equivalent in importance in the 20th century, followed by "humanity" (war, murder, etc) and cancer.

The root problem is that mortality is inevitable for everyone, everywhere. This graphic lumps together pneumonia deaths at age 1 with car accidents at age 20, and cancer deaths at 50 with heart disease deaths at 80. We typically don't (and I would argue shouldn't) assign the same weight to a death in childhood or the prime of life as to one that comes at the end of a long, satisfying life. The end result is that this graphic greatly overemphasizes the importance of non-communicable diseases in the 20th century -- and that's the impression most laypeople will walk away with.

A more useful graphic might use the same circles to show the years of life lost (or something like DALYs or QALYs), because those get a bit closer to what we care about. No single number is actually all that great, so we can get a better understanding if we look at several different outcomes (which is one problem with any single visualization). But I think raw mortality numbers are particularly misleading.
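
To make that concrete, here's a minimal sketch of the difference between counting deaths and counting years of life lost. The numbers are invented purely for illustration, and real YLL calculations use age- and sex-specific life tables rather than a single reference age:

```python
# Invented numbers, purely to illustrate how death counts and years of
# life lost (YLL) can rank causes very differently.
REFERENCE_AGE = 80  # single reference life expectancy (real YLLs use life tables)

# (cause, deaths, typical age at death) -- hypothetical values
causes = [
    ("childhood pneumonia", 1_000_000, 1),
    ("heart disease",       2_000_000, 78),
]

for cause, deaths, age_at_death in causes:
    yll = deaths * max(0, REFERENCE_AGE - age_at_death)
    print(f"{cause}: {deaths:,} deaths, {yll:,} years of life lost")

# Heart disease has twice the deaths, but pneumonia has ~20x the YLL --
# circles sized by YLL would look completely different.
```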

To be fair, this graphic was commissioned by Wellcome as "artwork" for a London exhibition, so maybe it should be judged by a different standard...

First responses to DEVTA roll in

In my last post I highlighted the findings from the DEVTA trial of deworming and Vitamin A in India, noting that the Vitamin A results would be more controversial. I said I expected commentaries over the coming months, but we didn't have to wait that long after all. First, a BBC Health Check program features a discussion of DEVTA with Richard Peto, one of the study's authors. It's for a general audience so it doesn't get very technical, but because of that it really grated when they described this as a "clinical trial," as that term has certain connotations of rigor that aren't reflected in the design of the study. If DEVTA is a clinical trial, then so was

Peto also says there were two reasons for the massive delay in publishing the trial: 1) time to check things and "get it straight," and 2) that they were "afraid of putting up a trial with a false negative." [An aside for those interested in publication bias issues: can you imagine an author with strong positive findings ever saying the same thing about avoiding false positives?!]

Peto ends by sounding fairly neutral re: Vitamin A (portraying himself in a middle position between advocates in favor and skeptics opposed) but acknowledges that with their meta-analysis results Vitamin A is still "cost-effective by many criteria."

Second is a commentary in The Lancet by Al Sommer, Keith West, and Reynaldo Martorell. A little history: Sommer ran the first big Vitamin A trials in Sumatra (published in 1986) and is the former dean of the Johns Hopkins School of Public Health. (Sommer's long-term friendship with Michael Bloomberg, who went to Hopkins as an undergrad, is also one reason the latter is so big on public health.) For more background, here's a recent JHU story on Sommer receiving a $1 million research prize in part for his work on Vitamin A.

Part of their commentary is excerpted below, with my highlights in bold:

But this was neither a rigorously conducted nor acceptably executed efficacy trial: children were not enumerated, consented, formally enrolled, or carefully followed up for vital events, which is the reason there is no CONSORT diagram. Coverage was ascertained from logbooks of overworked government community workers (anganwadi workers), and verified by a small number of supervisors who periodically visited randomly selected anganwadi workers to question and examine children who these workers gathered for them. Both anganwadi worker self-reports, and the validation procedures, are fraught with potential bias that would inflate the actual coverage.

To achieve 96% coverage in Uttar Pradesh in children found in the anganwadi workers' registries would have been an astonishing feat; covering 72% of children not found in the anganwadi workers' registries seems even more improbable. In 2005–06, shortly after DEVTA ended, only 6·1% of children aged 6–59 months in Uttar Pradesh were reported to have received a vitamin A supplement in the previous 6 months according to results from the National Family Health Survey, a national household survey representative at national and state level.... Thus, it is hard to understand how DEVTA ramped up coverage to extremely high levels (and if it did, why so little of this effort was sustained). DEVTA provided the anganwadi workers with less than half a day's training and minimal if any incentive.

They also note that the study funding was minimal compared with more rigorous studies, which may be an indication of its quality. And as an indication that there will almost certainly be alternative meta-analyses that weight the different studies differently:

We are also concerned that Awasthi and colleagues included the results from this study, which is really a programme evaluation, in a meta-analysis in which all of the positive studies were rigorously designed and conducted efficacy trials and thus represented a much higher level of evidence. Compounding the problem, Awasthi and colleagues used a fixed-effects analytical model, which dramatically overweights the results of their negative findings from a single population setting. The size of a study says nothing about the quality of its data or the generalisability of its findings.
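
A quick note on that fixed-effects point, since it's the crux of the statistical objection: a fixed-effect meta-analysis weights each study by the inverse of its variance, so one enormous study like DEVTA can swamp everything else, while a random-effects model adds a between-study variance term that pulls the weights back toward equality. Here's a minimal sketch with invented numbers (not the actual studies in the meta-analysis), using the standard DerSimonian-Laird estimator:

```python
import numpy as np

# Invented log risk ratios and variances, purely for illustration --
# NOT the actual studies in the Awasthi and colleagues meta-analysis.
log_rr = np.array([np.log(0.70), np.log(0.75), np.log(0.96)])
var = np.array([0.02, 0.03, 0.001])  # the huge study has a tiny variance

# Fixed-effect pooling: inverse-variance weights
w_fixed = 1 / var
pooled_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)

# DerSimonian-Laird random effects: estimate between-study variance tau^2
q = np.sum(w_fixed * (log_rr - pooled_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)
w_random = 1 / (var + tau2)
pooled_random = np.sum(w_random * log_rr) / np.sum(w_random)

print("fixed-effect pooled RR:  ", round(float(np.exp(pooled_fixed)), 2))
print("random-effects pooled RR:", round(float(np.exp(pooled_random)), 2))
print("weight on the huge study: fixed {:.0%}, random {:.0%}".format(
    w_fixed[2] / w_fixed.sum(), w_random[2] / w_random.sum()))
```

With these made-up inputs the huge null study gets over 90% of the weight under the fixed-effect model but under half the weight under random effects, which is exactly the overweighting Sommer and colleagues are complaining about.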

I'm sure there will be more commentaries to follow. In my previous post I noted that I'm still trying to wrap my head around the findings, and I think that's still right. If I had time I'd dig into this a bit more, especially the relationship with the Indian National Family Health Survey. But for now I think it's safe to say that two parsimonious explanations for how to reconcile DEVTA with the prior research are emerging:

1. DEVTA wasn't all that rigorous and thus never achieved the high population coverage levels necessary to have a strong mortality impact; the mortality impact was attenuated by poor coverage, resulting in the absence of the statistically significant effect found in prior studies. Thus it shouldn't move our priors all that much. (Sommer and colleagues seem to be arguing for this.) Or,

2. There's some underlying change in the populations between the older studies and these newer studies that causes the effect of Vitamin A to decline -- this could be nutrition, vaccination status, shifting causes of mortality, etc. If you believe this, then you might discount studies because they're older.

(h/t to @karengrepin for the Lancet commentary.)

A massive trial, a huge publication delay, and enormous questions

It's been called the "largest clinical* trial ever": DEVTA (Deworming and Enhanced ViTamin A supplementation), a study of Vitamin A supplementation and deworming in over 2 million children in India, just published its results. "DEVTA" may mean "deity" or "divine being" in Hindi, but some global health experts and advocates will probably think these results come straight from the devil. Why? Because they call into question -- or at least attenuate -- our estimates of the effectiveness of some of the easiest, best "bang for the buck" interventions out there. Data collection was completed in 2006, but the results were only just published in The Lancet. Why the massive delay? According to the accompanying discussion paper, it sounds like the delay was rooted in very strong resistance to the results after preliminary outcomes were presented at a conference in 2007. If it weren't for the repeated and very public shaming by the authors of recent Cochrane Collaboration reviews, we might not have the results even today. (Bravo again, Cochrane.)

So, about DEVTA. In short, this was a randomized 2x2 factorial trial, like so:

The results were published as two separate papers, one on Vitamin A and one on deworming, with an additional commentary piece:

The controversy is going to be more about what this trial didn't find than what it did: the confidence interval on the Vitamin A study's mortality estimate (mortality ratio 0.96, 95% confidence interval of 0.89 to 1.03) is consistent with anything from an 11% mortality reduction to a 3% increase. The consensus from previous Vitamin A studies was mortality reductions of 20-30%, so this is a big surprise. Here's the abstract of that paper:

Background

In north India, vitamin A deficiency (retinol <0·70 μmol/L) is common in pre-school children and 2–3% die at ages 1·0–6·0 years. We aimed to assess whether periodic vitamin A supplementation could reduce this mortality.

Methods

Participants in this cluster-randomised trial were pre-school children in the defined catchment areas of 8338 state-staffed village child-care centres (under-5 population 1 million) in 72 administrative blocks. Groups of four neighbouring blocks (clusters) were cluster-randomly allocated in Oxford, UK, between 6-monthly vitamin A (retinol capsule of 200 000 IU retinyl acetate in oil, to be cut and dripped into the child’s mouth every 6 months), albendazole (400 mg tablet every 6 months), both, or neither (open control). Analyses of retinol effects are by block (36 vs 36 clusters).

The study spanned 5 calendar years, with 11 6-monthly mass-treatment days for all children then aged 6–72 months.  Annually, one centre per block was randomly selected and visited by a study team 1–5 months after any trial vitamin A to sample blood (for retinol assay, technically reliable only after mid-study), examine eyes, and interview caregivers. Separately, all 8338 centres were visited every 6 months to monitor pre-school deaths (100 000 visits, 25 000 deaths at ages 1·0–6·0 years [the primary outcome]). This trial is registered at ClinicalTrials.gov, NCT00222547.

Findings

Estimated compliance with 6-monthly retinol supplements was 86%. Among 2581 versus 2584 children surveyed during the second half of the study, mean plasma retinol was one-sixth higher (0·72 [SE 0·01] vs 0·62 [0·01] μmol/L, increase 0·10 [SE 0·01] μmol/L) and the prevalence of severe deficiency was halved (retinol <0·35 μmol/L 6% vs 13%, decrease 7% [SE 1%]), as was that of Bitot’s spots (1·4% vs 3·5%, decrease 2·1% [SE 0·7%]).

Comparing the 36 retinol-allocated versus 36 control blocks in analyses of the primary outcome, deaths per child-care centre at ages 1·0–6·0 years during the 5-year study were 3·01 retinol versus 3·15 control (absolute reduction 0·14 [SE 0·11], mortality ratio 0·96, 95% CI 0·89–1·03, p=0·22), suggesting absolute risks of death between ages 1·0 and 6·0 years of approximately 2·5% retinol versus 2·6% control. No specific cause of death was significantly affected.

Interpretation

DEVTA contradicts the expectation from other trials that vitamin A supplementation would reduce child mortality by 20–30%, but cannot rule out some more modest effect. Meta-analysis of DEVTA plus eight previous randomised trials of supplementation (in various different populations) yielded a weighted average mortality reduction of 11% (95% CI 5–16, p=0·00015), reliably contradicting the hypothesis of no effect.

Note that instead of just publishing these no-effect results and leaving the meta-analysis to a separate publication, the authors go ahead and do their own meta-analysis of DEVTA plus previous studies and report that -- much attenuated, but still positive -- effect in their conclusion. I think that's a fair approach, but it also reveals that the study's authors very much believe there are large Vitamin A mortality effects, despite the outcome of their own study!

[The only media coverage I've seen of these results so far comes from the Times of India, which includes quotes from the authors and Abhijit Banerjee.]

To be honest, I don't know what to make of the inconsistency between these findings and previous studies, and am writing this post in part to see what discussion it generates. I imagine there will be more commentaries on these findings over the coming months, with some decrying the results and methodologies and others seeing vindication in them. In my view the best possible outcome is an ongoing concern for issues of external validity in biomedical trials.

What do I mean? Epidemiologists tend to think that external validity is less of an issue in randomized trials of biomedical interventions -- as opposed to behavioral, social, or organizational trials -- but this isn't necessarily the case. Trials of vaccine efficacy have shown quite different efficacy for the same vaccine (see BCG and rotavirus) in different locations, possibly due to differing underlying nutritional status or disease burdens. Our ability to interpret discrepant findings can only be as sophisticated as the available data allows, or as sophisticated as allowed by our understanding of the biological and epidemiologic mechanisms that matter on the pathway from intervention to outcome. We can't go back in time and collect additional information (think nutrition, immune response, baseline mortality, and so forth) on studies far in the past, but we can keep such issues in mind when designing trials moving forward.

All that to say, these results are confusing, and I look forward to seeing the global health community sort through them. Also, while the outcomes here (health outcomes) are different from those in the Kremer deworming study (education outcomes), I've argued before that lack of effect or small effects on the health side should certainly influence our judgment of the potential education outcomes of deworming.

*I think given the design it's not that helpful to call this a 'clinical' trial at all -- but that's another story.

Note to job seekers

The first question I've had in several recent job interviews and conversations has been "Do you speak French?" (I don't.) Not that it's impossible to find work if you don't -- but it certainly seems to be a major asset. If you want to work in global health, take note.

Monday miscellany

On regressions and thinking

Thesis: thinking quantitatively changes the way we frame and answer questions in ways we often don't notice. James Robinson, of Acemoglu and Robinson fame (ie, Why Nations Fail (@whynationsfail), Colonial Origins, Reversal of Fortune, and so forth), gave a talk at Princeton last week. It was a good talk, mostly about Why Nations Fail. My main thought during his presentation was that it's simply very difficult to develop a parsimonious theory that covers something as complicated as the long-term economic and political development of the entire world! As Robinson said (quoting someone else), in social science you can say "more and more about less and less, or less and less about more and more."

The talk was followed by some great discussion, where several of the tougher questions came from sociologists and political economists. I think it's safe to say that a lot of the skepticism of the Why Nations Fail thesis is centered on the beef that East Asian economies, and especially China, don't fit neatly into it. A&R argue here on their blog -- not to mention in their book, which I've alas only had time to skim -- that China is not an exception to their theory, but I think that impression is still fairly widespread.

But my point isn't about the extent to which China fits into the theory (that's another debate altogether); it's about what it means if or when China doesn't fit into the theory. Is that a major failure or a minor one?  I think different answers to that question are ultimately rooted in a difference of methodological cultures in the social science world.

As social science becomes more quantitative, our default method for thinking about a subject can shift, and we might not even notice that it's happening. For example, if your main form of evidence for a theory is a series of cross-country regressions, then you automatically start to think of countries as the unit of analysis, and, importantly, as being more or less equally weighted. There are natural and arguably inevitable reasons why this will be the case: states are the clearest politicoeconomic units, and even if they weren't they're simply the unit for which we have the most data. While you might (and almost certainly should!) weight your data points by population if you were looking at something like health or individual wealth or well-being, it makes less sense when you're talking about country-level phenomena like economic growth rates. So you end up seeing a lot of arguments made with scatterplots of countries and fitted lines -- and you start to think that way intuitively.

When we switch back to narrative forms of thinking, this is less true: I think we all agree that, all things being equal, a theory that explains everything except Mauritius is better than a theory that explains everything except China. But it's a lot harder to think intuitively about these things when you have a bunch of variables in play at the same time, which is one reason why multiple regressions are so useful. And between the extremes of weighting all countries equally and weighting them by population lie a lot of potentially more reasonable ways of balancing the two concerns -- which would unfortunately involve a lot of arbitrary decisions about weighting...
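
As a toy illustration of how much that implicit choice can matter, here's a sketch (synthetic data, not any real cross-country dataset) comparing an unweighted regression, where every country counts equally, with a population-weighted one:

```python
import numpy as np

rng = np.random.default_rng(0)

# 48 small "countries" that follow a clean x-y relationship (true slope 0.5),
# plus two demographic giants that don't fit it. All data are synthetic.
n = 50
population = np.concatenate([rng.uniform(1, 20, n - 2), [1350, 1300]])  # millions
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(scale=0.5, size=n)
x[-2:] = [2.5, 2.8]    # the giants sit far out on the covariate...
y[-2:] = [0.0, -0.2]   # ...and don't follow the relationship

X = np.column_stack([np.ones(n), x])

def slope(weights):
    W = np.diag(weights)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)[1]

print("unweighted slope (all countries equal):", round(slope(np.ones(n)), 2))
print("population-weighted slope:             ", round(slope(population), 2))
# Equal weighting shrugs off the misfit giants; population weighting is
# dominated by them. Neither choice is innocent.
```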

This is a thought I've been stewing on for a while, and it's reinforced whenever I hear the language of quantitative analysis working its way into qualitative discussions -- for instance, Robinson said at one point that "all that is in the error term," when he wasn't actually talking about a regression. I do this sort of thing too, and don't think there's anything necessarily wrong with it -- until there is.  When questioned on China, Robinson answered briefly and then transitioned to talking about the Philippines, rather than just concentrating on China. If the theory doesn't explain China (at least to the satisfaction of many), a nation of 1.3 billion, then explaining a country of 90 million is less impressive. How impressive you find an argument depends in part on the importance you ascribe to the outliers, and that depends in part on whether you were trained in the narrative way of thinking, where huge countries are hugely important, or the regression way of thinking, where all countries are equally important units of analysis.

[The first half of my last semester of school is shaping up to be much busier than expected -- my course schedule is severely front-loaded -- so blogging has been intermittent. Thus I'll try and do more quick posts like this rather than waiting for the time to flesh out an idea more fully.]

Why did HIV decline in Uganda?

That's the title of an October 2012 paper (PDF) by Marcella Alsan and David Cutler, and a longstanding, much-debated question in global health circles. Here's the abstract:

Uganda is widely viewed as a public health success for curtailing its HIV/AIDS epidemic in the early 1990s. To investigate the factors contributing to this decline, we build a model of HIV transmission. Calibration of the model indicates that reduced pre-marital sexual activity among young women was the most important factor in the decline. We next explore what led young women to change their behavior. The period of rapid HIV decline coincided with a dramatic rise in girls' secondary school enrollment. We instrument for this enrollment with distance to school, conditional on a rich set of demographic and locational controls, including distance to market center. We find that girls' enrollment in secondary education significantly increased the likelihood of abstaining from sex. Using a triple-difference estimator, we find that some of the schooling increase among young women was in response to a 1990 affirmative action policy giving women an advantage over men on University applications. Our findings suggest that one-third of the 14 percentage point decline in HIV among young women and approximately one-fifth of the overall HIV decline can be attributed to this gender-targeted education policy.

This paper won't settle the debate over why HIV prevalence declined in Uganda, but I think it's interesting both for its results and the methodology. I particularly like the bit on using distance from schools and from market center in this way, the idea being that they're trying to measure the effect of proximity to schools while controlling for the fact that schools are likely to be closer to the center of town in the first place.
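
For readers who haven't seen instrumental variables in action, the logic can be sketched as two-stage least squares: first predict enrollment from distance to school (plus controls like distance to market), then regress the outcome on the predicted rather than actual enrollment. This is a toy version with synthetic data and a made-up true effect of 0.2 -- not the authors' actual specification:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Synthetic data: a true enrollment effect of 0.2, plus an unobserved
# confounder ("ability") that biases naive OLS upward.
dist_market = rng.uniform(0, 10, n)               # control
dist_school = dist_market + rng.uniform(0, 5, n)  # instrument
ability = rng.normal(size=n)                      # unobserved confounder

enroll = ((2 - 0.3 * dist_school + ability + rng.normal(size=n)) > 0).astype(float)
outcome = 0.2 * enroll + 0.3 * ability + rng.normal(scale=0.5, size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

controls = np.column_stack([np.ones(n), dist_market])

# Stage 1: predict enrollment from the instrument plus controls
stage1 = np.column_stack([controls, dist_school])
enroll_hat = stage1 @ ols(stage1, enroll)

# Stage 2: regress the outcome on predicted enrollment plus controls
print("2SLS estimate:", round(ols(np.column_stack([controls, enroll_hat]), outcome)[-1], 2))
print("naive OLS:    ", round(ols(np.column_stack([controls, enroll]), outcome)[-1], 2))
```

The 2SLS estimate recovers something close to the built-in 0.2 while the naive regression is inflated by the confounder, which is the whole point of instrumenting.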

The same paper was previously published as an NBER working paper in 2010, and it looks to me as though the addition of those distance-to-market controls was the main change since then. [Pro nerd tip: to figure out what changed between two PDFs, convert them to Word via pdftoword.com, save the files, and use the 'Compare > two versions of a document' feature in the Review pane in Word.]

Also, a tip of the hat to Chris Blattman, who earlier highlighted Alsan's fascinating paper (PDF) on tsetse flies. I was impressed by the amount of biology in the tsetse fly paper -- a level of engagement with non-economic literature that I thought was both welcome and unusual for an economics paper. Then I realized it makes sense given that the author has an MD, an MPH, and a PhD in economics. Now I feel inadequate.

Do they know it's Christmas? No, because it isn't.

Remember "Do they know it's Christmas?" That's right, the 1984 hit song intended to raise money for famine victims in Ethiopia.  If that's not ringing a bell (See what I did there?) then here's the video:

You probably didn't get very far, so here are some of the inane lyrics:

And there won't be snow in Africa this Christmas time
The greatest gift they'll get this year is life
Where nothing ever grows, no rain or rivers flow
Do they know it's Christmas time at all?

In addition to reinforcing all sorts of stereotypes about Africa, this video gets one very important thing wrong: Do they know it's Christmas time? No, they don't, because Ethiopians are Orthodox Christians and don't celebrate Christmas until January 7th. So next time someone says they love this song, you now have an annoying know-it-all response to shut them down -- which you can consider your holiday gift from this blogger. Merry Christmas!

[On a more serious note, Ethiopia has made huge strides on food security since the fall of the Derg. If you want to read more on that, MoreAltitude (an aid blogger who recently relocated to Addis) has this take.]

The greatest country in the world

I've been in Ethiopia for six and a half months, and in that time span I have twice found myself explaining the United States' gun culture, lack of reasonable gun control laws, and gun-related political sensitivities to my colleagues and friends in the wake of a horrific mass shooting. When bad things happen in the US -- especially if they're related to some of our national moral failings that grate on me the most, e.g. guns, health care, and militarism -- I feel a sense of personal moral culpability, much stronger than when I'm living in the US. I think having to explain how terrible and terribly preventable things could happen in my society, while living somewhere else, makes me feel this way. (This is by no means because people make me feel this way; folks often go out of their way to reassure me that they don't see me as synonymous with such things.)

I think that this enhanced feeling of responsibility is actually a good thing. Why? If being abroad sometimes puts the absurdity of situations at home into starker relief, maybe it will reinforce a drive to change. All Americans should feel some level of culpability for mass shootings, because we have collectively allowed a political system driven by gun fanatics, a media culture unintentionally but consistently glorifying mass murderers, and a horribly deficient mental health system to persist, when their persistence has such appalling consequences.

After the Colorado movie theater shooting I told colleagues here that nothing much would happen, and sadly I was right. This time I said that maybe -- just maybe -- the combination of the timing (immediately post-election) and the fact that the victims were schoolchildren would result in somewhat tighter gun laws. But attention spans are short, so action would need to be taken soon. Hopefully the fact that the WhiteHouse.gov petition on gun control already has 138,000 signatures (making it the most popular petition in the history of the website) indicates that something could well be driven through. Even if that's the case, anything that could be passed now will be just the start, and it will be a long, hard slog to see systematic changes.

As Andrew Gelman notes here, we are all part of the problem to some extent: "It’s a bit sobering, when lamenting problems with the media, to realize that we are the media too." He's talking about bloggers, but I think it extends further: every one of us that talks about gun control in the wake of a mass shooting but quickly lets it slip down our conversational and political priorities once the event fades from memory is part of the problem. I'm making a note to myself to write further about gun control and the epidemiology of violence in the future -- not just today -- because I think that entrenched problems require a conscious choice to break the cycle. In the meantime, Harvard School of Public Health provides some good places to start.

Monday miscellany

I'm outsourcing this week's link round-up to KirstyEvidence, a blog on research and international development I only recently discovered. Her Twelve Days of Evidence post starts with 12 non-fiction books worth buying, 11 tweeps to follow, and so forth. It's good stuff, so click through and enjoy some light holiday reading. (Plus, it gets me in more of a holiday mood than the Michael Bolton Christmas album playing in the Addis hotel lobby where I started writing this post. Ugh.)


Housekeeping

I recently updated the post categories on this blog, trying to clean things up a bit. Since a lot of my posts are link roundups, shorter commentary, or photography, I added a category called "prose" that includes all the slightly longer, more substantive things I've written. You can browse that category here.

Defaults

Alanna Shaikh took a few things I said on Twitter and expanded them into this blog post. Basically I was noting -- and she in turn highlighted -- that on matters of paternalism vs. choice, the economists' default is consumer choice, whereas the public health default is paternalism. This can and does result in lousy policies from both ends -- for example, see my long critique of Bill Easterly's rejection of effective but mildly paternalistic programs, which (in my view) relied too heavily on the economists' default position.

I was reminded of all this by a recent post on the (awesomely named) Worthwhile Canadian Initiative. The blogger, Frances Woolley, quotes from a microeconomics textbook: "As a budding economist, you want to avoid lines of reasoning that suggest people habitually do things that make them worse off..." Can you imagine a public health textbook including that sentence? Hah! Woolley responded, "The problem with this argument is that it flies in the face of the abundant empirical evidence that people habitually overeat, overspend, and do other things that make them worse off."

The historical excesses and abuses of public health are also rooted in this paternalistic streak, just as many of the absurdities of economics are rooted in its own defaults. I think most folks in these two professions fall somewhere between the extremes, but a lot of the disagreements (and lack of respect) between the fields stem from this fundamental difference in starting points.

(See also some related thoughts from Terence at Waylaid Dialectic that I saw after writing the initial version of this post.)


Alwyn Young just broke your regression

Alwyn Young -- the same guy whose paper carefully accounting for growth in East Asia was popularized by Krugman and sparked an enormous debate -- has been circulating a paper on African growth rates. Here's the 2009 version (PDF) and the October 2012 version. The abstract of the latter paper:

Measures of real consumption based upon the ownership of durable goods, the quality of housing, the health and mortality of children, the education of youth and the allocation of female time in the household indicate that sub-Saharan living standards have, for the past two decades, been growing about 3.4 to 3.7 percent per annum, i.e. three and a half to four times the rate indicated in international data sets. (emphasis added)

The Demographic and Health Surveys are large-scale, nationally representative surveys of health, family planning, and related modules that tend to ask the same questions across different countries and over long periods of time. They have major limitations, but in the absence of high-quality data from governments they're often the best source for national health data. The DHS doesn't collect much economic data, but it does ask about ownership of certain durable goods (like TVs, toilets, etc), and the answers to these questions are used to construct a wealth index that is very useful for studies of health equity -- something I'm taking advantage of in my current work. (As an aside, this excellent report from Measure DHS (PDF) describes the history of the wealth index.)
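
For the curious, the standard approach builds the index by running principal components analysis over the household asset indicators and taking the first component as a wealth score, which is then cut into quintiles. Here's a minimal sketch of that idea with invented asset data, with sklearn's PCA standing in for the DHS's exact procedure:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Eight invented binary asset indicators for 1,000 households (e.g., owns
# a TV, has a flush toilet, has electricity...) driven by a latent wealth
# factor. Real DHS data has many more indicators.
latent_wealth = rng.normal(size=1000)
assets = (latent_wealth[:, None] + rng.normal(size=(1000, 8)) > 0).astype(float)

# Standardize the indicators and take the first principal component
# as the wealth score.
standardized = (assets - assets.mean(axis=0)) / assets.std(axis=0)
score = PCA(n_components=1).fit_transform(standardized).ravel()

# Households are then grouped into quintiles for equity analyses.
quintiles = np.digitize(score, np.quantile(score, [0.2, 0.4, 0.6, 0.8]))
print("households per wealth quintile:", np.bincount(quintiles))
```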

What Young has done is to take this durable asset data from many DHS surveys and try to estimate a measure of GDP growth from actually-measured data, rather than the (arguably) sketchier methods typically used to get national GDP numbers in many African countries. Not all countries are represented at any given point in time in the body of DHS data, which is why he ends up with a very-unbalanced panel data set for "Africa," rather than being able to measure growth rates in individual countries. All the data and code for the paper are available here.

Young's methods themselves are certain to spark ongoing debate (see commentary and links from Tyler Cowen and Chris Blattman), so this is far from settled -- and may well never be. The takeaway is probably not that Young's numbers are right so much as that there's a lot of data out there that we shouldn't trust very much, and that transparency about the sources and methodology behind data, official or not, is very helpful. I just wanted to raise one question: if Young's data is right, just how many published papers are wrong?

There is a huge literature on cross-country growth empirics. A Google Scholar search for "cross-country growth Africa" turns up 62,400 results. While not all of these papers use African countries' GDPs as an outcome, a lot of them do. This literature has many failings, which have been duly pointed out by Bill Easterly and many others, to the extent that an up-and-coming economist is likely to steer away from this sort of work for fear of being mocked. Relatedly, in Acemoglu and Robinson's recent and entertaining take-down of Jeff Sachs, one of their criticisms is that Sachs only knows something because he's been running "kitchen sink growth regressions."

Young's paper just adds more fuel to that fire. If African growth has really been three and a half to four times the rate in the official data, then every single paper that uses the old GDP numbers is now even more suspect.

Monday miscellany

  • First, a request: I remember recently reading the first report of sexual transmission of malaria, a case where someone acquired malaria from a well-traveled partner despite never traveling to malarial areas themselves. I thought maybe it was in MMWR but have scanned that and other publications and done a few searches and cannot locate this article. It's possible this was an elaborate dream -- epidemiologists think and write about weird things, so why not dream them too? But if anyone else remembers reading this or can find the article, please let me know! [Update: see comments]
  • A new paper: "The Mean Lifetime of Famous People from Hammurabi to Einstein." (h/t to Economic Logic)
  • I just revisited a blog post by World Bank health economist Adam Wagstaff: "How can health systems “systematic reviews” actually become systematic?" The post and the comments are a great conversation and reveal some of the differences that emerge when working across disciplines. Also, I think you should be reading Wagstaff's posts (at the WB Let's Talk Development blog) because he's one of the fathers of health inequity research, and I ended up citing him a bunch in my (in-progress) Master's thesis, especially this World Bank report (PDF) on analyzing health equity using household survey data. Also, the companion page for that report has Stata .do files for each chapter, amongst other resources.
  • Also from Wagstaff: "Shocking facts about primary health care in India, and their implications." See also Amanda Glassman's take on the same paper.
  • Tyler Cowen reviews Ben Goldacre's new book Bad Pharma (which I blogged before). And then Goldacre showed up to argue in the comments about whether his policy suggestions would increase the cost of drug R&D.
  • One of my photos of Somaliland is featured in this article on investment in the country.
  • The NYT Opinionator blog highlights GiveWell's work in "Putting Charities to the Test."
  • Finally, the blog WanderLust has an interesting summary of 9 events that shaped the humanitarian industry.

On deworming

GiveWell's Alexander Berger just posted a more in-depth blog review of the (hugely impactful) Miguel and Kremer deworming study. Here's some background: the Cochrane review, GiveWell's first response to it, and IPA's very critical response. I've been meaning to blog on this since the new Cochrane review came out, but haven't had time to do the subject justice by really digging into all the papers. So I hope you'll forgive me for just sharing the comment I left at the latest GiveWell post, as it's basically what I was going to blog anyway:

Thanks for this interesting review — I especially appreciate that the authors [Miguel and Kremer] shared the material necessary for you [GiveWell] to examine their results in more depth, and that you talk through your thought process.

However, one thing you highlighted in your post on the new Cochrane review that isn’t mentioned here, and which I thought was much more important than the doubts about this Miguel and Kremer study, was that there have been so many other studies that did not find large effects on health outcomes! I’ve been meaning to write a long blog post about this when I really have time to dig into the references, but since I’m mid-thesis I’ll disclaim that this quick comment is based on recollection of the Cochrane review and your and IPA’s previous blog posts, so forgive me if I misremember something.

The Miguel and Kremer study gets a lot of attention in part because it had big effects, and in part because it measured outcomes that many (most?) other deworming studies hadn’t measured — but it’s not as if we believe these outcomes to be completely unrelated. This is a case where what we believe the underlying causal mechanism for the social effects to be is hugely important. For the epidemiologists reading, imagine this as a DAG (a directed acyclic graph) where the mechanism is “deworming -> better health -> better school attendance and cognitive function -> long-term social/economic outcomes.” That’s at least how I assume the mechanism is hypothesized.

So while the other studies don’t measure the social outcomes, it’s harder for me to imagine how deworming could have a very large effect on school and social/economic outcomes without first having an effect on (some) health outcomes — since the social outcomes are ‘downstream’ from the health ones. Maybe different people are assuming that something else is going on — that the health and social outcomes are somehow independent, or that you just can’t measure the health outcomes as easily as the social ones, which seems backwards to me. (To me this was the missing gap in the IPA blog response to GiveWell’s criticism as well.)

So continuing to give so much attention to this study, even if it’s critical, misses what I took to be the biggest takeaway from that review — there have been a bunch of studies that showed only small effects or none at all. They were looking at health outcomes, yes, but those aren’t unrelated to the long-term development, social, and economic effects. You [GiveWell] try to get at the external validity of this study by looking for different size effects in areas with different prevalence, which is good but limited. Ultimately, if you consider all of the studies that looked at various outcomes, I think the most plausible explanation for how you could get huge (social) effects in the Miguel Kremer study while seeing little to no (health) effects in the others is not that the other studies just didn’t measure the social effects, but that the Miguel Kremer study’s external validity is questionable because of its unique study population.

(Emphasis added throughout)
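
Since the DAG in that comment carries the core of my argument, here's a tiny sketch of the hypothesized pathway (plain Python, purely illustrative):

```python
# The hypothesized causal pathway from the comment above, as a tiny DAG.
# Purely illustrative: the point is that every path from deworming to the
# social outcomes passes through health.
dag = {
    "deworming": ["better health"],
    "better health": ["school attendance", "cognitive function"],
    "school attendance": ["long-term social/economic outcomes"],
    "cognitive function": ["long-term social/economic outcomes"],
}

def paths(graph, start, goal, path=()):
    """Enumerate all directed paths from start to goal."""
    path = path + (start,)
    if start == goal:
        return [path]
    found = []
    for nxt in graph.get(start, []):
        found.extend(paths(graph, nxt, goal, path))
    return found

for p in paths(dag, "deworming", "long-term social/economic outcomes"):
    print(" -> ".join(p))
# Every printed path includes "better health": big downstream effects with
# no upstream health effects would be hard to square with this model.
```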


Friday photos: Gelada baboons

The Simien Mountains in Ethiopia's north are swarming with Gelada baboons (which aren't actually baboons). Below are some photos I took of them over Thanksgiving break:

And an interesting fact about the mountains, from Wikipedia:

Although the word Semien means "north" in Amharic, according to Richard Pankhurst the ancestral form of the word actually meant "south" in Ge'ez, because the mountains lay to the south of Aksum, which was at the time the center of Ethiopian civilization. But as over the following centuries the center of Ethiopian civilization itself moved to the south, these mountains came to be thought of as lying to the north, and the meaning of the word likewise changed.