Someone should study this: Addis housing edition

Attention development economists and any other researchers who have an interest in urban or housing policy in low-income countries: My office in Addis has about 25 folks working in it, and we have a daily lunch pool where we pay in 400 birr a month (about 22 USD) to cover costs and all get to eat Ethiopian food for lunch every day. It's been a great way to get to know my coworkers -- my work is often more solitary: editing, writing, and analyzing data -- and an even better way to learn about a whole variety of issues in Ethiopia.


The conversation is typically in Amharic, and my Amharic is quite limited, so I'm lucky if I can figure out the topic being discussed. [I usually know if they're talking about work because so many NGO-speak words aren't translated, for example: "amharic amharic amharic Health Systems Strengthening amharic amharic..."] But folks will of course translate things as needed. One observation is that certain topics affect their daily lives a lot, and thus come up over and over again at lunch.

One subject that has come up repeatedly is housing. Middle class folks in Addis Ababa feel the housing shortage very acutely. Based on our conversations it seems the major limitation is in getting credit to buy or build a house.

The biggest source of good housing so far has been government-constructed condominiums, for which you pay a certain percentage down (I'm not sure how much) and then make payments over the years. (The government will soon launch a new "40/60 scheme" to which many folks are looking forward, in which anyone who can make a 40% down payment on a house will get a government mortgage for the remaining 60%.)

When my coworkers first mentioned that the government will offer the next round of condominiums by a public lottery, my thought was "that will solve someone's identification problem!" A large number of people -- many thousands -- have registered for the government lottery. I believe you have to meet a certain wealth or income threshold (i.e., be able to make the down payment), but after that condo eligibility will be determined randomly. I think that -- especially if someone organizes the study prior to the lottery -- this could yield very useful results on the impact of urban housing policy.
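If the lottery really is random, the core analysis could be remarkably simple: compare average outcomes for lottery winners and losers. A minimal sketch of that comparison -- with entirely made-up numbers; the savings figures, sample sizes, and effect size below are hypothetical, not Ethiopian data:

```python
import random
import statistics

def diff_in_means(treated, control):
    """Difference in mean outcomes between lottery winners and losers,
    with a large-sample standard error for that difference."""
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / len(treated)
          + statistics.variance(control) / len(control)) ** 0.5
    return diff, se

# Hypothetical illustration: simulated annual household savings (birr)
# for registrants who won a condo vs. those who did not.
random.seed(0)
control = [random.gauss(5000, 1500) for _ in range(500)]  # lottery losers
treated = [random.gauss(5600, 1500) for _ in range(500)]  # lottery winners
diff, se = diff_in_means(treated, control)
print(f"Estimated effect: {diff:.0f} birr (SE {se:.0f})")
```

Because eligibility past the wealth threshold is assigned by chance, the simple difference in means is an unbiased estimate of the condos' effect -- no fancy controls needed, which is exactly why randomized lotteries are such a gift to researchers.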

How (and how much) do individuals and families benefit from access to better housing? Are there changes in earnings, savings, investments? Health outcomes? Children's health and educational outcomes? How does it affect political attitudes or other life choices? It could also be an opportunity to study migration between different neighborhoods, amongst many other things.

A Google Scholar search for Ethiopia housing lottery turns up several mentions, but (in my very quick read) no evaluations taking advantage of the randomization. (I can't access this recent article in an engineering journal, but from the abstract I assume it's talking about a different kind of evaluation.) So, someone have at it! It's just not that often that large public policy schemes are randomized.

"As it had to fail"

My favorite line from the Anti-Politics Machine is a throwaway. The author, James Ferguson, an anthropologist, describes a World Bank agricultural development program in Lesotho, and also -- through that lens -- ends up describing development programs more generally. At one point he notes that the program failed "as it had to fail" -- not really due to bad intentions, or to lack of technical expertise, or lack of funds -- but because failure was written into the program from the beginning. Depressing? Yes, but valuable. I read it in part because Chris Blattman keeps plugging it, and then shortly before leaving for Ethiopia I saw that a friend had a copy I could borrow. Somehow it didn't make it onto reading lists for any of my classes for either of my degrees, though it should be required for pretty much anyone wanting to work in another culture (or, for that matter, trying to foment change in your own). Here's Blattman's description:

People’s main assets [in Lesotho] — cattle — were dying in downturns for lack of a market to sell them on. Households on hard times couldn’t turn their cattle into cash for school fees and food. Unfortunately, the cure turned out to be worse than the disease.

It turns out that cattle were attractive investments precisely because they were hard to liquidate. With most men working away from home in South Africa, buying cattle was the best way to keep the family saving rather than spending. They were a means for men to wield power over their families from afar.

Ferguson’s point was that development organizations attempt to be apolitical at their own risk. What’s more, he argued that they are structured to remain ignorant of the historical, political and cultural context in which they operate.

And here's a brief note from Foreign Affairs:

 The book comes to two main conclusions. First is that the distinctive discourse and conceptual apparatus of development experts, although good for keeping development agencies in business, screen out and ignore most of the political and historical facts that actually explain Third World poverty -- since these realities suggest that little can be accomplished by apolitical "development" interventions. Second, although enormous schemes like Thaba-Tseka generally fail to achieve their planned goals, they do have the major unplanned effect of strengthening and expanding the power of politically self-serving state bureaucracies. Particularly good is the discussion of the "bovine mystique," in which the author contrasts development experts' misinterpretation of "traditional" attitudes toward uneconomic livestock with the complex calculus of gender, cash and power in the rural Lesotho family.

The reality was that Lesotho was not really an idyllically-rural-but-poor agricultural economy, but rather a labor reserve more or less set up by and controlled by apartheid South Africa. The gulf at the time between the actual political situation and the situation as envisioned by the World Bank -- where the main problems were lack of markets and technical solutions -- was enormous. This lets Ferguson have a lot of fun showing the absurdities of Bank reports from the era, and once you realize what's going on it's quite frustrating to read how the programs turned out, and to wonder how no one saw it coming.

This contrast between rhetoric and reality is the book's greatest strength: because the situation is absurd, it illustrates Ferguson's points very well -- that aid is inherently political, and that projects that ignore that reality have their future failure baked in from the start. But that contrast is a weakness too: because the situation is extreme, you're left wondering just how representative the case of Lesotho really was (or is). The 1970s-80s era World Bank certainly makes a great buffoon (if not quite a villain) in the story, and one wonders if things aren't at least a bit better today.

Either way, this is one of the best books on development I've read, as I find myself mentally referring to it on a regular basis. Is the rhetoric I'm reading (or writing) really how it is? Is that technical, apolitical-sounding intervention really going to work? It's made me think more critically about the effect outside groups -- even seemingly benevolent, apolitical ones -- have on local politics. On the other hand, the Anti-Politics Machine does read a bit like it was adapted from an anthropology dissertation (it was); I wish it could get a new edition with more editing to make it more presentable. And a less ugly cover. But that's no excuse -- if you want to work in development or international health or any related field, it should be high on your reading list.

Fugue

It took the World Bank 20 years to set up an evaluation outfit -- a new paper by Michele Alacevich tells the story of how that came to pass. It's a story about, amongst other things, the tension between academia and programs, between context-specific knowledge and generalizable lessons. The abstract:

Since its birth in 1944, the World Bank has had a strong focus on development projects. Yet, it did not have a project evaluation unit until the early 1970s. An early attempt to conceptualize project appraisal had been made in the 1960s by Albert Hirschman, whose undertaking raised high expectations at the Bank. Hirschman’s conclusions—published first in internal Bank reports and then, as a book in 1967—disappointed many at the Bank, primarily because they found it impractical. Hirschman wanted to offer the Bank a new vision by transforming the Bank’s approach to project design, project management and project appraisal. What the Bank expected from Hirschman, however, was not a revolution but an examination of the Bank’s projects and advice on how to make project design and management more measurable, controllable, and suitable for replication. The history of this failed collaboration provides useful insights on the unstable equilibrium between operations and evaluation within the Bank. In addition, it shows that the Bank actively participated in the development economics debates of the 1960s. This should be of interest for development economists today who reflect on the future of their discipline emphasizing the need for a non-dogmatic approach to development. It should also be of interest for the Bank itself, which is stressing the importance of evaluation for effective development policies. The history of the practice of development economics, using archival material, can bring new perspectives and help better understand the evolution of this discipline.

And this from the introduction:

Furthermore, the Bank all but ignored the final outcome of his project, the 1967 book, and especially disliked its first chapter.... In particular, Hirschman’s insistence on uncertainty as a structural element in the decision-making process did not fit in well with the operational drive of Bank economists and engineers.

Why'd they ignore it?

The Bank, Hirschman wrote, should avoid the “air of pat certainty” that emanated from project prospects and instead expose the uncertainties underlying them, exploring the whole range of possible outcomes. Moreover, the Bank should take into account the distributional and, more generally, the social and political effects of its lending.

It seems that one of the primary lessons of studying development economics is that many if not most of the biggest arguments you hear today already took place a generation ago. As with fashion, trends come and go, and ultimately come again. The arguments weren't necessarily solved; they were just pushed aside when something newer and shinier came along. Even the argument against bold centrally-planned strategies -- and in favor of facing up to the inherent uncertainty of complex systems -- has been made before. It failed to catch on, for reasons of politics and personality. Ultimately the systems in place may not want to hear results that downplay their importance and potency in the grand scheme of things. On that note, it seems that if history doesn't exactly repeat itself, it will at least continue to have some strong echoes of past debates.

Alacevich's paper is free to download here. H/t to Andres Marroquin, who reads and shares a ridiculous number of interesting things.

Still #1

Pop quiz: what's the leading killer of children under five? Before I answer, some background: my impression is that many if not most public health students and professionals don't really get politics. Specifically, they don't get how an issue's being unsexy or just politically boring can result in lousy public policy. I was discussing this shortcoming recently over dinner in Addis with someone who used to work in public health but wasn't formally trained in it. I observed, and they concurred, that students who go to public health schools (or at least Hopkins, where this shortcoming may be more pronounced) are mostly there to get technical training so that they can work within the public health industry, and that more politically astute students probably go for some other sort of graduate training, rather than concentrating on epidemiology or the like.

The end result is that you get cadres of folks with lots of knowledge about relative disease burden and how to implement disease control programs, but who don't really get why that knowledge isn't acted upon. On the other hand, a lot of the more politically savvy folks -- who are in a position to, say, set the relative priority of diseases in global health programming -- may not know much about the diseases themselves. Or, maybe more likely, they do the best job they can to get the most money possible for programs that are both good for public health and politically popular. But if not all diseases are equally "popular" this can result in skewed policy priorities.

Now, the answer to that pop quiz: the leading killer of kids under 5 is.... [drumroll]...  pneumonia!

If you already knew the answer to that question, I bet you either a) have public health training, or b) learned it due to recent, concerted efforts to raise pneumonia's public profile. For readers of this blog the former is probably true (after all, I have a post category called "methodological quibbles"), but today I want to highlight the latter efforts.

To date, most of the political class and policymakers get the pop quiz wrong, and badly so. At Hopkins' school of public health I took and enjoyed Orin Levine's vaccine policy class. (Incidentally, Orin just started a new gig with the Gates Foundation -- congrats!) In that class and elsewhere I've heard Orin tell the story of quizzing folks on Capitol Hill and elsewhere in DC about the top three causes of death for children under five and time and again getting the answer "AIDS, TB and malaria."

Those three diseases likely pop to mind because of the Global Fund, and because a lot of US funding for global health has been directed at them. And, to be fair, they're huge public health problems, and the metric of under-five mortality isn't where AIDS hits hardest. But the real answer is pneumonia, diarrhea, and malnutrition. (Or malaria for #3 -- it depends in part on whether you count malnutrition as a separate cause or as a contributor to other causes.) The end result of this lack of awareness of pneumonia -- and of the prior lack of a domestic lobby for it -- is that it gets underfunded in US global health efforts.

So, how to improve pneumonia's profile? Today, November 12th, is the 4th annual World Pneumonia Day, and I think that's a great start. I'm not normally one to celebrate every national or international "Day" for some cause, but for the aforementioned reasons I think this one is extremely important. You can follow the #WPD2012 hashtag on Twitter, or find other ways to participate on WPD's act page. While they do encourage donations to the GAVI Alliance, you'll notice that most of the actions are centered on raising awareness. I think that makes a lot of sense. In fact, just by reading this blog post you've already participated -- though of course I hope you'll do more.

I think politically-savvy efforts like World Pneumonia Day are especially important because they bridge a gap between the technical and policy experts. Precisely because so many people on both sides (the somewhat-false-but-still-helpful dichotomy of public health technical experts vs. political operatives) mostly interact with like-minded folks, we badly need campaigns like this to popularize simple facts within policy circles.

If your reaction to this post -- and to another day dedicated to a good cause -- is to feel a bit jaded, please recognize that you and your friends are exactly the sorts of people the World Pneumonia Day organizers are hoping to reach. At the very least, mention pneumonia today on Twitter or Facebook, or with your policy friends the next time health comes up.

---

Full disclosure: while at Hopkins I did a (very small) bit of paid work for IVAC, one of the WPD organizers, re: social media strategies for World Pneumonia Day, but I'm no longer formally involved. 

Biological warfare: malaria edition

Did you know Germany used malaria as a biological weapon during World War II? I'm a bit of a WWII history buff, but wasn't aware of this at all until I dove into Richard Evans' excellent three-part history of Nazi Germany, which concludes with The Third Reich at War. Here's an excerpt, with paragraph breaks and some explanations and emphasis added:

Meanwhile, Allied troops continued to fight their way slowly up the [Italian] peninsula. In their path lay the Pontine marshes, which Mussolini had drained at huge expense during the 1930s, converting them into farmland, settling them with 100,000 First World War veterans and their families, and building five new towns and eighteen villages on the site. The Germans determined to return them to their earlier state, to slow the Allied advance and at the same time wreak further revenge on the treacherous [for turning against Mussolini and surrendering to the Allies] Italians.

Not long after the Italian surrender, the area was visited by Erich Martini and Ernst Rodenwaldt, two medical specialists in malaria who worked at the Military Medical Academy in Berlin. Both men were backed by Himmler’s Ancestral Heritage research organization in the SS; Martini was on the advisory board of its research institute at Dachau. The two men directed the German army to turn off the pumps that kept the former marshes dry, so that by the end of the winter they were covered in water to a depth of 30 centimetres once more. Then, ignoring the appeals of Italian medical scientists, they put the pumps into reverse, drawing sea-water into the area, and destroyed the tidal gates keeping the sea out at high tide.

On their orders German troops dynamited many of the pumps and carted off the rest to Germany, wrecked the equipment used to keep the drainage channels free of vegetation and mined the area around them, ensuring that the damage they caused would be long-lasting.

The purpose of these measures was above all to reintroduce malaria into the marshes, for Martini himself had discovered in 1931 that only one kind of mosquito could survive and breed equally well in salt, fresh or brackish water, namely anopheles labranchiae, the vector of malaria. As a result of the flooding, the freshwater species of mosquito in the Pontine marshes were destroyed; virtually all of the mosquitoes now breeding furiously in the 98,000 acres of flooded land were carriers of the disease, in contrast to the situation in 1940, when they were on the way to being eradicated.

Just to make sure the disease took hold, Martini and Rodenwaldt’s team had all the available stocks of quinine, the drug used to combat it, confiscated and taken to a secret location in Tuscany, far away from the marshes. In order to minimize the number of eyewitnesses, the Germans had evacuated the entire population of the marshlands, allowing them back only when their work had been completed. With their homes flooded or destroyed, many had to sleep in the open, where they quickly fell victim to the vast swarms of anopheles mosquitoes now breeding in the clogged drainage canals and bomb-craters of the area.

Officially registered cases of malaria spiralled from just over 1,200 in 1943 to nearly 55,000 the following year, and 43,000 in 1945: the true number in the area in 1944 was later reckoned to be nearly double the officially recorded figure. With no quinine available, and medical services in disarray because of the war and the effective collapse of the Italian state, the impoverished inhabitants of the area, now suffering from malnutrition as well because of the destruction of their farmland and food supplies, fell victim to malaria. It had been deliberately reintroduced as an act of biological warfare, directed not only at Allied troops who might pass through the region, but also against the quarter of a million Italians who lived there, people now treated by the Germans no longer as allies but as racial inferiors whose act of treachery in deserting the Axis cause deserved the severest possible punishment.

Obesity pessimism

I posted before on the massive increase in obesity in the US over the last couple decades, trying to understand the why of the phenomenal change for the worse. Seriously, take another look at those maps. A while back Matt Steinglass wrote a depressing piece in The Economist on the likelihood of the US turning this trend around:

I very much doubt America is going to do anything, as a matter of public health policy, that has any appreciable effect on obesity rates in the next couple of decades. It's not that it's impossible for governments to hold down obesity; France, which had rapidly rising childhood obesity early this century, instituted an aggressive set of public-health interventions including school-based food and exercise shifts, nurse assessments of overweight kids, visits to families where overweight kids were identified, and so forth. Their childhood obesity rates stabilised at a fraction of America's. The problem isn't that it's not possible; rather, it's that America is incapable of doing it.

America's national governing ideology is based almost entirely on the assertion of negative rights, with a few exceptions for positive rights and public goods such as universal elementary education, national defence and highways. But it's become increasingly clear over the past decade that the country simply doesn't have the political vocabulary that would allow it to institute effective national programmes to improve eating and exercise habits or culture. A country that can't think of a vision of public life beyond freedom of individual choice, including the individual choice to watch TV and eat a Big Mac, is not going to be able to craft public policies that encourage people to exercise and eat right. We're the fattest country on earth because that's what our political philosophy leads to. We ought to incorporate that into the way we see ourselves; it's certainly the way other countries see us.

On the other hand, it's notable that states where the public has a somewhat broader conception of the public interest, as in the north-east and west, tend to have lower obesity rates.

This reminds me that a classmate asked me a while back about my impression of Michelle Obama's Let's Move campaign. I responded that my impression is positive, and that every little bit helps... but that the scale of the problem is so vast that I find it hard seeing any real, measurable impact from a program like Let's Move. To really turn obesity around we'd need a major rethinking of huge swathes of social and political reality: our massive subsidization of unhealthy foods over healthy ones (through a number of indirect mechanisms), our massive subsidization of unhealthy lifestyles by supporting cars and suburbanization rather than walking and urban density, and so on and so forth. And, as Steinglass notes, the places with the greatest obesity rates are the least likely to implement such change.

Monday miscellany

  • "Have India’s poor become human guinea pigs?" -- a disturbing BBC report by Sue Lloyd-Roberts on lack of informed consent in drug trials in India. Powerful and necessary reporting, especially if the allegations are borne out, but one quibble: reporting the absolute number of deaths of people in drug trials is not very informative; it's really more a measure of how many people are enrolled in trials (and what type of trials). Lots of people die during clinical trials -- indeed, for trials where mortality is an outcome, some participants must die if we are ever to measure mortality effects. If you're enrolling people with heart disease or cancer or other serious diseases in a clinical trial, you might have a lot of deaths in both the treatment and control arms -- and the total number could still be large even if the trials are going well and showing huge benefits from new drugs. So just reporting that there were 438 deaths in clinical trials in 2011 tells us little. The questions are whether a) people are dying at a higher rate than they would have without the trial, and b) regardless of deaths, whether they consented to be in the trial in the first place. The article seems to be mostly (and rightly) questioning the latter, but uses the death counts in a potentially alarmist way.
  • "The sea has neither sense nor pity: the earliest known cases of AIDS in the pre-AIDS era." This is a fascinating read from the blog Body Horrors, recounting the story of a Norwegian sailor who acquired HIV in the 1960s, and subsequently died from AIDS (along with his wife and daughter) before anyone knew what AIDS was. One thing the piece doesn't point out is that while this is the earliest known case of AIDS, the earliest known case of HIV is from an (anonymous) blood sample from the Congo in 1959 -- background on that case in Nature here.
  • The British Medical Journal will require all clinical trials to share their data, starting in January. Hopefully other journals will follow their lead. This is big -- more soon.
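On the first item: the difference between reporting death counts and death rates can be made concrete with a toy calculation. The trial below is entirely hypothetical -- the point is only that a drug can halve mortality while the trial still racks up hundreds of deaths:

```python
def death_summary(deaths_treat, n_treat, deaths_ctrl, n_ctrl):
    """Compare mortality between trial arms: absolute deaths vs. rates."""
    rate_t = deaths_treat / n_treat
    rate_c = deaths_ctrl / n_ctrl
    return {
        "total_deaths": deaths_treat + deaths_ctrl,  # the headline number
        "rate_treatment": rate_t,
        "rate_control": rate_c,
        "risk_ratio": rate_t / rate_c,  # < 1 means the drug is helping
    }

# Hypothetical trial in a high-mortality disease: the drug halves the
# death rate, yet the trial still records 300 deaths in total.
print(death_summary(deaths_treat=100, n_treat=1000,
                    deaths_ctrl=200, n_ctrl=1000))
```

The headline count (300 deaths) sounds alarming; the risk ratio (0.5) tells you the trial was a success. That's why a raw national death count says almost nothing by itself.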

Friday photos: Somaliland

I have lots of thoughts on my trip about one month ago to Somaliland, as it's a fascinating place -- highly recommended in particular for students of public policy or development. But those will have to wait for future posts as I'm swamped for now with work, my Masters thesis, and some other projects. In the meantime, this is Hargeisa:

Above, a major mosque. Below, the street scene downtown:

The animal market:

And here's me with a moneychanger and stacks of Somaliland shillings:

Friday photo: Wenchi Crater Lake

Wenchi Crater Lake is a long-ish day trip from Addis Ababa. The former volcanic cone is filled with a lake and hiking trails, and there's even a monastery on an island in the middle of the lake. Here's a panorama shot from near the top of the trail, made from five photos stitched together (click for higher resolution):

On efficiency

A friend of mine who is working in public health in a South American country writes great email updates about the specifics of her work, which often end up illustrating something universal. I thought this note -- about the latest delays in accomplishing a relatively simple task that has taken weeks when it should have taken hours, or maybe half a day at most -- nicely illustrates how much time can be wasted through the accumulation of minor inconveniences or annoyances. No single act is backed by poor intentions, but the final effect is that no one can get much done. Shared with permission:

Today’s visit to the Municipality of [...] was particularly silly.

It went like this: I arrived at 10:45am and went to the Budget Office. The Budget Office sent me to Provisions Office, which sent me back to Budget Office, which sent me back to Provisions Office accompanied by a secretary.

The Provisions Office sent us to the Head Administration Office, which sent us to a different Administration office on a different floor, which sent us back down to the Provisions Office, which sent us back up to the Administration Office, which sent us back down to the Provisions Office. This all took an hour.

Then, I waited in the Provisions Office while they looked for the resolution that was supposed to be attached to my project, and they all blamed a different secretary in the office for why they couldn’t find it. After an hour of waiting in the Provisions Office, I got tired and hungry and had to pee, so I made up an excuse for why I had to leave and I asked when I should come back and where I should go.

They told me to come back in two days, which is what they always tell me. So, I’ll go back in two days.

Friday photos: Meskel

Last week Ethiopia celebrated Meskel, a major holiday that commemorates the discovery of the "one true cross" on which Jesus was crucified. Meskel Square in Addis is the place to be -- "meskel" means cross in Amharic. Orthodox priests and actors surround the cross (yes, the thing that looks like a Christmas tree to American eyes):

Everyone brings candles, and at dusk they're lit in a slow wave moving across the square:

The roar of the crowd grows until the cross is lit:

Documentation:

As the fire died down the crowd scattered -- but this drumming and singing circle stuck around for quite a while:

Bad pharma

Ben Goldacre, author of the truly excellent Bad Science, has a new book coming out in January, titled Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients. Goldacre published the foreword to the book on his blog here. The point of the book is summed up in one powerful (if long) paragraph. He says this (emphasis added):

So to be clear, this whole book is about meticulously defending every assertion in the paragraph that follows.

Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques which are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer. When trials throw up results that companies don’t like, they are perfectly entitled to hide them from doctors and patients, so we only ever see a distorted picture of any drug’s true effects. Regulators see most of the trial data, but only from early on in its life, and even then they don’t give this data to doctors or patients, or even to other parts of government. This distorted evidence is then communicated and applied in a distorted fashion. In their forty years of practice after leaving medical school, doctors hear about what works through ad hoc oral traditions, from sales reps, colleagues or journals. But those colleagues can be in the pay of drug companies – often undisclosed – and the journals are too. And so are the patient groups. And finally, academic papers, which everyone thinks of as objective, are often covertly planned and written by people who work directly for the companies, without disclosure. Sometimes whole academic journals are even owned outright by one drug company. Aside from all this, for several of the most important and enduring problems in medicine, we have no idea what the best treatment is, because it’s not in anyone’s financial interest to conduct any trials at all. These are ongoing problems, and although people have claimed to fix many of them, for the most part, they have failed; so all these problems persist, but worse than ever, because now people can pretend that everything is fine after all.

If that's not compelling enough already, here's a TED talk on the subject of the new book:

Monday miscellany

Best billboard ever?

I have no idea whether this is an effective ad... but:

(Also note the address at the bottom: there's no commonly-used system for designating addresses in Addis -- or most road names for that matter -- so directions often simply describe a general area close to some landmark.)

Hunger Games critiques

My Hunger Games survival analysis post keeps getting great feedback. The latest anonymous comment:

Nice effort on the analysis, but the data is not suitable for KM and Cox. In KM, Cox and practically almost everything that requires statistical inference on a population, your variable of interest should be in no doubt independent from sample unit to sample unit.

Since your variable of interest is life span during the game where increasing ones chances in a longer life means deterring another persons lifespan (i.e. killing them), then obviously your variable of interest is dependent from sample unit to sample unit.

Your test for determining whether the gamemakers rig the selection of tributes is inappropriate, since the way of selecting tributes is by district. In the way your testing whether the selection was rigged, you are assuming that the tributes were taken as a lot regardless of how many are taken from a district. And the way you computed the expected frequency assumes that the number of 12 year olds equals the number of 13 year olds and so on when it is not certain.

Thanks for the blog. It was entertaining.

And there's a lot more in the other comments.

Another type of mystification

A long time ago (in years, two more than the product of 10 and the length of a single American presidential term) John Siegfried wrote this First Lesson in Econometrics (PDF). It starts with this:

Every budding econometrician must learn early that it is never in good taste to express the sum of the two quantities in the form: "1 + 1 = 2".

... and just goes downhill from there. Read it.

(I wish I remembered where I first saw this so I could give them credit.)

Why we should lie about the weather (and maybe more)

Nate Silver (who else?) has written a great piece on weather prediction -- "The Weatherman is Not a Moron" (NYT) -- that covers both the proliferation of data in weather forecasting, and why the quantity of data alone isn't enough. What intrigued me though was a section at the end about how to communicate the inevitable uncertainty in forecasts:

...Unfortunately, this cautious message can be undercut by private-sector forecasters. Catering to the demands of viewers can mean intentionally running the risk of making forecasts less accurate. For many years, the Weather Channel avoided forecasting an exact 50 percent chance of rain, which might seem wishy-washy to consumers. Instead, it rounded up to 60 or down to 40. In what may be the worst-kept secret in the business, numerous commercial weather forecasts are also biased toward forecasting more precipitation than will actually occur. (In the business, this is known as the wet bias.) For years, when the Weather Channel said there was a 20 percent chance of rain, it actually rained only about 5 percent of the time.

People don’t mind when a forecaster predicts rain and it turns out to be a nice day. But if it rains when it isn’t supposed to, they curse the weatherman for ruining their picnic. “If the forecast was objective, if it has zero bias in precipitation,” Bruce Rose, a former vice president for the Weather Channel, said, “we’d probably be in trouble.”

My thought when reading this was that there are actually two different reasons why you might want to systematically adjust reported percentages (i.e., fib a bit) when trying to communicate the likelihood of bad weather.

But first, an aside on what public health folks typically talk about when they talk about communicating uncertainty: I've heard a lot (in classes, in blogs, and in Bad Science, for example) about reporting absolute risks rather than relative risks, and about avoiding other ways of communicating risks that generally mislead. What people don't usually discuss is whether the point estimates themselves should ever be adjusted; rather, we concentrate on how to best communicate whatever the actual values are.

Now, back to weather. The first reason you might want to adjust the reported probability of rain is that people are rain averse: they care more strongly about getting rained on when it wasn't predicted than vice versa. It may be perfectly reasonable for people to feel this way, and so why not cater to their desires? This is the reason described in the excerpt from Silver's article above.

Another way to describe this bias is that most people would prefer to minimize Type II error (false negatives) at the expense of having more Type I error (false positives), at least when it comes to rain. Obviously you could take this too far -- reporting rain every single day would completely eliminate Type II error, but it would also make forecasts worthless. Likewise, with big events like hurricanes the costs of Type I errors (wholesale evacuations, cancelled conventions, etc) become much greater, so this adjustment would be more problematic as the cost of false positives increases. But generally speaking, the so-called "wet bias" of adjusting all rain prediction probabilities upwards might be a good way to increase the general satisfaction of a rain-averse general public.

The second reason one might want to adjust the reported probability of rain -- or some other event -- is that people are generally bad at understanding probabilities. Luckily though, people tend to be bad about estimating probabilities in surprisingly systematic ways! Kahneman's excellent (if too long) book Thinking, Fast and Slow covers this at length. The best summary of these biases that I could find through a quick Google search was from Lee Merkhofer Consulting:

 Studies show that people make systematic errors when estimating how likely uncertain events are. As shown in [the graph below], likely outcomes (above 40%) are typically estimated to be less probable than they really are. And, outcomes that are quite unlikely are typically estimated to be more probable than they are. Furthermore, people often behave as if extremely unlikely, but still possible outcomes have no chance whatsoever of occurring.

The graph from that link is a helpful if somewhat stylized visualization of the same biases:

In other words, people think that likely events (in the 30-99% range) are less likely to occur than they are in reality, that unlikely events (in the 1-30% range) are more likely to occur than they are in reality, and that extremely unlikely events (very close to 0%) won't happen at all.
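One standard way the decision-theory literature models this pattern is Prelec's probability weighting function, which overweights small probabilities and underweights large ones. A minimal sketch -- the alpha value here is illustrative, not measured, and the real perception curve would have to be estimated from data:

```python
import math

def prelec_weight(p, alpha=0.65):
    """Prelec (1998) weighting function: a model of how a stated
    probability p tends to be *perceived*. With alpha < 1, small
    probabilities are overweighted and large ones underweighted."""
    if p <= 0.0:
        return 0.0
    return math.exp(-((-math.log(p)) ** alpha))

# A stated 5% chance is perceived as roughly 13%...
print(prelec_weight(0.05))
# ...while a stated 90% chance is perceived as roughly 79%.
print(prelec_weight(0.90))
```

The curve crosses the diagonal near 1/e (about 37%), which lines up roughly with the ~40% crossover described in the quote above: below it, probabilities are inflated in perception; above it, deflated.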

My recollection is that these biases can be a bit different depending on whether the predicted event is bad (getting hit by lightning) or good (winning the lottery), and that the familiarity of the event also plays a role. Regardless, with something like weather, where most events are within the realm of lived experience and most of the probabilities lie within a reasonable range, the average bias could probably be measured pretty reliably.

So what do we do with this knowledge? Think about it this way: we want to increase the accuracy of communication, but there are two different points in the communications process where you can measure accuracy. You can care about how accurately the information is communicated from the source, or how well the information is received. If you care about the latter, and you know that people have systematic and thus predictable biases in perceiving the probability that something will happen, why not adjust the numbers you communicate so that the message -- as received by the audience -- is accurate?

Now, some made up numbers: Let's say the real chance of rain is 60%, as predicted by the best computer models. You might adjust that up to 70% if that's the reported risk that makes people perceive a 60% objective probability (again, see the graph above). You might then adjust that percentage up to 80% to account for rain aversion/wet bias.

Here I think it's important to distinguish between technical and popular communication channels: if you're sharing raw data about the weather or talking to a group of meteorologists or epidemiologists then you might take one approach, whereas another approach makes sense for communicating with a lay public. For folks who just tune in to the evening news to get tomorrow's weather forecast, you want the message they receive to be as close to reality as possible. If you insist on reporting the 'real' numbers, you actually draw your audience further from understanding reality than if you fudged them a bit.

The major and obvious downside to this approach is that if people know it's happening, either it won't work or they'll be mad that you lied -- even though you were only lying to better communicate the truth! One possible way around this is to describe the numbers as something other than percentages: some made-up index that sounds enough like a percentage to satisfy the layperson, while remaining open to detailed examination by anyone who is interested.

For instance, we all know the heat index and wind chill aren't the same as temperature; rather, they represent how hot or cold the weather actually feels. Likewise, we could report something like a "Rain Risk" or "Rain Risk Index" that accounts for known biases in risk perception and rain aversion. The weatherman would report a Rain Risk of 80% when the actual probability of rain is just 60%. This would give recipients more useful information, while also maintaining technical honesty and some level of transparency.

I care a lot more about health than about the weather, but I think predicting rain is a useful device for talking about the same issues of probability perception in health, for two reasons. First, the probabilities in rain forecasting are much more within the realm of human experience than the tiny probabilities that come up so often in epidemiology. Second, the ethical stakes feel a bit lower when writing about lying about the weather than when suggesting physicians should systematically mislead their patients, even if the ultimate aim of the adjustment is to better inform them.

I'm not saying we should walk back all the progress we've made in terms of letting patients and physicians make decisions together, rather than the latter withholding information and paternalistically making decisions for patients based on the physician's preferences rather than the patient's. (That would be silly in part because physicians share their patients' biases.) The idea here is to come up with better measures of uncertainty -- call it adjusted risk or risk indexes or weighted probabilities or whatever -- that help us bypass humans' systematic flaws in understanding uncertainty.

In short: maybe we should lie to better tell the truth. But be honest about it.

Friday photos

These photos are of the construction site next to my office in Addis -- the quality isn't that great, but I still think they're interesting. Some observations on this site:

  1. progress is slow
  2. manual labor is substituted for capital-intensive technology wherever possible
  3. the scaffolding is made by hand on site
  4. there's absolutely no protective gear (no hard hats, no harnesses while hanging off the flimsy handmade scaffolding), and
  5. women are surprisingly well-represented (at least at this site).