Arkansas

I arrived back in my hometown of Searcy, Arkansas. I hadn't been back in a year -- I was living in DC from January through June, then traveling in Guatemala for the summer, and most recently living in Baltimore -- so it's good to be back. Searcy is a town of ~18,000 people in central Arkansas, where the flat plains of the Mississippi Delta meet the first foothills of the Ozarks. The town once put up a billboard proclaiming "Searcy, where thousands live as millions wish they could." It's also the home of Harding University, a conservative Christian university affiliated with the Church of Christ. My dad's been a professor at Harding for decades, so Searcy has always been home and likely always will be. Because I lived in the same small town for the first 23 years of my life, moving to Washington, DC in May of 2008 was a huge change. I had culture shock, but mostly in positive ways. When people asked me if I liked DC, I would answer "Yes! But... I don't really have anything to compare it to, because it's my first city." I was never sure whether I liked DC in particular, or whether what I actually liked was the urban environment in general. (My friends from NYC laugh because DC hardly feels urban to them.) Now that I've lived for several months in my second city, Baltimore, I can say that I do like it, but maybe not as much as DC.

One realization I've had over the last year is that the divide between urban and rural America (to dichotomize it) is as significant as, or maybe more significant than, the divide between liberal and conservative, or religious and secular. Most of my friends from high school and college are rural, Southern, politically conservative (though often apathetic), married (some with kids on the way), and quite religious -- of the evangelical Christian persuasion. No Jews, Muslims, Hindus, or Buddhists here. All of those adjectives (except married) once described me, but now I'm a politically liberal, secular, single young professional living in a big city. Yes, these traits are correlated: there are relatively few very religious young professionals living in big cities, and relatively few hardcore secularists in rural Southern towns. But I think the urban/rural divide has a bigger impact on my daily experience, and on shaping my views and actions, than any of the other traits.

I think I've become a thoroughly urban creature, but the small-town roots linger. I like so many things about cities: the density that puts so many people, so many jobs, and so much food, culture, entertainment, and transportation close at hand. But I also like the space and beauty of the small town. That's a universal American narrative in a way; we all like to think we were born -- and remain rooted -- in small towns, even though the majority of us live in cities. I appreciate having grown up in a small town, and it's nice to be back for an occasional visit, but it's hard for me to imagine coming back here to live.

-----------

A few random observations from my visit back to Arkansas so far:

1) Little Rock ain't that big, though it felt huge when I was growing up.

2) There's so much space around the roads and freeways, and within the towns. The space gives you a sense of openness, but it also means you have to drive everywhere.

3) I'm at my favorite coffeeshop in town (one of two, and the only other options for places to hang out are churches, a Hastings, and Wal-Mart) and the first person who walked in the door after me is wearing a t-shirt that says (only) "JESUS".

4) People are different. Strong Southern accents for one. A lot more baseball caps, and pickup trucks. Women are wearing more makeup. More overweight and obese people than you typically see on the streets in a city. Lots of white people, few of anything else.

5) Finally, today's lunch spot was the Flying Pig BBQ:

Culturomics

Several prominent scholars and the Google Books team have published a new paper that’s generating a lot of buzz (Google pun intended). The paper is in Science (available here) and (update) here's the abstract:

We constructed a corpus of digitized texts containing about 4% of all books ever printed. Analysis of this corpus enables us to investigate cultural trends quantitatively. We survey the vast terrain of "culturomics", focusing on linguistic and cultural phenomena that were reflected in the English language between 1800 and 2000. We show how this approach can provide insights about fields as diverse as lexicography, the evolution of grammar, collective memory, the adoption of technology, the pursuit of fame, censorship, and historical epidemiology. "Culturomics" extends the boundaries of rigorous quantitative inquiry to a wide array of new phenomena spanning the social sciences and the humanities.

It’s readable, thought-provoking, and touches on many fields of study, so I imagine it will be widely read and cited. Others have noted many of the highlights, so here are some brief bulleted thoughts:

  • The authors don’t explore the possible selection bias in their study. They note that the corpus of books they studied includes about 4% of all published books. They specifically chose works that scanned better and have better metadata (author, date of publication, etc.), so it seems quite likely that these works differ systematically from those that were scanned but not chosen, and differ even more from those not yet scanned. Will the conclusions hold up when new books are added? Since many of the results were based on random subsets of the books (or n-grams) they studied, will those results hold up when other scholars try to recreate them with separate randomly chosen subsets?
  • Speaking of metadata, I would love to see an analysis of social networks amongst authors and how that impacts word usage. If someone had a listing of, say, several hundred authors from one time period, and some measure of how well they knew each other, and combined that information with an analysis of their works, you might get some sense of how “original” various authors were, and whether originality is even important in becoming famous.
  • The authors are obviously going for a big splash and make several statements that are a bit provocative and likely to be quoted. It will be great to see these challenged and discussed in subsequent publications. One example that is quotable but may not be fully supported by the data they present: “We are forgetting our past faster with each passing year.” But is the frequency with which a year (their example is 1951) appears in books actually representative of collective forgetting?
  • I love the word plays. An example: “For instance, we found “found” (frequency: 5x10^-4) 200,000 times more often than we finded “finded.” In contrast, “dwelt” (frequency: 1x10^-5) dwelt in our data only 60 times as often as “dwelled” dwelled.”
  • The “n-grams” studied here (collections of letters separated from others by spaces, which could be words, numbers, or typos) are too short for a study of copying and plagiarism, but similar approaches could yield insight into how common copying or borrowing has been throughout history.
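The kind of frequency comparison behind the "found"/"finded" word play is easy to sketch. Here's a minimal, hypothetical version that extracts 1-grams from a toy corpus and compares relative frequencies (the corpus and counts are invented purely for illustration):

```python
import re
from collections import Counter

def one_grams(text):
    """Split text into 1-grams: runs of characters separated by whitespace,
    lowercased so 'Found' and 'found' count together."""
    return re.findall(r"\S+", text.lower())

corpus = "we found that she found it ; nobody finded anything"
counts = Counter(one_grams(corpus))
total = sum(counts.values())

# Relative frequency of each verb form, and the ratio between them,
# mirroring the paper's regular-vs-irregular comparison.
freq_found = counts["found"] / total
freq_finded = counts["finded"] / total
ratio = freq_found / freq_finded
print(ratio)  # 2.0 -- 'found' is twice as common as 'finded' in this toy corpus
```

At the scale of the Google Books corpus the same idea runs over billions of tokens per year of publication, which is what makes trends over time visible.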

Randomizing in the USA

The NYTimes posted this article about a randomized trial in New York City:

It has long been the standard practice in medical testing: Give drug treatment to one group while another, the control group, goes without.

Now, New York City is applying the same methodology to assess one of its programs to prevent homelessness. Half of the test subjects — people who are behind on rent and in danger of being evicted — are being denied assistance from the program for two years, with researchers tracking them to see if they end up homeless.

Dean Karlan at Innovations for Poverty Action responds:

It always amazes me when people think resources are unlimited. Why is "scarce resource" such a hard concept to understand?

I think two of the most important points here are that a) there weren't enough resources for everyone to get the services anyway, so they're just changing the decision-making process for who gets the service from (presumably) first-come-first-served to randomized, and b) studies like this can be ethical when there is reasonable doubt about whether a program actually helps. If it were firmly established that the program is beneficial, then it would be unethical to test it -- which is why you can't keep testing a proven drug against placebo.

However, this is good food for thought for those interested in running randomized trials of development initiatives in other countries. It shows how individuals here in the US react to being treated as "test subjects" -- and why should we expect people in other countries to feel differently? That said, a lot of randomized trials don't get this sort of pushback. I'm not familiar with this program beyond what I read in the article, but it's possible that more could have been done to communicate the purpose of the trial to the community, activists, and the media.

There are some interesting questions raised in the IPA blog comments as well.

Monday Miscellany

  • More than I ever knew about Tycho Brahe. The possible death by mercury poisoning, the duel, the fake nose made of gold and silver, the clairvoyant dwarf jester under his table, and the pet elk.
  • Wikileaks is more a movement than a single site, as its model will outlive the shutdown of the site or even the death of its founder. From Foreign Policy: for better or worse, over 200 mirror sites have already been set up, and thousands of individuals have downloaded a heavily encrypted "insurance" file with the State Department cables. Meanwhile, government workers were ordered not to read the cables...
  • The singularity is past.
  • Elizabeth Pisani on the myth of hypothesis-driven science.

The Changing Face of Epidemiology

Unlike many scientific disciplines, epidemiology is rarely taught at the undergraduate level. I've met a lot of public health students over the past few months, but only a few majored as undergrads in public health or something similar, and I haven't met anyone whose degree was in epidemiology. For the most part, people come to epidemiology from other fields: there are many physicians, and lots of pre-med students who decided they didn't want to be doctors (like me) or who still do. This has many implications for the field, including a bias toward looking at problems primarily through a biomedical lens rather than through sociological, political, economic, or anthropological ones. Another interesting consequence of this lack of (or only cursory) exposure to epidemiology before graduate school is that the introductory courses at most schools of public health are truly introductory. If you're a graduate student in biochemistry and molecular biology (my undergraduate field), my guess is that it's assumed you know something about the structure of nucleic acids, have drawn the Krebs cycle at some point, and may even have heard the PCR song.

In epidemiology we're essentially starting from scratch, so there's a need to move rapidly from having no common, shared knowledge, through learning the basic vocabulary (odds ratios, relative risks, risk differences, etc.), all the way to analyzing extremely complex research. This presents pedagogical difficulties, of course, and it also makes it easier to miss the "big picture" of the field of epidemiology.
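For readers who haven't met that basic vocabulary yet, here's a minimal sketch of how those measures fall out of a 2x2 table of exposure by disease. The counts are invented purely for illustration:

```python
# A hypothetical 2x2 table: exposure status (rows) by disease status (columns).
# These counts are made up to illustrate the formulas, not taken from any study.
a, b = 30, 70   # exposed:   diseased, not diseased
c, d = 10, 90   # unexposed: diseased, not diseased

risk_exposed = a / (a + b)        # 30/100 = 0.30
risk_unexposed = c / (c + d)      # 10/100 = 0.10

relative_risk = risk_exposed / risk_unexposed       # ≈ 3.0
risk_difference = risk_exposed - risk_unexposed     # ≈ 0.20
odds_ratio = (a / b) / (c / d)                      # (30/70)/(10/90) ≈ 3.86

print(relative_risk, risk_difference, round(odds_ratio, 2))
```

Note that the odds ratio (3.86) is larger than the relative risk (3.0) here; the two only approximate each other when the disease is rare, which is one of the first subtleties an intro course has to cover.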

For one of our earliest discussion labs in my epidemiologic methods course, we discussed a couple of papers on smoking and lung cancer. While "everyone knows" today that smoking causes lung cancer, it's a useful exercise to go back and look at the papers that actually established that as a scientific fact. In terms of teaching, it's a great case study for thinking about causality like an epidemiologist. After all, most people who smoke never get lung cancer, and some people get lung cancer without ever smoking, so establishing causality requires a bit more thought. Two of the papers we read are great for illustrating how epidemiology has changed and matured over the last 50 years.

The first paper we looked at is "The Mortality of Doctors in Relation to Their Smoking Habits: A Preliminary Report," written by Richard Doll and Bradford Hill in the British Medical Journal in 1954. (Free PDF here)  Doll and Hill followed up their groundbreaking study with "Mortality in relation to smoking: 50 years' observations on male British doctors" in 2004 (available here).

A few observations: First, the 1954 paper is much shorter: around 4 1/2 pages of text compared to 8 1/2 in the 2004 article. The 1954 paper is much more readable as well: it's conversational and uses much less specialized vocabulary (possibly because some of that vocabulary simply didn't exist in 1954). The graphs are also crude and ugly compared to the computer-generated ones in 2004.

The 2004 paper also ends with this note: "Ethical approval: No relevant ethics committees existed in 1951, when the study began."

Beyond the differences in style, length, and external approval by an ethics committee, the changes in authorship are notable. The original paper had just two authors: a physician and a statistician. The 2004 paper adds two more for a total of four (still small compared to many papers) -- and notably, both of the new authors are female. During those 50 years there was of course great progress in women's representation in scientific research. While that record is still spotty in some areas, schools of public health today are producing many more female scholars than male ones -- for example, current public health students at Hopkins are 71% female.

There has been a definite shift from the small-scale collaboration resulting in a paper with an individual, conversational style to the large-scale collaboration resulting in an extremely institutional output. One excellent example of this is a paper I read for an assignment today: "Serum B Vitamin Levels and Risk of Lung Cancer" by Johansson et al. in JAMA, 2010 (available here).

The Johansson et al. paper has ~8 pages of text, 47 references, 2 tables and 2 figures (all of which are quite complicated) and a number of online supplements. Its 46 authors have between them (by my count) 33 PhDs, 27 MDs, 3 MPHs, and 6 other graduate degrees! It's hard to tell gender just by name, but by my count at least half of the authors are likely female.

Clearly, epidemiology has changed a lot in the last 50 years. Gone are the days of (at least explicit) male domination. Many of the problems with the field today are related to information management and large-scale collaborations. Gone are the days of one or two researchers publishing ground-breaking studies on their own -- many of the "easy" discoveries have been made. Yet many of the examples we learn from -- and role models young public health researchers may want to emulate -- are from an earlier era.

Results-Based Aid

Nancy Birdsall writes "On Not Being Cavalier About Results" about a recent critique of the UK's DFID (Department for International Development):

The fear about an insistence on results arises from confusion about what “results” are. A legitimate typical concern is that aid bureaucracies pressed for “results” will resort, more than already is the case, to projects that provide inputs that seem to add up to easily measured “wins” (bednets delivered, books distributed, paramedics trained, vehicles or computers purchased, roads built) while neglecting “system” issues and “institution building”. But bednets and books and vehicles and roads are not results in any meaningful sense, and the connection between these inputs and real outcomes (healthier babies, better educated children, higher farmer income) goes through systems and institutions and is often lost....

Let us define results as measured gains in what children have learned by the end of primary school, or measured reductions in infant mortality or deforestation, or measured increases in the hours of electricity available, or annual increases in revenue from taxes paid by rich households in poor countries – or a host of other indicators that ultimately add up to the transformation of societies and the end of their dependence on outside aid. For a country to get results might not require more money but a reconfiguration of local politics, the cleaning up of bureaucratic red tape, local leadership in setting priorities or simply more exposure to the force of local public opinion. Let aid be more closely tied to well-defined results that recipient countries are aiming for; let donors and recipients start measuring and reporting those results to their own citizens; let there be continuous evaluation and learning about the mechanics of how recipient countries and societies get those results (their institutional shifts, their system reforms, their shifting politics and priorities), built on the transparency that Secretary Mitchell is often emphasizing.

(Emphasis added)

I'd also like to note that Birdsall is the founding director of the Center for Global Development, a nonprofit in DC that does a lot of work related to evidence-based aid. I relied fairly heavily on their report on "Closing the Evaluation Gap" on a recent dual degree app. The full report is worth the read.

Monday Miscellany

  • Alex Strick van Linschoten writes about "Five Things David Petraeus Wants You to Believe" about the war in Afghanistan. Van Linschoten's five things are: (spoiler: he doesn't think you should believe them) 1) The momentum has shifted in our favor, 2) "The Night Raids and Targeting of the Insurgency’s Leadership is an Effective Tool," 3) "The Military Effort is Subservient to Broader Political Goals," 4) "Mullah Mohammad Omar is irrelevant," 5) "Don’t mind the Afghan Government."
  • Patient safety is not improving at US hospitals despite lots of efforts to reverse the trend. Maybe the moral is that changing big institutions is really hard, even in a wealthy country that spends a huge chunk of its GDP on health care.
  • Roving Bandit writes about being censored for blogging about development work.
  • Chris Blattman on whether Brazil, China, India, and South Africa should get UN Security Council seats.
  • Ian Desai writes about the largely-hidden assistants who helped make Gandhi great.

Africa is Really Big

This has been going around the development blogs, but I think it's still worth posting in case you haven't seen it. The Mercator projection maps that we're so used to seeing greatly exaggerate the size of objects far from the equator while shrinking those close to it. Africa (at 11.7 million square miles) is larger than North America (9.5 million square miles) but appears roughly the same size as Greenland (a measly 0.8 million square miles). But this is the true size of Africa in comparison to the continental US, China, India, and most of Europe:

The True Size of Africa infographic

(h/t to the always fascinating Information is Beautiful)
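The distortion isn't mysterious: the Mercator projection stretches linear scale by a factor of sec(latitude), so areas inflate by roughly sec²(latitude). A quick back-of-the-envelope sketch (the central latitudes below are rough assumptions, chosen just for illustration):

```python
import math

def mercator_area_inflation(lat_degrees):
    """Approximate factor by which Mercator inflates areas at a given latitude:
    linear scale is sec(lat), so area scale is sec(lat) squared."""
    return 1 / math.cos(math.radians(lat_degrees)) ** 2

# Rough central latitudes, purely illustrative.
print(mercator_area_inflation(0))    # near the equator (central Africa): 1.0
print(mercator_area_inflation(72))   # central Greenland: ~10.5
```

At ~72°N, areas get drawn about ten times too large, which is how a 0.8-million-square-mile island ends up looking comparable to an 11.7-million-square-mile continent.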

This map also reminds me of an ODT map on display in one of the hallways at Hopkins. While it also uses the Mercator projection, it has South as up and North as down, showcasing just how arbitrary our designation of North as up really is. Something like this:

If any readers want to get me the Peters equal-area South-is-up map as the ultimate geek gift for the holidays, I would be eternally grateful.

Aid Workers vs. Journalists?

UPDATE: I mistakenly assumed the commenter name "ansel" was a pseudonym, so my comments on anonymity in the final paragraph may not be as applicable. Updates in brackets.

Interesting debate going on at Tales From the Hood: First, J (the anonymous aid worker blogger behind Tales) wrote "Dear Journalists: What to look for in aid programs," which includes suggestions like "Understand that you cannot evaluate a project, program or organization during one-day visit....Ask about learning.... Ask about outcomes....Use logic... Understand ambiguity...[and] Understand that things are almost never the way they seem." The summary sounds pretty basic, but the details aren't necessarily as simple.

Which prompted a lengthy comment from someone named ansel [Ansel Herz of MediaHacker]: "Dear aid groups, Do not invite us on one-day tours of your programs and expect them to be useful to us in any way.... We need to be able to come out to where you’re working unannounced and talk with you – your people in the field....Do not send out press releases over and over simply listing off the sheer numbers of stuff you’ve distributed or have stocked in warehouses as if it indicates how much you’ve accomplished. Quality of life is not measured by those (nearly impossible to verify independently) numbers." Etc.

J responded at length with a follow-up post (which probably stands alone if you're only going to read one link). There are several points of agreement -- on NGOs needing to be more open, for example -- but the main disagreement is over the "supply and demand" of lousy, feel-good information. Do NGOs give it to journalists because the journalists demand it, or do journalists take it from NGOs because it's all they can get? (I know, a bit simplified -- so check out the links.)

Of course, some of the debate was prompted by J's and [Ansel]'s tone, which is unfortunate. While I understand the necessity of anonymous blogging, I think this is one debate where the tone would have been slightly different -- and more productive -- had both writers been commenting under their own names. Still, seeing the [partially] anonymous back-and-forth gives you an idea of the animus that can exist between the different actors.

Readability

If you're a fan of reading long articles online (such as from Long Form) or just read a lot of different things on your web browser, I recommend checking out the excellent Readability plugin. Here's how Readability in Google Chrome renders this Atlantic article on parallels between mutations in genetic code and mutations in the text of hand-copied ancient manuscripts. The default version of the article:

Readability version of the article, with ads and other distractions removed, larger font, and more pleasing background color:

You can customize your settings, including font, font size, background color, and width of the text body. Check it out.

Afraid

Here are two semi-related articles: one by William Easterly about how aid to Ethiopia is propping up an oppressive regime, and another by Rory Carroll on the pernicious effects of well-intentioned aid tourism in Haiti. Basically, it's really hard to do things right, because international aid and development are not simple. Good intentions are not enough. You can mess up by funneling all your money through a central regime, or by having an uncoordinated, paternalistic mess.

A couple confessions. First, I'm a former "aid tourist." In high school and college I went on short-term trips to Mexico, Guyana, and Zambia (and slightly different experiences elsewhere). My church youth group went to Torreon, Mexico and helped build a church (problematize that). In Guyana and Zambia I was part of medical groups that ostensibly aimed to improve the health of the local people; in hindsight neither project could have possibly had any lasting effects on health, and likely fostered dependency.

Second, I'm an aspiring public health / development professional, and I'm afraid. I don't want to be the short-term, uncoordinated, reinventing-the-wheel, well-intentioned aid vacationer -- and I think given my education (and the experience I hope to keep gaining) I'm more likely to avoid at least some of those shortcomings. But I'm scared that my work might prop up nasty regimes, or satiate a bloated aid industry that justifies its projects to sustain itself, or give me the false impression of doing good while actually doing harm.

I think the first step to doing better is being afraid of these things, but I'm still learning where to go from here.

Rejection

As evidenced by my posting schedule, things have been busy. The quarter system is kind of fast -- since August we've already had first quarter midterms and finals and second quarter midterms and are now finishing those up and gearing up for finals. I have a lot of thoughts I want to share -- and a whole folder full of PDFs to post about -- but for now I'll just share this comic I drew in my biostats class:

Stats 101 for Policymakers

It's a problem that is easy to recognize but hard to get around: policy isn't made by public health epidemiologists or statisticians. In between the researchers (who have their own biases) and the policymakers is a whole industry of interest groups and advocates. Of course, I've been one of those advocates at times, as has pretty much everyone who has worked in public health or politics. Why am I thinking about this? I just read "Interpreting health statistics for policymaking: the story behind the headlines" in the Lancet (available for free here) by Neff Walker et al. The paper outlines this problem:

Estimates would be more credible if they come from technical groups that are independent of the organisations that implement programmes and advocate for funds.

Maternal mortality is much more inequitably distributed than neonatal mortality, but does that mean we should focus on it more? Of course, some of this comes down to fundamental philosophical differences concerning whether we should concentrate our health investments where they will make the most difference in terms of absolute numbers of lives saved, or where they will make the most difference in terms of reducing health care inequalities.

Statements like “Maternal mortality is 100-fold higher in many low-income countries than in high-income countries” send a clear message with respect to inequities, but no information about the absolute magnitude of the problem. Statements more useful to decisionmakers are those that use a standard metric to provide sets of meaningful comparisons. For example, the ratio of inequity between low-income and high-income countries for deaths from severe neonatal infections is far lower, at 11-fold. In absolute numbers, however, two to three times as many lives are lost to neonatal infections each year (1·4 million) in developing countries than to maternal mortality (500 000).
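The arithmetic behind that contrast is worth making explicit, because the two comparisons point in opposite directions. Using the paper's rounded figures:

```python
# Figures as quoted in the Lancet paper (rounded).
neonatal_infection_deaths = 1_400_000   # per year, developing countries
maternal_deaths = 500_000               # per year

inequity_ratio_maternal = 100   # low- vs high-income countries
inequity_ratio_neonatal = 11

# Maternal mortality is roughly 9x more inequitably distributed...
print(inequity_ratio_maternal / inequity_ratio_neonatal)

# ...but neonatal infections kill nearly three times as many people per year.
print(neonatal_infection_deaths / maternal_deaths)  # 2.8
```

Which metric "wins" depends entirely on whether you prioritize reducing inequality or saving the most lives in absolute terms, which is exactly the philosophical divide mentioned above.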

It's easy to criticize HIV/AIDS advocates because they're such, well, good advocates. Example 1:

For example, a common practice is to present an estimate at the global or regional level and then to elaborate on it by giving a specific and often unrepresentative example. HIV/AIDS advocates talking about the effect of AIDS on under-5 mortality often use as examples countries in southern Africa where AIDS accounts for 30–50% of deaths in under-5s. But for sub-Saharan Africa as a whole, AIDS is thought to account for less than 10% of under-5 deaths.

Example 2:

Advocates of funding for [HIV/AIDS] often quote the cumulative number of global deaths from HIV/AIDS since it was first identified. But, if historical estimates were used for other diseases, the number of HIV/AIDS deaths would be small in comparison. For example, if the same statistical procedures were applied for pneumonia as for HIV/AIDS, the cumulative deaths since 1975 would be about 60 million—almost three times the estimated cumulative deaths from AIDS in the same time period.

They end the paper with a list of recommendations for how policymakers should consider health stats coming from advocates or any other source.

Tuskegee in Guatemala

The news that a US government study in the 1940s involved injecting Guatemalans with syphilis has been circulating, and it makes my stomach turn. Susan Reverby -- the Wellesley historian who uncovered the fiasco -- has made the draft paper available on her website: "'Normal Exposure' and Inoculation Syphilis: A PHS 'Tuskegee' Doctor in Guatemala, 1946-48," which will be published in the Journal of Policy History in January.

From the introduction:

Policy is often made based on historical understandings of particular events, and the story of the “Tuskegee” Syphilis Study (the Study) has, more than any other medical research experiment, shaped policy surrounding human subjects. The forty-year study of “untreated syphilis in the male Negro” sparked outrage in 1972 after it became widely known, and inspired requirements for informed consent, the protection of vulnerable subjects, and oversight by institutional review boards.

When the story of the Study circulates, however, it often becomes mythical. In truth the United States Public Health Service (PHS) doctors who ran the Study observed the course of the already acquired and untreated late latent disease in hundreds of African American men in Macon County, Alabama. They provided a little treatment in the first few months in 1932 and then neither extensive heavy metals treatment nor penicillin after it proved a cure for the late latent stage of the disease in the 1950s. Yet much folklore asserts that the doctors went beyond this neglect, and that they secretly infected the men by injecting them with the bacteria that causes syphilis. This virally spread belief about the PHS’s intentional infecting appears almost daily in books, articles, talks, letters, websites, tweets, news broadcasts, political rhetoric, and above all in whispers and conversations. It is reinforced when photographs of the Study’s blood draws circulate, especially when they are cropped to show prominently a black arm and a white hand on a syringe that could, to an unknowing eye, be seen as an injection.

Historians of the Study have spent decades now trying to correct the misunderstandings in the public and the academy, and to make the facts as knowable as possible. The story is horrific enough, it is argued, without perpetuating misunderstanding over what really did happen and how many knew about it. What if, however, the PHS did conduct a somewhat secret study whose subjects were infected with syphilis by one of the PHS doctors who also worked in “Tuskegee?” How should this be acknowledged and affect how we discuss historical understandings that drive the need for human subject protection?

(Emphasis added.) And later:

Ironically, though, the mythic version of the “Tuskegee” Study may offer a better picture of mid-century PHS ethics than the seemingly more informed accounts. For Public Health Service researchers did, in fact, deliberately infect poor and vulnerable men and women with syphilis in order to study the disease. The mistake of the myth is to set that story in Alabama, when it took place further south, in Guatemala.

Interestingly, the episode happened during a period of hope in Guatemalan history -- one of elections and land reforms, before decades of civil war that followed our overthrow of the democratically elected government:

The United Fruit Company owned and controlled much of Guatemala, the quintessential “banana republic,” in the first half of the 20th century. When the PHS looked to Guatemala for its research in the immediate post-World War II years, it came into the country during the period known for its relative freedoms; between 1944 and the U.S. led CIA coup of the elected government in 1954, there were efforts made at labor protection laws, land reform, and democratic elections. The PHS was part of the effort to use Guatemala for scientific research as they presumed to transfer laboratory materials, skills, and knowledge to Guatemalan public health elite.

And one last tidbit:

In reporting to Cutler after he returned to the States, he explained that he had brought Surgeon General Thomas Parran up to date and that with a “merry twinkle [that] came into his eye…[he] said ‘You know, we couldn’t do such an experiment in this country.’”

Read the whole thing.

Confronting ourselves

The Independent's Johann Hari interviews Gideon Levy, a controversial Israeli critic of Israel's actions in the Occupied Territories. An excerpt:

He reported that day on a killing, another of the hundreds he has documented over the years. As twenty little children pulled up in their school bus at the Indira Gandhi kindergarten, their 20 year-old teacher, Najawa Khalif, waved to them – and an Israel shell hit her and she was blasted to pieces in front of them. He arrived a day later, to find the shaking children drawing pictures of the chunks of her corpse. The children were "astonished to see a Jew without weapons. All they had ever seen were soldiers and settlers."

And another:

Levy uses a simple technique. He asks his fellow Israelis: how would we feel, if this was done to us by a vastly superior military power? Once, in Jenin, his car was stuck behind an ambulance at a checkpoint for an hour. He saw there was a sick woman in the back and asked the driver what was going on, and he was told the ambulances were always made to wait this long. Furious, he asked the Israeli soldiers how they would feel if it was their mother in the ambulance – and they looked bemused at first, then angry, pointing their guns at him and telling him to shut up.

"I am amazed again and again at how little Israelis know of what's going on fifteen minutes away from their homes," he says. "The brainwashing machinery is so efficient that trying [to undo it is] almost like trying to turn an omelette back to an egg. It makes people so full of ignorance and cruelty." He gives an example. During Operation Cast Lead, the Israeli bombing of blockaded Gaza in 2008-9, "a dog – an Israeli dog – was killed by a Qassam rocket and it [was] on the front page of the most popular newspaper in Israel. On the very same day, there were tens of Palestinians killed, they were on page 16, in two lines."

I'm trying to imagine how the American public would react if the front pages always carried news of the latest Afghan "collateral damage" -- not just the numbers, but real, humanizing stories. For that matter, if we saw graphic coverage of the damage done to US soldiers and contractors, might things change?

Certainly one reason the American polity has been able to happily go about its business while we've waged devastating wars in two countries is that, by and large, Americans don't hear about the damage we inflict. Yes, we see a bit of political analysis ("How will this affect the election?") and occasional stories about US casualties ("Three soldiers killed in a helicopter crash"), but we're not forced to confront the hundreds of civilian casualties from stray bombs and bullets and germs in any serious, compelling way. That complete lack of confrontation, more than any bias in the stories that do get coverage, allows the tragedy of our foreign adventures to continue.

On war journalism, truth-telling, and independence

I read a few things recently that I thought were worth highlighting. The first is a bit of historical background on the brutality of war: an Atlantic article from 1989 on World War II and how its reality differed from its presentation to civilians in propaganda back home. I wonder to what extent movies like Saving Private Ryan have changed this perception. I read it a few days ago, and was reminded of it when I read a letter to Andrew Sullivan from a combat vet:

"You see what you're sending us to do? You see who is dying because you support a war in a part of the world you know nothing about?" The ignorance of the population is so vast that when I was deploying to Iraq and (thankfully) coming back, as I passed through Atlanta-Hartfield, people would congratulate me and my fellow servicemembers, shake our hands, say thanks, etc., and all I wanted to do was scream at them, "Get educated you ignoramus! This isn't a great thing; it's futile!"

In tangentially related news, Pro Publica has a new report showing that contractor deaths now exceed military deaths in Iraq and Afghanistan. In other words, the number of casualties hasn't necessarily dropped, because more of the jobs that would traditionally have been done by soldiers are now being done by contractors / mercenaries.

And even more tangentially, some historical context for how intertwined our media and military / intelligence establishments can be: more than 400 American journalists have carried out assignments for the CIA in the last 25 years. This sort of line-blurring is understandably problematic for both journalistic integrity and issues of access, somewhat analogous to how militaries co-opt the independence of humanitarian and public health workers in war zones.

Monday Miscellany

  • Stuxnet may be the first true cyberweapon -- a computer virus transmitted through USB drives (meaning that it can infect computers not connected to the 'Net) that targets computers controlling industrial systems. More here.
  • Which guy is more brave? The one who jumps on a grenade and shields his buddies from the explosion with his own body? Or the guy who jumps on the grenade to shield his friends, and then realizes the grenade is a dud? How often do we judge actions using information we didn't have at the time?
  • Did you know that defenestration is the act of throwing something (or someone) out a window, and such acts sparked several wars? Wikipedia has a whole list of "notable defenestrations in history."
  • Dan Ariely writes about plagiarism and how he bought essays from several "essay mills." Teachers shouldn't be too worried about these, unless they can't distinguish text like this from their normal students' writing:

    "Cheating by healers. Healing is different. There is harmless healing, when healers-cheaters and wizards offer omens, lapels, damage to withdraw, the husband-wife back and stuff. We read in the newspaper and just smile. But these days fewer people believe in wizards."

Volcanoes & Panoramas

This summer I climbed Volcan Santa Maria, above Xela (Quetzaltenango), Guatemala, on my second day in town. I took a series of photos from the summit looking back towards Xela, and my dad stitched three of those together using Panorama Maker 4 by ArcSoft. The result (click for the full-size image):

And here are a couple shots from later in my trip, from on top of Volcan San Pedro looking down on the beautiful Lago de Atitlan: