January 12, 2008
Almost five years into the destruction
of Iraq, the orthodox rule of thumb for assessing statistical
tabulations of the civilian death toll is becoming clear: any
figure will do so long as it is substantially lower than that
computed by the Johns Hopkins researchers in their 2004 and 2006
studies. Their findings, based on the most orthodox sampling
methodology and published in the Lancet after extensive peer
review, estimated the post-invasion death toll by 2006 at about
655,000. Predictably, this shocking assessment drew howls of
ignorant abuse from self-interested parties, including George
Bush ("not credible") and Tony Blair.
Now we have a new result compiled
by the Iraqi Ministry of Health under the sponsorship of the
World Health Organization and published in the once reputable
New England Journal of Medicine (NEJM), estimating the
number of Iraqis murdered, directly or indirectly, by George
Bush and his willing executioners at 151,000--far less than the
most recent Johns Hopkins estimate. Due to its adherence to the
rule cited above, this figure has been greeted with respectful
attention in press reports, along with swipes at the Hopkins
effort as having, as the New York Times had to remind readers,
"come under criticism for its methodology."
However, as a careful and informed
reading makes clear, it is the new report that is guilty of sloppy
methodology and tendentious reporting -- evidently inspired by
the desire to discredit the horrifying Hopkins findings, which,
the NEJM study triumphantly concludes, "considerably overestimated
the number of violent deaths." In particular, while Johns
Hopkins reported that the majority of post-invasion deaths were
due to violence, the NEJM serves up the comforting assessment
that only one sixth of deaths in this period have been due to
violence.
Among the many obfuscations
in this new report, the most fundamental is the blurred distinction
between it and the survey it sets out to discredit. The Johns
Hopkins project sought to enumerate the number of excess deaths
due to all causes in the period following the March 2003 invasion
as compared with the death rate prior to the invasion, thus giving
the number of people who died because Bush invaded. Post hoc, propter
hoc. This new study, on the other hand, explicitly sought to analyze
only deaths by violence, imposing a measure of subjectivity on
the findings from the outset. For example, does the child who
dies because the local health clinic has been looted in the aftermath
of the invasion count as a casualty of the war, or not? As CounterPunch's
statistical consultant Pierre Sprey reacted after reading the
full NEJM paper, "They don't say they are comparing entirely
different death rates. That's not science, it's politics."
Superficially at least, both
the Hopkins team and the new study followed the same methodology
in conducting their surveys: interviewing the head of household
in a random sample of households drawn from randomly selected
"clusters" of houses around the country. While the Johns Hopkins
team demanded death
certificates as confirmation of deaths and their cause, the NEJM
study had no such requirement. That survey was based on a sample
of 9,345 households, while the 2006 Johns Hopkins report drew
on a sample of 1,849 households. In reports on the NEJM study,
much respectful attention was paid to the fact that their sample
was bigger, which, uninformed reporters assumed, had to mean
that it was more accurate. In fact, as their papers' own pollsters
could have told them, beyond a certain point the size of a sample
makes less and less difference to the accuracy of the information:
precision improves only as the square root of the ratio between
the two sample sizes.
Far, far more important than
the size of the sample, however, is the degree to which the overall
sample is truly random, that is, truly representative of the
population sampled--and here is where the first of many serious
questions about the NEJM effort arises. As the authors themselves
admit, they did not visit a significant proportion of the original
designated clusters: "Of the 1086 originally selected clusters,
115 (10.6%) were not visited because of problems with security,"
meaning they were inconveniently situated in Anbar province,
Baghdad, and two other areas that were dangerous to visit (especially
for Iraqi government employees from a Shia-controlled ministry).
While such reluctance is understandable--one of those involved
was indeed killed during the survey--it also meant that areas
with very high death tolls were excluded from the survey.
To fill the gap, the surveyors
reached for the numbers advanced by Iraq Body Count (IBC),
a U.K.-based entity that relies entirely on newspaper reports
of Iraqi deaths to compile its figures. Due to IBC's policy
of posting minimum and maximum figures, currently standing at
80,419 and 87,834, their numbers carry a misleading air of scientific
precision. As the group itself readily concedes, the estimate
must be incomplete, since it omits deaths that do not make it
into the papers, a number that is likely to be high in a society
as violently chaotic as today's Baghdad, and higher still outside
Baghdad where it is even harder for journalists to operate.
Nevertheless, the NEJM study
happily adopted a formula in which they computed the ratio of
their figures from a province they did visit to the IBC number
for that province, and then used that ratio to adjust their own
figures for places they did not dare go. Interestingly, the last
line of the table on page 8 of the Supplementary Appendix to
the report, "adjustment for missing clusters using IBC data,"
reveals that in using the Body Count's dubious figures to fill
the holes in their Baghdad data, the formula they employ actually
revises downward the rate of violent deaths in what they
label "low mortality provinces."
A paragraph in the published
abstract of the report, blandly titled "Adjustment for Reporting
Bias" contains an implicit confession of the subjectivity
with which the authors reached their conclusions. As Sprey points
out, "they say 'the level of completeness in reporting of
death was 62%,' but they give no real explanation of how they
arrive at that figure." Les Roberts, one of the principal
authors of the Johns Hopkins studies, has commented: "We
confirmed our deaths with death certificates, they did not. As
the NEJM study's interviewers worked for one side in this conflict
[the U.S.-sponsored government], it is likely that people would
be unwilling to admit violent deaths to the study workers."
The NEJM study does cite an effort
to check information given by the household heads by also interviewing
household daughters about any deaths among their siblings. But
this data is rife with inconsistencies, particularly in the siblings'
reports of pre- and post-invasion deaths, inconsistencies egregious
enough that these interview results were not folded into the calculations
used to determine the report's conclusions.
Further evidence of tendentious
assessment surfaces in the section blandly titled "Response
Rates" in which the authors report that "Of the households
that did not respond, 0.7% were absent for an extended period
of time, and 1.1% of households were vacant dwellings."
Given current Iraqi conditions, houses are likely to be vacant,
or their owners absent for long periods, because something nasty
happened there--i.e., in precisely the households with higher death
rates. Yet, as Sprey points
out, there is no effort by the authors to account for this in
their conclusions.
As a statistician, Sprey is
most affronted by the enormities committed under the heading
"Statistical Analysis" in the NEJM paper, where it
is stated, "Robust confidence intervals were estimated with
the use of the jackknife procedure." The "confidence
interval" cited in the report is 104,000 to 223,000 with
a 95% uncertainty range. This does not mean, as many laypeople
assume, that there is an 85% chance that the "true number"
lies somewhere between those two figures.
Sprey explains its true meaning
this way:
"If you went out and did
the same study using the same methods and the same size sample,
but with different households, a thousand times, then 950 of
those studies would come up with a figure between 104,000 and
223,000. But the 'jackknife' they refer to is simply a procedure
by which you don't rely on data to estimate the confidence interval.
They admit in "Statistical
Analysis" that their confidence interval is simply a calculation
based on their numerical guesses quantifying the unknown size
of the three key uncertainties in their survey: the real death
rates in clusters they didn't dare visit; the percentage of unreported
deaths in their sample; the proportion of the Iraq population
that has fled since the invasion. So this is a computerized guess
on what the confidence interval might be without using any of
the actual data scatter from their survey. This is in sharp contrast
to the Johns Hopkins team, who rigorously used the data of their
survey to estimate their confidence interval. To call it 'robust'
is simply a disgrace. It's not robust, it's simply speculation."
If any further confirmation
of the essential worthlessness of the NEJM effort were needed, it comes in
the bizarre conclusion that violent deaths in the Iraqi population
have not increased over the course of the occupation. As Iraq
has descended into a bloody civil war during that time, it should
be obvious to the meanest intelligence that violent deaths
have to have increased. Indeed, even Iraq Body Count tracks the
same rate of increase as the Hopkins survey, while the NEJM study
settles for a mere 7% increase in recent years. As Roberts points out: "They
roughly found a steady rate of violence from 2003 to 2006. Baghdad
morgue data, Najaf burial data, Pentagon attack data, and our
data all show a dramatic increase over 2005 and 2006."
These distortions come as less
of a surprise on examination of page 6 of the supplementary appendix,
where an instructive table reveals that the 279 men and women
engaged in collecting data for the survey labored under the supervision
of no fewer than 128 local, field, and central supervisors and
"editors." Senior supervisors were shipped to Amman
for a training course, though why the Iraqi government should
want to send its own officials abroad for training is unexplained,
unless of course some other government wanted a hand in the matter.
Finally, there is the matter
of the New England Journal of Medicine lending its imprimatur
to this farrago. Once upon a time, under the great editor Marcia
Angell, this was an organ unafraid to cock a snook at power.
In particular, Angell refused to pander to the mendacities of
the drug companies, thereby earning their undying enmity. Much
has evidently changed, as the recruiting ad for the U.S. Army
on the home page of the current New England Journal reminds
us.
Andrew Cockburn is the author of Rumsfeld:
His Rise, Fall and Catastrophic Legacy.