At the weekend I was running a course, which included critical appraisal of scientific papers. The point was made that if you were presented with a number of studies about drug A in condition X, published over a number of years, there would be differing results in different papers. The drug company representatives would, of course, cherry pick the results that would favour their drugs. The important thing to remember was not to be swayed by any one paper, but to look at the accumulating evidence as a whole.
The same is true of the controversial Lancet study on the Iraq casualties. What some commentators seem to have a problem with is that a study can be 100% correct in its use of statistics, yet at the same time skepticism can be expressed about the results – and criticism can be made of the sampling techniques. This is not the same as dismissing the results for partisan reasons.
The much-debated Lancet study [PDF] involved a cluster sample of 33 clusters of 30 households (990 households) to compare mortality in the 17·8 months after the invasion with the 14·6-month period preceding it. The authors estimated that 98,000 more deaths than expected occurred, but with wide confidence intervals (95% CI 8,000–194,000). It should be noted that cluster sampling is a technique used when resources are tight or the environment is dangerous, and it is known to have a higher sampling error than simple random sampling with the same sample size.
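The intuition behind that last point can be illustrated with a quick Monte Carlo sketch. The numbers below are entirely hypothetical (nothing is taken from either survey beyond the 33 × 30 design): households in the same cluster tend to share risk, so a cluster sample of 990 households fluctuates more from draw to draw than a simple random sample of the same size.

```python
import random
import statistics

random.seed(0)

# Hypothetical population: 100 clusters of 200 households, where the
# underlying death rate varies from cluster to cluster. That within-cluster
# similarity is what inflates the sampling error of a cluster design.
clusters = []
for _ in range(100):
    rate = random.uniform(0.005, 0.15)  # hypothetical cluster-level rate
    clusters.append([1 if random.random() < rate else 0 for _ in range(200)])

households = [h for cl in clusters for h in cl]
N_CLUSTERS, PER_CLUSTER = 33, 30          # the Lancet design: 33 x 30 = 990
SAMPLE_SIZE = N_CLUSTERS * PER_CLUSTER

def srs_estimate():
    """Mean rate from a simple random sample of 990 households."""
    return statistics.mean(random.sample(households, SAMPLE_SIZE))

def cluster_estimate():
    """Mean rate from 30 households in each of 33 randomly chosen clusters."""
    chosen = random.sample(clusters, N_CLUSTERS)
    picked = [h for cl in chosen for h in random.sample(cl, PER_CLUSTER)]
    return statistics.mean(picked)

srs = [srs_estimate() for _ in range(500)]
clu = [cluster_estimate() for _ in range(500)]

print(f"SRS spread:     {statistics.stdev(srs):.4f}")
print(f"Cluster spread: {statistics.stdev(clu):.4f}")
# "Design effect": how much the cluster design inflates the variance.
print(f"Design effect:  {statistics.variance(clu) / statistics.variance(srs):.2f}")
```

With any appreciable between-cluster variation the cluster estimates show the wider spread, which is exactly why, for a given sample size, a cluster survey produces wider confidence intervals than a simple random sample.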
The UN development programme have released a report which, according to The Times, “states that it was 95 per cent confident that the toll during the war and the first year of occupation was 24,000, but could have been between 18,000 and 29,000.” This isn’t exactly what the report says, but see later.
The UN study looked at all governorates in Iraq. In each governorate 1,100 households were selected for interview, with the exception of Baghdad, where 3,300 households were selected. The sample was 22,000 households – 21,668 were actually interviewed.
The section on war-related deaths is on page 55 of the analytical report [PDF]:
The number of deaths of civilians and military personnel in Iraq in the aftermath of the 2003 invasion is another set of figures that have raised controversy. The ILCS data indicates 24,000 deaths, with a 95 percent confidence interval from 18,000 to 29,000 deaths. The confidence interval was estimated using a linearisation technique (using SPSS Complex Samples, version 12).
Again, cluster sampling was used, but the bigger sample size would be more likely to yield a more accurate figure, as well as providing smaller confidence intervals.
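As a rough back-of-the-envelope illustration of the sample-size point (using a hypothetical event proportion, not a figure from either study, and ignoring the design effect of clustering): under simple random sampling the standard error of a proportion shrinks with the square root of the sample size, so a survey roughly 22 times larger gets intervals several times narrower.

```python
import math

p = 0.05  # hypothetical proportion of households reporting a death
for n in (990, 21_668):  # roughly the Lancet and ILCS sample sizes
    se = math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>6}: standard error ~ {se:.5f}")

# The ratio of the two standard errors is sqrt(21668 / 990), about 4.7,
# so the larger survey's interval is nearly five times narrower.
print(f"ratio: {math.sqrt(21_668 / 990):.2f}")
```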
So we have:
Lancet: 98,000 (95% CI 8,000–194,000)
UN report: 24,000 (95% CI 18,000–29,000)
It is important to note that the confidence interval of the Lancet report completely encloses the confidence interval of the UN report, which means the two studies are not necessarily contradictory. However, the UN report has a larger sample size in its favour, making its narrower interval more likely to contain the true number.
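The containment claim is easy to check mechanically. A small sketch using the two published intervals (note also that the Lancet point estimate of 98,000 falls well outside the UN interval, even though the intervals themselves are compatible):

```python
lancet_ci = (8_000, 194_000)   # 95% CI from the Lancet study
un_ci = (18_000, 29_000)       # 95% CI from the UN/ILCS report

def encloses(outer, inner):
    """True if the outer interval fully contains the inner one."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

print(encloses(lancet_ci, un_ci))           # → True: UN CI sits inside the Lancet CI
print(encloses(un_ci, lancet_ci))           # → False: not the other way round
print(un_ci[0] <= 98_000 <= un_ci[1])       # → False: Lancet point estimate is outside the UN CI
```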
Of course neither study is the last word, and that is part of the problem with the debate on this issue. There is a section of opinion that wants to be able to say those in favour of the war were responsible for the deaths of 100,000 people. However, no one study will give that certainty, just as no one drug trial will give the full picture of a drug’s efficacy or adverse effects.
Someone at Crooked Timber said of the Lancet study on Iraq: “I think I ended every single Lancet post with the observation that you can tell a lot about people’s character by observing the way in which they protect themselves from hostile information.” One can only hope that those who seemed to accept that a single study was the final answer will be open-minded enough to accept that the Lancet study was only one part of the puzzle. Dismissing people as deficient in character because they were skeptical of accepting the results of one study as the final word is partisan in the extreme.
Perhaps more importantly, neither study allows one to make a simplistic moral case for or against the Iraq war, but it has to be said a round figure of 100,000 makes a nice simplistic slogan.
UPDATE: Tim Lambert has a post about the two studies worth reading. In it, he makes the point that since the Lancet report also included excess deaths from increases in disease, accidents and murders, the corresponding number of deaths from the Lancet study is in fact 33,000. There are also differences in the sampling period. I wonder about his use of the word vindicated. Also see Jim Lindgren’s comments on how even the ILCS report has confused the issue by suggesting the figures are comparable. I still stand by the point that to use the Lancet report to say the US has killed 100,000 people in the Iraq war is simplistic twaddle, as do supporters of the Lancet study: “As I’ve mentioned before, I don’t like this 100,000 number, and it is irksome that the Lancet’s lasting legacy has been that the ‘100,000 dead!’ factoid has become a commonly used stick for antiwar hacks to beat prowar hacks with.”