A Puerto Rican judge has ordered the government to release detailed information on deaths during and after Hurricane Maria, which ravaged the US territory in September 2017. The judge’s decision follows a saga that began when several sources challenged the official death toll of 64, estimating there had actually been more than 1,000 hurricane-related deaths.
Puerto Rico’s governor, Ricardo Rosselló, then attempted to kneecap further independent research by suspending data releases. This led to a lawsuit by CNN and the Center for Investigative Journalism demanding access to detailed data.
The media coverage of the unofficial studies has, unfortunately, fuelled the confusion over how many people actually died. A typical headline about one widely covered Harvard survey misleadingly blared: “Study Hikes Hurricane Maria Death Toll to 4,645.”
But the pressure this publicity generated led to a more accurate estimate of around 1,400 more deaths than would normally be expected for the time period of September to December. This solid number is less than one-third of the much-hyped 4,645 estimate from the Harvard study, an unsurprising update of earlier independent studies (including mine), and more than 20 times the official figure of 64. US lawmakers have now proposed a bill to standardise the way natural disaster death tolls are counted to prevent such disparities in the future.
But why was there so much confusion? Drawing on material from my new online course about accounting for war deaths, it’s possible to sort through the discrepancies and learn some vital lessons for how we should think about death tolls and the problems that arise with them more generally.
Survey estimates are inherently uncertain
The Harvard researchers interviewed 3,299 households and found roughly 15 deaths beyond predictions based on death rates in pre-hurricane years. They then scaled up this number to estimate that, across the entire population, between 793 and 8,498 people had died either directly or indirectly because of the hurricane, with the midpoint of this range being the 4,645 estimate.
The 793 to 8,498 range is known as a 95% uncertainty interval. Broadly, this means that there is a 95% chance that a random sample will represent the population well enough so that the actual figure is within this stated range. To put it mildly, this is a lot of uncertainty. It’s as if a public opinion poll came out estimating that Donald Trump’s approval rating stood at 40%, with a margin of error of plus or minus 33 percentage points. There would be laughter.
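To see why a sample containing only about 15 excess deaths translates into such a wide range, here is a minimal simulation sketch. The household and population figures are illustrative assumptions, not the Harvard team’s actual design (the study used clustered sampling, which widens the interval further); the point is simply that scaling up a small count magnifies its randomness:

```python
import random

random.seed(0)

# Illustrative assumptions only -- not the Harvard study's actual design.
households_surveyed = 3_299
total_households = 1_100_000               # rough assumed figure for Puerto Rico
p_excess_death = 15 / households_surveyed  # rate yielding ~15 excess deaths per survey
scale = total_households / households_surveyed

estimates = []
for _ in range(10_000):
    # Each surveyed household independently reports an excess death with
    # small probability; the observed count is then scaled to the population.
    deaths = sum(random.random() < p_excess_death
                 for _ in range(households_surveyed))
    estimates.append(deaths * scale)

estimates.sort()
lo = estimates[int(0.025 * len(estimates))]  # 2.5th percentile
hi = estimates[int(0.975 * len(estimates))]  # 97.5th percentile
print(f"midpoint ~ {sum(estimates) / len(estimates):.0f}, "
      f"95% interval ~ ({lo:.0f}, {hi:.0f})")
```

Even under this idealised simple random sampling, the scaled-up estimate swings by thousands of deaths from one simulated survey to the next, so the real study’s enormous interval should come as no surprise.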
But while the media tends to be savvy about error margins in opinion polls, it lowered its standards for the Harvard study. Even high-quality outlets treated uncertainty as an afterthought or characterized the 4,645 midpoint as a minimum without even mentioning uncertainty. These reports look especially silly compared to the 1,400 figure we now have.
Don’t expect a definitive list of victims
Above, I was a little unkind to Governor Rosselló, because he partially offset his decision to shut down the data flow by hiring researchers from George Washington University to investigate all post-hurricane deaths. But we shouldn’t expect this team to divide these deaths neatly into two groups of those that were caused by the hurricane and those that weren’t. If they try to do this then the work will not be compelling.
I have no idea what the GWU team is actually doing, but I think they should find some direct deaths, some fairly clear indirect deaths and some murkier candidates for indirect deaths. The direct category should include, for example, people killed instantly by flying debris.
The clear indirect category might cover heart attack victims who died at home because the phone network was down and they couldn’t get to hospital. Even this type of classification must involve judgements that can be challenged. For example, some heart attack victims may have died anyway, even under optimal conditions.
There will be still murkier cases where something related to the hurricane might be just one among several potential causes of death. For example, our heart attack victim could die after his ambulance was a little late and the available nurse was a little inexperienced but made no major error. Both these factors could be related to the hurricane, but it would be difficult to prove they were the ultimate cause of the death.
Transparency is key
The official death count comes from hurricane-related causes of deaths listed on death certificates. But “hurricane” is not a standard cause-of-death classification and hurricane-related factors may not even be visible to a person filling out a death certificate.
For example, a doctor may just see a heart attack and correctly note this on a death certificate. Adding that the hurricane was a factor might require an investigation that would have to be conducted under adverse circumstances.
So it is not surprising that hurricane-related factors would not appear in many death certificates even when they might have really been present. The main mistake here is not that the tally of 64 exists in the first place but, rather, the idea that it might cover all the hurricane-related deaths.
It was a bad idea for the Puerto Rican government to stand behind the official death count, a worse idea for Governor Rosselló to suspend data releases and an atrocious idea to remove the independence of the Puerto Rico Institute of Statistics. This authoritarian secrecy temporarily deprived us of data and bred both suspicion and a truth-seeking impulse.
The Harvard study and the CNN-CIJ lawsuit both attempted to fill the data void. We now have the monthly data and soon should have all the death certificates and perhaps other detailed information that was made available to the GWU team. Such forced openness will improve our understanding of Maria and future disasters, but these benefits come late and the damage to public trust will endure. I suspect that we will never quite shake the idea that there are 3,000 missing bodies that have not been found, but whose existence was proved by a crack Harvard team.