Reality and Risk in Our Mortality Study of the Peruvian TRC

TL;DR: we respond to S. Rendón’s new analysis and critique of the 2003 Peruvian Truth and Reconciliation Commission’s mortality study. We contend that his proposed alternative is technically unsound, and that his conclusions are untenable.

Our paper: Manrique-Vallier and Ball (2019), “Reality and Risk: a Refutation of S. Rendon’s Analysis of the Peruvian Truth and Reconciliation Commission’s Mortality Study.”

And our technical appendix is here.

Peru experienced a terrible internal armed conflict during the period 1980–2000 between the Maoist guerrillas of the Shining Path and agents of the Peruvian state (EST). In 2003, HRDAG researchers and analysts at Peru’s Truth and Reconciliation Commission (TRC) estimated conflict mortality due to violence using Capture-Recapture (CR) methods, combining the TRC’s information with five other databases. Estimates were stratified by location and perpetrator. The findings shocked Peru: approximately 69,000 people were killed in the conflict, and the perpetrator most responsible for the violence was the Shining Path.

Killings committed by the Shining Path were infrequently documented by the non-TRC databases, and the resulting lack of overlap among lists makes our estimation procedure (called Capture-Recapture or Multiple Systems Estimation) difficult or impossible. Therefore, instead of obtaining direct estimates for the Shining Path, the TRC first estimated a total including killings by both the state and the Shining Path, then estimated the state’s killings alone, and obtained the Shining Path estimate by subtraction.
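To see why sparse overlap is a problem for capture-recapture, consider the simplest two-list case. The sketch below uses Chapman’s bias-corrected version of the Lincoln–Petersen estimator; the list sizes and overlaps are invented for illustration and are not the TRC’s data (the actual analysis used several lists and stratified, model-based estimation, not this two-list formula).

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected two-list capture-recapture estimator.

    n1: records on list 1; n2: records on list 2; m: records on both lists.
    Returns an estimate of the total population size, including the
    undocumented cases that appear on neither list.
    """
    if not (0 <= m <= min(n1, n2)):
        raise ValueError("overlap must satisfy 0 <= m <= min(n1, n2)")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical numbers. With moderate overlap the estimate is stable:
well_documented = chapman_estimate(300, 250, 30)

# With very sparse overlap (as for Shining Path killings in the non-TRC
# databases), the estimate is much larger and extremely sensitive to m:
# changing the overlap by a single record moves the estimate by thousands.
sparse = chapman_estimate(300, 250, 2)

print(well_documented, sparse)
```

This instability with tiny overlaps is the reason the TRC estimated the combined total and the state’s killings (both reasonably documented across lists) and recovered the Shining Path figure as the difference between them, rather than estimating it directly.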

In a new article, Silvio Rendón criticizes this procedure as “unusual,” and proposes instead estimating the 9 (out of 59) strata with enough data to allow it, and extrapolating the results to the remaining 50. Professor Daniel Manrique-Vallier and I have written a response. See a draft of our paper here and a more detailed technical supplement here. Our paper is under review at the same journal that published Rendón’s article, and we hope it will be published in the next few weeks. We are publishing the draft here with permission of the journal editor.

We think Rendón’s analysis is wrong. There are three bases for our rejection of Rendón’s methods and findings:

(i) his results are inconsistent with data that have been collected since the 2003 publication of the TRC report: he estimates lower numbers of people killed than the number of killings documented in interviews, which means his estimates cannot be correct;

(ii) the logic of his method is flawed: the lack of data forces him to cherry-pick unrepresentative locations and extrapolate to the rest, and cherry-picking biases his results. This is the reason the TRC dismissed this strategy in 2003;

and (iii) when we compare the two competing methods using the appropriate tools from statistical theory, we find that Rendón’s method consistently performs worse than the original TRC approach.

We agree that the TRC’s estimates could be improved. The work was done in 2003, and there are aspects of the analysis we think could be done differently. Furthermore, substantial new data have been collected, and statistical theory has advanced greatly, since then.

However, noting improvements that could be made to the TRC’s work does not by itself justify new results. To improve on the TRC’s work, alternatives should, at minimum, stand on their own: they have to be statistically sound, and they should produce plausible results. In addition, if they are to contribute anything to the discussion, they should also have some advantage over the original other than merely appearing more obvious. Rendón’s work fails all three of these requirements.