14 Questions about Counting Casualties in Syria

In early 2012, HRDAG was commissioned by the UN Office of the High Commissioner for Human Rights (OHCHR) to do an enumeration project, essentially a count of all of the reported casualties in the Syrian conflict. HRDAG has published two analyses so far, the first in January 2013, and the second in June 2013. In this post, HRDAG scientists Anita Gohdes, Megan Price, and Patrick Ball answer questions about that project.

Checkpoint, Damascus / Elizabeth Arrott VOA News

So, how many people have been killed in the Syrian conflict?
This is a complicated question. As of our last report, in June 2013, we know that there have been at least 93,000 reported, identifiable conflict-related casualties. The word “reported” is important because it includes only the victims who were reported to groups collecting information about killings. The word “identifiable” is important because it includes only victims who have been identified by name and by the date and location of their death. And the phrase “conflict-related casualties” is important because it includes only the deaths that were, essentially, killings. (Naturally, during conflict, there are many deaths that are not killings.) In conflicts like this, some killings go unreported, or don’t get reported until years after the conflict is over. We have no idea how many people have actually been killed in the conflict; in the enumeration analyses, we are studying the killings that have been reported and identified. For more on this topic, watch Megan Price’s talk at the Strata conference.

Is HRDAG on the ground in Syria?
Nope. We’re scientists working in the U.S. and Germany. We work with Syrians—some of them are on the ground in Syria, and others have fled the country but remain in close contact with local networks in Syria. We also work with UN liaisons in Geneva, as well as other NGOs in Europe and North America.

If you’re not on the ground, how do you know how many deaths have been reported?
During conflicts like this, we can’t rely on governments to report casualties. So we rely on what we call “documentation groups” on the ground to collect data and give it to the UN, which then gives it to us for analysis. Some of these partners have included the Violations Documentation Centre (VDC), the 15 March group, the Syrian Revolution General Council, the Syrian Shuhada website, the Syrian Observatory for Human Rights, and the Syrian Network for Human Rights. Currently, we are working with four partner documentation groups. They give us their datasets, and we analyze them. One of the groups we started this project with—the Syrian Observatory for Human Rights—has declined to continue sharing data.

What do you mean when you say you “analyze” the data? Aren’t you just adding up all the data?
If only it were as easy as tallying! Part of what we do is cross-reference every datapoint (i.e., every named victim) to make sure that we are not counting anyone twice (or more). This is called “de-duplication”; you can read all about it here. Every documentation group we work with collects different data, which is only natural—they each have their own particular niche, network on the ground, and trusted sources—but sometimes they document the same killings. So, for example, if we just added up all the records for the period we were enumerating, we would arrive at about 260,000 records; after identifying duplicates, we arrived at about 93,000 unique killings.
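To make the de-duplication idea concrete, here is a minimal sketch in Python. The field names and records are invented for illustration, and real record linkage has to tolerate spelling variants, transliteration, and partial or missing information, so it is far more involved than the exact matching shown here.

```python
import pandas as pd

# Hypothetical records, purely for illustration. Each documentation
# group's list is a table of reported victims.
list_a = pd.DataFrame([
    {"name": "Victim One", "date": "2012-07-18", "location": "Homs"},
    {"name": "Victim Two", "date": "2012-07-19", "location": "Aleppo"},
])
list_b = pd.DataFrame([
    {"name": "Victim One",   "date": "2012-07-18", "location": "Homs"},  # same person as in list_a
    {"name": "Victim Three", "date": "2012-07-20", "location": "Damascus"},
])

# Naively stacking the lists counts Victim One twice.
combined = pd.concat([list_a, list_b], ignore_index=True)
print(len(combined))        # 4 records

# De-duplicating on identifying fields yields the distinct reported victims.
# (Exact-key matching like this is only a toy; real matching must handle
# spelling and transliteration differences.)
unique_victims = combined.drop_duplicates(subset=["name", "date", "location"])
print(len(unique_victims))  # 3 distinct victims
```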

Why are there so many different counts?
Each documentation group collects slightly different data. In essence, each group has a different snapshot of reality, and we have to keep that in mind when analyzing data. Each group has different access to different geographic regions and groups of people, and this changes over time for a lot of reasons: the security situation could change, the people doing the data collection could change, or control of an area may change hands. Also, each documentation group has its own mission and goals, which can have a big impact on what kind of violations they document, and even the language they use to define the violations. This VDC webpage offers a great example of the complexity of how one documentation group records information about killings. A really concrete example of this is the chemical weapons attack from last August: France relied solely on confirmed, identified victims from hospitals and morgues, while the U.S. relied on reports of bodies from various sources, and so on. (Tufts University professor Kelly Greenhill explains more on this BBC More or Less report.) All of the documentation groups we work with emphasize their neutrality, but some are generally viewed as supporting the opposition (or, at least, not supporting the regime), so they have a hard time accessing information in regime-controlled areas. Meanwhile, many opposition-controlled areas go through periods without power, which makes it difficult for local communities to report victims up the documentation network.

Are any of these counts any better than another?
Probably not. Although some critics have raised concerns about the reliability of some data sources, as far as we can tell, there are no substantive differences among the sources we have reviewed. So far, there’s no scientific evidence that says a definite Yes or No, either way. In our 20-plus years of experience, we’ve found that it’s much more important to collect information from a variety of sources than to identify a single “best source.” The more information we get from different sources, the better job we can do piecing together an accurate picture.

So, if it takes so long for all the casualties to be reported (and some never are), how does the world ever know how many people actually died?
This is what data analysis is for! At HRDAG, we use a method called multiple systems estimation (MSE) to calculate good approximations of how many people are not being counted. (Counting the uncountable is a big part of what we do at HRDAG.)
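For readers who want a feel for the core idea, here is the simplest possible two-list version of capture-recapture (the Lincoln-Petersen estimator), with entirely made-up numbers. This is only a sketch of the intuition, not the models used in HRDAG's analyses, which draw on more than two lists and account for dependence between sources.

```python
# Simplest two-list version of multiple systems estimation
# (the Lincoln-Petersen estimator). Every number here is made up.

n_a = 5000      # victims documented by group A
n_b = 4000      # victims documented by group B
n_both = 1000   # victims found on both lists (identified via de-duplication)

# If the two lists were independent random samples of the same population,
# the total number of victims -- reported and unreported -- could be
# estimated as:
estimated_total = n_a * n_b / n_both
print(estimated_total)  # 20000.0

# Real conflict data violate that independence assumption, which is why
# analyses like HRDAG's use more than two lists and models that account
# for dependence between sources.
```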

So basically, you’re guessing?
Some people ask us whether “estimates” are the same thing as “guesses.” They’re not! Estimates are built from scientific analysis that can be replicated by other scientists and stands up to peer review. (You can read more about MSE here.) Estimates also contain a measure of uncertainty, which helps us keep straight how “sure” we are of what we know and what we don’t know.
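Continuing the made-up two-list example above, one conventional way to attach a rough uncertainty interval is Chapman's bias-corrected estimator and its standard variance formula. Again, this is an illustrative sketch with invented numbers, not the procedure used in HRDAG's Syria reports.

```python
import math

# Toy numbers carried over from the two-list sketch above.
n_a, n_b, n_both = 5000, 4000, 1000

# Chapman's bias-corrected two-list estimator and its standard variance.
n_hat = (n_a + 1) * (n_b + 1) / (n_both + 1) - 1
var_hat = ((n_a + 1) * (n_b + 1) * (n_a - n_both) * (n_b - n_both)
           / ((n_both + 1) ** 2 * (n_both + 2)))
se = math.sqrt(var_hat)

# Approximate 95% interval: the point estimate plus or minus about two
# standard errors. With these toy numbers the estimate is roughly 19,990,
# with an interval of roughly 19,000 to 21,000.
low, high = n_hat - 1.96 * se, n_hat + 1.96 * se
print(round(n_hat), round(low), round(high))
```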

Why do you have to “analyze” the data? Can’t we just use the raw data to compare counts from month to month or region to region?
This is a great question, and the short answer is, No, we cannot just look at the raw data. A slightly longer answer is that the raw data are an extremely non-random sample, and non-random samples can be very misleading when used for these kinds of comparisons. (What’s raw data useful for? Getting individual details, context, and other qualitative information.) Non-random samples tell only a biased fraction of the story rather than the story at large. For more on that, please check out this blogpost, where we go into much more detail, or this article.
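A tiny simulation, with invented numbers, shows how a non-random sample can mislead: two regions with identical true death tolls look very different in the raw counts when their reporting coverage differs.

```python
import random

random.seed(0)

# Invented scenario: two regions suffer the same true number of killings,
# but documentation coverage differs (say, one region loses power or access).
true_killings = {"Region A": 2000, "Region B": 2000}
coverage = {"Region A": 0.8, "Region B": 0.3}

reported = {
    region: sum(random.random() < coverage[region] for _ in range(count))
    for region, count in true_killings.items()
}
print(reported)
# Roughly 1,600 reported victims in Region A and 600 in Region B: the raw
# counts make Region A look far more violent, even though only the
# reporting coverage differs.
```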

Of the 93,000 reported casualties, most are young males, so those are mostly combatants, right?
Most of the reported victims are young males, but the data cannot tell us whether they’re combatants. These young men might be combatants. Alternatively, they might be the people that families send out onto the street during fighting to buy food. Or they could be the people left behind to defend homes and property after everyone else has gone into exile. Or they could be the people who are targeted specifically for killing, as opposed to other violations such as rape or kidnapping. There are many possibilities, or hypotheses, that would explain why so many reported casualties are young males, and they’re all very sad; we’ve seen all of these narratives in our 20 years of experience. In statistical reasoning, we consider hypotheses like these and then conduct analyses to determine how well our data align with each hypothesis.

How does the Syrian conflict compare to other conflicts, in terms of violence?
The short answer is that in terms of per-capita casualties, this conflict is bad, but not as bad as, for example, Rwanda. But we don’t really like that answer. We prefer to answer this question with another question: Why do we need to quantify how “bad” it is? When we try to contemplate the difference between 100,000 killings and 1 million killings, we know there’s an extra zero on one of those numbers, but they’re all unfathomably tragic and terrible. Also, conflict-related horror goes way beyond killings. There are many types of violations, and a lot of suffering that results from them, lasting for decades. There’s no way to quantify suffering. Nor is there a way to quantify our political responsibility for it, or our obligation to intervene. That’s not a job for statisticians.

Why did the UN put the Syria casualty-counting project on hold?
In January 2014 the UN Office of the High Commissioner for Human Rights (OHCHR) announced that it was calling a halt to the enumeration (i.e., casualty-counting) project in Syria. The office cited extreme difficulty, complexity, and danger as the reasons, saying that it did not feel confident it could provide an accurate accounting of the true scope of the tragedy on the ground. That’s all we know.

In January, a trove of documents was discovered in Syria, potentially detailing deaths and torture by the al-Assad government. Do you think these documents are legit? And will HRDAG be able to work with the docs?
First, we’d like to say that those documents, if legit, support our theory that the number of actual casualties is far greater than the number of reported casualties. As for getting our hands on those documents, we love to roll up our sleeves with this kind of work, as we have in our long-term project at the Historic Archive of the National Police in Guatemala. We’ll see what happens with those documents, and whose hands they end up in.

Who is Razan Zaitouneh?
Razan Zaitouneh is a Syrian human rights lawyer who co-founded the Violations Documentation Centre (VDC), one of the documentation groups on the ground in Syria that provides us with datasets. Ms. Zaitouneh was kidnapped in Douma in mid-December 2013, along with three colleagues, Wael Hamadeh, Nazem al-Hammadi, and Samira al-Khalil. We continue to hope for their safe return. You can read about Ms. Zaitouneh and the kidnapping here, here, and here.

[CC BY-NC-SA, including image] [Photo: Elizabeth Arrott, Voice of America News, 2012]
