
Following the murder of George Floyd by Minneapolis police officers in May 2020, we watched the United States erupt in sorrow and anger. But the killings of George Floyd, Breonna Taylor, and Ahmaud Arbery were only the most recent in America’s long history of violence against Black bodies. In June, our director of research, Patrick Ball, refreshed “Violence in Blue,” an article he wrote for Granta in 2016, largely in response to the demonstrations demanding justice for the 2014 police killing of Michael Brown in Ferguson, Missouri. In the new introduction, Patrick writes, “This cannot be acceptable in democracy.”

Police Car

For the last five years, HRDAG has been deepening the national conversation about police violence in the United States. We started by examining “predictive policing” software, and we learned that racial bias is baked into the algorithms. We broadened our investigations to examine the impact of bail, and we found that setting bail increases the likelihood of a defendant being found guilty. We broadened even further and investigated the risk assessment tools that judges use to make decisions about bail, and we found evidence of racial bias in the tools.

At HRDAG, we know that perpetrators of human rights violations and their apologists create false narratives and hide in gaps in data. And we know that careful statistical analysis can illuminate those gaps and help hold perpetrators accountable. To our Black friends and colleagues: we see you, we hear you, we are proud to partner with you. We reaffirm our commitments to our partners at the Invisible Institute, Silicon Valley De-Bug, Million Dollar Hoods, the San Francisco Public Defender’s Office, and The Legal Aid Society. We will continue to support every organizer working to document police abuses and build human rights-based practices.

photo by Flickr user John Liu, CC-BY-2.0, modified by David Peters

Investigating Risk Assessment

In many judicial processes, judges rely on pre-trial risk assessment software tools to help them decide the fates of arrested persons. Will she be released on her own recognizance until the trial? Will she be given the option to pay bail in order to secure her release before the trial? Or will she be denied bail (and release)? The software programs that give judges a risk assessment rating are touted as “bias-free” because they are algorithms, not humans. But our research shows that the data used to “train” the algorithms encode racial bias into the tools.

  • Research from 2020, conducted in collaboration with San Francisco District Attorney Chesa Boudin, examines how police officers’ booking decisions affect the pre-trial risk assessment tools that judges rely on to make release decisions; it focuses on a popular tool known as the Arnold Public Safety Assessment (PSA).
  • Research conducted by HRDAG in 2019, in collaboration with the Criminal Justice Agency, evaluates a specific risk assessment tool used in New York City.
  • A 2019 collaboration with The Safety + Justice Challenge produced a primer on pre-trial risk assessment tools for judges, prosecutors, and defense attorneys.
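The mechanism by which training data encode bias can be illustrated with a toy simulation (all numbers invented for illustration; this is not drawn from any HRDAG study). Two groups have identical true reoffense rates, but one is policed more heavily, so arrests, the proxy label such a tool is typically trained on, are recorded at different rates. A score estimated from those arrest records then rates the more heavily policed group as roughly twice as risky, even though the underlying behavior is the same:

```python
import random

random.seed(1)

TRUE_REOFFENSE = 0.3                          # identical true rate for both groups
ARREST_GIVEN_OFFENSE = {"A": 0.8, "B": 0.4}   # group A is policed more heavily

def simulate(group, n=10_000):
    """Return the fraction of people *recorded* as rearrested."""
    recorded = 0
    for _ in range(n):
        if random.random() < TRUE_REOFFENSE:              # a true reoffense occurs
            if random.random() < ARREST_GIVEN_OFFENSE[group]:
                recorded += 1                             # only arrests enter the data
    return recorded / n

# The "trained" risk score is just the recorded rearrest rate per group.
score = {g: simulate(g) for g in ("A", "B")}
print(score)  # group A appears roughly twice as risky as group B
```

The point of the sketch: the disparity in the score comes entirely from the arrest process that generated the labels, not from any difference in behavior.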

Examining the Impact of Bail

In collaboration with the New York Legal Aid Society (NYLAS) and Stanford University assistant professor Mike Baiocchi, HRDAG has examined the effect of setting bail on a defendant’s likelihood of a guilty finding, whether through a jury’s determination at trial or through a guilty plea in advance of trial. Leading this collaboration for HRDAG, statistician Kristian Lum analyzed NYLAS datasets and found, with our partners, that setting bail increases a defendant’s likelihood of a guilty finding, usually through a plea deal, compared with the outcomes of defendants who are released or remanded.
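The causal question here is tricky because judges set bail more often in more serious cases, so a naive comparison of conviction rates confounds bail with case severity. The sketch below, with entirely invented counts, shows the basic idea of comparing like with like by stratifying on a severity proxy; it is a crude stand-in for the far more careful matching design in the Lum, Ma, and Baiocchi paper, not a reproduction of it:

```python
# Invented counts of convictions, split by charge severity and by
# whether bail was set. Within each stratum we compare like with like.
data = {
    "misdemeanor": {"bail":    {"convicted": 60,  "total": 100},
                    "no_bail": {"convicted": 30,  "total": 300}},
    "felony":      {"bail":    {"convicted": 270, "total": 300},
                    "no_bail": {"convicted": 80,  "total": 100}},
}

def rate(cell):
    """Conviction rate for one stratum/arm."""
    return cell["convicted"] / cell["total"]

for severity, arms in data.items():
    gap = rate(arms["bail"]) - rate(arms["no_bail"])
    print(f"{severity}: bail {rate(arms['bail']):.0%} vs "
          f"no bail {rate(arms['no_bail']):.0%} (gap {gap:+.0%})")
```

In these made-up numbers, bail is associated with a higher conviction rate within each severity stratum, which is the kind of within-stratum comparison a matched design formalizes.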

Bail bonds sign

photo by Flickr user Daniel Schwen, CC-BY-4.0, modified by David Peters

The Problem with Predictive Policing

Many law enforcement agencies are adopting predictive policing software tools such as PredPol in an attempt to increase policing effectiveness and to eliminate bias among officers. The claim made by the agencies and the software developers is that because predictive policing software uses data, not human judgment, the tools are free of racial bias.

But HRDAG research shows that the use of “big data” reinforces racial bias in these tools, because the tools use data generated by racist policing practices.
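The feedback loop behind this finding can be sketched with a toy simulation (illustrative only; invented parameters, not HRDAG’s actual model). Two neighborhoods have identical true incident rates, but incidents are far more likely to be recorded where police patrol, and patrols are sent wherever the record counts are highest. A small initial disparity in the historical data then compounds indefinitely:

```python
import random

random.seed(0)

# Two neighborhoods with IDENTICAL true incident rates.
TRUE_RATE = 0.5          # probability of an incident per neighborhood per day
DETECT_PATROLLED = 0.9   # chance an incident is recorded where police patrol
DETECT_OTHER = 0.2       # chance it is recorded elsewhere

records = [10, 5]        # historical recorded incidents: slightly biased start

for day in range(1000):
    # The "predictive" rule: patrol wherever the records say crime is.
    target = 0 if records[0] >= records[1] else 1
    for hood in (0, 1):
        if random.random() < TRUE_RATE:        # an incident actually happens
            detect = DETECT_PATROLLED if hood == target else DETECT_OTHER
            if random.random() < detect:
                records[hood] += 1             # only recorded incidents feed back

print(records)  # neighborhood 0 accumulates far more *recorded* crime
```

Because the patrol rule consumes its own output, neighborhood 0 ends up with several times the recorded crime of neighborhood 1 despite identical underlying rates, which is the dynamic the “big data” claim obscures.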


Patrick Ball (2020). Refresh of “Violence in Blue” (Granta, March 2016) with a new introduction. 8 June 2020. © 2020 Granta.

Kristian Lum, Chesa Boudin and Megan Price (2020). The impact of overbooking on a pre-trial risk assessment tool. FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. January 2020. Pages 482–491. ©ACM, Inc., 2020.

Kristian Lum and Tarak Shah (2019). Measures of Fairness for New York City’s Supervised Release Risk Assessment Tool. Human Rights Data Analysis Group. 1 October 2019. © HRDAG 2019.

Sarah L. Desmarais and Evan M. Lowder (2019). Pretrial Risk Assessment Tools: A Primer for Judges, Prosecutors, and Defense Attorneys. Safety and Justice Challenge, February 2019. © 2019 Safety and Justice Challenge. HRDAG’s Kristian Lum and Tarak Shah served as Project Members and made significant contributions to the primer.

Laurel Eckhouse, Kristian Lum, Cynthia Conti-Cook and Julie Ciccolini (2018). Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment. Criminal Justice and Behavior. November 23, 2018. © 2018 Sage Journals. All rights reserved.

William Isaac and Kristian Lum (2018). Setting the Record Straight on Predictive Policing and Race. In Justice Today. 3 January 2018. © 2018 In Justice Today / Medium.

Kristian Lum, Erwin Ma and Mike Baiocchi (2017). The causal impact of bail on case outcomes for indigent defendants in New York City. Observational Studies 3 (2017) 39-64. 31 October 2017. © 2017 Institute of Mathematical Statistics.

Kristian Lum (2017). Limitations of mitigating judicial bias with machine learning. Nature. 26 June 2017. © 2017 Macmillan Publishers Limited. All rights reserved. DOI 10.1038/s41562-017-0141.

Laurel Eckhouse (2017). Big data may be reinforcing racial bias in the criminal justice system. Washington Post. 10 February 2017. © 2017 Washington Post.

William Isaac and Kristian Lum (2016). Predictive policing violates more than it protects. USA Today. December 2, 2016. © USA Today.

Kristian Lum and William Isaac (2016). To predict and serve? Significance. October 10, 2016. © 2016 The Royal Statistical Society. [related blogpost]

Patrick Ball (2016). Violence in Blue. Granta, March 2016. © 2016 Granta.

Kristian Lum and Patrick Ball (2015). Estimating Undocumented Homicides with Two Lists and List Dependence. HRDAG, April 2015.

Kristian Lum and Patrick Ball (2015). How many police homicides in the US? A reconsideration. HRDAG, April 2015.

Kristian Lum and Patrick Ball (2015). BJS Report on Arrest-Related Deaths: True Number Likely Much Greater. HRDAG, March 2015.

If you’d like to support HRDAG in this project, please consider making a donation via our Donate page.

Our work has been used by truth commissions, international criminal tribunals, and non-governmental human rights organizations. We have worked with partners on projects on five continents.