Policing


Over the last year, HRDAG has deepened the national conversation about homicides by police, predictive policing software, and the role that bail plays in the criminal justice system. Our studies describe how the racial bias inherent in police practice becomes data input to predictive policing tools. In another project, we are shining light on the inequities of bail decisions.

TEAM


Patrick Ball

Andi Dixon

Laurel Eckhouse

William Isaac

Kristian Lum – Project Lead

Sam Sinyangwe

Examining the Impact of Bail

A defendant awaiting trial faces one of three possible pre-trial fates. The first is remand, in which case she must remain in detention until her trial and no amount of money (bail) can be paid to gain her pre-trial release. The second is supervised release, in which she is released on her own recognizance until her trial. The third is bail, in which a judge decides how much money must be paid to secure her release until her trial.

In collaboration with the New York Legal Aid Society (NYLAS) and Stanford University assistant professor Mike Baiocchi, HRDAG is examining the effect of setting bail on a defendant’s likelihood of a guilty finding, whether through a jury’s determination at trial or through a guilty plea in advance of trial. Spearheading this collaboration for HRDAG, lead statistician Kristian Lum has analyzed NYLAS datasets and found, with our partners, that setting bail increases a defendant’s likelihood of a guilty finding, usually through a plea deal, compared with the outcomes of defendants who are released or remanded.

The analysis, which took more than a year to complete, relied on near-far matching, a method in which our partner Mike Baiocchi specializes. In near-far matching, defendants with similar profiles (near) are paired, while the key source of variation between them (far) is the wide range of leniency among the judges who set bail. Because defendants are assigned to judges essentially at random, and those judges differ widely in how readily they set bail, the data provide a natural experiment that approximates randomization, allowing us to estimate the impact of setting bail on a guilty or not guilty finding.
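To make the mechanics concrete, below is a minimal sketch of near-far matching on synthetic data. The covariates, the judges’ “bail propensity” instrument, the greedy pairing rule, and the simple within-pair (Wald-style) estimate are all assumptions chosen for illustration; this is not the code, data, or exact procedure used in the study.

```python
# Illustrative sketch of near-far matching on synthetic data.
# All names, parameters, and the data-generating process are assumptions
# for illustration; this is not HRDAG's analysis code.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic defendants: covariates, the assigned judge's propensity to set
# bail (the instrument), whether bail was set, and the eventual finding.
n = 500
covariates = rng.normal(size=(n, 3))              # e.g., standardized age, priors, charge severity
bail_propensity = rng.uniform(0.1, 0.9, size=n)   # how readily the assigned judge sets bail
bail_set = (rng.uniform(size=n) < bail_propensity).astype(int)
guilty = (rng.uniform(size=n) < 0.4 + 0.2 * bail_set).astype(int)  # bail raises guilty-plea rate

def near_far_pairs(X, z, far_caliper=0.4):
    """Greedily pair defendants who are close in covariates (near) but whose
    judges differ substantially in bail propensity (far)."""
    unused = set(range(len(z)))
    pairs = []
    for i in np.argsort(z):                       # scan from most to least lenient judge
        if i not in unused:
            continue
        candidates = [j for j in unused if j != i and abs(z[j] - z[i]) >= far_caliper]
        if not candidates:
            continue
        dists = [np.linalg.norm(X[i] - X[j]) for j in candidates]
        j = candidates[int(np.argmin(dists))]     # nearest remaining neighbor in covariate space
        pairs.append((i, j))
        unused.discard(i)
        unused.discard(j)
    return pairs

pairs = near_far_pairs(covariates, bail_propensity)

# Within each pair, compare the defendant who drew the stricter judge with the
# one who drew the more lenient judge: a simple Wald-style instrumental estimate.
num = den = 0.0
for i, j in pairs:
    strict, lenient = (i, j) if bail_propensity[i] > bail_propensity[j] else (j, i)
    num += guilty[strict] - guilty[lenient]       # difference in findings
    den += bail_set[strict] - bail_set[lenient]   # difference in whether bail was set
print(f"{len(pairs)} pairs; estimated effect of setting bail on a guilty finding: {num / den:.2f}")
```

The greedy pairing here is only a stand-in; published near-far matching typically uses optimal matching with calipers, but the logic is the same: comparisons are made between similar defendants whose main difference is the judge they happened to draw.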

The Problem with Predictive Policing

Many law enforcement agencies are adopting predictive policing software tools such as PredPol in an attempt to increase policing effectiveness and to eliminate bias among officers. The claim made by these agencies and by the software developers is that because predictive policing software uses data, not human judgment, the tools are free of racial bias.

HRDAG’s investigations explain why these claims are untrue: the algorithms do use data, but the data are derived from human judgments and decisions, and those judgments tend to be systematically biased. For example, an algorithm may make predictions about future drug crimes using data about drug-related arrests in a community over the last ten years, a majority of which occurred in low-income, minority communities. Those arrests are hardly impartial; there are very good reasons to believe they were concentrated in targeted communities because of any number of biases held by the arresting officers. This is “bias” in a statistical sense as well, because police observe drug crimes in the targeted communities while overlooking drug crimes at universities and in middle-class communities. We help our partners understand that data are only as unbiased as the systems and people who create the records.
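The feedback loop this creates can be seen in a toy simulation. In the sketch below, two neighborhoods have identical true rates of drug crime, but the recorded arrests start out concentrated in one of them, and patrols are sent wherever past arrests were highest. The setup, the numbers, and the allocation rule are assumptions for illustration; this is not the PredPol algorithm or the simulation published by Lum and Isaac.

```python
# Toy simulation of a predictive-policing feedback loop on made-up data.
# The neighborhoods, rates, and allocation rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

true_crime_rate = {"A": 0.3, "B": 0.3}   # drug crime is equally common in both neighborhoods
arrest_history = {"A": 40, "B": 10}      # but recorded arrests are concentrated in A
patrols_per_day = 10

for day in range(30):
    total = arrest_history["A"] + arrest_history["B"]
    # "Predictive" allocation: send patrols in proportion to past recorded arrests
    patrols_a = round(patrols_per_day * arrest_history["A"] / total)
    patrols_b = patrols_per_day - patrols_a
    # Police only record the crimes they are present to observe
    arrest_history["A"] += rng.binomial(patrols_a, true_crime_rate["A"])
    arrest_history["B"] += rng.binomial(patrols_b, true_crime_rate["B"])

share_a = arrest_history["A"] / (arrest_history["A"] + arrest_history["B"])
print(f"Share of recorded arrests in neighborhood A after 30 days: {share_a:.0%}")
```

Because crimes in the lightly patrolled neighborhood go largely unrecorded, the arrest data keep confirming the original disparity, and a model trained on those data will keep directing patrols back to the same places.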

Associates Laurel Eckhouse, William Isaac, and Kristian Lum have carried these findings into the national conversation with pieces in USA Today, Nature Human Behaviour, and The Washington Post.

Investigating the Fairness of Risk Assessment

In many judicial processes, judges rely on risk assessments to decide the fates of defendants and convicts. For example, a judge may rely on a prediction—i.e., risk assessment—about a person’s likelihood of being a flight risk to determine remand or bail, or the judge may make a decision about parole based on a prediction about the defendant’s likelihood of being re-arrested. These predictions about a person’s future criminal behavior can affect sentencing, as well.

HRDAG is beginning to examine the fairness of these recommendations and is currently working with partners to secure datasets and write up findings based on the data.
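One standard check in this literature, which may or may not be the approach HRDAG ultimately takes, is to compare error rates across groups of defendants: if people who were never re-arrested are flagged as high risk more often in one group than another, the score imposes unequal burdens. The sketch below runs that check on made-up data; the groups, scores, labels, and threshold are all hypothetical.

```python
# Sketch of a common fairness check: comparing false positive rates by group.
# All data here are made up; this is not HRDAG's data, method, or finding.
import numpy as np

rng = np.random.default_rng(2)

n = 1000
group = rng.choice(["G1", "G2"], size=n)            # hypothetical defendant groups
rearrested = rng.uniform(size=n) < 0.3              # hypothetical true outcome
# A hypothetical risk score that, by construction, runs higher for group G2
risk_score = rng.uniform(size=n) + 0.15 * (group == "G2")
flagged_high_risk = risk_score > 0.5

for g in ["G1", "G2"]:
    not_rearrested = (group == g) & ~rearrested     # people who were not re-arrested...
    fpr = flagged_high_risk[not_rearrested].mean()  # ...but whom the score flagged as high risk
    print(f"{g}: false positive rate = {fpr:.2f}")
```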

PUBLICATIONS

Kristian Lum (2017). Limitations of mitigating judicial bias with machine learning. Nature Human Behaviour, 26 June 2017. © 2017 Macmillan Publishers Limited. DOI 10.1038/s41562-017-0141.

Laurel Eckhouse (2017). Big data may be reinforcing racial bias in the criminal justice system. The Washington Post, 10 February 2017. © 2017 The Washington Post.

William Isaac and Kristian Lum (2016). Predictive policing violates more than it protects. USA Today, 2 December 2016. © 2016 USA Today.

Kristian Lum and William Isaac (2016). To predict and serve? Significance, 10 October 2016. © 2016 The Royal Statistical Society. [related blogpost]

Patrick Ball (2016). Violence in Blue. Granta, March 2016. © 2016 Granta.

Kristian Lum and Patrick Ball (2015). Estimating Undocumented Homicides with Two Lists and List Dependence. HRDAG, April 2015.

Kristian Lum and Patrick Ball (2015). How many police homicides in the US? A reconsideration. HRDAG, April 2015.

Kristian Lum and Patrick Ball (2015). BJS Report on Arrest-Related Deaths: True Number Likely Much Greater. HRDAG, March 2015.

OTHER RESOURCES

William Isaac presenting a predictive policing simulation at Data & Society, April 2016.

Patrick Ball at Data & Society Research Institute, April 2016.

If you’d like to support HRDAG in this project, please consider making a donation via Our Donate page.