How Pretrial Risk Assessment Tools Perpetuate Unfairness

When a person is arrested in the United States, their path through the legal system often hinges on a critical juncture: the pretrial phase. Judges weigh whether to release defendants outright, set financial bail, or order detention based on perceived risks of flight or danger. For those unable to afford bail, the consequences can be stark: prolonged incarceration in overcrowded jails, strained family ties and professional obligations, and heightened pressure to plead guilty, regardless of innocence. Amid growing scrutiny of these systemic inequities, pretrial risk assessment tools have emerged as a potential reform.

Many jurisdictions across the United States now use data-driven algorithms to inform judges' decisions about whether an accused individual will be incarcerated before trial. These algorithms, called pretrial risk assessment tools, are often advertised as bias-free and are seen by some as an improvement on the cash bail system. In collaboration with legal advocates, NGOs, and researchers, HRDAG data scientists delved into pretrial risk assessment to interrogate whether these tools live up to their promise of fairness, or instead perpetuate the biases they aim to dismantle.

As outlined in “Pretrial Risk Assessment Tools: A Primer for Judges, Prosecutors, and Defense Attorneys” (2019), a publication with significant contributions from HRDAG’s lead statistician Kristian Lum and data scientist Tarak Shah, pretrial risk assessment tools are designed to estimate the likelihood that a defendant will fail to appear in court or be arrested for a new crime if released. Proponents argue that these tools could reduce unnecessary detention. But critics, including civil rights coalitions, warn that poorly designed tools can bake racial disparities into the algorithm and further restrict already vulnerable populations.

The primer cites a 2018 survey of public defenders that reveals stark skepticism: over 80% believed these tools worsened racial disparities in the justice system. This mirrors a 2016 analysis by the investigative journalism organization ProPublica, which showed that COMPAS, a widely used algorithm, disproportionately labeled Black defendants as high-risk compared to white defendants, a pattern critics argue stems from biased policing and arrest practices that feed into risk assessment data. As the primer notes, even validated tools may produce skewed results if inputs like criminal history reflect “systemic inequities” rather than individual risk.
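The disparity ProPublica measured is, at its core, a difference in error rates between groups: among people who were not rearrested, how often did the tool nonetheless label them high-risk? A minimal sketch of that comparison, using made-up records rather than any real audit data (the groups, labels, and outcomes below are illustrative only):

```python
def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high-risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["label"] == "high"]
    return len(flagged) / len(non_reoffenders)

# Hypothetical audit records: group, the algorithm's label, observed outcome.
records = [
    {"group": "A", "label": "high", "reoffended": False},
    {"group": "A", "label": "high", "reoffended": False},
    {"group": "A", "label": "low",  "reoffended": False},
    {"group": "A", "label": "high", "reoffended": True},
    {"group": "B", "label": "low",  "reoffended": False},
    {"group": "B", "label": "low",  "reoffended": False},
    {"group": "B", "label": "low",  "reoffended": False},
    {"group": "B", "label": "high", "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(f"{group}: {false_positive_rate(subset):.2f}")  # A: 0.67, B: 0.00
```

In this toy data both groups have the same reoffense outcomes, yet group A's members are flagged far more often when they would not have reoffended; this error-rate gap, not the overall accuracy, is the kind of disparity the ProPublica analysis highlighted.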

Proponents argue that risk assessment tools improve consistency, but HRDAG’s research emphasizes their limitations. For example, the primer acknowledges that risk categories like “low” or “high” are policy decisions, not scientific facts. A defendant labeled “20% likely to fail to appear” might face detention based on a jurisdiction’s tolerance for risk, a threshold often shaped by politics rather than evidence. Worse, jurisdictions rarely validate tools locally, risking mismatches between risk estimates and actual outcomes.
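The point that risk categories are policy choices, not statistical facts, can be made concrete with a short sketch. The cut-points below are hypothetical and do not reflect any real tool's thresholds; the sketch only shows that the same estimated probability lands in different categories depending on where a jurisdiction draws its lines:

```python
def categorize(p_fail_to_appear, thresholds):
    """Map an estimated probability onto a risk label via policy cut-points."""
    low_cut, high_cut = thresholds
    if p_fail_to_appear < low_cut:
        return "low"
    if p_fail_to_appear < high_cut:
        return "moderate"
    return "high"

score = 0.20  # the "20% likely to fail to appear" defendant

lenient = (0.25, 0.50)  # jurisdiction relatively tolerant of risk
strict = (0.10, 0.20)   # jurisdiction relatively averse to risk

print(categorize(score, lenient))  # low
print(categorize(score, strict))   # high
```

Nothing about the defendant changes between the two calls; only the cut-points do. That is the sense in which the label, and any detention decision hung on it, is a policy judgment rather than a scientific output.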

HRDAG’s work underscores how these tools can amplify harm. While the primer states that risk assessments should “inform, not replace” judicial discretion, studies show judges often over-rely on algorithmic scores, especially when pressed for time. The primer also warns that pretrial tools offer no solutions for defendants’ unmet needs, such as homelessness or substance use; instead, they streamline decisions without addressing root causes. As one public defender noted, “They don’t ask why someone might miss court—they just tally risk factors.” This reductionist approach risks punishing poverty and instability much as cash bail does, making pretrial risk assessment tools yet another mechanism that leans too easily toward bias.

Further reading

Safety and Justice Challenge. Sarah Desmarais and Evan Lowder. February 2019.
Pretrial Risk Assessment Tools: A Primer for Judges, Prosecutors, and Defense Attorneys.

fast.ai. Rachel Thomas. 7 August 2018.
What HBR Gets Wrong about Algorithms and Bias.

The Guardian. Stephen Buranyi. 8 August 2017.
Rise of the racist robots – how AI is learning all our worst impulses.

HRDAG. Enzo Metsopoulos. 16 February 2025.
How Causal Analysis Confirmed Impact of Cash Bail on Verdicts.

HRDAG. Christine Grillo. 5 March 2019.
Primer to inform discussions about bail reform.

Related publications

Kristian Lum, Chesa Boudin and Megan Price (2020). The impact of overbooking on a pre-trial risk assessment tool. FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. January 2020. Pages 482–491. ©ACM, Inc., 2020.

Kristian Lum and Tarak Shah (2019). Measures of Fairness for New York City’s Supervised Release Risk Assessment Tool. Human Rights Data Analysis Group. 1 October 2019. © HRDAG 2019.

Laurel Eckhouse, Kristian Lum, Cynthia Conti-Cook and Julie Ciccolini (2018). Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment. Criminal Justice and Behavior. 23 November 2018. © 2018 Sage Journals.

Kristian Lum, Erwin Ma and Mike Baiocchi (2017). The causal impact of bail on case outcomes for indigent defendants in New York City. Observational Studies 3 (2017) 39-64. 31 October 2017. © 2017 Institute of Mathematical Statistics.

Kristian Lum (2017). Limitations of mitigating judicial bias with machine learning. Nature. 26 June 2017. © 2017 Macmillan Publishers Limited. All rights reserved.

Related videos + podcasts

Podcast: Risk Assessment Biases | Stats + Stories, Episode 147 | Tarak Shah | 2020. 28 minutes.

Lifelong curiosity and Criminal Justice Reform through data. Origins, Episode 19. Kristian Lum. 2020. 52 minutes.

FAT* 2018 Translation Tutorial: Understanding the Context and Consequences of Pre-trial Detention. Elizabeth Bender (Decarceration Project at The Legal Aid Society of NYC), Kristian Lum (Human Rights Data Analysis Group), and Terrence Wilkerson (entrepreneur). 18 April, 2018. 52 minutes.

Acknowledgments

HRDAG was supported in this work by the MacArthur Foundation, the Ford Foundation, and the Open Society Foundations.

Image: David Peters, 2025.


