How Predictive Policing Reinforces Bias

aerial view of police car bathed in blue light

HRDAG research challenged claims that predictive policing algorithms are fair.

Law enforcement agencies are searching for ways to become more effective and to eliminate bias. Unfortunately, many of the “predictive policing” tools on the market deliver on neither goal.

Software developers market tools such as PredPol, a program for police departments that predicts hotspots where future crime might occur, with claims that software can’t be biased. Perhaps they imagine that software can exist independently of humans. But have humans actually been removed from the process? No, they have not. The datasets fed into these programs to create, or train, the algorithms were created by people, in this case police officers. Historical biases in officers’ decisions to arrest certain types of people, in certain types of neighborhoods, flow into the datasets. HRDAG research shows that the use of “big data” in these tools reinforces racial bias, potentially targeting people of color and the communities where they live.

Kristian Lum and William Isaac showed that PredPol “learns” from, that is, is trained on, a cache of previous crime reports. Because those reports carry the biases baked into the circumstances that generated them, the algorithm can get stuck in a feedback loop of over-policing communities of color.
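
The feedback loop is easy to see in a toy simulation. The sketch below is a hypothetical illustration, not HRDAG’s analysis and not PredPol’s actual algorithm: the neighborhood names, the incident rate, and the patrol budget are all invented for this example. Two neighborhoods are given identical true incident rates, the historical record starts with a policing-driven disparity, patrols are allocated in proportion to that record, and only incidents that patrols are present to observe get recorded.

# A minimal sketch of the feedback loop described above (assumptions: two
# neighborhoods with identical true incident rates and a fixed daily patrol
# budget). Not a reconstruction of PredPol or of HRDAG's analysis.

import random

random.seed(0)

TRUE_RATE = 0.05        # same underlying incident rate in both neighborhoods
PATROLS_PER_DAY = 100   # total patrol-contacts the department can allocate each day

# Biased starting point: neighborhood A enters the historical record with twice
# as many incidents as B, even though the true rates are identical.
recorded = {"A": 200, "B": 100}

for _ in range(30):
    total = recorded["A"] + recorded["B"]
    for hood in ("A", "B"):
        # "Prediction": allocate patrols in proportion to the historical record.
        patrols = round(PATROLS_PER_DAY * recorded[hood] / total)
        # Incidents are only recorded where officers are present to observe them.
        new_records = sum(random.random() < TRUE_RATE for _ in range(patrols))
        recorded[hood] += new_records

share_a = recorded["A"] / (recorded["A"] + recorded["B"])
print(f"Neighborhood A's share of recorded incidents after 30 days: {share_a:.0%}")
# Roughly two-thirds: the original 2:1 disparity persists even though both
# neighborhoods offend at the same rate, because the data the model "learns"
# from measure police attention rather than crime.

Because each day’s new observations are generated in proportion to the allocation the model itself produced, the model keeps confirming its own earlier predictions. Lum and Isaac demonstrate a version of this dynamic empirically in “To predict and serve?”, listed under Related publications below.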

Further readings

The Guardian. Sam Levin. 29 March 2019.
‘Bias deep inside the code’: the problem with AI ‘ethics’ in Silicon Valley
Kristian Lum and HRDAG are mentioned.

CNBC. Katie Brigham. 17 March 2019.
Courts and police departments are turning to AI to reduce bias, but some argue it’ll make the problem worse.
Kristian Lum and HRDAG are featured.

TechCrunch. Megan Rose Dickey. 30 September 2018.
Unbiased algorithms can still be problematic.
Kristian Lum, Patrick Ball, and HRDAG are mentioned.

Bloomberg. Ellen Huet. 15 May 2018.
The data scientist helping to create ethical robots.
Kristian Lum is profiled.

The Guardian. Stephen Buranyi. 8 August 2017.
Rise of the racist robots – how AI is learning all our worst impulses
Kristian Lum, Samuel Sinyangwe, and HRDAG are mentioned.

ScienceNewsExplores. Kathiann Kowalski. 28 February 2017.
What happens when you look at crime by the numbers
Kristian Lum and HRDAG are mentioned.

Related publications

William Isaac and Kristian Lum (2018). Setting the Record Straight on Predictive Policing and Race. In Justice Today. 3 January 2018. © 2018 In Justice Today / Medium.

Laurel Eckhouse (2017). Big data may be reinforcing racial bias in the criminal justice system. Washington Post. 10 February 2017. © 2017 Washington Post.

William Isaac and Kristian Lum (2016). Predictive policing violates more than it protects. USA Today. 2 December 2016. © 2016 USA Today.

Kristian Lum and William Isaac (2016). To predict and serve? Significance. 10 October 2016. © 2016 The Royal Statistical Society.

Related videos

Predictive policing and machine learning | Skoll Foundation | Megan Price | 2018 (11 minutes)

Predictive Policing | Data Society and Research Institute | Kristian Lum | 2016 (56 minutes)

Tyranny of the Algorithm? Predictive Analytics and Human Rights | NYU School of Law | Patrick Ball and Kristian Lum | 2016 (51 minutes)

Acknowledgments

HRDAG was supported in this work by the MacArthur Foundation, the Oak Foundation, the Open Society Foundations, and the Sigrid Rausing Trust.

Image: ChrisGoldNY, CC BY-NC 2.0, modified by David Peters

