Courts and police departments are turning to AI to reduce bias, but some argue it’ll make the problem worse
Kristian Lum: “The historical over-policing of minority communities has led to a disproportionate number of crimes being recorded by the police in those locations. Historical over-policing is then passed through the algorithm to justify the over-policing of those communities.”
Beautiful game, ugly truth?
Megan Price (2022). Beautiful game, ugly truth? Significance, 19: 18-21. December 2022. © The Royal Statistical Society. https://doi.org/10.1111/1740-9713.01702
Big data may be reinforcing racial bias in the criminal justice system
Laurel Eckhouse (2017). Big data may be reinforcing racial bias in the criminal justice system. Washington Post. 10 February 2017. © 2017 Washington Post.
Syria’s status, the migrant crisis and talking to ISIS
In this week’s “Top Picks,” IRIN interviews HRDAG executive director Patrick Ball about giant data sets and whether we can trust them. “No matter how big it is, data on violence is always partial,” he says.
Truth Commissioner
From the Guatemalan military to the South African apartheid police, code cruncher Patrick Ball singles out the perpetrators of political violence.
Predictive policing violates more than it protects
William Isaac and Kristian Lum. Predictive policing violates more than it protects. USA Today. December 2, 2016. © USA Today.
First Things First: Assessing Data Quality Before Model Quality.
Anita Gohdes and Megan Price (2013). Journal of Conflict Resolution, Volume 57, Issue 6, December 2013. © 2013 Journal of Conflict Resolution. All rights reserved. Reprinted with permission of SAGE. [online abstract] DOI: 10.1177/0022002712459708.
To predict and serve?
Kristian Lum and William Isaac (2016). To predict and serve? Significance. October 10, 2016. © 2016 The Royal Statistical Society.
HRDAG and Boston PD SWAT Reports
Making the Case: The Role of Statistics in Human Rights Reporting.
Patrick Ball. “Making the Case: The Role of Statistics in Human Rights Reporting.” Statistical Journal of the United Nations Economic Commission for Europe. 18(2-3):163-174. 2001.
Selection Bias and the Statistical Patterns of Mortality in Conflict.
Megan Price and Patrick Ball. 2015. Statistical Journal of the IAOS 31: 263–272. doi: 10.3233/SJI-150899. © IOS Press and the authors. All rights reserved. Creative Commons BY-NC-SA.
South Africa
Palantir Has Secretly Been Using New Orleans to Test Its Predictive Policing Technology
One of the researchers, a Michigan State PhD candidate named William Isaac, had not previously heard of New Orleans’ partnership with Palantir, but he recognized the data-mapping model at the heart of the program. “I think the data they’re using, there are serious questions about its predictive power. We’ve seen very little about its ability to forecast violent crime,” Isaac said.
The True Dangers of AI are Closer Than We Think
William Isaac is quoted.
Here’s how an AI tool may flag parents with disabilities
HRDAG contributed to work by the ACLU showing that a predictive tool used to guide responses to alleged child neglect may forever flag parents with disabilities. “These predictors have the effect of casting permanent suspicion and offer no means of recourse for families marked by these indicators,” according to the analysis from researchers at the ACLU and the nonprofit Human Rights Data Analysis Group. “They are forever seen as riskier to their children.”
Data-driven crime prediction fails to erase human bias
Work by HRDAG researchers Kristian Lum and William Isaac is cited in this article about the Policing Project: “While this bias knows no color or socioeconomic class, Lum and her HRDAG colleague William Isaac demonstrate that it can lead to policing that unfairly targets minorities and those living in poorer neighborhoods.”
Quantifying Injustice
“In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. … Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To solve this problem, they turned to a technique used in statistics and machine learning called a synthetic population.”
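The feedback loop that Lum, Isaac, and others describe above can be illustrated with a toy simulation. This is a hedged sketch only, not Lum and Isaac's actual synthetic-population analysis: the two neighborhoods, the equal true crime rates, and the patrol-allocation rule are all invented for illustration. The point it demonstrates is the one quoted from Kristian Lum: once historical over-policing sets an unequal patrol allocation, a predictor trained on recorded (rather than true) crime preserves that allocation indefinitely.

```python
# Toy illustration of a predictive-policing feedback loop.
# All numbers and the update rule are illustrative assumptions,
# not the model Lum and Isaac audited.

TRUE_RATE = {"A": 0.1, "B": 0.1}  # ground truth: identical crime rates


def simulate(days=100, patrol_share_a=0.7):
    """Start from a historically biased patrol allocation (70% to A)
    and let a naive predictor reallocate patrols each day in
    proportion to the crimes recorded the previous day."""
    shares = []
    for _ in range(days):
        recorded = {}
        for hood in ("A", "B"):
            patrol = patrol_share_a if hood == "A" else 1 - patrol_share_a
            # crimes are only recorded where patrols are present,
            # so recorded crime = true rate * patrol presence
            recorded[hood] = TRUE_RATE[hood] * patrol
        total = recorded["A"] + recorded["B"]
        # naive predictor: allocate tomorrow's patrols by today's records
        patrol_share_a = recorded["A"] / total
        shares.append(patrol_share_a)
    return shares


shares = simulate()
```

Even though the two neighborhoods have identical true crime rates, the patrol share never drifts back toward 50/50: the initial 70% allocation reproduces itself every day, because the "data" the predictor sees is a product of where police were sent, not of where crime occurred.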