Predictive Policing Reinforces Police Bias
Quantifying Police Misconduct in Louisiana
Timor-Leste Op-Ed
Quantifying Injustice
“In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. … Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To get around this problem, they turned to a technique used in statistics and machine learning called the synthetic population.”
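The excerpt describes the synthetic-population idea only at a high level. The sketch below is a hypothetical illustration of it, not Lum and Isaac's code: drug-use estimates built from survey and census data, which are independent of police records, are compared with the spatial distribution of arrests. All file and column names are assumptions.

```python
"""
A minimal sketch (an assumption, not the published analysis) of the
synthetic-population idea: estimate where drug use likely occurs from survey
and census data that are independent of police records, then compare that
baseline with where arrests are concentrated.
"""
import pandas as pd

# Hypothetical inputs:
#  - survey.csv:  demographic group and estimated drug-use rate (public health survey)
#  - census.csv:  census block, demographic group, population count
#  - arrests.csv: census block, number of drug arrests recorded by police
survey = pd.read_csv("survey.csv")      # columns: group, use_rate
census = pd.read_csv("census.csv")      # columns: block, group, population
arrests = pd.read_csv("arrests.csv")    # columns: block, arrests

# Build the synthetic population: expected number of drug users per block,
# derived from demographics rather than from police records.
merged = census.merge(survey, on="group")
merged["expected_users"] = merged["population"] * merged["use_rate"]
synthetic = merged.groupby("block", as_index=False)["expected_users"].sum()

# Compare the demographic baseline with recorded arrests. Large gaps between
# the two distributions suggest that enforcement patterns, not underlying
# behavior, drive the arrest data a predictive model would be trained on.
comparison = synthetic.merge(arrests, on="block", how="left").fillna(0)
comparison["arrest_share"] = comparison["arrests"] / comparison["arrests"].sum()
comparison["use_share"] = comparison["expected_users"] / comparison["expected_users"].sum()
print(comparison.sort_values("arrest_share", ascending=False).head())
```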
Celebrating Women in Statistics
In her work on statistical issues in criminal justice, Lum has studied uses of predictive policing: machine learning models that predict who will commit future crime or where it will occur. She has demonstrated that if the training data encode historical patterns of racially disparate enforcement, predictions from software trained on those data will reinforce and, in some cases, amplify this bias. She also works on statistical issues related to criminal “risk assessment” models used to inform judicial decision-making. As part of this thread, she has developed statistical methods for removing sensitive information from training data, guaranteeing “fair” predictions with respect to sensitive variables such as race and gender. Lum is active in the fairness, accountability, and transparency (FAT) community and serves on the steering committee of FAT, a conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
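As an illustration of the pre-processing idea mentioned above, the sketch below residualizes each training feature on a protected attribute so that the adjusted features carry no linear information about it. This is a deliberately simplified stand-in, not the method Lum and her co-authors published; all variable names are hypothetical.

```python
"""
A much-simplified illustration (not the published algorithm) of removing
information about a protected attribute from training features: each feature
is replaced by its residual after regressing on the protected attribute, so
the adjusted feature is linearly uncorrelated with it.
"""
import numpy as np
from sklearn.linear_model import LinearRegression

def strip_protected(X: np.ndarray, protected: np.ndarray) -> np.ndarray:
    """Return features with the linear component explained by the
    protected attribute removed (residuals of feature ~ protected)."""
    protected = protected.reshape(-1, 1)
    X_adj = np.empty_like(X, dtype=float)
    for j in range(X.shape[1]):
        model = LinearRegression().fit(protected, X[:, j])
        X_adj[:, j] = X[:, j] - model.predict(protected)
    return X_adj

# Toy example: one feature strongly associated with a binary protected class.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=1000)
X = np.column_stack([protected * 2.0 + rng.normal(size=1000)])
X_fair = strip_protected(X, protected)
print(np.corrcoef(X[:, 0], protected)[0, 1])       # strong correlation before
print(np.corrcoef(X_fair[:, 0], protected)[0, 1])  # near zero after adjustment
```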
Palantir Has Secretly Been Using New Orleans to Test Its Predictive Policing Technology
One of the researchers, a Michigan State PhD candidate named William Isaac, had not previously heard of New Orleans’ partnership with Palantir, but he recognized the data-mapping model at the heart of the program. “I think the data they’re using, there are serious questions about its predictive power. We’ve seen very little about its ability to forecast violent crime,” Isaac said.
Press Release, Timor-Leste, February 2006
How Data Processing Uncovers Misconduct in Use of Force in Puerto Rico
New results for the identification of municipalities with clandestine graves in Mexico
How Review of Police Data Verified Neglect of Missing Black Women
HRDAG Wins the Rafto Prize
Momentous Verdict against Hissène Habré
Can the Armed Conflict Become Part of Colombia’s History?
Evaluation of the Kosovo Memory Book at Pristina
Rise of the racist robots – how AI is learning all our worst impulses
“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
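The feedback loop described in the quote can be made concrete with a toy simulation; this is an assumption for illustration, not PredPol's actual algorithm. Two districts have the same underlying crime rate, but the one that starts with more recorded incidents keeps attracting patrols, and only patrolled incidents get recorded, so the gap in the data widens on its own.

```python
"""
A toy simulation of a predictive-policing feedback loop: two districts with
identical true crime rates, but district A starts with more recorded crime.
Each day the patrol goes to the district with the larger recorded count, and
crime is only recorded where the patrol is present, so the initial disparity
compounds.
"""
import random

random.seed(42)

TRUE_CRIME_RATE = 0.5          # same underlying rate in both districts
recorded = {"A": 10, "B": 5}   # district A starts with more recorded crime

for day in range(365):
    # "Prediction": send the patrol where the data say crime is concentrated.
    target = max(recorded, key=recorded.get)
    # Crime occurs at the same rate everywhere, but only the patrolled
    # district has its incidents observed and added to the dataset.
    if random.random() < TRUE_CRIME_RATE:
        recorded[target] += 1

print(recorded)  # the gap grows even though the true rates are equal
```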
Learning a Modular, Auditable and Reproducible Workflow
Syria’s celebrations muted by evidence of torture in Assad’s notorious prisons
The Human Rights Data Analysis Group, an independent scientific human rights organization based in San Francisco, has counted at least 17,723 people killed in Syrian custody from 2011 to 2015 — around 300 every week — almost certainly a vast undercount, it says.
HRDAG Adds Three New Board Members
Funding
The Allegheny Family Screening Tool’s Overestimation of Utility and Risk
Anjana Samant, Noam Shemtov, Kath Xu, Sophie Beiers, Marissa Gerchick, Ana Gutierrez, Aaron Horowitz, Tobi Jegede, and Tarak Shah (2023). The Allegheny Family Screening Tool’s Overestimation of Utility and Risk. Logic(s), Issue 20, 13 December 2023.