Patrick Ball wins the Karl E. Peace Award
How much faith can we place in coronavirus antibody tests?
Celebrating Women in Statistics
In her work on statistical issues in criminal justice, Lum has studied uses of predictive policing—machine learning models that predict who will commit future crime or where it will occur. She has demonstrated that if the training data encode historical patterns of racially disparate enforcement, predictions from software trained on those data will reinforce and, in some cases, amplify that bias. She also currently works on statistical issues related to criminal “risk assessment” models used to inform judicial decision-making. As part of this thread, she has developed statistical methods for removing sensitive information from training data, guaranteeing “fair” predictions with respect to sensitive variables such as race and gender. Lum is active in the fairness, accountability, and transparency (FAT) community and serves on the steering committee of FAT, a conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
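Lum's published methods are not reproduced here; as a rough, hypothetical sketch of the general idea of stripping sensitive information from training data, the snippet below residualizes each feature against a protected attribute before fitting a model, so that the fitted scores carry little linear information about that attribute. The variable names and the residualization approach are illustrative assumptions, not HRDAG's procedure.

```python
# Illustrative sketch (not HRDAG's published method): reduce the linear
# dependence of training features on a protected attribute by residualizing
# each feature against it before model fitting.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 1_000
protected = rng.integers(0, 2, size=n)            # hypothetical sensitive group indicator
x = rng.normal(size=(n, 3)) + protected[:, None]  # features correlated with the attribute
y = (x[:, 0] + rng.normal(size=n) > 1).astype(int)

def residualize(features, sensitive):
    """Remove the component of each feature that is linearly predictable
    from the sensitive attribute."""
    s = sensitive.reshape(-1, 1)
    fitted = LinearRegression().fit(s, features).predict(s)
    return features - fitted

x_fair = residualize(x, protected)
model = LogisticRegression().fit(x_fair, y)

# Scores should now be far less correlated with the protected attribute.
scores = model.predict_proba(x_fair)[:, 1]
print("correlation with protected attribute:", np.corrcoef(scores, protected)[0, 1])
```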
What HBR Gets Wrong About Algorithms and Bias
“Kristian Lum… organized a workshop together with Elizabeth Bender, a staff attorney for the NY Legal Aid Society and former public defender, and Terrence Wilkerson, an innocent man who had been arrested and could not afford bail. Together, they shared firsthand experience about the obstacles and inefficiencies that occur in the legal system, providing valuable context to the debate around COMPAS.”
Unbiased algorithms can still be problematic
“Usually, the thing you’re trying to predict in a lot of these cases is something like rearrest,” Lum said. “So even if we are perfectly able to predict that, we’re still left with the problem that the human or systemic or institutional biases are generating biased arrests. And so, you still have to contextualize even your 100 percent accuracy with is the data really measuring what you think it’s measuring? Is the data itself generated by a fair process?”
HRDAG Director of Research Patrick Ball, in agreement with Lum, argued that it is perhaps more useful to frame the problem not as bias at the individual level but as bias at the institutional or structural level. If a police department, for example, is convinced it needs to police one neighborhood more than another, then whether any individual officer is racist matters less, he said.
Rise of the racist robots – how AI is learning all our worst impulses
“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
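The PredPol analysis itself is not reproduced here. As a purely hypothetical illustration of the feedback loop Lum describes, the simulation below gives two neighbourhoods identical true crime rates but slightly uneven historical records, sends patrols wherever the records point, and lets patrols generate new records; all parameters are invented for illustration.

```python
# Hypothetical simulation of a predictive-policing feedback loop
# (illustrative only; not the PredPol model or HRDAG's analysis).
import numpy as np

rng = np.random.default_rng(1)
true_crime_rate = np.array([5.0, 5.0])   # both neighbourhoods identical
recorded = np.array([12.0, 10.0])        # slightly uneven historical records
patrol_days = np.zeros(2)

for day in range(365):
    # Send today's extra patrol to the neighbourhood with more recorded crime.
    target = np.argmax(recorded)
    patrol_days[target] += 1
    # Some crime is reported regardless of patrols, at a low base rate;
    # the patrolled area additionally has crime *discovered* by officers.
    reported = rng.poisson(0.2 * true_crime_rate)
    discovered = np.zeros(2, dtype=int)
    discovered[target] = rng.poisson(0.8 * true_crime_rate[target])
    recorded += reported + discovered

print("share of patrol days:", patrol_days / patrol_days.sum())
# With identical true crime rates, the neighbourhood that started with a few
# more recorded incidents ends up receiving nearly all of the patrols.
```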
Using MSE to Estimate Unobserved Events
Claudia Carolina López Taks
El Salvador
Policy or Panic? The Flight of Ethnic Albanians from Kosovo, March–May, 1999.
Patrick Ball. Policy or Panic? The Flight of Ethnic Albanians from Kosovo, March–May, 1999. © 2000 American Association for the Advancement of Science, Science and Human Rights Program. [pdf – English][html – English][html – shqip (Albanian)] [html – srpski (Serbian)]
Uncovering Police Violence in Chicago: A collaboration between HRDAG and Invisible Institute
Liberia 2009 – Coding Testimony to Determine Accountability for War Crimes
Collaboration between the Colombian Truth Commission, the Special Jurisdiction for Peace, and HRDAG (Dataset)
The Colombian Truth Commission (CEV), the Special Jurisdiction for Peace (JEP), and the Human Rights Data Analysis Group (HRDAG) have worked together to integrate data and calculate statistical estimates of the number of victims of the armed conflict, including homicides, forced disappearances, kidnapping, and the recruitment of child soldiers. Data are available through the National Administrative Department of Statistics (DANE), the Truth Commission, and GitHub.
Pretrial Risk Assessment Tools
Sarah L. Desmarais and Evan M. Lowder (2019). Pretrial Risk Assessment Tools: A Primer for Judges, Prosecutors, and Defense Attorneys. Safety and Justice Challenge, February 2019. © 2019 Safety and Justice Challenge. HRDAG's Kristian Lum and Tarak Shah served as Project Members and made significant contributions to the primer.
Inside Syria’s prisons, where an estimated 17,723 have died since 2011
Excerpt from the article: The estimate is based on reports from four organizations investigating deaths in Syria from March 15, 2011, to December 31, 2015. From those cases, the Human Rights Data Analysis Group identified 12,270 cases with sufficient information to confirm the person was killed in detention. Using a statistical method to estimate how many victims they do not yet know about, the group came up with 17,723 cases.
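The article does not detail the method, and HRDAG's multiple systems estimation for Syria combines four lists with far more careful modeling, but a simplified two-list capture-recapture calculation with invented counts shows how overlap between independent lists can be used to estimate victims whom no one has documented.

```python
# Simplified two-list capture-recapture (Lincoln-Petersen) illustration.
# The counts below are made up; HRDAG's Syria estimates use multiple systems
# estimation across four documentation groups and more careful modeling.

def lincoln_petersen(n_a: int, n_b: int, n_both: int) -> float:
    """Estimate total population size from two overlapping lists.

    n_a    -- victims documented by list A
    n_b    -- victims documented by list B
    n_both -- victims appearing on both lists
    """
    return n_a * n_b / n_both

n_a, n_b, n_both = 9_000, 8_000, 4_500
estimated_total = lincoln_petersen(n_a, n_b, n_both)
documented = n_a + n_b - n_both          # unique victims seen on either list
print(f"documented: {documented}, estimated total: {estimated_total:.0f}")
# The gap between the two numbers is the estimate of undocumented victims.
```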
New Estimate Of Killings By Police Is Way Higher — And Still Too Low
Carl Bialik of FiveThirtyEight interviews HRDAG executive director Patrick Ball for an article about the recently released Bureau of Justice Statistics report on the number of annual police killings, both reported and unreported. As Bialik writes, this is a math puzzle with real consequences.
Measures of Fairness for New York City’s Supervised Release Risk Assessment Tool
Kristian Lum and Tarak Shah (2019). Measures of Fairness for New York City’s Supervised Release Risk Assessment Tool. Human Rights Data Analysis Group. 1 October 2019. © HRDAG 2019.