New publication in BIOMETRIKA
FAT* Conference 2018
Where Stats and Rights Thrive Together
The UDHR Turns 70
Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment
Laurel Eckhouse, Kristian Lum, Cynthia Conti-Cook and Julie Ciccolini (2018). Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment. Criminal Justice and Behavior. November 23, 2018. © 2018 Sage Journals. All rights reserved. https://doi.org/10.1177/0093854818811379
Low-risk population size estimates in the presence of capture heterogeneity
James Johndrow, Kristian Lum and Daniel Manrique-Vallier (2019). Low-risk population size estimates in the presence of capture heterogeneity. Biometrika, asy065, 22 January 2019. © 2019 Biometrika Trust. https://doi.org/10.1093/biomet/asy065
Data-driven crime prediction fails to erase human bias
Work by HRDAG researchers Kristian Lum and William Isaac is cited in this article about the Policing Project: “While this bias knows no color or socioeconomic class, Lum and her HRDAG colleague William Isaac demonstrate that it can lead to policing that unfairly targets minorities and those living in poorer neighborhoods.”
Rise of the racist robots – how AI is learning all our worst impulses
“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
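The feedback loop described above can be made concrete with a toy simulation. This is an illustrative sketch, not PredPol's actual model: two neighborhoods have identical true crime rates, but one starts with slightly more recorded crime; each day the patrol is dispatched to whichever neighborhood has the most recorded crime, and patrolled areas log more crimes simply because officers are there to observe them. All parameter names and values below are hypothetical.

```python
def run_feedback_loop(recorded, days=30, patrol_obs=5, baseline=1):
    """Each day the patrol goes to the neighborhood with the most recorded
    crime. The patrolled neighborhood logs patrol_obs extra crimes; the
    others log only baseline citizen reports -- even though the true crime
    rates are identical everywhere."""
    recorded = list(recorded)
    for _ in range(days):
        hotspot = recorded.index(max(recorded))  # "predicted" hotspot
        for i in range(len(recorded)):
            recorded[i] += patrol_obs if i == hotspot else baseline
    return recorded

# Neighborhood A starts with one extra recorded crime and never loses
# the lead, so the recorded-crime gap compounds day after day.
final = run_feedback_loop([11, 10])
print(final)  # -> [161, 40]
```

A one-crime difference in the initial data grows into a fourfold gap in recorded crime, which is the sense in which the program "learns" its own deployment decisions rather than the underlying crime rate.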
‘Bias deep inside the code’: the problem with AI ‘ethics’ in Silicon Valley
Kristian Lum, the lead statistician at the Human Rights Data Analysis Group, and an expert on algorithmic bias, said she hoped Stanford’s stumble made the institution think more deeply about representation.
“This type of oversight makes me worried that their stated commitment to the other important values and goals – like taking seriously creating AI to serve the ‘collective needs of humanity’ – is also empty PR spin and this will be nothing more than a vanity project for those attached to it,” she wrote in an email.
Applications of Multiple Systems Estimation in Human Rights Research
Lum, Kristian, Megan Emily Price, and David Banks. 2013. The American Statistician 67, no. 4: 191-200. doi: 10.1080/00031305.2013.821093. © 2013 The American Statistician. All rights reserved. [free eprint may be available].
Counting Civilian Casualties: An Introduction to Recording and Estimating Nonmilitary Deaths in Conflict
ed. by Taylor B. Seybolt, Jay D. Aronson, and Baruch Fischhoff. Oxford University Press. © 2013 Oxford University Press. All rights reserved.
The following four chapters are included:
— Todd Landman and Anita Gohdes (2013). “A Matter of Convenience: Challenges of Non-Random Data in Analyzing Human Rights Violations in Peru and Sierra Leone.”
— Jeff Klingner and Romesh Silva (2013). “Combining Found Data and Surveys to Measure Conflict Mortality.”
— Daniel Manrique-Vallier, Megan E. Price, and Anita Gohdes (2013). “Multiple-Systems Estimation Techniques for Estimating Casualties in Armed Conflict.”
— Jule Krüger, Patrick Ball, Megan Price, and Amelia Hoover Green (2013). “It Doesn’t Add Up: Methodological and Policy Implications of Conflicting Casualty Data.”
Measuring Elusive Populations with Bayesian Model Averaging for Multiple Systems Estimation: A Case Study on Lethal Violations in Casanare, 1998-2007
Kristian Lum, Megan Price, Tamy Guberek, and Patrick Ball. “Measuring Elusive Populations with Bayesian Model Averaging for Multiple Systems Estimation: A Case Study on Lethal Violations in Casanare, 1998-2007,” Statistics, Politics, and Policy. 1(1) 2010. All rights reserved.
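The core idea behind multiple systems estimation can be illustrated with its simplest case, the classic two-list (Lincoln-Petersen) estimator: from the overlap between independent lists of documented victims, estimate how many victims appear on no list at all. This is only a sketch of the basic principle; the papers above use three or more lists and, in the Casanare study, Bayesian model averaging over list-dependence structures. The counts below are hypothetical.

```python
def lincoln_petersen(n_list1, n_list2, n_both):
    """Estimate total population size from two overlapping lists.
    Assumes the two lists capture individuals independently -- the
    assumption that MSE with three or more lists is designed to relax."""
    return n_list1 * n_list2 / n_both

# Hypothetical counts: 200 deaths on list 1, 150 on list 2, 60 on both.
estimate = lincoln_petersen(200, 150, 60)
print(estimate)  # -> 500.0 estimated total deaths, documented or not
```

Intuitively, if list 2 found 60 of list 1's 200 deaths (30%), then list 2's 150 deaths are taken to be about 30% of the total, giving 500.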
The Data Scientist Helping to Create Ethical Robots
Kristian Lum is focusing on artificial intelligence and the controversial use of predictive policing and sentencing programs.
What’s the relationship between statistics and AI and machine learning?
AI seems to be a sort of catchall for predictive modeling and computer modeling. There was this great tweet that said something like, “It’s AI when you’re trying to raise money, ML when you’re trying to hire developers, and statistics when you’re actually doing it.” I thought that was pretty accurate.
What HBR Gets Wrong About Algorithms and Bias
“Kristian Lum… organized a workshop together with Elizabeth Bender, a staff attorney for the NY Legal Aid Society and former public defender, and Terrence Wilkerson, an innocent man who had been arrested and could not afford bail. Together, they shared firsthand experience about the obstacles and inefficiencies that occur in the legal system, providing valuable context to the debate around COMPAS.”
Courts and police departments are turning to AI to reduce bias, but some argue it’ll make the problem worse
Kristian Lum: “The historical over-policing of minority communities has led to a disproportionate number of crimes being recorded by the police in those locations. Historical over-policing is then passed through the algorithm to justify the over-policing of those communities.”
What happens when you look at crime by the numbers
Kristian Lum’s work on the HRDAG Policing Project is referred to here: “In fact, Lum argues, it’s not clear how well this model worked at depicting the situation in Oakland. Those data on drug crimes were biased, she now reports. The problem was not deliberate, she says. Rather, data collectors just missed some criminals and crime sites. So data on them never made it into her model.”
Quantifying Injustice
“In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. … Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To solve this problem, they turned to a technique used in statistics and machine learning called a synthetic population.”
A Data Double Take: Police Shootings
“In a recent article, social scientist Patrick Ball revisited his and Kristian Lum’s 2015 study, which made a compelling argument for the underreporting of lethal police shootings by the Bureau of Justice Statistics (BJS). Lum and Ball’s study may be old, but it bears revisiting amid debates over the American police system — debates that have featured plenty of data on the excessive use of police force. It is a useful reminder that many of the facts and figures we rely on require further verification.”