New publication in Biometrika

New paper in Biometrika, co-authored by HRDAG's Kristian Lum and James Johndrow: Theoretical limits of microclustering for record linkage.

FAT* Conference 2018

Kristian Lum spoke about "Understanding the Context and Consequences of Pre-Trial Detention" at the Conference on Fairness, Accountability, and Transparency (FAT*).

Where Stats and Rights Thrive Together

Everyone I had the pleasure of interacting with enriched my summer in some way.

The UDHR Turns 70

We're thinking about how rigorous analysis can fortify debates about components of our criminal justice system, such as cash bail and pretrial risk assessment, and about fairness more generally.

Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment

Laurel Eckhouse, Kristian Lum, Cynthia Conti-Cook and Julie Ciccolini (2018). Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment. Criminal Justice and Behavior. November 23, 2018. © 2018 Sage Journals. All rights reserved. https://doi.org/10.1177/0093854818811379


Low-risk population size estimates in the presence of capture heterogeneity

James Johndrow, Kristian Lum and Daniel Manrique-Vallier (2019). Low-risk population size estimates in the presence of capture heterogeneity. Biometrika, asy065, 22 January 2019. © 2019 Biometrika Trust. https://doi.org/10.1093/biomet/asy065


Data-driven crime prediction fails to erase human bias

Work by HRDAG researchers Kristian Lum and William Isaac is cited in this article about the Policing Project: “While this bias knows no color or socioeconomic class, Lum and her HRDAG colleague William Isaac demonstrate that it can lead to policing that unfairly targets minorities and those living in poorer neighborhoods.”


Rise of the racist robots – how AI is learning all our worst impulses

“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
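The feedback loop Lum describes can be illustrated with a toy simulation. The sketch below is purely hypothetical (it is not PredPol's algorithm, nor the model from Lum and Isaac's paper): two districts have identical true crime rates, but one starts with more recorded incidents, and patrols are allocated in proportion to past records.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two districts with the SAME underlying crime rate.
true_rate = np.array([0.5, 0.5])

# District 0 starts with more recorded incidents
# (standing in for historical over-policing).
recorded = np.array([60.0, 40.0])

for day in range(200):
    # Naive "predictive" rule: patrol in proportion to past records.
    patrol = recorded / recorded.sum()
    # Crime is only recorded where police patrol, so new records
    # reflect patrol allocation as much as underlying crime.
    new_records = rng.poisson(true_rate * patrol * 100)
    recorded += new_records

# The initial 60/40 imbalance never corrects itself, even though
# the true rates are identical: the model keeps "confirming" the
# bias it was trained on.
print(recorded / recorded.sum())
```

Because each day's predictions are trained on records that earlier predictions helped generate, the imbalance is self-reinforcing rather than self-correcting.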


‘Bias deep inside the code’: the problem with AI ‘ethics’ in Silicon Valley

Kristian Lum, the lead statistician at the Human Rights Data Analysis Group, and an expert on algorithmic bias, said she hoped Stanford’s stumble made the institution think more deeply about representation.

“This type of oversight makes me worried that their stated commitment to the other important values and goals – like taking seriously creating AI to serve the ‘collective needs of humanity’ – is also empty PR spin and this will be nothing more than a vanity project for those attached to it,” she wrote in an email.


Applications of Multiple Systems Estimation in Human Rights Research

Lum, Kristian, Megan Emily Price, and David Banks (2013). Applications of Multiple Systems Estimation in Human Rights Research. The American Statistician 67, no. 4: 191-200. doi: 10.1080/00031305.2013.821093. © 2013 The American Statistician. All rights reserved. [free eprint may be available].


Counting Civilian Casualties: An Introduction to Recording and Estimating Nonmilitary Deaths in Conflict

ed. by Taylor B. Seybolt, Jay D. Aronson, and Baruch Fischhoff. Oxford University Press. © 2013 Oxford University Press. All rights reserved.

The following four chapters are included:

— Todd Landman and Anita Gohdes (2013). “A Matter of Convenience: Challenges of Non-Random Data in Analyzing Human Rights Violations in Peru and Sierra Leone.”

— Jeff Klingner and Romesh Silva (2013). “Combining Found Data and Surveys to Measure Conflict Mortality.”

— Daniel Manrique-Vallier, Megan E. Price, and Anita Gohdes (2013). “Multiple-Systems Estimation Techniques for Estimating Casualties in Armed Conflict.”

— Jule Krüger, Patrick Ball, Megan Price, and Amelia Hoover Green (2013). “It Doesn’t Add Up: Methodological and Policy Implications of Conflicting Casualty Data.”


Measuring Elusive Populations with Bayesian Model Averaging for Multiple Systems Estimation: A Case Study on Lethal Violations in Casanare, 1998-2007

Kristian Lum, Megan Price, Tamy Guberek, and Patrick Ball. “Measuring Elusive Populations with Bayesian Model Averaging for Multiple Systems Estimation: A Case Study on Lethal Violations in Casanare, 1998-2007,” Statistics, Politics, and Policy. 1(1) 2010. All rights reserved.


The Data Scientist Helping to Create Ethical Robots

Kristian Lum is focusing on artificial intelligence and the controversial use of predictive policing and sentencing programs.

What’s the relationship between statistics and AI and machine learning?

AI seems to be a sort of catchall for predictive modeling and computer modeling. There was this great tweet that said something like, “It’s AI when you’re trying to raise money, ML when you’re trying to hire developers, and statistics when you’re actually doing it.” I thought that was pretty accurate.


What HBR Gets Wrong About Algorithms and Bias

“Kristian Lum… organized a workshop together with Elizabeth Bender, a staff attorney for the NY Legal Aid Society and former public defender, and Terrence Wilkerson, an innocent man who had been arrested and could not afford bail. Together, they shared first hand experience about the obstacles and inefficiencies that occur in the legal system, providing valuable context to the debate around COMPAS.”


Courts and police departments are turning to AI to reduce bias, but some argue it’ll make the problem worse

Kristian Lum: “The historical over-policing of minority communities has led to a disproportionate number of crimes being recorded by the police in those locations. Historical over-policing is then passed through the algorithm to justify the over-policing of those communities.”


What happens when you look at crime by the numbers

Kristian Lum’s work on the HRDAG Policing Project is referred to here: “In fact, Lum argues, it’s not clear how well this model worked at depicting the situation in Oakland. Those data on drug crimes were biased, she now reports. The problem was not deliberate, she says. Rather, data collectors just missed some criminals and crime sites. So data on them never made it into her model.”


Quantifying Injustice

“In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. … Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To solve this conundrum, they turned to a technique used in statistics and machine learning called the synthetic population.”
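The synthetic-population idea can be sketched in a few lines. Everything below is hypothetical (the neighborhoods, rates, and detection probabilities are invented for illustration and are not Lum and Isaac's code or data): because the simulated residents' true behavior is known by construction, it supplies the ground truth that biased arrest records cannot.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic population: 10,000 residents in two neighborhoods,
# with drug use assigned at the SAME rate everywhere. This known
# "ground truth" is exactly what real arrest records lack.
n = 10_000
neighborhood = rng.integers(0, 2, size=n)   # 0 or 1
uses_drugs = rng.random(n) < 0.15           # equal true rate

# Biased observation process: police presence is twice as heavy
# in neighborhood 0, so its users are recorded twice as often.
detect_p = np.where(neighborhood == 0, 0.30, 0.15)
recorded = uses_drugs & (rng.random(n) < detect_p)

# Arrest records are skewed toward neighborhood 0...
arrests = np.bincount(neighborhood[recorded], minlength=2)
# ...while the synthetic ground truth shows use is uniform.
truth = np.bincount(neighborhood[uses_drugs], minlength=2)

print("recorded incidents:", arrests)
print("true drug use:     ", truth)
```

Any model trained on the recorded incidents will rank neighborhood 0 as higher-crime, and the gap between that ranking and the known ground truth is a direct measurement of the bias.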


A Data Double Take: Police Shootings

“In a recent article, social scientist Patrick Ball revisited his and Kristian Lum’s 2015 study, which made a compelling argument for the underreporting of lethal police shootings by the Bureau of Justice Statistics (BJS). Lum and Ball’s study may be old, but it bears revisiting amid debates over the American police system — debates that have featured plenty of data on the excessive use of police force. It is a useful reminder that many of the facts and figures we rely on require further verification.”


Covid-19 Research and Resources

HRDAG is identifying and interpreting the best science we can find to shed light on the global crisis brought on by the novel coronavirus, about which we still know so little. Right now, most of the data on the virus SARS-CoV-2 and Covid-19, the condition caused by the virus, are incomplete and unrepresentative, which means that there is a great deal of uncertainty. But making sense of imperfect datasets is what we do. HRDAG is contributing to a better understanding with explainers, essays, and original research, and we are highlighting trustworthy resources for those who want to dig deeper.

Counting the Dead in Sri Lanka

ITJP and HRDAG are urging groups inside and outside Sri Lanka to share existing casualty lists.

Our work has been used by truth commissions, international criminal tribunals, and non-governmental human rights organizations. We have worked with partners on projects on five continents.
