

FAQs on Predictive Policing and Bias

Last month Significance magazine published an article on the topic of predictive policing and police bias, which I co-authored with William Isaac. Since then, we've published a blog post about it and fielded a few recurring questions. Here they are, along with our responses. Do your findings still apply given that PredPol uses crime reports rather than arrests as training data? Because this article was meant for an audience that is not necessarily well-versed in criminal justice data, and because we were under a strict word limit, we simplified the language describing the data. The data we used is a version of the Oakland Police Department's crime report...

Press Release, Timor-Leste, February 2006

SILICON VALLEY GROUP USES TECHNOLOGY TO HELP THE TRUTH COMMISSION ANSWER DISPUTED QUESTIONS ABOUT MASSIVE POLITICAL VIOLENCE IN TIMOR-LESTE Palo Alto, CA, February 9, 2006 – The Benetech® Initiative today released a statistical report detailing widespread and systematic violations in Timor-Leste during the period 1974-1999. Benetech's statistical analysis establishes that at least 102,800 (+/- 11,000) Timorese died as a result of the conflict. Approximately 18,600 (+/- 1,000) Timorese were killed or disappeared, while the remainder died due to hunger and illness in excess of what would be expected due to peacetime mortality. The magnitude of deaths ...

Celebrating Women in Statistics

In her work on statistical issues in criminal justice, Lum has studied uses of predictive policing—machine learning models to predict who will commit future crime or where it will occur. In her work, she has demonstrated that if the training data encodes historical patterns of racially disparate enforcement, predictions from software trained with this data will reinforce and—in some cases—amplify this bias. She also currently works on statistical issues related to criminal “risk assessment” models used to inform judicial decision-making. As part of this thread, she has developed statistical methods for removing sensitive information from training data, guaranteeing “fair” predictions with respect to sensitive variables such as race and gender. Lum is active in the fairness, accountability, and transparency (FAT) community and serves on the steering committee of FAT, a conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.


HRDAG – 25 Years and Counting

Today is a very special day for all of us at HRDAG. This is, of course, the 68th anniversary of the Universal Declaration of Human Rights—but this day also marks our 25th year of using statistical science to support the advancement of human rights. It started 25 years ago, in December 1991, in San Salvador, when Patrick Ball was invited to work with the Salvadoran Lutheran Church to design a database to keep track of human rights abuses committed by the military in El Salvador. That work soon migrated to the NGO Human Rights Commission (CDHES). Fueled by thin beer and pupusas, Patrick dove into the deep world of data from human rights testimonies, ...

Fourth ALGO story

This is the fourth ALGO story.

In Pursuit of Excellent Data Processing

With help from HRDAG, Roman Rivera built the data backbone for the Invisible Institute's Citizens Police Data Project.

HRDAG To Join the Partnership on AI

HRDAG is joining Partnership on AI to Benefit People and Society (PAI).

Home Alt

Kristian Lum, lead statistician at HRDAG | Predictive Policing: Bias In, Bias Out | 56 mins

Uncertainty in COVID Fatality Rates

In this Granta article, HRDAG explains that neither the infectiousness nor the deadliness of the disease is set in stone.

HRDAG Retreat 2018

What follows is an elaborate criss-crossing of collaborations—retreat is a time to embrace the productivity that comes with being in the same room.

How Many People Will Get Covid-19?

HRDAG has authored two articles in Significance that add depth to discussions around infection rates.

Disrupt San Francisco TechCrunch 2018

On September 7, 2018, Kristian Lum and Patrick Ball participated in a panel at Disrupt San Francisco by TechCrunch. The talk was titled "Dismantling Algorithmic Bias." Brian Brackeen of Kairos was part of the panel as well, and the talk was moderated by TechCrunch reporter Megan Rose Dickey. From the TechCrunch website, "Disrupt is a 3-day conference focused on breaking technology news and developments with big-name thought leaders who are making waves in the industry." Video of the talk is available here, and Megan Rose Dickey's coverage is here.

100 Women in AI Ethics

We live in very challenging times. The pervasiveness of bias in AI algorithms and the prospect of autonomous “killer” robots looming on the horizon necessitate an open discussion and immediate action to address the perils of unchecked AI. The decisions we make today will determine the fate of future generations. Please follow these amazing women and support their work so we can make faster, meaningful progress toward a world with safe, beneficial AI that will help, not hurt, the future of humanity.

53. Kristian Lum @kldivergence


Primer to Inform Discussions about Bail Reform

The primer addresses what pretrial risk assessment is and what the research supports.

A Model to Estimate SARS-CoV-2-Positive Americans

We’ve built a model for estimating the true number of positives, using what we have determined to be the most reliable data: deaths.

Justice by the Numbers

Wilkerson was speaking at the inaugural Conference on Fairness, Accountability, and Transparency, a gathering of academics and policymakers working to make the algorithms that govern growing swaths of our lives more just. The woman who’d invited him there was Kristian Lum, the 34-year-old lead statistician at the Human Rights Data Analysis Group, a San Francisco-based non-profit that has spent more than two decades applying advanced statistical models to expose human rights violations around the world. For the past three years, Lum has deployed those methods to tackle an issue closer to home: the growing use of machine learning tools in America’s criminal justice system.


Report on Measures of Fairness in NYC Risk Assessment Tool

The report tries to answer the question of whether a particular risk assessment model reinforces racial inequalities in the criminal justice system.

Trump’s “extreme-vetting” software will discriminate against immigrants “under a veneer of objectivity,” say experts

Kristian Lum, lead statistician at the Human Rights Data Analysis Group (and letter signatory), fears that “in order to flag even a small proportion of future terrorists, this tool will likely flag a huge number of people who would never go on to be terrorists,” and that “these ‘false positives’ will be real people who would never have gone on to commit criminal acts but will suffer the consequences of being flagged just the same.”


Our Story

Dec 10, 1991 HRDAG is born when Patrick Ball begins database design at the Human Rights Office of the Salvadoran Lutheran Church. The work soon moves to the non-governmental Human Rights Commission (CDHES). The database analysis identified the 100 worst officers in the Salvadoran military — who were forced to resign as part of the peace process. 1994 Patrick publishes A Definit...

Featured Video

Kristian Lum, lead statistician at HRDAG | Predictive Policing: Bias In, Bias Out | 56 mins

Our work has been used by truth commissions, international criminal tribunals, and non-governmental human rights organizations. We have worked with partners on projects on five continents.
