Quantifying Injustice
“In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. … Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To solve this problem, they turned to a technique used in statistics and machine learning called the synthetic population.”
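The idea behind a synthetic population is to build an independent estimate of where incidents actually occur (for example, from demographic surveys) and compare it against what police records capture. The sketch below is an illustration of that logic, not Lum and Isaac's actual method: it assumes two hypothetical districts with identical true rates, and made-up patrol probabilities that determine which incidents get recorded.

```python
import random

random.seed(0)

# Hypothetical setup: two districts with the SAME true incident rate,
# but district A is patrolled more heavily, so its incidents are more
# likely to enter the official record. The synthetic population gives
# us a ground truth to compare that biased record against.
TRUE_RATE = 0.10               # same underlying rate in both districts
PATROL = {"A": 0.8, "B": 0.4}  # probability an incident gets recorded
POP = 10_000                   # residents per district

synthetic, recorded = {}, {}
for district in ("A", "B"):
    incidents = sum(random.random() < TRUE_RATE for _ in range(POP))
    synthetic[district] = incidents                 # ground truth
    recorded[district] = sum(random.random() < PATROL[district]
                             for _ in range(incidents))  # biased record

print("synthetic:", synthetic)
print("recorded: ", recorded)
```

Although both districts have nearly equal synthetic counts, the recorded counts make district A look roughly twice as crime-prone, which is the kind of gap the synthetic-population comparison exposes.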
Celebrating Women in Statistics
In her work on statistical issues in criminal justice, Lum has studied uses of predictive policing—machine learning models that predict who will commit future crime or where it will occur. She has demonstrated that if the training data encodes historical patterns of racially disparate enforcement, predictions from software trained on this data will reinforce and—in some cases—amplify this bias. She also works on statistical issues related to criminal “risk assessment” models used to inform judicial decision-making. As part of this work, she has developed statistical methods for removing sensitive information from training data, guaranteeing “fair” predictions with respect to sensitive variables such as race and gender. Lum is active in the fairness, accountability, and transparency (FAT) community and serves on the steering committee of FAT, a conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
At Toronto’s Tamil Fest, human rights group seeks data on Sri Lanka’s civil war casualties
Earlier this year, the Canadian Tamil Congress connected with HRDAG to bring its campaign to Toronto’s annual Tamil Fest, one of the largest gatherings of Canada’s Sri Lankan diaspora.
Ravichandradeva, along with a few other volunteers, spent the weekend speaking with festival-goers in Scarborough about the project and encouraging them to come forward with information about deceased or missing loved ones and friends.
“The idea is to collect thorough, scientifically rigorous numbers on the total casualties in the war and present them as a non-partisan, independent organization,” said Michelle Dukich, a data consultant with HRDAG.
Sierra Leone TRC Data and Statistical Appendix
Data Mining for Good: CJA Drink + Think
Social Science Scholars Award for HRDAG Book
How much faith can we place in coronavirus antibody tests?
Syria 2012 – Modeling Multiple Datasets in an Ongoing Conflict
In Syria, Uncovering the Truth Behind a Number
The Limits of Observation for Understanding Mass Violence
Limitations of mitigating judicial bias with machine learning
Kristian Lum (2017). Limitations of mitigating judicial bias with machine learning. Nature Human Behaviour. 26 June 2017. DOI: 10.1038/s41562-017-0141. © 2017 Macmillan Publishers Limited, part of Springer Nature. All rights reserved.
Media Contact
Publications
Donate with Cryptocurrency
Rise of the racist robots – how AI is learning all our worst impulses
“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
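The feedback loop described here can be made concrete with a toy simulation. This is an illustrative sketch, not PredPol's algorithm: it assumes two hypothetical neighbourhoods with identical true crime rates, a slightly biased historical record, and a greedy policy that always patrols wherever the record shows the most crime.

```python
import random

random.seed(1)

TRUE_RATE = 0.5              # identical true crime rate in both places
counts = {"A": 12, "B": 8}   # slightly biased historical record

for day in range(365):
    # Greedy allocation: patrol where the record says crime is highest.
    target = max(counts, key=counts.get)
    # Crime is only observed (and recorded) where officers patrol.
    if random.random() < TRUE_RATE:
        counts[target] += 1

print(counts)
```

Because neighbourhood A starts with a few more recorded incidents, it receives every patrol, so only its crimes are recorded, which in turn justifies patrolling it again: the record for B never grows, even though its true rate is identical. This is the "stuck in a feedback loop" dynamic Lum and her co-author described.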
Tech Note – improving LLM-driven info extraction
Asia
Trove to IPFS
The killings of social movement leaders and human rights defenders in Colombia 2018 – 2023: an estimate of the universe.
Valentina Rozo Ángel and Patrick Ball. 2024. The killings of social movement leaders and human rights defenders in Colombia 2018 – 2023: an estimate of the universe. Human Rights Data Analysis Group. 18 December 2024. © HRDAG 2024. Creative Commons 4.0 International license.
Hunting for Mexico’s mass graves with machine learning
“The model uses obvious predictor variables, Ball says, such as whether or not a drug lab has been busted in that county, or if the county borders the United States, or the ocean, but also includes less-obvious predictor variables such as the percentage of the county that is mountainous, the presence of highways, and the academic results of primary and secondary school students in the county.”
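To make the role of these predictor variables concrete, the sketch below scores counties with a simple logistic function over features echoing the ones Ball names. The feature set, the weights, and the function itself are all illustrative assumptions; the article does not specify the model's form or coefficients.

```python
import math

def grave_risk(drug_lab_busted, borders_us, borders_ocean,
               pct_mountainous, has_highway, school_score):
    """Toy 0-1 risk score from county-level features.

    Weights are made up for illustration; they are NOT the
    coefficients of the actual HRDAG model.
    """
    z = (1.5 * drug_lab_busted      # a drug lab was busted (0/1)
         + 0.8 * borders_us         # county borders the United States
         + 0.4 * borders_ocean      # county borders the ocean
         + 1.2 * pct_mountainous    # fraction of county that is mountainous
         + 0.6 * has_highway        # major highway present
         - 0.9 * school_score)      # school results (sign assumed)
    return 1 / (1 + math.exp(-z))   # logistic squash to a probability-like score

county_a = grave_risk(1, 1, 0, 0.7, 1, 0.3)  # hypothetical border county
county_b = grave_risk(0, 0, 0, 0.1, 0, 0.8)  # hypothetical low-risk county
print(round(county_a, 2), round(county_b, 2))
```

The point of the sketch is only that heterogeneous county features—busts, borders, terrain, infrastructure, schooling—can be combined into a single comparable risk score that ranks counties for search prioritization.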