Documents of war: Understanding the Syrian Conflict
Megan Price, Anita Gohdes, and Patrick Ball. 2015. Significance 12, no. 2 (April): 14–19. doi: 10.1111/j.1740-9713.2015.00811.x. © 2015 The Royal Statistical Society. All rights reserved. [online abstract]
Using Math and Science to Count Killings in Syria
Setting the Record Straight on Predictive Policing and Race
William Isaac and Kristian Lum (2018). Setting the Record Straight on Predictive Policing and Race. In Justice Today. 3 January 2018. © 2018 In Justice Today / Medium.
verdata: An R package for analyzing data from the Truth Commission in Colombia
Maria Gargiulo, María Julia Durán, Paula Andrea Amado, and Patrick Ball (2024). verdata: An R package for analyzing data from the Truth Commission in Colombia. The Journal of Open Source Software 9(93), 5844. 6 January 2024. https://doi.org/10.21105/joss.05844. Creative Commons Attribution 4.0 International License.
Celebrating Women in Statistics
In her work on statistical issues in criminal justice, Lum has studied uses of predictive policing: machine learning models that predict who will commit future crime or where it will occur. She has demonstrated that if the training data encode historical patterns of racially disparate enforcement, predictions from software trained on those data will reinforce and, in some cases, amplify this bias. She also currently works on statistical issues related to criminal “risk assessment” models used to inform judicial decision-making. As part of this work, she has developed statistical methods for removing sensitive information from training data, guaranteeing “fair” predictions with respect to sensitive variables such as race and gender. Lum is active in the fairness, accountability, and transparency (FAT) community and serves on the steering committee of FAT, a conference that brings together researchers and practitioners interested in fairness, accountability, and transparency in socio-technical systems.
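One simple way to make the idea of “removing sensitive information” concrete is to drop the protected column and residualize the remaining features against it, so that proxies correlated with it carry less of its signal. The sketch below illustrates only that generic idea; it is not Lum's published method, and the column names (race, prior_arrests, neighborhood_stops, rearrested) are hypothetical.

```python
# Minimal sketch (not the specific method referenced above): reduce the
# influence of a sensitive variable by dropping it and residualizing the
# remaining features against it, so correlated proxies carry less of its signal.
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

def residualize_features(df: pd.DataFrame, sensitive: str, features: list[str]) -> pd.DataFrame:
    """Replace each feature with its residual after regressing on the sensitive variable."""
    out = df.copy()
    s = pd.get_dummies(df[sensitive], drop_first=True).to_numpy(dtype=float)
    for col in features:
        reg = LinearRegression().fit(s, df[col].to_numpy(dtype=float))
        out[col] = df[col].to_numpy(dtype=float) - reg.predict(s)
    return out.drop(columns=[sensitive])

# Hypothetical usage:
# train = pd.read_csv("training_data.csv")
# adjusted = residualize_features(train, sensitive="race",
#                                 features=["prior_arrests", "neighborhood_stops"])
# model = LogisticRegression().fit(adjusted, train["rearrested"])
```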
Quantifying Injustice
“In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. … Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To get around this, they turned to a technique used in statistics and machine learning called the synthetic population.”
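A toy version of that comparison, with invented numbers, might look like the sketch below: estimate where offences are likely to occur from population counts and survey-based use rates (the “synthetic population”), independently of police records, and compare that distribution with the distribution of recorded arrests. The tract names and all figures here are made up for illustration; the actual study combined national survey use rates with census demographics.

```python
# Toy sketch of the synthetic-population idea: compare where offences likely
# occur (estimated without police data) to where arrests are recorded.
import pandas as pd

# Hypothetical census tracts with population counts and survey-based use rates.
tracts = pd.DataFrame({
    "tract": ["A", "B", "C", "D"],
    "population": [12000, 9000, 15000, 7000],
    "survey_use_rate": [0.09, 0.10, 0.08, 0.11],   # roughly flat across tracts
    "arrests_last_year": [30, 260, 45, 310],       # heavily concentrated by enforcement
})

# Synthetic population: expected number of users per tract, independent of arrests.
tracts["expected_users"] = tracts["population"] * tracts["survey_use_rate"]

# Compare the two distributions: share of likely offences vs. share of recorded arrests.
tracts["share_expected"] = tracts["expected_users"] / tracts["expected_users"].sum()
tracts["share_arrests"] = tracts["arrests_last_year"] / tracts["arrests_last_year"].sum()

print(tracts[["tract", "share_expected", "share_arrests"]])
# A large gap between the two columns is the kind of signal Lum and Isaac
# looked for: arrests concentrated in a few places even though estimated
# offending is spread fairly evenly.
```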
Here’s how an AI tool may flag parents with disabilities
HRDAG contributed to work by the ACLU showing that a predictive tool used to guide responses to alleged child neglect may forever flag parents with disabilities. “These predictors have the effect of casting permanent suspicion and offer no means of recourse for families marked by these indicators,” according to the analysis from researchers at the ACLU and the nonprofit Human Rights Data Analysis Group. “They are forever seen as riskier to their children.”
At Toronto’s Tamil Fest, human rights group seeks data on Sri Lanka’s civil war casualties
Earlier this year, the Canadian Tamil Congress connected with HRDAG to bring its campaign to Toronto’s annual Tamil Fest, one of the largest gatherings of Canada’s Sri Lankan diaspora.
Ravichandradeva, along with a few other volunteers, spent the weekend speaking with festival-goers in Scarborough about the project and encouraging them to come forward with information about deceased or missing loved ones and friends.
“The idea is to collect thorough, scientifically rigorous numbers on the total casualties in the war and present them as a non-partisan, independent organization,” said Michelle Dukich, a data consultant with HRDAG.
Social Science Scholars Award for HRDAG Book
Syria 2012 – Modeling Multiple Datasets in an Ongoing Conflict
Talks & Discussions
In Syria, Uncovering the Truth Behind a Number
Habré trial, live: mortality rates in detention centers on the agenda
Statistician Patrick Ball took the stand on Friday morning. The expert testified about the mortality rate in detention centers in Chad under Habré. Appointed by the indicting chamber, he said he had based his work on testimonies, data from victims, and documents from the DDS (Direction de la Documentation et de la Sécurité).
Limitations of mitigating judicial bias with machine learning
Kristian Lum (2017). Limitations of mitigating judicial bias with machine learning. Nature Human Behaviour. 26 June 2017. DOI: 10.1038/s41562-017-0141. © 2017 Macmillan Publishers Limited, part of Springer Nature. All rights reserved.
Publications
Media Contact
Donate with Cryptocurrency
CIIDH Data – Variables List
Rise of the racist robots – how AI is learning all our worst impulses
“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
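The feedback loop Lum and Isaac describe can be illustrated with a toy simulation; this is not PredPol's algorithm, and the numbers are invented. Two areas have identical true crime rates, but one starts with more recorded incidents; patrols go to the area with the most recorded crime, and crime is only recorded where police are sent.

```python
# Toy simulation of a predictive-policing feedback loop (not PredPol itself).
import numpy as np

rng = np.random.default_rng(42)
true_rate = np.array([10.0, 10.0])   # identical underlying daily crime rates
recorded = np.array([60.0, 40.0])    # historical records skewed toward area 0

for day in range(365):
    target = np.argmax(recorded)          # patrol the predicted "hotspot"
    occurred = rng.poisson(true_rate)     # crime happens at the same rate everywhere
    recorded[target] += occurred[target]  # but is only recorded where police are sent

share = recorded / recorded.sum()
print("share of recorded crime after one year:", np.round(share, 3))
# Area 0 started with slightly more records, so it is patrolled every day,
# accumulates nearly all new records, and the predictions never get the
# chance to correct themselves.
```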