Limitations of mitigating judicial bias with machine learning
Kristian Lum (2017). Limitations of mitigating judicial bias with machine learning. Nature Human Behaviour, 26 June 2017. DOI: 10.1038/s41562-017-0141. © 2017 Macmillan Publishers Limited, part of Springer Nature. All rights reserved.
Rise of the racist robots – how AI is learning all our worst impulses
“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
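The feedback loop Lum describes is easy to reproduce in simulation. Here is a minimal sketch in Python, emphatically not PredPol's actual algorithm: two districts with identical true crime rates, where patrols go to whichever district has more recorded crime, and crime is only recorded where patrols go. All names and numbers are invented for illustration.

```python
import random

# Toy feedback loop (not PredPol's actual algorithm): two districts with
# the same true crime rate, but District "A" starts with more recorded
# incidents because it was historically policed more heavily.
TRUE_CRIME_RATE = 0.5            # identical in both districts
recorded = {"A": 20, "B": 10}    # biased historical crime reports

random.seed(0)
for day in range(1000):
    # Patrol the district with more *recorded* crime.
    patrolled = max(recorded, key=recorded.get)
    # Crime is only observed, and therefore recorded, where patrols go.
    if random.random() < TRUE_CRIME_RATE:
        recorded[patrolled] += 1

# District A's early lead compounds while District B's crime goes
# uncounted, so the model "confirms" that A is the hotspot.
print(recorded)
```

Because the district with more historical reports always draws the patrol, and patrols are the only source of new reports, the initial disparity grows without bound: the data end up reflecting policing, not crime.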
Colombia
Ouster of Guatemala’s Attorney General
Using Data and Statistics to Bring Down Dictators
In this story, Guerrini discusses the impact of HRDAG’s work in Guatemala, especially the trials of General José Efraín Ríos Montt and Colonel Héctor Bol de la Cruz, as well as work in El Salvador, Syria, Kosovo, and Timor-Leste. Multiple systems estimation and the perils of using raw data to draw conclusions are also addressed; a sketch of the simplest case appears after the quotes below.
Megan Price and Patrick Ball are quoted, particularly on how to use raw data.
“From our perspective,” Price says, “the solution to that is both to stay very close to the data, to be very conservative in your interpretation of it and to be very clear about where the data came from, how it was collected, what its limitations might be, and to a certain extent to be skeptical about it, to ask yourself questions like, ‘What is missing from this data?’ and ‘How might that missing information change these conclusions that I’m trying to draw?’”
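Multiple systems estimation generalizes capture-recapture methods to several overlapping lists of victims, estimating how many people appear on no list at all. A minimal illustration of the idea, using the classic two-list Lincoln-Petersen estimator with invented counts (HRDAG's actual analyses use more than two lists and more sophisticated models):

```python
# Two-list capture-recapture (the Lincoln-Petersen estimator), the
# simplest case of multiple systems estimation. All counts are invented.
def lincoln_petersen(n1: int, n2: int, m: int) -> float:
    """Estimate a total population from two overlapping lists.

    n1: victims documented by the first source
    n2: victims documented by the second source
    m:  victims appearing on both lists
    """
    if m == 0:
        raise ValueError("the lists must overlap for the estimate to exist")
    return n1 * n2 / m

# If one NGO documents 600 deaths, another documents 400, and 150 names
# appear on both lists, the estimate is 600 * 400 / 150 = 1600 deaths,
# nearly double the 850 unique names visible in the raw data.
print(lincoln_petersen(600, 400, 150))  # 1600.0
```

This is why raw counts alone can mislead: the overlap between sources carries information about how much violence was never documented by anyone.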
HRDAG Welcomes Two New Scholars
New Report Raises Questions Over CPD’s Approach to Missing Persons Cases
In this video, Trina Reynolds-Tyler of Invisible Institute talks about her work with HRDAG on the missing persons project in Chicago and Beneath the Surface.
Human Rights Violations: How Do We Begin Counting the Dead?
Gaza: Why is it so hard to establish the death toll?
HRDAG director of research Patrick Ball is quoted in this Nature article about how body counts are a crude measure of the war’s impact and why more reliable estimates will take time to compile.
Investigating Boston Police Department SWAT Raids from 2012 to 2020
HRDAG collaborated with the Data for Justice Project on a tool that allows members of the public to visualize and analyze nearly a decade of Boston Police Department SWAT team after-action reports. Tarak Shah of HRDAG is named in the acknowledgments.
“El reto de la estadística es encontrar lo escondido”: experto en manejo de datos sobre el conflicto
In this interview with Colombian newspaper El Espectador, Patrick Ball is quoted as saying “la gente que no conoce de álgebra nunca debería hacer estadísticas” (people who don’t know algebra should never do statistics).
Weapons of Math Destruction
Weapons of Math Destruction: invisible, ubiquitous algorithms are ruining millions of lives. Excerpt:
As Patrick once explained to me, you can train an algorithm to predict someone’s height from their weight, but if your whole training set comes from a grade three class, and anyone who’s self-conscious about their weight is allowed to skip the exercise, your model will predict that most people are about four feet tall. The problem isn’t the algorithm, it’s the training data and the lack of correction when the model produces erroneous conclusions.
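Ball's anecdote can be sketched in a few lines of Python with invented numbers: fit a line on a sample containing only third-graders, minus anyone heavy enough to opt out, then ask it about adults.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented training set: third-graders only (about four feet tall), with
# anyone self-conscious about their weight allowed to skip the exercise,
# so the heavier tail of the sample is missing too.
weights = rng.normal(60, 5, 200)              # pounds
weights = weights[weights < 65]               # heavier kids opt out
heights = 48 + 0.1 * (weights - 60) + rng.normal(0, 1, weights.size)

slope, intercept = np.polyfit(weights, heights, 1)

# Within the training range the model predicts about four feet; asked
# about adult weights, it extrapolates a child-sized slope and badly
# underestimates adult heights.
for w in (60, 150, 200):                      # one child, two adult weights
    print(f"{w} lb -> {slope * w + intercept:.0f} in")
```

The fit looks fine on the children it saw; the failure only appears when the model meets the population the training set excluded, which is exactly the point about training data and missing correction.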
The Data Scientist Helping to Create Ethical Robots
Kristian Lum is focusing on artificial intelligence and the controversial use of predictive policing and sentencing programs.
What’s the relationship between statistics and AI and machine learning?
AI seems to be a sort of catchall for predictive modeling and computer modeling. There was this great tweet that said something like, “It’s AI when you’re trying to raise money, ML when you’re trying to hire developers, and statistics when you’re actually doing it.” I thought that was pretty accurate.