All the Dead We Cannot See
Ball, a statistician, has spent the last two decades finding ways to make the silence speak. He helped pioneer the use of formal statistical modeling, and, later, machine learning—tools more often used for e-commerce or digital marketing—to measure human rights violations that weren’t recorded. In Guatemala, his analysis helped convict former dictator General Efraín Ríos Montt of genocide in 2013. It was the first time a former head of state was found guilty of the crime in his own country.
There is a possibility that not all killings of social leaders are being documented
At times, discussions of this phenomenon focus more on what the true figure is, while the diagnosis remains the same: in the regions the violence does not let up, and no effective policies to end it are in sight. Against this complex backdrop, the Centro de Estudios de Derecho, Justicia y Sociedad (Dejusticia) and the Human Rights Data Analysis Group published on Wednesday the study Asesinatos de líderes sociales en Colombia en 2016–2017: una estimación del universo (Killings of social leaders in Colombia in 2016–2017: an estimate of the universe).
Hat-Tip from Guatemala Judges on HRDAG Evidence
Trips to and from Guatemala
South Africa
HRDAG Drops Dropbox
Sierra Leone TRC Data and Statistical Appendix
Data Mining for Good: Thoreau Center Lunch + Learn
Rise of the racist robots – how AI is learning all our worst impulses
“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
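The loop Lum describes is easy to see in a toy simulation. The sketch below is purely illustrative (the districts, rates, and counts are invented, and this is not PredPol or the model in Lum's paper): patrols are sent wherever past records are densest, and crime is only recorded where police are looking, so an initial disparity in records compounds even though the underlying crime rates are identical.

```python
import numpy as np

# Toy model of the predictive-policing feedback loop (illustrative only).
# Two hypothetical districts have the SAME true crime rate, but district
# 0 starts with more recorded incidents because it was historically
# patrolled more.
rng = np.random.default_rng(0)
true_rate = np.array([10.0, 10.0])   # identical underlying crime rates
recorded = np.array([55, 45])        # historical records are skewed

for day in range(30):
    hotspot = np.argmax(recorded)            # "predict" tomorrow's hotspot
    seen = rng.poisson(true_rate[hotspot])   # crime is only recorded where
    recorded[hotspot] += seen                # police are sent to look

print(recorded)  # district 0 has accumulated nearly all the new records
```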
Weapons of Math Destruction
Weapons of Math Destruction: invisible, ubiquitous algorithms are ruining millions of lives. Excerpt:
As Patrick once explained to me, you can train an algorithm to predict someone’s height from their weight, but if your whole training set comes from a grade-three class, and anyone who’s self-conscious about their weight is allowed to skip the exercise, your model will predict that most people are about four feet tall. The problem isn’t the algorithm; it’s the training data and the lack of correction when the model produces erroneous conclusions.
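A hypothetical version of that analogy takes only a few lines; every number below is invented. A regression fit on one grade-three class, with the heaviest children opting out, confidently extrapolates a child-sized height for a 170-pound adult.

```python
import numpy as np

# Hypothetical sketch of Ball's analogy (all numbers invented): train
# height-from-weight on one grade-three class, where the heaviest,
# most self-conscious kids are allowed to skip the measurement.
rng = np.random.default_rng(1)
weight = rng.normal(60, 8, 200)                        # pounds, 8-year-olds
height = 49 + 0.05 * weight + rng.normal(0, 2, 200)    # inches

keep = weight < 68                                     # heavier kids opt out
slope, intercept = np.polyfit(weight[keep], height[keep], 1)

# Ask about a 170 lb adult: the model has only ever seen children, so it
# predicts a child-sized height (well under five feet).
print(f"{intercept + slope * 170:.1f} inches")
```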
Can ‘predictive policing’ prevent crime before it happens?
HRDAG analyst William Isaac is quoted in this article about so-called crime prediction. “They’re not predicting the future. What they’re actually predicting is where the next recorded police observations are going to occur.”
Recognising Uncertainty in Statistics
In Responsible Data Reflection Story #7—from the Responsible Data Forum—work by HRDAG affiliates Anita Gohdes and Brian Root is cited extensively to make the point about how quantitative data are the result of numerous subjective human decisions. An excerpt: “The Human Rights Data Analysis Group are pioneering the way in collecting and analysing figures of killings in conflict in a responsible way, using multiple systems estimation.”
Trump’s “extreme-vetting” software will discriminate against immigrants “under a veneer of objectivity,” say experts
Kristian Lum, lead statistician at the Human Rights Data Analysis Group (and letter signatory), fears that “in order to flag even a small proportion of future terrorists, this tool will likely flag a huge number of people who would never go on to be terrorists,” and that “these ‘false positives’ will be real people who would never have gone on to commit criminal acts but will suffer the consequences of being flagged just the same.”
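The arithmetic behind that warning is the familiar base-rate problem. The figures in this sketch are hypothetical, but the shape of the result is robust: a rare outcome plus an imperfect screen yields flags that are overwhelmingly false positives.

```python
# Base-rate arithmetic behind Lum's warning (all numbers hypothetical).
# Even a very accurate screen, applied to a rare outcome, produces
# mostly false positives.
population  = 10_000_000   # people screened
base_rate   = 1e-5         # true future offenders: 100 in 10 million
sensitivity = 0.99         # flags 99% of true positives
false_pos   = 0.01         # wrongly flags 1% of everyone else

true_flags  = population * base_rate * sensitivity        # ~99 people
false_flags = population * (1 - base_rate) * false_pos    # ~100,000 people

# Roughly 1 in 1,000 flagged people is a true positive.
print(true_flags / (true_flags + false_flags))
```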
Calculating US police killings using methodologies from war-crimes trials
Cory Doctorow of Boing Boing writes about HRDAG director of research Patrick Ball’s article “Violence in Blue,” published March 4 in Granta. From the post: “In a must-read article in Granta, Ball explains the fundamentals of statistical estimation, and then applies these techniques to US police killings, merging data-sets from the police and the press to arrive at an estimate of the knowable US police homicides (about 1,250/year) and the true total (about 1,500/year). That means that of all the killings by strangers in the USA, one third are committed by the police.”
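The merging Ball describes is multiple systems estimation; in the simplest two-list case it reduces to the classical Lincoln-Petersen estimator. The counts below are invented for illustration and are not the actual police and press data, but they show how the overlap between two incomplete lists yields an estimate of the deaths neither list recorded.

```python
# Two-list capture-recapture (Lincoln-Petersen), the simplest case of
# multiple systems estimation. Counts are invented for illustration.
police_only = 400    # killings recorded only in police data
press_only  = 350    # killings recorded only by the press
both        = 500    # killings appearing on both lists

n_police  = police_only + both           # 900 on the police list
n_press   = press_only + both            # 850 on the press list
total_hat = n_police * n_press / both    # Lincoln-Petersen estimate: 1530

documented   = police_only + press_only + both   # 1250 knowable killings
undocumented = total_hat - documented            # ~280 seen by neither list
print(total_hat, undocumented)
```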
The ghost in the machine
“Every kind of classification system – human or machine – has several kinds of errors it might make,” [Patrick Ball] says. “To frame that in a machine learning context, what kind of error do we want the machine to make?” HRDAG’s work on predictive policing shows that “predictive policing” finds patterns in police records, not patterns in the occurrence of crime.
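Concretely, Ball's question is often a choice of decision threshold. In the hypothetical sketch below (the risk scores are invented), raising the threshold trades false positives for false negatives; nothing in the math decides which error matters more.

```python
import numpy as np

# Ball's question made concrete (scores invented): a classifier's
# threshold decides WHICH kind of error it makes more often.
rng = np.random.default_rng(2)
scores_neg = rng.normal(0.3, 0.15, 900)   # risk scores of true negatives
scores_pos = rng.normal(0.7, 0.15, 100)   # risk scores of true positives

for threshold in (0.4, 0.5, 0.6):
    fpr = (scores_neg >= threshold).mean()   # share of innocents flagged
    fnr = (scores_pos < threshold).mean()    # share of real cases missed
    print(f"t={threshold}: FPR={fpr:.2f}  FNR={fnr:.2f}")
```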