Kristian Lum in Bloomberg
Reflections: A Meaningful Partnership between HRDAG and Benetech
Reflections: It Began In Bogotá
Reflections: HRDAG Was Born in Washington
HRDAG Names New Board Member Margot Gerritsen
Nonprofits Are Taking a Wide-Eyed Look at What Data Could Do
A Definition of Database Design Standards for Human Rights Agencies.
Patrick Ball. “A Definition of Database Design Standards for Human Rights Agencies.” © 1994 American Association for the Advancement of Science. [pdf]
Quantifying Police Misconduct in Louisiana
Death rate in Habré jails higher than for Japanese POWs, trial told
Patrick Ball of the California-based Human Rights Data Analysis Group said he had calculated the mortality rate of political prisoners from 1985 to 1988 using reports completed by Habré’s feared secret police.
Rise of the racist robots – how AI is learning all our worst impulses
“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
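A minimal sketch of the feedback loop Lum describes, assuming a toy two-neighborhood setting (the counts and rates below are invented for illustration; this is not PredPol's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two neighborhoods with the SAME underlying crime rate, but
# neighborhood A starts with more historical reports on record.
true_rate = 5.0                 # expected true incidents per day, each
reports = np.array([60, 40])    # biased historical record: [A, B]

for day in range(200):
    # Patrol wherever past data "predicts" more crime.
    target = np.argmax(reports)
    # Crime is only *recorded* where police are present, so only the
    # patrolled neighborhood adds to its record.
    reports[target] += rng.poisson(true_rate)

print(reports)  # A's head start compounds; B's record never grows
```

Because the model trains on where crime was recorded rather than where it occurred, the initial disparity becomes self-confirming.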
Welcoming our new Technical Lead
Hunting for Mexico’s mass graves with machine learning
“The model uses obvious predictor variables, Ball says, such as whether or not a drug lab has been busted in that county, or if the county borders the United States, or the ocean, but also includes less-obvious predictor variables such as the percentage of the county that is mountainous, the presence of highways, and the academic results of primary and secondary school students in the county.”
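A hedged sketch of how a county-level classifier with the quoted predictors might be wired up, assuming a random forest and a hypothetical counties.csv; the file name, column names, and hyperparameters are invented for illustration and are not HRDAG's actual pipeline:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical county-level table; columns mirror the predictors
# quoted above, with made-up names.
counties = pd.read_csv("counties.csv")
features = [
    "drug_lab_busted",      # binary: a drug lab was raided in the county
    "borders_us",           # binary: shares a border with the United States
    "borders_ocean",        # binary: coastal county
    "pct_mountainous",      # share of county area that is mountainous
    "has_highway",          # binary: major highway presence
    "school_test_score",    # mean primary/secondary exam results
]
X, y = counties[features], counties["mass_grave_found"]

model = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```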
A better statistical estimation of known Syrian war victims
Researchers from Rice University and Duke University are using the tools of statistics and data science in collaboration with Human Rights Data Analysis Group (HRDAG) to accurately and efficiently estimate the number of identified victims killed in the Syrian civil war.
…
Using records from four databases of people killed in the Syrian war, Chen, Duke statistician and machine learning expert Rebecca Steorts and Rice computer scientist Anshumali Shrivastava estimated there were 191,874 unique individuals documented from March 2011 to April 2014. That’s very close to the estimate of 191,369 compiled in 2014 by HRDAG, a nonprofit that helps build scientifically defensible, evidence-based arguments of human rights violations.
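The technical core is estimating unique individuals across overlapping lists. The published approach uses locality-sensitive hashing to avoid comparing every pair of records; the toy sketch below keeps only the counting idea, with brute-force pairwise matching, a crude similarity rule, and invented records:

```python
from itertools import combinations
from difflib import SequenceMatcher

# Toy records pooled from several lists; real inputs would be the four
# documentation groups' databases (these names are invented).
records = [
    ("Ahmad Khalil", "2012-03-01", "Homs"),
    ("Ahmed Khalil", "2012-03-01", "Homs"),    # likely the same person
    ("Samir Haddad", "2013-07-15", "Aleppo"),
]

def similar(a, b, threshold=0.85):
    """Crude match: fuzzy name plus exact date/place agreement."""
    return (SequenceMatcher(None, a[0], b[0]).ratio() >= threshold
            and a[1] == b[1] and a[2] == b[2])

# Union-find over matching pairs; each component ~ one unique victim.
parent = list(range(len(records)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for i, j in combinations(range(len(records)), 2):
    if similar(records[i], records[j]):
        parent[find(i)] = find(j)

unique = len({find(i) for i in range(len(records))})
print(unique)  # 2 unique individuals among 3 records
```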
Weapons of Math Destruction
Weapons of Math Destruction: invisible, ubiquitous algorithms are ruining millions of lives. Excerpt:
As Patrick once explained to me, you can train an algorithm to predict someone’s height from their weight, but if your whole training set comes from a grade-three class, and anyone who’s self-conscious about their weight is allowed to skip the exercise, your model will predict that most people are about four feet tall. The problem isn’t the algorithm; it’s the training data and the lack of correction when the model produces erroneous conclusions.
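A small sketch of that height-from-weight example, assuming made-up numbers and a nearest-neighbour model so predictions cannot escape the children-only training range:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Training data drawn only from ~8-year-olds, mimicking the
# grade-three class in the excerpt (all numbers are invented).
weight_lb = rng.normal(55, 6, 200).reshape(-1, 1)
height_in = 0.4 * weight_lb.ravel() + 26 + rng.normal(0, 1.5, 200)

model = KNeighborsRegressor(n_neighbors=5).fit(weight_lb, height_in)

# Ask about an average adult: the nearest "neighbours" are all
# children, so the prediction stays near four feet (~48-55 inches).
print(model.predict([[170]]))
```

The failure is invisible inside the training set: the model fits the class well and only embarrasses itself on the adults it never saw.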
The Data Scientist Helping to Create Ethical Robots
Kristian Lum is focusing on artificial intelligence and the controversial use of predictive policing and sentencing programs.
What’s the relationship between statistics, AI, and machine learning?
AI seems to be a sort of catchall for predictive modeling and computer modeling. There was this great tweet that said something like, “It’s AI when you’re trying to raise money, ML when you’re trying to hire developers, and statistics when you’re actually doing it.” I thought that was pretty accurate.