‘Bias deep inside the code’: the problem with AI ‘ethics’ in Silicon Valley
Kristian Lum, the lead statistician at the Human Rights Data Analysis Group, and an expert on algorithmic bias, said she hoped Stanford’s stumble made the institution think more deeply about representation.
“This type of oversight makes me worried that their stated commitment to the other important values and goals – like taking seriously creating AI to serve the ‘collective needs of humanity’ – is also empty PR spin and this will be nothing more than a vanity project for those attached to it,” she wrote in an email.
Calculating US police killings using methodologies from war-crimes trials
Cory Doctorow of Boing Boing writes about HRDAG director of research Patrick Ball’s article “Violence in Blue,” published March 4 in Granta. From the post: “In a must-read article in Granta, Ball explains the fundamentals of statistical estimation, and then applies these techniques to US police killings, merging data-sets from the police and the press to arrive at an estimate of the knowable US police homicides (about 1,250/year) and the true total (about 1,500/year). That means that of all the killings by strangers in the USA, one third are committed by the police.”
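The estimate rests on comparing overlapping lists of the same events. As a minimal sketch of the simplest form of that idea, the snippet below applies a two-list (Lincoln-Petersen) capture-recapture estimate to invented counts; it is not HRDAG's actual model, which uses more elaborate multiple-systems estimation, and none of the numbers are real data.

```python
# Two-list capture-recapture (Lincoln-Petersen) estimation: combine overlapping
# records from two independent sources to estimate how many events neither saw.
# All counts are invented for illustration; they are not HRDAG's data.

def lincoln_petersen(n_police: int, n_press: int, n_both: int) -> float:
    """Estimate the total number of events from two overlapping lists.

    n_police: events found in police records
    n_press:  events found in press reports
    n_both:   events matched to both lists
    """
    if n_both == 0:
        raise ValueError("the lists must overlap for the estimate to exist")
    return n_police * n_press / n_both

# Hypothetical counts: 1,000 killings in police records, 1,200 in press
# reports, 800 matched to both lists.
estimated_total = lincoln_petersen(1000, 1200, 800)
observed = 1000 + 1200 - 800  # distinct killings seen by at least one source
print(f"observed: {observed}, estimated total: {estimated_total:.0f}")
```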
Megan Price: Life-Long ‘Math Nerd’ Finds Career in Social Justice
“I was always a math nerd. My mother has a polaroid of me in the fourth grade with my science fair project … . It was the history of mathematics. In college, I was a math major for a year and then switched to statistics.
I always wanted to work in social justice. I was raised by hippies, went to protests when I was young. I always felt I had an obligation to make the world a little bit better.”
The ghost in the machine
“Every kind of classification system – human or machine – has several kinds of errors it might make,” [Patrick Ball] says. “To frame that in a machine learning context, what kind of error do we want the machine to make?” HRDAG’s work on predictive policing shows that “predictive policing” finds patterns in police records, not patterns in occurrence of crime.
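A toy simulation, with invented numbers rather than anything from HRDAG's analysis, makes the point concrete: if crime occurs evenly across two neighborhoods but patrols concentrate in one, the recorded incidents, which are all a predictive model ever sees, concentrate there too.

```python
# Crime is generated evenly in neighborhoods A and B, but only crimes that
# police happen to observe enter the records. The recorded shares then track
# patrol presence, not crime. All numbers are invented for illustration.
import random

random.seed(0)

TRUE_CRIME_SHARE = {"A": 0.5, "B": 0.5}  # where crime actually happens
PATROL_SHARE = {"A": 0.8, "B": 0.2}      # where police are looking

def simulate_records(n_crimes: int) -> dict:
    """Count recorded incidents: a crime enters the data only if observed."""
    records = {"A": 0, "B": 0}
    for _ in range(n_crimes):
        place = random.choices(["A", "B"], weights=[0.5, 0.5])[0]
        if random.random() < PATROL_SHARE[place]:  # recorded only if patrolled
            records[place] += 1
    return records

records = simulate_records(10_000)
total = sum(records.values())
for place in ("A", "B"):
    print(f"{place}: recorded share {records[place] / total:.2f}, "
          f"true crime share {TRUE_CRIME_SHARE[place]:.2f}")
# A model trained on these records would direct still more patrols to A,
# and the next round of data would look even more skewed.
```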
Death March
A mapped representation of the scale and spread of killings in Syria. HRDAG’s director of research, Megan Price, is quoted.
Improving the estimate of U.S. police killings
Cory Doctorow of Boing Boing writes about HRDAG executive director Patrick Ball and his contribution to Carl Bialik’s article about the recently released Bureau of Justice Statistics report on the number of annual police killings, both reported and unreported, in 538 Politics.
Can ‘predictive policing’ prevent crime before it happens?
HRDAG analyst William Isaac is quoted in this article about so-called crime prediction. “They’re not predicting the future. What they’re actually predicting is where the next recorded police observations are going to occur.”
A Human Rights Statistician Finds Truth In Numbers
The tension started in the witness room. “You could feel the stress rolling off the walls in there,” Patrick Ball remembers. “I can remember realizing that this is why lawyers wear sport coats – you can’t see all the sweat on their arms and back.” He was, you could say, a little nervous to be cross-examined by Slobodan Milosevic.
Weapons of Math Destruction
Weapons of Math Destruction: invisible, ubiquitous algorithms are ruining millions of lives. Excerpt:
As Patrick once explained to me, you can train an algorithm to predict someone’s height from their weight, but if your whole training set comes from a grade three class, and anyone who’s self-conscious about their weight is allowed to skip the exercise, your model will predict that most people are about four feet tall. The problem isn’t the algorithm, it’s the training data and the lack of correction when the model produces erroneous conclusions.
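A rough sketch of that passage, using invented data rather than anything from the book or the post: fit a height-from-weight line on a biased training set of third-graders, with heavier children opting out, and then ask the model about an adult.

```python
# Fit a simple least-squares line on a biased sample and extrapolate.
# All data below are fabricated for illustration.
import random

random.seed(1)

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Training set: third-graders weighing 45-70 lbs, all roughly four feet tall.
# Children self-conscious about their weight (here, over 65 lbs) skip the exercise.
weights = [random.uniform(45, 70) for _ in range(200)]
train = [(w, 46 + 0.05 * w + random.gauss(0, 1)) for w in weights if w <= 65]

a, b = fit_line([w for w, _ in train], [h for _, h in train])

print(f"prediction at 50 lbs:  {a + b * 50:.0f} inches")   # roughly four feet
print(f"prediction at 160 lbs: {a + b * 160:.0f} inches")  # still well short of adult height
```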