Tech Note – using LLMs for structured info extraction
Quantifying Injustice
“In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. … Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To solve this problem, they turned to a technique used in statistics and machine learning called the synthetic population.”
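The synthetic-population idea can be made concrete with a small simulation. The sketch below is a hypothetical illustration, not Lum and Isaac's actual analysis: it builds a synthetic ground truth of drug offenses from survey-style rates that are uniform across neighborhoods, then shows how records produced under uneven policing misrepresent that truth. All neighborhood counts and rates are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neighborhoods; every number below is invented.
population     = np.array([10_000, 12_000, 8_000])   # residents
true_use_rate  = np.array([0.10, 0.10, 0.10])        # survey-based: ~uniform
detection_prob = np.array([0.05, 0.25, 0.05])        # heavier policing in #2

# Synthetic ground truth drawn from the survey-based rates.
true_offenses = rng.binomial(population, true_use_rate)

# What police records would capture under uneven enforcement.
recorded = rng.binomial(true_offenses, detection_prob)

print("true share per neighborhood:    ", true_offenses / true_offenses.sum())
print("recorded share per neighborhood:", recorded / recorded.sum())
# The recorded shares over-represent the heavily policed neighborhood even
# though true rates are identical -- the gap a synthetic population exposes.
```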
The John Maddox Prize for Patrick Ball
Outreach at Toronto TamilFest for Counting the Dead
Counting the Dead in Sri Lanka
Mexico
Reflections: Minding the Gap
Our Thoughts on #metoo
FAT* Conference 2018
Kristian Lum in Bloomberg
Videos
.Rproj Considered Harmful
Mortality in the DDS Prisons in Chad, 1985–1988
Patrick Ball (2014). Human Rights Data Analysis Group. August 22, 2014. © 2014 HRDAG. Creative Commons BY-NC-SA.
Hunting for Mexico’s mass graves with machine learning
“The model uses obvious predictor variables, Ball says, such as whether or not a drug lab has been busted in that county, or if the county borders the United States, or the ocean, but also includes less-obvious predictor variables such as the percentage of the county that is mountainous, the presence of highways, and the academic results of primary and secondary school students in the county.”
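As a rough illustration of how a county-level classifier built on these predictors might be assembled, here is a minimal scikit-learn sketch. It is not HRDAG's actual model: the random-forest choice, the column names, and all of the data are assumptions made for the example.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500  # hypothetical counties

# Synthetic stand-in data; columns mirror the predictors named in the quote.
X = pd.DataFrame({
    "drug_lab_busted":   rng.integers(0, 2, n),
    "borders_us":        rng.integers(0, 2, n),
    "borders_ocean":     rng.integers(0, 2, n),
    "pct_mountainous":   rng.uniform(0, 100, n),
    "has_highway":       rng.integers(0, 2, n),
    "school_test_score": rng.normal(500, 100, n),
})
# Fake labels: 1 if a mass grave was found in the county.
y = rng.integers(0, 2, n)

model = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
# On real data one would inspect feature importances and the predicted
# probabilities for counties that have not yet been searched.
```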
Reflections: The People Who Make the Data
Film: Solving for X
HRDAG Names New Board Member William Isaac
100 Women in AI Ethics
We live in very challenging times. The pervasiveness of bias in AI algorithms and the autonomous “killer” robots looming on the horizon necessitate an open discussion and immediate action to address the perils of unchecked AI. The decisions we make today will determine the fate of future generations. Please follow these amazing women and support their work so we can make faster, more meaningful progress toward a world with safe, beneficial AI that helps rather than hurts the future of humanity.
53. Kristian Lum @kldivergence
Rise of the racist robots – how AI is learning all our worst impulses
“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
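The feedback loop Lum describes can be reproduced with a toy simulation. The sketch below is not PredPol's algorithm; it only illustrates the mechanism: patrols are allocated toward the area with the most recorded crime, crime is recorded mainly where patrols are, and a one-report head start compounds. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

true_rate = np.array([0.5, 0.5])  # identical true crime rates (assumed)
counts = np.array([11.0, 10.0])   # historical records: one-report head start

for day in range(300):
    # Send most patrols to whichever area has more recorded crime so far.
    hot = np.argmax(counts)
    patrol_share = np.where(np.arange(2) == hot, 0.8, 0.2)
    # Crime is recorded mainly where police are present to observe it.
    counts += rng.poisson(true_rate * patrol_share * 10)

print(counts / counts.sum())
# The area that started with one extra record attracts most patrols,
# generates most new records, and stays "hot" despite equal true rates.
```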
It is possible that not all killings of social leaders are being documented
At times, discussions of this phenomenon focus more on what the true number is, while the diagnosis remains the same: in the regions the violence is not letting up, and no effective policies to end it are in sight. Against this complex backdrop, the Centro de Estudios de Derecho, Justicia y Sociedad (Dejusticia) and the Human Rights Data Analysis Group published this Wednesday the study Asesinatos de líderes sociales en Colombia en 2016–2017: una estimación del universo (Killings of social leaders in Colombia in 2016–2017: an estimate of the universe).
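Estimating the “universe” of killings from incomplete, overlapping lists is the kind of problem that capture-recapture methods (multiple systems estimation) address. As a minimal illustration only, with invented counts rather than anything from the Dejusticia/HRDAG study, the two-list Lincoln-Petersen estimator works like this:

```python
# Two-list capture-recapture (Lincoln-Petersen) sketch. All counts are
# invented; real estimates of this kind use more lists and richer models.

n1 = 120   # killings documented by source 1
n2 = 95    # killings documented by source 2
m  = 40    # killings appearing on both lists (matched records)

# Estimated total universe of killings: N_hat = n1 * n2 / m
n_hat = n1 * n2 / m
unique_documented = n1 + n2 - m
print(f"documented: {unique_documented}, estimated total: {n_hat:.0f}")
# ~285 estimated vs. 175 documented: the gap is the undocumented killings.
```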