The UDHR Turns 70

We're thinking about how rigorous analysis can fortify debates about components of our criminal justice system, such as cash bail and pretrial risk assessment, and about fairness more generally.

Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment

Laurel Eckhouse, Kristian Lum, Cynthia Conti-Cook and Julie Ciccolini (2018). Layers of Bias: A Unified Approach for Understanding Problems With Risk Assessment. Criminal Justice and Behavior. November 23, 2018. © 2018 Sage Journals. All rights reserved. https://doi.org/10.1177/0093854818811379


Low-risk population size estimates in the presence of capture heterogeneity

James Johndrow, Kristian Lum and Daniel Manrique-Vallier (2019). Low-risk population size estimates in the presence of capture heterogeneity. Biometrika, asy065, 22 January 2019. © 2019 Biometrika Trust. https://doi.org/10.1093/biomet/asy065


Data-driven crime prediction fails to erase human bias

Work by HRDAG researchers Kristian Lum and William Isaac is cited in this article about the Policing Project: “While this bias knows no color or socioeconomic class, Lum and her HRDAG colleague William Isaac demonstrate that it can lead to policing that unfairly targets minorities and those living in poorer neighborhoods.”


Rise of the racist robots – how AI is learning all our worst impulses

“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods. The program was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.
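The feedback loop Lum describes can be illustrated with a toy simulation. The sketch below is a hypothetical illustration of the dynamic, not PredPol's actual model: patrols are allocated in proportion to recorded crime, and new crime is recorded roughly in proportion to police presence, so a historical disparity in the records perpetuates itself even when the underlying crime rates are identical.

```python
import numpy as np

# Toy illustration of the predictive-policing feedback loop described
# above -- a hypothetical sketch, not PredPol's actual algorithm.
rng = np.random.default_rng(0)

true_crime_rate = np.array([10.0, 10.0])  # identical underlying rates
recorded = np.array([60.0, 40.0])         # biased historical records

for day in range(50):
    # Dispatch patrols in proportion to each area's share of recorded crime.
    patrol_share = recorded / recorded.sum()
    # Crime is observed and recorded roughly in proportion to police presence.
    new_records = rng.poisson(true_crime_rate * patrol_share)
    recorded += new_records

print(recorded / recorded.sum())
# The initial 60/40 disparity persists rather than washing out, even
# though the two areas' true crime rates are identical.
```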


Applications of Multiple Systems Estimation in Human Rights Research

Lum, Kristian, Megan Emily Price, and David Banks. 2013. The American Statistician 67, no. 4: 191-200. doi: 10.1080/00031305.2013.821093. © 2013 The American Statistician. All rights reserved. [free eprint may be available].


What happens when you look at crime by the numbers

Kristian Lum’s work on the HRDAG Policing Project is referred to here: “In fact, Lum argues, it’s not clear how well this model worked at depicting the situation in Oakland. Those data on drug crimes were biased, she now reports. The problem was not deliberate, she says. Rather, data collectors just missed some criminals and crime sites. So data on them never made it into her model.”


Measuring Elusive Populations with Bayesian Model Averaging for Multiple Systems Estimation: A Case Study on Lethal Violations in Casanare, 1998-2007

Kristian Lum, Megan Price, Tamy Guberek, and Patrick Ball. “Measuring Elusive Populations with Bayesian Model Averaging for Multiple Systems Estimation: A Case Study on Lethal Violations in Casanare, 1998-2007,” Statistics, Politics, and Policy. 1(1) 2010. All rights reserved.


The Data Scientist Helping to Create Ethical Robots

Kristian Lum is focusing on artificial intelligence and the controversial use of predictive policing and sentencing programs.

What’s the relationship between statistics and AI and machine learning?

AI seems to be a sort of catchall for predictive modeling and computer modeling. There was this great tweet that said something like, “It’s AI when you’re trying to raise money, ML when you’re trying to hire developers, and statistics when you’re actually doing it.” I thought that was pretty accurate.


What HBR Gets Wrong About Algorithms and Bias

“Kristian Lum… organized a workshop together with Elizabeth Bender, a staff attorney for the NY Legal Aid Society and former public defender, and Terrence Wilkerson, an innocent man who had been arrested and could not afford bail. Together, they shared first hand experience about the obstacles and inefficiencies that occur in the legal system, providing valuable context to the debate around COMPAS.”


Courts and police departments are turning to AI to reduce bias, but some argue it’ll make the problem worse

Kristian Lum: “The historical over-policing of minority communities has led to a disproportionate number of crimes being recorded by the police in those locations. Historical over-policing is then passed through the algorithm to justify the over-policing of those communities.”


Counting Civilian Casualties: An Introduction to Recording and Estimating Nonmilitary Deaths in Conflict

ed. by Taylor B. Seybolt, Jay D. Aronson, and Baruch Fischhoff. Oxford University Press. © 2013 Oxford University Press. All rights reserved.

The following four chapters are included:

— Todd Landman and Anita Gohdes (2013). “A Matter of Convenience: Challenges of Non-Random Data in Analyzing Human Rights Violations in Peru and Sierra Leone.”

— Jeff Klingner and Romesh Silva (2013). “Combining Found Data and Surveys to Measure Conflict Mortality.”

— Daniel Manrique-Vallier, Megan E. Price, and Anita Gohdes (2013). “Multiple-Systems Estimation Techniques for Estimating Casualties in Armed Conflict.”

— Jule Krüger, Patrick Ball, Megan Price, and Amelia Hoover Green (2013). “It Doesn’t Add Up: Methodological and Policy Implications of Conflicting Casualty Data.”


A Data Double Take: Police Shootings

“In a recent article, social scientist Patrick Ball revisited his and Kristian Lum’s 2015 study, which made a compelling argument for the underreporting of lethal police shootings by the Bureau of Justice Statistics (BJS). Lum and Ball’s study may be old, but it bears revisiting amid debates over the American police system — debates that have featured plenty of data on the excessive use of police force. It is a useful reminder that many of the facts and figures we rely on require further verification.”


‘Bias deep inside the code’: the problem with AI ‘ethics’ in Silicon Valley

Kristian Lum, the lead statistician at the Human Rights Data Analysis Group, and an expert on algorithmic bias, said she hoped Stanford’s stumble made the institution think more deeply about representation.

“This type of oversight makes me worried that their stated commitment to the other important values and goals – like taking seriously creating AI to serve the ‘collective needs of humanity’ – is also empty PR spin and this will be nothing more than a vanity project for those attached to it,” she wrote in an email.


Quantifying Injustice

“In 2016, two researchers, the statistician Kristian Lum and the political scientist William Isaac, set out to measure the bias in predictive policing algorithms. They chose as their example a program called PredPol. … Lum and Isaac faced a conundrum: if official data on crimes is biased, how can you test a crime prediction model? To solve it, they turned to a technique used in statistics and machine learning called the synthetic population.”
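The synthetic-population idea can be sketched in a few lines. The toy example below is an illustration under stated assumptions, not Lum and Isaac's actual study: it constructs a population in which the true rate of drug use is the same in two areas, then simulates recording concentrated in one of them, so a model's predictions can be checked against a ground truth that real crime data never provides.

```python
import numpy as np

# Toy synthetic population: the true rate of drug use is known by
# construction, while the "recorded" data are deliberately biased.
# A hypothetical sketch, not Lum and Isaac's actual analysis.
rng = np.random.default_rng(1)

n_people = 100_000
area = rng.integers(0, 2, size=n_people)   # two neighborhoods, 0 and 1
uses_drugs = rng.random(n_people) < 0.05   # same 5% true rate everywhere

# Biased recording: incidents in area 0 are far more likely to be observed.
record_prob = np.where(area == 0, 0.30, 0.05)
recorded = uses_drugs & (rng.random(n_people) < record_prob)

for a in (0, 1):
    print(f"area {a}: true rate {uses_drugs[area == a].mean():.3f}, "
          f"recorded rate {recorded[area == a].mean():.3f}")

# Any model trained on `recorded` can now be scored against `uses_drugs`,
# the comparison that biased real-world data makes impossible.
```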


Covid-19 Research and Resources

HRDAG is identifying and interpreting the best science we can find to shed light on the global crisis brought on by the novel coronavirus, about which we still know so little. Right now, most of the data on the virus SARS-CoV-2 and Covid-19, the condition caused by the virus, are incomplete and unrepresentative, which means that there is a great deal of uncertainty. But making sense of imperfect datasets is what we do. HRDAG is contributing to a better understanding with explainers, essays, and original research, and we are highlighting trustworthy resources for those who want to dig deeper.

Counting the Dead in Sri Lanka

ITJP and HRDAG are urging groups inside and outside Sri Lanka to share existing casualty lists.

FAQs on Predictive Policing and Bias

Last month Significance magazine published an article on the topic of predictive policing and police bias, which I co-authored with William Isaac. Since then, we've published a blogpost about it and fielded a few recurring questions. Here they are, along with our responses. Do your findings still apply given that PredPol uses crime reports rather than arrests as training data? Because this article was meant for an audience that is not necessarily well-versed in criminal justice data and we were under a strict word limit, we simplified language in describing the data. The data we used is a version of the Oakland Police Department’s crime report...

Press Release, Timor-Leste, February 2006

SILICON VALLEY GROUP USES TECHNOLOGY TO HELP THE TRUTH COMMISSION ANSWER DISPUTED QUESTIONS ABOUT MASSIVE POLITICAL VIOLENCE IN TIMOR-LESTE

Palo Alto, CA, February 9, 2006 – The Benetech® Initiative today released a statistical report detailing widespread and systematic violations in Timor-Leste during the period 1974-1999. Benetech's statistical analysis establishes that at least 102,800 (±11,000) Timorese died as a result of the conflict. Approximately 18,600 (±1,000) Timorese were killed or disappeared, while the remainder died due to hunger and illness in excess of what would be expected due to peacetime mortality. The magnitude of deaths ...
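Taking the release's point estimates at face value (and setting aside the stated margins of error), the remainder it refers to works out to roughly:

```latex
\underbrace{102{,}800}_{\text{total conflict deaths}}
  - \underbrace{18{,}600}_{\text{killed or disappeared}}
  \approx 84{,}200 \quad \text{deaths from hunger and illness}
```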

Justice by the Numbers

Wilkerson was speaking at the inaugural Conference on Fairness, Accountability, and Transparency, a gathering of academics and policymakers working to make the algorithms that govern growing swaths of our lives more just. The woman who’d invited him there was Kristian Lum, the 34-year-old lead statistician at the Human Rights Data Analysis Group, a San Francisco-based non-profit that has spent more than two decades applying advanced statistical models to expose human rights violations around the world. For the past three years, Lum has deployed those methods to tackle an issue closer to home: the growing use of machine learning tools in America’s criminal justice system.


Our work has been used by truth commissions, international criminal tribunals, and non-governmental human rights organizations. We have worked with partners on projects on five continents.
