Why top funders back this small human rights organization with a global reach

Eric Sears, a director at the MacArthur Foundation who leads the grantmaker’s Technology in the Public Interest program, worked at Human Rights First and Amnesty International before joining MacArthur, and has been following HRDAG’s work for years. … One of HRDAG’s strengths is the long relationships it maintains with partners around the globe. “HRDAG is notable in that it really develops deep relationships and partnerships and trust with organizations and actors in different parts of the world,” Sears said. “I think they’re unique in the sense that they don’t parachute into a situation and do a project and leave. They tend to stick with organizations and with issues over the long term, and continually help build cases around evidence and documentation to ensure that when the day comes, when accountability is possible, the facts and the evidence are there.”


Watch now: “In the Face of Tyranny,” a Webinar with HRDAG

The Human Rights Data Analysis Group recently hosted a conversation about the significant threats facing human rights researchers and scientific NGOs in the United States. We have posted the first part of this conversation on YouTube so that others may watch. In addition to this community conversation, HRDAG released a statement outlining our specific concerns about the targeting of the human rights and research community. Read the statement and our blog post. As HRDAG Executive Director Dr. Megan Price explained, this is not a departure for HRDAG. As a scientific organization grounded in evidence, HRDAG remains fundamentally nonpartisan. ...

How public involvement can improve the science of AI

Proceedings of the National Academy of Sciences of the United States of America

As AI systems, from decision-making algorithms to generative AI, are deployed more widely, computer scientists and social scientists alike are being called on to provide trustworthy quantitative evaluations of AI safety and reliability. These calls have included demands from affected parties to be given a seat at the table of AI evaluation. What, if anything, can public involvement add to the science of AI? In this perspective, we summarize the sociotechnical challenge of evaluating AI systems, which often adapt to multiple layers of social context that shape their outcomes. We then offer guidance for improving the science of AI by engaging lived-experience experts in the design, data collection, and interpretation of scientific evaluations.

Nathan Matias and Megan Price (2025). How public involvement can improve the science of AI. Proceedings of the National Academy of Sciences of the United States of America, Vol. 122, No. 48. 14 November, 2025. © 2025 National Academy of Sciences. https://doi.org/10.1073/pnas.2421111122


Our work has been used by truth commissions, international criminal tribunals, and non-governmental human rights organizations. We have worked with partners on projects on five continents.
