Projects & Publications


A guiding framework for vetting public sector technology vendors

The Ford Foundation commissioned Taraaz to develop a guiding framework for assessing the societal impacts of digital technologies procured and/or deployed by public agencies. The framework consists of a list of red flags organized into seven categories, each with questions, hypothetical scenarios, and resources that contextualize potential harms: theory of change and value proposition; business model and funding; organizational governance, policies, and practices; product design, development, and maintenance; third-party relationships, infrastructure, and supply chain; government relationships; and community engagement.

📒 Download the Guiding Framework

📝 Download the summary/cheatsheet of the red flags


Countering Harmful Speech Online

Taraaz has partnered with scholars from the UC Berkeley School of Information and the Department of Political Science at Michigan State University to carry out a multidisciplinary project on the nuanced counter-narratives of being Muslim online. Through mixed-methods research, this project pursues three goals: 1) gaining a deeper understanding of how marginalized groups craft counter-narratives, 2) learning how those counter-narratives are amplified and sustained, and 3) providing recommendations for design and policy interventions. The project won a Facebook Content Policy Research Award in September 2019.

📌 Project website

📝 Download the latest peer-reviewed paper

💻 Project GitHub page (datasets & Twitter analysis script)


Digital Rights and Technology Sector Accountability in Iran

Taraaz was contracted by Filterwatch to carry out a multi-year project researching the human rights policies and practices of technology companies in Iran. In this project, we apply the Ranking Digital Rights (RDR) Corporate Accountability Index methodology to assess the digital rights commitments of Iranian technology companies. Our research methods are also informed by the Global Network Initiative (GNI) principles. We will produce annual reports in addition to developing a guidebook to help companies with their digital rights self-assessment process.

📒 Read the Tech Accountability Report (Persian)

📝 Download the Tech Accountability Worksheet

📒 Read the Digitization of Public Spaces Report (Persian)


Human Rights-Centered Design

Our Human Rights-Centered Design project produces educational materials and tools that help technologists, policymakers, and the general public explore the human rights implications of technical designs and apply human rights principles in the product design and development process. This work is informed by current technology and human rights impact assessment practices, as well as human-centric design methodologies such as Value Sensitive Design and participatory design.

📝 A Guide for Responsible Use of Machine Learning APIs (Persian)

🛠️ LUCID: Language Model Co-auditing through Community-based Red Teaming (algorithmic bias auditing tool) | Finalist in Stanford HAI's AI Audit Challenge

💻 Gazing at the Mother Tongue: Analyzing Twitter's Image Cropping Algorithm (algorithmic bias auditing method) | Third-place winner of Twitter's Bias Bounty


Other Human Rights and Technology Projects

  • Amnesty International, for researching the human rights implications of data-driven and surveillance technologies on forcibly displaced people.

  • Luminate and Ethical Resolve, for developing a “Data & Digital Rights Maturity Model” as a technical lead.

  • OpenAI, for adversarial testing and red teaming of GPT-4.

  • Business for Social Responsibility (BSR), for conducting human rights due diligence of a major technology company’s products.

  • Meta, for serving on Facebook's Actor & Behavior Policies Expert Circle.

  • The Max Planck Institute for Empirical Aesthetics, for conducting a social and ethical impact review of a machine learning model.

We will post updates on any potential public outputs of these projects.