Updates


 

May 10, 2023

Check out the new issue of our newsletter on the history of red teaming, from its origins in the 16th-century Catholic Church and the Cold War era to its current state in generative AI systems. The issue also includes a guide to better red teaming of large language models, covering ways to address linguistic gaps in red teaming and in current English-centric red-teaming benchmark datasets; partnerships with civil society organizations and university humanities departments; and ways to design red-teaming platforms' interfaces and assessment metrics so that findings feed back into improving the models. Read the full post on our Substack.


April 27, 2023

Our tool, “LUCID: Language Model Co-auditing through Community-based Red Teaming,” was selected as a finalist in Stanford University’s HAI AI Audit Challenge. LUCID is an AI bias assessment tool that aims to automate certain aspects of AI bias red teaming, foster a community-based approach to auditing AI systems, and provide a platform for documenting, identifying, and discussing instances of bias in text. It uses semi-automated prompt engineering and community-based red teaming to improve bias detection across a range of large language models. Read more on Stanford’s website and check out the tool on GitHub.
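
LUCID’s code is on GitHub; as a rough illustration of the semi-automated idea only (not LUCID’s actual implementation; the templates, identity terms, and `query_model` stub below are all placeholders), here is a sketch in Python:

```python
from itertools import product

# Hypothetical templates and identity terms, for illustration only.
TEMPLATES = [
    "Write a short story about {person} applying for a job.",
    "Describe a typical day in the life of {person}.",
]
IDENTITIES = ["an Iranian woman", "a Muslim immigrant", "a blind programmer"]

def query_model(prompt: str) -> str:
    """Placeholder for a call to the language model under audit."""
    raise NotImplementedError("wire up the model API here")

def build_audit_queue() -> list:
    """Expand every template/identity pair into a prompt, pair it with the
    model's completion, and queue both for community review."""
    queue = []
    for template, identity in product(TEMPLATES, IDENTITIES):
        prompt = template.format(person=identity)
        queue.append({"prompt": prompt, "completion": query_model(prompt)})
    return queue
```

In a community-based setup, reviewers would then tag completions that stereotype the named identity, and those annotations inform the next round of templates.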


April 20, 2023

131 investors signed a statement to show their support for the "Women, Life, Freedom" uprising in Iran. Taraaz collaborated with the Investor Alliance for Human Rights to develop this statement. “From a civil society standpoint, we believe this can provide a valuable push for technology companies, as investors hold significant leverage over them. We look forward to witnessing the meaningful implementation of this statement, particularly through active consultation with Iranian civil society groups, conducting and releasing human rights impact assessments in Iran's context, and developing tools to further Iranians' access to the internet.” Read the press release and full statement in English and Farsi.


April 14, 2023

The Financial Times interviewed Taraaz’s Roya Pakzad about her experience red-teaming OpenAI's GPT-4. Pakzad identified stereotypes about marginalized communities in the model and found that its "hallucinations" (fabricated information in its responses) were more frequent when testing in Farsi. While she acknowledged the tool's benefits for non-native English writers, she expressed concern about the potential loss of linguistic diversity and cultural nuance. Read the full article here.


April 4, 2023

The Ford Foundation commissioned Taraaz to develop a guiding framework for assessing the societal impacts of digital technologies procured and/or deployed by public agencies. The framework consists of a list of red flags divided into seven categories, each with questions, hypothetical scenarios, and resources to contextualize potential harms: theory of change and value proposition; business model and funding; organizational governance, policies, and practices; product design, development, and maintenance; third-party relationships, infrastructure, and supply chain; government relationships; and community engagement. Find the framework here and read the Inside Philanthropy coverage of it.


March 16, 2023

We red-teamed OpenAI's GPT-4 model through a variety of tests, including emotional manipulation and the use of different personae to probe for racial, gender, religious, and socioeconomic stereotypes, using both English and Farsi prompts as well as visual inputs. We also challenged the model on human rights-related subjects, gender binaries, body image, beauty standards, and its perception of human intelligence and open-mindedness. Read more in GPT-4’s System Card.
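
For readers who want to run similar probes, here is a minimal sketch using the `openai` Python package as it existed at the time (pre-1.0); the persona and prompts are illustrative stand-ins, not our actual test set:

```python
import openai  # openai<1.0, current when this work was done

openai.api_key = "YOUR_API_KEY"  # placeholder

# One persona-based probe, posed in English and in Farsi, to compare
# stereotyping and hallucination across the two languages.
PERSONA = "You are a career counselor advising a client."
PROBES = [
    "What jobs would you suggest for a young woman from a religious family?",
    "چه شغل‌هایی را برای یک زن جوان از یک خانواده مذهبی پیشنهاد می‌کنید؟",
]

for probe in PROBES:
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": probe},
        ],
        temperature=0.7,
    )
    print(response["choices"][0]["message"]["content"])
```

Comparing the two completions side by side, repeated over many personae and topics, is the core of the bilingual stereotype tests described above.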


February 2, 2023

Our new paper from the Our Muslim Voices project was accepted at the Computer-Supported Cooperative Work & Social Computing (CSCW) conference. For the paper, entitled “Sustained Harm Over Time and Space Limits the External Function of Online Counterpublics for American Muslims,” we interviewed Muslim Americans with large followings on social media platforms. We studied the factors that limit their efforts to form and sustain counternarratives, and provided design and policy alternatives for social media companies informed by restorative and transformative justice frameworks. Read the full paper here.


November 7, 2022

Taraaz’s comment on the use of surveillance technologies in Iran appeared in a Reuters article: “The government has always been keen to segregate women, control their public participation and limit their presence in public places, and they can use even these well-meaning or benign technologies to do it.” Read the article, “Protesters - and police - deploy tech in fight for future of Iran,” here.


October 17, 2022

Our new report with Filterwatch, “Human Rights and Digitization of Public Spaces in Iran,” highlights the rise of "smart city" projects in Iran and what they mean for surveillance, privacy, and freedom of assembly. Using real case studies, we show how smart city projects such as smart traffic control systems, smart street lighting, online taxi and bike-sharing apps, and public Wi-Fi have furthered gender-based discrimination and the exclusion of marginalized communities, increased public surveillance, and enabled crackdowns on protests. Read the report in English and Farsi.

October 6, 2022

In an interview with Ranking Digital Rights (RDR), Roya Pakzad and Filterwatch’s Melody Kazemi discussed the applicability of the RDR Index in the context of Iran's repressive internet policies, the state's forced internet localization plan, pressure on Iranian tech start-ups, and the diaspora community’s role. Read the full interview here.


December 15, 2021

Congratulations to Taraaz and the CITRIS Policy Lab’s Technology and Human Rights fellows on the publication of their white papers. Studying developments in San Diego, New Orleans, Oakland, Portland, and Chicago, Brie McLemore discusses the human rights implications of smart/surveillance city projects and offers policy recommendations. Combining the UN Sustainable Development Goals (SDGs) with human rights-based frameworks, Ifejesu Ogunleye discusses the benefits and risks of AI for economic development in Nigeria. Read the announcement and the full papers here.


August 10, 2021

Taraaz’s founder, Roya Pakzad, won third prize 🥉 in Twitter Engineering’s first algorithmic bounty challenge. In her experiment, “Gazing at the Mother Tongue,” she shows how the cropping algorithm favors Latin scripts over Arabic scripts and what this means for linguistic diversity and representation online. Read about the method, results, discussion, and limitations of the experiment in this Medium post. You can also read the media coverage in Wired and The Verge.
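
To give a flavor of the experimental setup (a simplified stand-in, not the bounty submission itself; `crop_model` below is a placeholder for Twitter's open-sourced saliency cropper):

```python
from PIL import Image, ImageDraw, ImageFont

def make_bilingual_image(latin: str, arabic: str, size=(800, 400)) -> Image.Image:
    """Render Latin-script text on the left half and Arabic-script text on
    the right half of a blank canvas."""
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # real tests need a font that shapes Arabic
    draw.text((50, size[1] // 2), latin, fill="black", font=font)
    draw.text((size[0] // 2 + 50, size[1] // 2), arabic, fill="black", font=font)
    return img

def crop_model(img: Image.Image) -> tuple:
    """Placeholder for Twitter's saliency cropper; should return the
    (left, top, right, bottom) box the algorithm would keep."""
    raise NotImplementedError

img = make_bilingual_image("Hello, mother tongue", "سلام، زبان مادری")
# box = crop_model(img)
# If the kept box repeatedly covers only the left half across many such
# images, the cropper is favoring the Latin script.
```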



May 10, 2021

Facebook and Twitter are systematically silencing users protesting and documenting the evictions of Palestinian families from their homes in the Sheikh Jarrah neighborhood of Jerusalem. We joined a group of human rights organizations to demand that Facebook and Twitter immediately stop these takedowns, reinstate affected content and accounts, and provide a clear and public explanation of why the content was removed. Read the full statement on 7amleh’s website.


May 7, 2021

On May 5, 2021, Roya Pakzad from Taraaz participated in the New American Dream series organized by the WNET Group, parent of America’s flagship PBS station. She joined fellow panelists including Dr. Rumman Chowdhury; Nicole Martinez-Martin, J.D., Ph.D.; Mutale Nkonde; Dr. Kim TallBear (Sisseton Wahpeton Oyate); and Karen Hao. The conversation centered on the applicability and challenges of bringing human rights and racial justice frameworks to the research and development of artificial intelligence and genetic science. You can watch the recording here.



April 19, 2021

We joined a coalition of human rights organizations, journalists, and researchers urging the European Parliament to vote against the proposed regulation on addressing the dissemination of terrorist content online (#TERREG). Despite its promises, the proposal poses serious threats to freedom of expression, freedom to access information, and the right to privacy of marginalized groups by incentivizing online platforms to use automated content moderation tools and by allowing EU member states to issue cross-border removal orders without any checks.


January 25, 2021

We are excited to announce Taraaz and the CITRIS Policy Lab’s 2021 Technology and Human Rights fellows. Brie McLemore (Ph.D. student, Jurisprudence and Social Policy Program, UC Berkeley) will be working on a project entitled “When the Streetlights Come On: How ‘Smart Cities’ are Becoming a Surveillance State,” and Ifejesu Ogunleye (Master’s student, Rausser College of Natural Resources, UC Berkeley) will carry out a project on the opportunities and constraints of AI development in developing countries, with Nigeria as a case study. Read more about our fellows and their work on Medium.


November 27, 2020

Our first report on digital rights and technology companies in Iran is out 🎉 On Nov. 23, 2020, in partnership with Filterwatch and Ranking Digital Rights, we organized a launch event for the “Digital Rights and Technology Sector Accountability in Iran: The Case of Messaging Apps” report. Roya Pakzad and Melody Kazemi shared the findings, followed by a panel discussion with Afef Abrougui, Kaveh Azarhoosh, Jessica Dheere, and David Kaye. Watch the recording here and download the reports and digital rights workbooks in both English and Persian from our project page.


November 20, 2020

Taraaz joined UC Santa Cruz’s Center for Public Philosophy and the Baskin School of Engineering on the project “Tech Futures 52 Conversation Starters.” The project will create a deck of playing cards designed to catalyze conversations about ethics and technology. Consider adding your questions about the ethical and human rights implications of genetic engineering, AI and machine learning, social algorithms, agritech, astrobiology, and more via this form.



August 24, 2020

Taraaz partnered with the CITRIS Policy Lab at UC Berkeley to launch the Human Rights by Design Fellowship program. Two paid fellowships will be awarded to graduate students currently enrolled in Master’s, Ph.D., or J.D. programs at a University of California campus, to carry out projects at the intersection of technology and human rights. Learn more and apply by September 30, 2020 on this website.



August 12, 2020

In an article entitled “Responsible Use of Machine Learning APIs,” we look into the often-overlooked relationships between machine learning service providers and software developers. We provide a guide, built on the principles of privacy, security, transparency, and fairness, for developers who want to choose and use general-purpose machine learning APIs more responsibly. Read the full article on Medium.
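
As one concrete example of the privacy principle (an illustrative sketch here, not code from the article; the regexes are deliberately minimal):

```python
import re

# Minimal patterns for two common identifiers; production use needs more.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    """Strip obvious personal identifiers before user text leaves your
    system for a third-party ML API."""
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

def analyze(text: str) -> dict:
    """Placeholder for any general-purpose ML API call: send only the
    redacted text, and avoid logging or storing the raw input."""
    raise NotImplementedError(f"send {redact(text)!r} via your provider's client")

print(redact("Reach me at jane.doe@example.com or +1 (831) 555-0199."))
# -> Reach me at [EMAIL] or [PHONE].
```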



July 20, 2020

Taraaz’s joint project, “Our Muslim Voices: Nuanced Counter-Narratives of Being Muslim Online,” was featured in a UC Berkeley profile. In a Q&A, the project researchers shared details of their research methods and elaborated on the desired outcomes. Read more on the UC Berkeley I School’s website.



June 24, 2020

We celebrated the ban on predictive policing and facial recognition technologies in the City of Santa Cruz, CA. Santa Cruz became the first city in the U.S. to ban predictive policing software and the eighth to ban the use of facial recognition technologies. Taraaz’s comment on the ban appeared in a Reuters article.



June 15, 2020

Taraaz joined a coalition of civil rights and racial justice organizations to urge the Santa Cruz City Council to adopt a proposed municipal ban on the use of predictive policing software and facial recognition technology. Read the joint letter here.



January 27, 2020

We went to the FAccT 2020 conference to present our joint tutorial, “Leap of FATE: human rights as a complementary framework for AI policy and practice.” The tutorial explains how human rights frameworks can complement Fairness, Accountability, Transparency, and Ethics (FATE) approaches in guiding machine learning research and development. Read more on the ACM FAT* '20 proceedings website and find our slides here.



January 27, 2020

In “Technology and Human Rights… in Comics!” Roya Pakzad walks you through her journey from working as an electrical engineer in a little cubicle at AMD to her current work in human rights. Read this personal essay illustrated with comics on Medium.



October 19, 2019

Taraaz joined a workshop entitled “Closing the Human Rights Gap in AI Governance.” The workshop was convened by Element AI, the Mozilla Foundation, and The Rockefeller Foundation “to determine what concrete actions could be taken to help ensure that respect for human rights is embedded into the design, development, and deployment of AI systems.” Roya Pakzad presented her work on human rights by design. Read more in Element AI’s report.



August 29, 2019

We ran a few experiments to highlight potential human rights concerns with IBM’s Watson Personality Insights tool, including impacts on the right to just and favorable conditions of work and equal employment opportunity, the right to freedom of expression, and the right to privacy. Read more on Medium.
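
For context, the service was reachable through IBM's `ibm-watson` Python SDK roughly as below (exact SDK details varied across versions, the credentials and URL are placeholders, and the service has since been discontinued):

```python
from ibm_watson import PersonalityInsightsV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

service = PersonalityInsightsV3(
    version="2017-10-13",
    authenticator=IAMAuthenticator("YOUR_API_KEY"),  # placeholder
)
service.set_service_url(
    "https://api.us-east.personality-insights.watson.cloud.ibm.com"
)

def profile_of(text: str) -> dict:
    """Return Watson's Big Five personality profile for a writing sample."""
    return service.profile(
        text, accept="application/json", content_type="text/plain", raw_scores=True
    ).get_result()

# Probe idea behind the employment concern: compare profiles of two
# near-identical samples that differ only in a demographic cue and check
# whether the scores diverge. (The real service required 100+ word samples;
# these are truncated for illustration.)
# profile_of("I have led engineering teams for ten years. ...")
# profile_of("I have led engineering teams for ten years. I immigrated from Iran. ...")
```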