17.02.2025

Deepfakes, Real Harm: Building a Women’s Safety-Centered GenAI

by Payal Arora, Kiran Vinod Bhatia and Marta Zarzycka

3 min read

The emergence of AI-generated non-consensual synthetic intimate imagery (NSII), or ‘deepfake porn’, has exacerbated the sexual and digital vulnerabilities experienced by women and girls in societies of the Global South, where the gender gap in digital access and equity remains glaring. A 2019 study highlights a concerning trend in the use of GenAI to create NSII: 96% of all deepfake videos available online are non-consensual pornography. This type of content predominantly targets women, underscoring its highly gendered nature.

 

This growing misuse of generative AI (GenAI) poses significant risks, but it also presents an opportunity to drive positive change. Addressing these challenges requires a women’s safety-centered framework for GenAI. Stakeholders such as tech experts, scholars, designers, activists, NGOs, and survivor communities need to work together to build a future where technology works to protect the most vulnerable while expanding their freedoms online.

 

Voices from the Field

In 2019, Bhatia, a research lead at the Inclusive AI Lab, met Neeta, a 19-year-old woman living in a low-income slum settlement in Delhi, India. Neeta was preparing to apply for a seat in a Bachelor of Arts program when her ex-boyfriend created fake nude images of her using image manipulation software and circulated them on social media. She could not convince her family and community that the images were fake. The community blamed her family for educating Neeta and giving her access to a mobile phone and the internet, and pressured them to send her back to their native village, where she had no mobility and minimal freedoms.

Neeta’s story is far from unique. It exemplifies a pervasive pattern of disproportionate harm experienced by women and gender minorities due to socio-cultural and legal gaps in Global South societies. Several members of Utrecht University’s Inclusive AI Lab have been working closely with women and girls in vulnerable contexts worldwide, documenting the harms emerging from non-consensual intimate content in the Global South. Arora, co-founder of the Inclusive AI Lab, found recurrent tensions, and creative navigations, by women and girls between the need for digital safety and the freedom to be visible and heard online, in projects such as work with UNHCR on the digital practices of forcibly displaced populations in Brazil and IDRC-funded research on South Asian women’s and girls’ digital experiences with gig platforms.

 

Though mainstream media typically covers cases in which GenAI is used to create deepfake porn targeting actresses, women journalists, artists, and social media influencers, non-celebrity victim-survivors are the larger and more severely affected group as GenAI technology spreads and becomes more affordable. The question is: why are girls, women, and other gender minorities at a higher risk of suffering severe and long-term consequences of deepfake porn? For starters, the socio-cultural fabric in many communities of the Global South foregrounds ideas of honor and (sexual) purity among women. Survivors of deepfake porn experience victim-blaming: their families and communities impose stricter restrictions on their physical and digital mobility and compel them to withdraw from public spaces and conversations. Such patriarchal cultures discourage these young women from reporting the crime, a problem exacerbated by a lack of supportive and affordable legal resources and structures.

Digital policy and tech laws in many Global South countries do not have sufficient guardrails against the use of GenAI to create deepfake porn. For example, in India, even though the Criminal Law Amendment Act 2013 and the Information Technology Act 2000 (amended in 2008) cover offenses such as cyberbullying, voyeurism, non-consensual sharing of pornography, rape videos, and other such material, they do not capture the concept of GenAI-created non-consensual synthetic imagery. Similarly, Kenya’s existing laws do not explicitly address deepfake porn, making it difficult to establish intent and malicious use. Moreover, tech users face additional barriers, such as limited digital literacy, nascent privacy protections, and a lack of awareness about identifying and reporting instances of non-consensual synthetic intimate imagery.

Building a Gen(der) AI Safety Framework

While emerging technologies pose a serious threat to the digital safety of girls and women, these technologies can also be (re)designed to foreground principles of inclusion, accountability, and responsibility. Examples from around the world highlight how thoughtful interventions can make a difference. For instance, South Korea’s Seoul Metropolitan Government runs a digital sex crime support center that employs a specialized tool to systematically track and delete deepfake images and videos. Similarly, some tech companies are adopting watermarks to help identify AI-generated content. For example, Google has developed SynthID, a system that embeds watermarks in AI-generated images that remain detectable even after edits or screenshots. Adobe’s Content Authenticity Initiative integrates metadata into images and videos to trace edits and ownership, promoting transparency and accountability. Academic research organizations have developed tools such as PhotoGuard and Fawkes to shield photos from being detected or manipulated by AI.
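To make the idea of an invisible watermark concrete, below is a minimal sketch, assuming Python with Pillow and NumPy, that hides a one-bit ‘AI-generated’ flag in an image’s pixel values. It is a toy illustration of the general concept only: SynthID’s actual watermarking is proprietary and trained to survive edits, compression, and screenshots, whereas this naive least-significant-bit scheme would not; the function names and file paths here are hypothetical.

```python
# Toy illustration of invisible watermarking, NOT how SynthID works.
# Real systems use learned watermarks designed to survive edits and
# screenshots; this least-significant-bit (LSB) sketch does not.
from PIL import Image
import numpy as np

WATERMARK_BIT = 1  # hypothetical one-bit "AI-generated" flag


def embed_flag(in_path: str, out_path: str) -> None:
    """Set the least significant bit of every red-channel value to the flag."""
    pixels = np.array(Image.open(in_path).convert("RGB"))
    pixels[..., 0] = (pixels[..., 0] & 0b11111110) | WATERMARK_BIT
    # Save losslessly; JPEG compression would destroy this naive mark.
    Image.fromarray(pixels).save(out_path, format="PNG")


def detect_flag(path: str) -> bool:
    """Report the flag as present if nearly all red-channel LSBs are set."""
    pixels = np.array(Image.open(path).convert("RGB"))
    return (pixels[..., 0] & 1).mean() > 0.9


# Usage (hypothetical paths):
# embed_flag("generated.png", "generated_marked.png")
# print(detect_flag("generated_marked.png"))  # True
```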

Despite the promise of these technologies and tools, their effectiveness depends on their design and deployment in diverse contexts, meaningfully and strategically accounting for the cultural, legal, and socio-economic realities of the Global South. A robust Gen(der) AI Safety Framework requires an intersectional approach that neither romanticizes nor dismisses culture. Much is changing in the Global South, and yet patriarchal culture remains persistent, manifesting in gendered laws and practices such as guardianship systems, victim-blaming, and the criminalization of homosexuality. We need a redressal system that is informed by credible intermediaries on the ground and that spans critical digital literacies, awareness campaigns, gender data, and culturally centered, user-friendly approaches for victims/survivors, if we are to ensure that half the world’s population can enjoy the opportunities that GenAI brings without the harmful consequences.

 

The Inclusive AI Lab at Utrecht University is dedicated to helping build inclusive, responsible, and ethical AI data, tools, services, and platforms that prioritize the needs, concerns, experiences, and aspirations of chronically neglected user communities and their environments, with a special focus on the Global South. It is a women-led, cross-disciplinary, public-private stakeholder initiative co-founded by Payal Arora, Professor of Inclusive AI Cultures at the Department of Media and Culture Studies at Utrecht University, and Dr. Laura Herman, Head of AI Research at Adobe.

Prof. dr. Payal Arora is Professor of Inclusive AI Cultures at Utrecht University and co-founder of FemLab, a feminist futures of work initiative, and the Inclusive AI Lab, a Global South-centered data debiasing initiative. She is a digital anthropologist with two decades of experience studying users in the Global South. She is the author of award-winning books including ‘The Next Billion Users’ (Harvard University Press) and ‘From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech’ (MIT Press). Forbes called her the ‘next billion champion’ and the ‘right kind of person to reform tech.’

Dr. Kiran Vinod Bhatia is a digital anthropologist working at the intersection of marginalization and digital media. She is interested in understanding how young people from low-income contexts in the Global South use and ascribe meaning to digital technologies. She hopes that highlighting the underexplored realities of digital media users in resource-constrained contexts will help build diverse, equitable, and inclusive digital technologies, products, structures, and policies. She leads the Localizing Responsible AI cluster at the Inclusive AI Lab, Utrecht University.

Marta Zarzycka is a senior User Experience researcher at Google, working across complex domains such as trust & safety, counter-abuse, risk and harm taxonomies, and egregious harms protections (focusing on non-consensual intimate imagery).

