by Payal Arora, Kiran Vinod Bhatia and Marta Zarzycka
The emergence of AI-generated non-consensual synthetic intimate imagery (NSII), or ‘deepfake porn’, has exacerbated the sexual and digital vulnerabilities experienced by women and girls in societies of the Global South, where the gender gap in digital access and equity remains glaring. A 2019 study highlights a concerning trend in the use of generative AI (GenAI) for creating NSII: 96% of all deepfake videos available online are non-consensual pornography. This type of content predominantly targets women, underscoring its highly gendered nature.
This growing misuse of GenAI poses significant risks, but it also presents an opportunity to drive positive change. Addressing these challenges requires a women’s safety-centered framework for GenAI. Stakeholders such as tech experts, scholars, designers, activists, NGOs, and survivor communities need to work together to build a future where technology protects the most vulnerable while expanding their freedoms online.
Voices from the Field
In 2019, Bhatia, a research lead at the Inclusive AI Lab, met Neeta, a 19-year-old woman living in a low-income slum settlement in Delhi, India. Neeta was preparing to apply for a seat in a Bachelor of Arts program when her ex-boyfriend created nude images of her using image-manipulation software and circulated them on social media. She could not convince her family and community that the images were fake. The community blamed her family for educating Neeta and giving her access to a mobile phone and internet connection, and compelled them to send her back to their native village, where she had no mobility and minimal freedoms.
Neeta’s story is far from unique. It exemplifies a pervasive pattern of disproportionate harm experienced by women and gender minorities due to the socio-cultural and legal gaps in Global South societies. Several members of Utrecht University’s Inclusive AI Lab have been working closely with women and girls in vulnerable contexts worldwide, documenting the harms emerging from digital non-consensual intimate content in the Global South. Arora, co-founder of the Inclusive AI Lab, found recurrent tensions, and creative navigations, between women’s and girls’ need for digital safety and their freedom to be visible and heard online, in projects such as a collaboration with UNHCR on the digital usage of forcibly displaced populations in Brazil and IDRC-funded work on South Asian women’s and girls’ digital experiences with gig platforms.
Though mainstream media typically covers cases in which GenAI is used to create deepfake porn targeting actresses, women journalists, artists, and social media influencers, non-celebrity victim-survivors form the larger and more severely affected group as GenAI technology spreads and becomes more affordable. The question is: why are girls, women, and other gender minorities at a higher risk of suffering severe and long-term consequences of deepfake porn? For starters, the socio-cultural fabric in many such communities of the Global South foregrounds ideas of honor and (sexual) purity among women. Survivors of deepfake porn experience victim-blaming as their families and communities impose stricter restrictions on their physical and digital mobility and compel them to withdraw from public spaces and conversations. Such patriarchal cultures discourage young women from reporting the crime, a problem exacerbated by the lack of supportive and affordable legal resources and structures.
Digital policy and tech laws in many Global South countries lack sufficient guardrails against the use of GenAI for creating deepfake porn. For example, in India, even though the Criminal Law Amendment Act 2013 and the Information Technology Act 2000 (amended in 2008) cover offenses such as cyberbullying, voyeurism, and the non-consensual sharing of pornography, rape videos, and other such material, they do not capture GenAI-created non-consensual synthetic imagery. Similarly, Kenya’s existing laws do not explicitly address deepfake porn, making it difficult to establish intent and malicious use. Moreover, tech users face additional barriers, such as limited digital literacy, nascent privacy protections, and a lack of awareness about identifying and reporting instances of non-consensual synthetic intimate imagery.
Building a Gen(der) AI Safety Framework
While emerging technologies pose a serious threat to the digital safety of girls and women, these technologies can also be (re)designed to foreground principles of inclusion, accountability, and responsibility. Examples from around the world highlight how thoughtful interventions can make a difference. For instance, South Korea’s Seoul Metropolitan Government runs a digital sex crime support center that employs a specialized tool to systematically track and delete deepfake images and videos. Similarly, some tech companies are adopting watermarks to help identify AI-generated content. Google, for example, has developed SynthID, a system that embeds watermarks in AI-generated images that remain detectable even after edits or screenshots. Adobe’s Content Authenticity Initiative integrates metadata into images and videos to trace edits and ownership, promoting transparency and accountability. Academic research organizations have developed tools such as PhotoGuard and Fawkes to shield photos from AI-driven manipulation and facial recognition.
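To make the watermarking idea concrete, the toy Python sketch below hides a short bit pattern in the least significant bits of an image’s pixels and later checks for its presence. This is only an illustration of the general concept: the bit pattern, function names, and least-significant-bit method are our own simplifications, and such a naive mark would not survive edits or screenshots the way a production system like SynthID is designed to.

```python
# Toy illustration of invisible watermarking: hide a short bit pattern in the
# least significant bits (LSBs) of pixel values, then test for its presence.
# NOTE: a deliberately simplified sketch, not SynthID's actual method; real
# systems embed marks that survive compression, cropping, and screenshots.
import numpy as np

WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical 8-bit mark

def embed(image: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the LSBs of the first few pixels."""
    marked = image.copy().ravel()
    marked[: WATERMARK.size] = (marked[: WATERMARK.size] & 0xFE) | WATERMARK
    return marked.reshape(image.shape)

def detect(image: np.ndarray) -> bool:
    """Return True if the watermark bits are present in the LSBs."""
    bits = image.ravel()[: WATERMARK.size] & 1
    return bool(np.array_equal(bits, WATERMARK))

img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
print(detect(embed(img)))  # True: the mark is detectable after embedding
print(detect(img))         # almost certainly False: unmarked image
```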
Despite the promise of these technologies and tools, their effectiveness depends on their design and deployment in diverse contexts, meaningfully and strategically accounting for the cultural, legal, and socio-economic realities of the Global South. A robust Gen(der) AI Safety Framework requires an intersectional approach that neither romanticizes nor dismisses culture. Much is changing in the Global South, and yet patriarchal culture remains persistent, manifesting in gendered laws such as guardianship systems, victim-blaming, and the criminalization of homosexuality. If half the world’s population is to enjoy the opportunities GenAI brings without the harmful consequences, we need a redressal system informed by credible intermediaries on the ground, one that spans critical digital literacies, awareness campaigns, gender data, and culturally centered, user-friendly approaches for victims/survivors.
The Inclusive AI Lab at Utrecht University is dedicated to helping build inclusive, responsible, and ethical AI data, tools, services, and platforms that prioritize the needs, concerns, experiences, and aspirations of chronically neglected user communities and their environments, with a special focus on the Global South. It is a women-led, cross-disciplinary, public-private stakeholder initiative co-founded by Payal Arora, Professor of Inclusive AI Cultures in the Department of Media and Culture Studies at Utrecht University, and Dr. Laura Herman, Head of AI Research at Adobe.
Prof. dr. Payal Arora is a Professor of Inclusive AI Cultures at Utrecht University and co-founder of FemLab, a feminist futures-of-work initiative, and the Inclusive AI Lab, a Global South-centered data-debiasing initiative. She is a digital anthropologist with two decades of user-experience research in the Global South. She is the author of award-winning books including ‘The Next Billion Users’ (Harvard University Press) and ‘From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech’ (MIT Press). Forbes called her the ‘next billion champion’ and the ‘right kind of person to reform tech.’
Dr. Kiran Vinod Bhatia is a digital anthropologist working at the intersection of marginalization and digital media. She is interested in understanding how young people from low-income contexts in the Global South use and ascribe meaning to digital technologies. She hopes that highlighting the underexplored realities of digital media users in resource-constrained contexts will help build diverse, equitable, and inclusive digital technologies, products, structures, and policies. She leads the Localizing Responsible AI cluster at the Inclusive AI Lab, Utrecht University.
Marta Zarzycka is a senior user experience researcher at Google, working across complex domains such as trust and safety, counter-abuse, risk and harm taxonomies, and protections against egregious harms, with a focus on non-consensual intimate imagery.
Technology, Employment and Wellbeing is an FES blog that offers original insights on the ways new technologies impact the world of work. The blog focuses on bringing different views from tech practitioners, academic researchers, trade union representatives and policy makers.