17.02.2025

Mind the gender AI gap: The fight for fairness futures

by Weijie Huang and Payal Arora, Utrecht University

3 min read

As our society increasingly relies on the decision-making capacities of artificial intelligence (AI) technology to mediate our work, our play and everything in between, the dearth of gender data has become an urgent problem. According to the UN Women report, 80% of the gender equality indicators in the Sustainable Development Goals (SDGs) lack sufficient data, which prevents us from building tools, services, and platforms that work just as effectively for women and girls worldwide. If the current trend continues, closing the gender data gap will take 22 years, far beyond the 2030 target.

Against this doomsday scenario, there is an emerging counter-narrative and practice, one that brings critical changemakers across sectors and disciplines together to create new forms of gender-oriented data that go into training AI. This is one step forward in taking control of the systems that affect us and, hopefully in turn, changing the rules of the game.

1. Why does gender data matter?

The decision-making ability of an AI system depends on the quality and diversity of its data, but most existing human data sets the “male experience” as the default. This lack of accuracy across multiple perspectives not only reinforces gender stereotypes but also perpetuates gender discrimination in areas such as medical diagnosis, recruitment decisions, and the allocation of educational resources in our society.

A UNESCO study shows that the large language models (LLMs) behind popular generative AI platforms from companies such as OpenAI and Meta generate content reinforcing stereotypes about women in leadership and professional roles, marking a historical regression in gender stereotypes. In addition, AI recruitment algorithms favour male candidates, as in the case of Amazon, significantly exacerbating inequalities in the labour market. In the medical field, because datasets are biased towards male subjects, women’s health problems (such as myocardial infarction symptoms, which differ from men’s) and the diseases of people of colour are often not accurately diagnosed, deepening gender and racial inequality in the medical system. At the same time, AI systems ignore the needs of girls in low-income areas, exacerbating imbalances in education and access to technology.

Closing the gender data gap has become key to technological equity and social justice. The absence of gender data is even more striking in the Global South, where women’s and girls’ voices and needs have long been ignored. Incorporating the perspectives of the Global South is therefore crucial to ensuring fairness and inclusiveness in the development of AI. By integrating gender data, AI systems can better serve different social groups and drive a fair and inclusive digital future. To achieve this, we cannot simply wait for 22 years.

2. Recoding the rules: Taking back control

Taking a gender data approach to AI systems provides an opportunity to reclaim control and change the rules of the game. In the last few years, we have seen creative strategies emerging from around the world, reclaiming AI for women, by women.

Examples include Monica Lewinsky’s Goodness Bot, a proactive response to the toxic online culture of cyberbullying: the bot leverages AI-powered interactions to spread positivity, shifting online discourse from harm to support. Similarly, RNW Media, a key partner in our consortium, has been creatively gaming the algorithms to expand the reach of its LOVE Matters Naija campaign, spreading progressive messages of consent, gender equality and healthy relationships on social media to challenge social norms with young audiences in Nigeria and beyond.

In our Inclusive AI Lab, we are working with women creatives from underrepresented communities in Ecuador, Kenya and India to co-create new kinds of visual data that reflect how they want to be represented. Through a fellows program, we work with people with disabilities, rural artisanal communities and low-income women workers, enabling them to play with generative media tools and learn how the world sees them. This becomes a launching ground for building gender data for the creative commons, with new kinds of prompts and labels that reflect the interests, aspirations, and concerns of those being represented.

3. New storytelling matters!

New storytelling matters because it reshapes our collective imagination, illuminating the stories of women and girls that have long been overlooked or distorted. By building gender data for AI, we compel ourselves to rethink entrenched narratives and recognize the vital contributions of women and girls in shaping society. These stories are not just about representation but about transformation—ensuring women and girls are included not only in the stories we tell but in the institutions, tools, and systems we build. This work is essential for crafting an inclusive future where AI serves as a force for equity, amplifying diverse voices and fostering systems that reflect the richness of human experience. In rethinking the role of women and girls, we take a step toward a world where everyone’s potential drives progress, innovation, and justice.

The Inclusive AI Lab at Utrecht University is dedicated to helping build inclusive, responsible, and ethical AI data, tools, services, and platforms that prioritize the needs, concerns, experiences, and aspirations of chronically neglected user communities and their environments, with a special focus on the Global South. It is a women-led, cross-disciplinary, public-private stakeholder initiative co-founded by Payal Arora, Professor of Inclusive AI Cultures at the Department of Media and Culture Studies at Utrecht University, and Dr. Laura Herman, Head of AI Research at Adobe.

Weijie Huang is a PhD candidate at the Faculty of Humanities at Utrecht University and Feminist AI cluster lead at the Inclusive AI Lab. Her research focuses on intersecting media, AI, and gender studies in China, with a special interest in how AI and digital platforms shape female players’ experiences and belongingness. Weijie holds a master’s degree in Media and Creative Industries from Erasmus University Rotterdam. Prior to her PhD, she worked for a decade as a journalist for the Chinese National Geography and Condé Nast China.

Prof. dr. Payal Arora is Professor of Inclusive AI Cultures at Utrecht University and co-founder of FemLab, a Feminist Futures of Work initiative, and the Inclusive AI Lab, a Global South-centered debiasing data initiative. She is a digital anthropologist with two decades of user-experience research in the Global South. She is the author of award-winning books including ‘The Next Billion Users’ with Harvard Press and ‘From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech’ with MIT Press. Forbes called her the ‘next billion champion’ and the ‘right kind of person to reform tech.’

Technology, Employment and Wellbeing is an FES blog that offers original insights on the ways new technologies impact the world of work. The blog focuses on bringing different views from tech practitioners, academic researchers, trade union representatives and policy makers.

FES Future of Work

Cours Saint Michel 30e
1040 Brussels
Belgium

+32 2 329 30 32

futureofwork(at)fes.de
