Artificial Intelligence and Gender Equality


Although technical artifacts are often presented as neutral and objective, technology is not free from values. New technologies typically develop through gradual modifications and combinations of existing technologies rather than sudden innovations. Both new and existing technologies reflect the influence of their designers and the social context in which they are created (Wajcman, 1991). 

AI relies on data and algorithms to identify patterns, make decisions, and improve performance through learning and adaptation. Gender biases can be present in both input data and algorithms. Training data may reflect existing gender biases and past discrimination, while biases can also emerge during algorithm development due to developer choices or the training process itself. 
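
To make this mechanism concrete, the minimal sketch below trains a toy classifier on synthetic "hiring" data in which women were historically selected at a lower rate than equally skilled men. Everything here is an illustrative assumption: the skill, gender and hired fields are invented and the numbers have no empirical basis.

```python
# A minimal, hypothetical sketch of how historical bias in training labels
# propagates into a model's decisions. All data is synthetic; the skill,
# gender and hired fields are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: a skill score distributed identically across
# groups, plus a binary gender attribute (0 = man, 1 = woman).
skill = rng.normal(0.0, 1.0, n)
gender = rng.integers(0, 2, n)

# Historical hiring decisions encode past discrimination: women were
# hired less often than men of equal skill (the -0.8 penalty term).
hired = (skill - 0.8 * gender + rng.normal(0.0, 0.5, n)) > 0

# A model trained on these labels learns the penalty as if it were signal.
model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# The negative coefficient on the gender feature is the absorbed bias:
# the model will now rank an equally skilled woman below a man.
print("gender coefficient:", round(float(model.coef_[0][1]), 2))
```

Note that simply dropping the gender column would not necessarily fix this: correlated features such as career gaps or gendered job titles can act as proxies for gender, which is one reason bias audits look at model outcomes rather than inputs alone.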

FES Future of Work is launching a new project on Artificial Intelligence and Gender Equality, exploring a range of essential but often overlooked topics, from gender bias and algorithmic fairness to digital literacy and labour inequalities in the evolving world of work.

Here, we present four analytical papers examining critical gendered aspects of system design, deployment and use.


The EU Artificial Intelligence Act through a Gender Lens


This paper provides a feminist analysis of the European Union’s Artificial Intelligence Act (hereinafter the “AI Act”), assessing its capacity to address gender inequities and structural power imbalances in AI systems.

Drawing on feminist theories, the paper evaluates the AI Act’s limitations in mitigating gender biases that might disproportionately impact marginalised groups, particularly women of colour and women with disabilities. Through case studies in recruitment and employment, healthcare, border control management and risk assessment in domestic violence cases, it highlights how AI applications can reinforce gender disparities.

A detailed examination of specific provisions within the AI Act reveals critical gaps in addressing systemic discrimination and bias in AI governance. To promote a more equitable AI landscape, the paper recommends intersectional, feminist-informed revisions that prioritise interdisciplinarity, collective rather than individual approaches and strong oversight mechanisms grounded in human rights – including feminist – values.

The proposed recommendations focus on strengthening the AI Act’s framework to better safeguard marginalised groups and to ensure a regulatory approach that reflects the diverse experiences of all individuals.

----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Anastasia Karagianni is a FARI Scholar and Doctoral Researcher at the LSTS research group of the Faculty of Law and Criminology at VUB. Her academic background is in international and European human rights law; she holds an LL.M. from the Department of International Studies at the Aristotle University of Thessaloniki. During her master’s studies, she spent a year as an exchange student at the Faculty of Law at KU Leuven, and she has been a visiting researcher with the iCourts research team at the University of Copenhagen. Her academic research focuses on the “Divergencies of gender discrimination in AI”. Beyond her academic interests, Anastasia is a digital rights activist and co-founder of DATAWO, a civil society organisation based in Greece that advocates for gender equality in the digital era. She was a MozFest Ambassador for 2023 and a Mozilla Awardee for the project “A Feminist Dictionary in AI” of the Trustworthy Artificial Intelligence working group.


Multi-stakeholder guidelines on how to address gender bias in AI systems


As artificial intelligence (AI) systems increasingly influence critical sectors such as healthcare, employment, education and law enforcement, concerns around bias – especially gender bias – have come to the forefront.

Gender bias in AI not only reflects but can amplify existing inequalities, raising significant ethical, legal and societal issues. This policy paper examines the impact of gender bias in AI systems and presents comprehensive guidelines for addressing it through a socio-technical lens.

By focusing on different stages of the AI lifecycle, the paper provides actionable recommendations for various stakeholders, including developers, deployers, users and regulators. Therefore, the aims of this document are to:


→ raise awareness of how gender bias can be amplified when AI systems are used in decision-making processes;

→ outline the challenges and opportunities of incorporating a socio-technical approach to tackle gender bias in AI systems;


→ provide a set of recommendations to key stakeholders from a socio-technical perspective on how to identify and prevent the reproduction of gender bias.

----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Monique Munarini is a trained lawyer with a double master’s degree in International Relations from the University of Padova (IT) and in Law, Economics and Management from the University of Grenoble-Alpes (FR). She is currently a PhD candidate in the Italian National PhD in AI at the University of Pisa (IT), where her research focuses on developing equitable AI audits from a feminist perspective. She also works as a policy analyst in AI governance for international organisations and global civil society organisations.


Gender Data: What is it and why is it important for the future of AI Systems?


Gender bias in AI systems poses significant challenges to achieving equality. AI, dependent on data for training and decision-making, often perpetuates gendered inequalities due to biased datasets that reinforce stereotypes and exclude diverse experiences. This impacts women’s access to healthcare, employment and education, compounding structural inequities.

Addressing these biases requires integrating gender-sensitive data throughout AI development to ensure fairness and equity. Barriers include technical opacity, binary gender norms and patriarchal structures, particularly in the Global South.

Solutions involve algorithmic transparency, inclusive data frameworks, cross-sectoral collaboration and feminist ethics. Embedding gender data in AI is essential to empower marginalised groups and promote social justice.
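
As one small illustration of what algorithmic transparency can mean in practice, the hypothetical sketch below reports a model’s positive-decision rates disaggregated by gender, together with a simple demographic-parity ratio. The data, the selection_rates helper and the benchmark mentioned in the comments are illustrative assumptions, not a prescribed auditing standard.

```python
# A minimal, hypothetical transparency check: report a model's outcomes
# disaggregated by gender. All data and names here are illustrative.
import numpy as np

def selection_rates(preds: np.ndarray, gender: np.ndarray) -> dict:
    """Positive-prediction rate for each gender group."""
    return {str(g): float(preds[gender == g].mean()) for g in np.unique(gender)}

# Toy decisions for eight applicants (1 = selected) with gender labels.
preds = np.array([1, 1, 0, 1, 0, 0, 1, 0])
gender = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])

rates = selection_rates(preds, gender)
print(rates)  # {'f': 0.25, 'm': 0.75}

# Demographic-parity ratio: values well below 1.0 flag a disparity that
# warrants human scrutiny (the "four-fifths rule" from US employment
# practice is one rough, context-dependent benchmark).
ratio = min(rates.values()) / max(rates.values())
print(f"parity ratio: {ratio:.2f}")  # 0.33
```

Disaggregated reporting of this kind is deliberately simple; its value lies in making disparities visible so that the cross-sectoral collaboration described above has something concrete to act on.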

----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Payal Arora is a Professor of Inclusive AI Cultures at Utrecht University and co-founder of FemLab, a Feminist Futures of Work initiative, and the Inclusive AI Lab, a Global South-centred debiasing data initiative. She is a digital anthropologist with two decades of experience researching technology users in the Global South. She is the author of several award-winning books, including The Next Billion Users (Harvard University Press) and From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech (MIT Press). Forbes called her the “next billion champion” and the “right kind of person to reform tech”.

Weijie Huang is a PhD candidate at the Faculty of Humanities at Utrecht University and the Feminist AI cluster lead at the Inclusive AI Lab. Her research sits at the intersection of media, AI and gender studies in China, with a special interest in how AI and digital platforms shape female players’ experiences and sense of belonging. Weijie holds a master’s degree in Media and Creative Industries from Erasmus University Rotterdam. Prior to her PhD, she worked for a decade as a journalist for Chinese National Geography and Condé Nast China.


What is Feminist AI?


This paper explores Feminist Artificial Intelligence (FAI), a framework leveraging intersectional feminism to address biases and inequities in AI systems. 

FAI emphasises interdisciplinary collaboration, systemic power analysis and iterative theory-practice loops. By embedding feminist values – equity, freedom and justice – FAI seeks to transform AI development, ensuring inclusivity and social sustainability.

Practical applications include initiatives such as FemAI’s advocacy for feminist perspectives in the EU AI Act and the MIRA diagnostic platform, which aligns AI tools with social justice goals.

FAI marks a critical departure from traditional AI by tackling structural inequalities and promoting accountable and equitable AI solutions.

----------------------------------------------------------------------------------------------------------------------------------------------------------------------

FemAI is a responsible AI start-up certifying AI systems designed to benefit humanity. Established in 2022 in Berlin, Germany, we started as a research-focused think tank, advising over 25 organisations on their ethical AI policies. Building on these insights, we developed feminist frameworks to evaluate AI tools, companies, courses and workshops – focusing on the people most often overlooked in the AI age. At our core, FemAI uses intersectional feminism as a research method to illuminate the black box of AI, ensuring accuracy and equity in the technologies shaping our future. Co-creation and cross-sector collaboration are at our heart, allowing us to close the gaps between policymaking, businesses, research and civil society.

Alexandra Wudel is a tech entrepreneur and researcher focused on shaping an inclusive future for AI. As the CEO of FemAI, she was named “AI Person of the Year 2024”, recognised as one of 25 global experts on ethical considerations for AI, and awarded the title “35 under 35 Young Leaders”. Alexandra has contributed to the development of ethical AI guidelines for the German Federal Foreign Ministry, the UN and the German Parliament. Working with AI tool providers, she supports private organisations in integrating societal and environmental responsibility into AI projects and strategies. Together with her team, Alexandra develops FAI frameworks to bridge the gap between principles and practice.

Anna Ehrenberg is a research assistant at FemAI, where she actively shapes the organisation’s understanding of FAI. With her background in data science and process optimisation, she has co-designed an FAI approach intended to be built into AI tools from the outset.

Dr. Inga Sabanova
Policy Officer

Friedrich-Ebert-Stiftung
Future of Work

Cours Saint Michel 30e
1040 Brussels
Belgium

@FES_FoW

Technology, Employment and Wellbeing

____________________________

An FES blog that offers original insights into how new technologies impact the world of work, bringing together views from tech practitioners, academic researchers, trade union representatives and policymakers.

____________________________

Synthetic Data: A quick cure-all? by Marianna Capasso and Payal Arora

Mind the gender AI gap: The fight for fairness futures by Weijie Huang and Payal Arora

Deepfakes, Real Harm: Building a Women’s Safety-Centered GenAI by Payal Arora, Kiran Vinod Bhatia and Marta Zarzycka

The Meme-ification of Political Issues: Moving beyond the pros and cons of AI-enabled virality for social justice by Lucie Chateau

Ubuntu and AI: Africa’s Bold Vision for an Ethically Inclusive Tech Future by Wakanyi Hoffman


PhD Summer School 2024: Gender, AI and Inclusive Work


In this summer school, we aimed to explore the everyday experiences of female workers in work environments that are changing due to new technologies and new forms of work, looking at the interaction between technological change, labour markets, institutions and gender from an intersectional, critical perspective.