Although technical artifacts are often presented as neutral and objective, technology is not free from values. New technologies typically develop through gradual modifications and combinations of existing technologies rather than sudden innovations. Both new and existing technologies reflect the influence of their designers and the social context in which they are created (Wajcman, 1991).
AI relies on data and algorithms to identify patterns, make decisions, and improve performance through learning and adaptation. Gender biases can be present in both input data and algorithms. Training data may reflect existing gender biases and past discrimination, while biases can also emerge during algorithm development due to developer choices or the training process itself.
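For illustration, a minimal sketch of the kind of data audit this implies is shown below: before training, one can check how a dataset represents gender groups and whether historical outcomes already differ between them. The column names (“gender”, “hired”) and the file path are hypothetical placeholders; a real audit would use richer fairness metrics and intersectional breakdowns.

```python
# Minimal sketch of a pre-training gender-bias audit on tabular data.
# "training_data.csv", "gender" and "hired" are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("training_data.csv")

# Representation: how balanced is the dataset across gender groups?
print(df["gender"].value_counts(normalize=True))

# Outcome disparity: positive-label rate per gender group. A large gap
# suggests the data encodes past discrimination, which a model trained
# on it is likely to reproduce.
rates = df.groupby("gender")["hired"].mean()
print("Demographic parity gap:", rates.max() - rates.min())
```

Such a check only surfaces disparities in the input data; as noted above, bias can also be introduced later through developer choices during algorithm design and training.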
FES Future of Work is launching a new project on Artificial Intelligence and Gender Equality, exploring a range of essential but often overlooked topics, from gender bias and algorithmic fairness to digital literacy and labor inequalities in the evolving world of work.
Here, we present four analytical papers examining critical gendered aspects of system design, deployment, and use.
This paper provides a feminist analysis of the European Union’s Artificial Intelligence Act (hereinafter the “AI Act”), assessing its capacity to address gender inequities and structural power imbalances in AI systems.
Drawing on feminist theories, the paper evaluates the AI Act’s limitations in mitigating gender biases that might disproportionately impact marginalised groups, particularly women of colour and women with disabilities. Through case studies in recruitment and employment, healthcare, border management and risk assessment in domestic violence cases, the paper highlights how AI applications can reinforce gender disparities.
A detailed examination of specific provisions within the AI Act reveals critical gaps in addressing systemic discrimination and bias in AI governance. To promote a more equitable AI landscape, the paper recommends intersectional, feminist-informed revisions that prioritise interdisciplinarity, collective rather than individual approaches, and strong oversight mechanisms grounded in human rights values, including feminist values.
The proposed recommendations focus on strengthening the AI Act’s framework to better safeguard marginalised groups and to ensure a regulatory approach that reflects the diverse experiences of all individuals.
Anastasia Karagianni is a FARI Scholar and Doctoral Researcher at the LSTS research group of the Faculty of Law and Criminology at VUB. Her academic background is in international and European human rights law; she holds an LL.M. from the Department of International Studies at the Aristotle University of Thessaloniki. During her master’s degree, she spent a year as an exchange student at the Faculty of International Law at KU Leuven. She has also been a visiting researcher with the iCourts research team at the University of Copenhagen. Her academic research focuses on the “Divergencies of gender discrimination in AI”. Besides her academic interests, Anastasia is a digital rights activist, having co-founded DATAWO, a Greece-based civil society organisation advocating for gender equality in the digital era. She was a MozFest Ambassador for 2023 and a Mozilla Awardee for the project “A Feminist Dictionary in AI” of the Trustworthy Artificial Intelligence working group.
As artificial intelligence (AI) systems increasingly influence critical sectors such as healthcare, employment, education and law enforcement, concerns around bias – especially gender bias – have come to the forefront.
Gender bias in AI not only reflects but can escalate existing inequalities, raising significant ethical, legal and societal issues. This policy paper examines the impact of gender bias in AI systems and presents comprehensive guidelines for addressing it through a socio-technical lens.
By focusing on different stages of the AI lifecycle, the paper provides actionable recommendations for various stakeholders, including developers, deployers, users and regulators. Therefore, the aims of this document are to:
→ raise awareness about the escalation of gender bias when AI systems are used in decision-making;
→ outline the challenges and opportunities of incorporating a socio-technical approach to tackle gender bias in AI systems;
→ provide a set of recommendations to key stakeholders from a socio-technical perspective on how to identify and prevent the reproduction of gender bias.
Monique Munarini is a trained lawyer with a double master’s degree in International Relations from the University of Padova (IT) and in Law, Economics and Management from the University of Grenoble-Alpes (FR). She is currently a PhD candidate in the Italian National PhD in AI at the University of Pisa (IT), where her research focuses on developing equitable AI audits from a feminist perspective. She also works as a policy analyst in AI governance for international organisations and global civil society organisations.
Gender bias in AI systems poses significant challenges to achieving equality. AI, dependent on data for training and decision-making, often perpetuates gendered inequalities due to biased datasets that reinforce stereotypes and exclude diverse experiences. This impacts women’s access to healthcare, employment and education, compounding structural inequities.
Addressing these biases requires integrating gender-sensitive data throughout AI development to ensure fairness and equity. Barriers include technical opacity, binary gender norms and patriarchal structures, particularly in the Global South.
Solutions involve algorithmic transparency, inclusive data frameworks, cross-sectoral collaboration and feminist ethics. Embedding gender data in AI is essential to empower marginalised groups and promote social justice.
Payal Arora is a Professor of Inclusive AI Cultures at Utrecht University and co-founder of FemLab, a Feminist Futures of Work initiative, and the Inclusive AI Lab, a Global South-centred data-debiasing initiative. She is a digital anthropologist with two decades of user-experience research in the Global South. She is the author of several award-winning books, including The Next Billion Users (Harvard University Press) and From Pessimism to Promise: Lessons from the Global South on Designing Inclusive Tech (MIT Press). Forbes called her the “next billion champion” and the “right kind of person to reform tech”.
Weijie Huang is a PhD candidate at the Faculty of Humanities at Utrecht University and the Feminist AI cluster lead at the Inclusive AI Lab. Her research focuses on the intersection of media, AI and gender studies in China, with a special interest in how AI and digital platforms shape female players’ experiences and belongingness. Weijie holds a master’s degree in Media and Creative Industries from Erasmus University Rotterdam. Prior to her PhD, she worked for a decade as a journalist for Chinese National Geography and Condé Nast China.
This paper explores Feminist Artificial Intelligence (FAI), a framework leveraging intersectional feminism to address biases and inequities in AI systems.
FAI emphasises interdisciplinary collaboration, systemic power analysis and iterative theory-practice loops. By embedding feminist values – equity, freedom and justice – FAI seeks to transform AI development, ensuring inclusivity and social sustainability.
Practical applications include initiatives such as FemAI’s advocacy for feminist perspectives in the EU AI Act and the MIRA diagnostic platform, which aligns AI tools with social justice goals.
FAI marks a critical departure from traditional AI by tackling structural inequalities and promoting accountable and equitable AI solutions.
FemAI is a responsible AI start-up certifying AI systems designed to benefit humanity. Established in 2022 in Berlin, Germany, we started as a research-focused think tank, advising over 25 organisations on their ethical AI policies. Building on these insights, we developed feminist frameworks to evaluate AI tools, companies, courses and workshops – focusing on the people most often overlooked in the AI age. At our core, FemAI uses intersectional feminism as a research method to illuminate the black box of AI, ensuring accuracy and equity in the technologies shaping our future. Co-creation and cross-sector collaboration are at our heart, allowing us to close the gaps between policymaking, businesses, research and civil society.
Alexandra Wudel is a tech entrepreneur and researcher focused on shaping an inclusive future for AI. As the CEO of FemAI, she was named “AI Person of the Year 2024”, recognised as one of 25 global experts on ethical considerations for AI, and awarded the title “35 under 35 Young Leaders”. Alexandra has contributed to the development of ethical AI guidelines for the German Federal Foreign Ministry, the UN and the German Parliament. Working with AI tool providers, she supports private organisations in integrating societal and environmental responsibility into AI projects and strategies. Together with her team, Alexandra develops FAI frameworks to bridge the gap between principles and practice.
Anna Ehrenberg works as a research assistant at FemAI, actively shaping the organisation’s understanding of FAI. With her background in data science and process optimisation, she has co-designed an FAI approach intended for practical implementation in AI tools from the outset.
Dr. Inga Sabanova
Policy Officer
Friedrich-Ebert-Stiftung Future of Work
Cours Saint Michel 30e, 1040 Brussels, Belgium
@FES_FoW
An FES blog that offers original insights on the ways new technologies impact the world of work, bringing together views from tech practitioners, academic researchers, trade union representatives and policymakers.
Synthetic Data: A quick cure-all? by Marianna Capasso and Payal Arora
Mind the gender AI gap: The fight for fairness futures by Weijie Huang and Payal Arora
Deepfakes, Real Harm: Building a Women’s Safety-Centered GenAI by Payal Arora, Kiran Vinod Bhatia and Marta Zarzycka
The Meme-ification of Political Issues: Moving beyond the pros and cons of AI-enabled virality for social justice by Lucie Chateau
Ubuntu and AI: Africa’s Bold Vision for an Ethically Inclusive Tech Future by Wakanyi Hoffman
In this summer school, we aimed to explore the everyday experiences of female workers in work environments that are changing due to new technologies and new forms of work, looking at the interaction between technological change, labour markets, institutions and gender from an intersectional and critical perspective.