
Artificial Intelligence and Gender Equality


The project on Artificial Intelligence and Gender Equality addresses critical yet often overlooked issues, from algorithmic fairness and gender bias to digital literacy and labour inequalities in the changing world of work.

While technical artifacts are often portrayed as neutral, technology is never value-free. Innovations usually evolve through gradual modifications and combinations of existing technologies, shaped by both their creators and the social context in which they emerge.

AI systems, which rely on data and algorithms to detect patterns, make decisions, and learn, are not immune to bias. Gender biases can appear in training data—reflecting historical discrimination—or arise during algorithm development due to design choices or the learning process itself. Addressing these challenges requires a multi-layered approach that spans technical design, regulatory frameworks, and labour market interventions. 

by Katharina Mosene 

Artificial intelligence is considered to be the technology of the future. However, it is frequently based on existing patterns of exclusion and discriminatory logics, with far-reaching consequences for justice, participation and equal opportunities.

What does artificial intelligence have to do with discrimination? 

Whether in the screening of job applications, allocation of social benefits, medical diagnostics or criminal prosecution, AI-supported systems are shaping more and more areas of society and promise neutrality, efficiency and better decision-making. However, these “automated decision-making systems”, or ADM systems, are by no means objective. Current studies and practical examples show that ADM systems often reproduce the very exclusions and inequalities that they claim to address. They are currently perpetuating structural discrimination, particularly affecting FLINTA*, BIPoC, queer people and people with disabilities. This “orientation towards the past” of the systems as socio-technical actors must be critically examined.

Definition: What are algorithmic decision-making systems? 

Algorithmic decision-making systems, or ADM systems, are digital systems that automatically make decisions or provide recommendations on the basis of predefined computational rules known as algorithms. They analyse data, follow defined decision-making logics and thus arrive at a result, often with no direct human involvement. In many instances, methods of artificial intelligence (AI) are also used, for example when the system recognises patterns from existing data and adapts its decision-making rules to new cases. In technical terms, this is referred to as machine learning, whereby an algorithm uses large volumes of data to create mathematical models in order to perform similar tasks more effectively in the future.

The algorithm is the basic technical framework – a clear step-by-step set of instructions for data processing. AI extends this by adding the capability to “generalise” statistically from sample data; that is, to improve rules autonomously. Ultimately, the ADM system is the application of these technologies within a concrete decision-making process, for instance in credit allocation, automated selection of applicants or medical diagnostics. The use of such systems raises important questions in respect of aspects such as the traceability, fairness and accountability of the decisions made.
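
To make the distinction concrete, here is a minimal sketch in Python (with invented numbers and a hypothetical credit scenario, not a real system): the first function is a plain algorithm, a fixed step-by-step rule, while the second part lets a model derive its decision rule from historical outcomes, which is the machine-learning step described above.

```python
# Minimal sketch (invented data): a fixed decision rule vs. a rule
# learned from historical outcomes, as described above.
from sklearn.linear_model import LogisticRegression

# 1) A plain algorithm: an explicit, hand-written step-by-step rule.
def rule_based_decision(income: float, existing_debt: float) -> bool:
    """Approve if income exceeds debt by a fixed margin."""
    return income - existing_debt > 20_000

# 2) Machine learning inside an ADM system: the decision rule is
#    generalised from past cases instead of being written by hand.
#    Features: [income, existing_debt]; label 1 = loan was repaid.
past_cases = [[55_000, 10_000], [30_000, 25_000], [80_000, 5_000], [22_000, 18_000]]
past_outcomes = [1, 0, 1, 0]
model = LogisticRegression().fit(past_cases, past_outcomes)

applicant = [40_000, 12_000]
print(rule_based_decision(*applicant))     # fixed rule
print(model.predict([applicant])[0])       # learned rule
```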

Why ADM systems reproduce discrimination 

AI-based decision-making systems analyse data and make (partially) automated decisions or recommendations: who is suitable for a job, how high a credit limit should be or whether a person should receive state support, for example. In theory, such systems are supposed to minimise discrimination because they operate “free of human biases”. In practice, however, they often lead to new exclusions. Why is that?

Algorithmic decisions are based on historical data, and such data are anything but neutral. They originate from a world in which marginalised groups such as BIPoC, women (including individuals perceived as female)*, FLINTA/LGBTQIA+ or people with disabilities have often been disadvantaged. This means that anyone who has been treated less favourably in the past will also be deemed less suitable in the future, according to machine logic. They reflect social power relations, and rather than opening up equitable prospects for the future, many ADM systems consolidate the past, and with it racist, sexist and classist patterns of exclusion.

Decision-making systems within companies: automated decisions, real exclusions 

Individuals who are assessed today, within or by companies, on the basis of old patterns face poorer prospects tomorrow. The machine “learns” to exclude certain groups of people, which reinforces economic inequalities and promotes a power imbalance in favour of already privileged groups (cf. Rudl 2021). The following examples show that the data used to train AI systems play a decisive role in perpetuating discrimination in automated decision-making processes.

Example 1: Labour market opportunity model

In Austria, the labour market opportunities model developed for the public employment service (AMS) was intended to divide jobseekers into three categories, depending on their “chances of integration” in the labour market. The classification then determined which support measures they would receive. Analyses quickly revealed that the system systematically categorised older applicants, women with caring responsibilities and people whose first language was not German as being less able to integrate, and so discriminated against them.

Example 2: Allegheny Family Screening Tool

The Allegheny Family Screening Tool in Pennsylvania is an ADM system for assessing child welfare risk. Parents who are dependent on state aid generate far more data relating to health, housing or addictive behaviour, for example. More affluent parents who use private services, on the other hand, hardly appear in the datasets. As a result, poor families are more likely to be classified as at risk; not because they are objectively more at risk, but because they are more visible in the data. In other words, the algorithm confuses parenthood in poverty with poor parenting. This example shows how discrimination arises through overrepresentation in the data.
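
A minimal sketch (invented records and an invented scoring rule, not the actual Allegheny tool) of the mechanism at work: if the risk score is in effect a count of available administrative records, families who rely on public services accumulate more data points and therefore higher scores, regardless of actual risk.

```python
# Minimal sketch (invented records and scoring rule, not the real tool):
# a risk score that effectively counts administrative records penalises
# families who are simply more visible in public-sector data.
def naive_risk_score(records: dict[str, int]) -> int:
    """Sum of administrative records linked to a family."""
    return sum(records.values())

# Family A relies on public services, so every interaction leaves a record.
family_a = {"public_health_visits": 6, "housing_assistance": 3, "benefit_claims": 4}
# Family B uses private providers, which never feed into the dataset.
family_b = {"public_health_visits": 0, "housing_assistance": 0, "benefit_claims": 0}

print(naive_risk_score(family_a))   # 13 -> flagged for follow-up
print(naive_risk_score(family_b))   # 0  -> invisible to the system
```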

Example 3: Amazon’s recruitment algorithm

The Amazon recruitment algorithm was designed to sort applications automatically. It learned from historical data that male applicants in the tech sector had more frequently been successful and rated female first names and even terms such as Women’s College less favourably. This reinforced existing inequalities in the labour market and further disadvantaged marginalised groups (Peña/Varon 2021, Criado-Perez 2019; D’Ignazio/Klein 2020; Eubanks 2018; Buolamwini/Gebru 2018). Amazon’s AI-assisted recruitment algorithm is a particularly extreme example of discrimination through underrepresentation in the data. Certain groups – in this case women – are hardly represented in training data because they previously had limited access to certain occupations, for example, and so are systematically rated less favourably.
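
The mechanism can be illustrated with a deliberately skewed toy example (invented CVs and outcomes, not Amazon's data): a simple text classifier trained on a male-dominated hiring history ends up assigning a negative weight to gendered terms, even though gender was never an explicit input.

```python
# Toy example (invented CVs and outcomes): a classifier trained on a
# skewed hiring history learns to penalise gendered terms on its own.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past applications and the historical decision (1 = hired, 0 = rejected).
cvs = [
    "software engineer chess club captain",
    "software engineer robotics team lead",
    "software engineer womens chess club captain",
    "software engineer womens college graduate",
]
hired = [1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned weight for the token "womens" is negative: the historical
# exclusion has been encoded as a decision rule.
weights = dict(zip(vectoriser.get_feature_names_out(), model.coef_[0]))
print(round(weights["womens"], 2))
```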

Example 4: Biases in credit allocation

Such biases are also evident in credit allocation. If historical data show that white cisgender men were more frequently granted loans and were able to repay them, this logic becomes entrenched in the system. Those who have been impacted by structural disadvantages, such as poorer pay or discrimination in the labour market, will also be regarded as higher risk in the future. The machine learns exclusion according to the principle of discrimination through training. This reinforces economic inequalities and promotes a power imbalance in favour of already privileged groups (cf. Rudl 2021).

Additionally, there is a lack of transparency in the use of the systems: the consequences of discrimination usually go unnoticed if the way in which automated decisions are made is not apparent. This is why the people affected are often unable to challenge such decisions.

“[W]hile pilots of anti-poverty AI systems start to be tested everywhere, why is there no pilot AI system predestining young rich white politicians to corruption? Why is the fiscal secrecy of the rich so ensured while every single data of the poor […] is easily consented for data processing by both governments and companies?” (Varon/Peña 2021:18) 

Predictive policing: AI in sensitive areas 

It becomes even more problematic when ADM systems are deployed in the fields of security and fundamental rights. In predictive policing, past offences are used by AI systems to calculate future dangers. However, this is where racist biases are particularly hazardous. The problem is that these systems only pretend to predict the future. In reality, they project the past into the future, often with dire consequences for people who are already disadvantaged.

The data on which the AI systems are based are not neutral. They reflect the power relations of those who collect, categorise and evaluate the data. Feminist data practices begin here. They demand:

  • participatory data collection
  • context-related categories
  • independent audits
  • inclusion of marginalised groups

“How did we get to the point where data science is used almost exclusively in the service of profit (for a few), surveillance (of the minoritized), and efficiency? […] Data are expensive and resource-intensive, so only already powerful institutions—corporations, governments, and elite research universities—have the means to work with them at scale. These resource requirements result in data science that serves the primary goals of the institutions themselves. We can think of these goals as the three Ss: science (universities), surveillance (governments), and selling (corporations). […] If science, surveillance, and selling are the main goals that data are serving, because that’s who has the money, then what other goals and purposes are going underserved?” (D’Ignazio/Klein 2020: 41–42)

Algorithmic vicious circle in predictive policing 

Studies show that in districts with high levels of surveillance – mostly inhabited by marginalised groups – police action becomes more frequent because predictive policing predicts high levels of crime there. The resulting increased police presence leads to more recorded offences. In turn, these data flow back into the system and reinforce the monitoring of precisely these groups. The prediction is confirmed. An algorithmic vicious circle is created.
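
A deliberately simplified simulation (all numbers invented) makes the loop visible: both districts have the same underlying crime rate, but patrols are allocated in proportion to previously recorded offences, so the district that starts out over-policed keeps generating more records, and the initial disparity never corrects itself.

```python
# Deliberately simplified simulation (all numbers invented) of the
# predictive-policing feedback loop described above.
true_crime_rate = {"district_a": 0.05, "district_b": 0.05}   # identical in reality
recorded = {"district_a": 60.0, "district_b": 40.0}          # A starts over-policed
total_patrols = 100

for year in range(1, 6):
    total_recorded = sum(recorded.values())
    for district in recorded:
        # Patrols follow the prediction, i.e. previously recorded offences ...
        patrols = total_patrols * recorded[district] / total_recorded
        # ... and more patrols mean more offences get recorded, even though
        # the underlying crime rate is the same everywhere.
        recorded[district] += patrols * true_crime_rate[district] * 20
    print(year, {d: round(v) for d, v in recorded.items()})
# The gap in recorded offences widens every year and "confirms" the
# prediction that produced the extra patrols in the first place.
```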

Options for action and resistance against algorithmic inequality 

The European Union has taken a significant step towards AI regulation with the AI Act 2024 following lengthy negotiations. Although bias is cited as a problem, no binding requirements for diversity, gender equality or anti-discrimination are formulated (cf. Vieth-Ditlmann/Sombetzki 2024; Mosene/Rachinger 2025). Article 10 of the AI Act also requires “sufficiently representative” data but remains vague as to what this actually means. This Act also falls short from an intersectional feminist perspective. While it emphasises transparency, explainability and technical certifiability, key questions of power distribution, representation and structural inequality remain unaddressed.

Beyond this, the following action can be taken:

  • Feminist data scientists such as Catherine D’Ignazio and Lauren Klein are instead calling for radical redistribution of data power that makes systemic discrimination visible, focuses on marginalised perspectives and strives for epistemic justice. Therefore, an equality-by-design approach must be more than a certification framework; it must be profoundly political, reveal power relations and actively challenge them. “Shifting the frame from concepts that secure power, like fairness and accountability, to those that challenge power, like equity and co-liberation, can help to ensure that data scientists, designers, and researchers take oppression and inequality as their grounding assumption for creating computational products and systems.” (D’Ignazio/Klein 2020: 72)
  • The General Equal Treatment Act (AGG) must also cover algorithmic discrimination in future, with provisions for collective redress and protection against what are known as proxy variables.
  • Binding criteria for fair training data are also needed. These must be analysed in advance for biases, as historical inequalities must not be adopted without reflection.
  • There is a need for transparency obligations, and for AI applications to be assessed with due regard to discrimination. Users need to know where ADM systems are used and how they work. Moreover, decisions must be open to challenge even if they were made algorithmically.
  • Companies and developers also bear responsibility: through more diverse teams, audits, participatory data collection and clear mechanisms for intervention.
  • And users? They cannot stop ADM on their own, but they can help to bring about change through education, awareness-raising and political advocacy. Civil society organisations such as AlgorithmWatch are doing important work in this regard.

The justice of the future begins with critical reflection on its technical present.

Think of AI differently: For a fair digital future 

Artificial intelligence is not a neutral technology, after all. It is a socio-technical system based on social norms, values and power relations. How it makes decisions is largely dependent on how it is trained, regulated and used. If we want AI not to repeat the inequalities of the past, but to help shape a fairer future, we must act now.

In practical terms, this means:

  • Making power relations visible
  • Questioning data practices
  • Changing narratives

Artificial intelligence is a mirror of our society, but it is also a tool that can be used to change it.

 

Sources and further information:

Buolamwini, Joy/Gebru, Timnit (2018): Gender Shades. Online: https://www.media.mit.edu/projects/gender-shades/overview/ [20 March 2024]

Criado Perez, Caroline (2019): Invisible Women: Data Bias in a World Designed for Men. New York, NY: Abrams Press.

D’Ignazio, Catherine/Klein, Lauren F. (2020): Data Feminism. MIT Press.

Eubanks, Virginia (2018): Automating Inequality – How High-Tech Tools Profile, Police and Punish the Poor. St. Martin’s Press.

Mosene, Katharina; Rachinger, Felicitas (2025): KI zwischen technologischem Fortschritt und gesellschaftlicher Verantwortung, from: blog interdisziplinäre geschlechterforschung, 22 April 2025, www.gender-blog.de/beitrag/ki-fortschritt-und-verantwortung/, DOI: https://doi.org/10.17185/gender/20250422 [22 May 2025]

Rudl, Thomas (2021): Wenn Algorithmen den Hauskredit verweigern. netzpolitik.org https://netzpolitik.org/2021/datenrassismus-wenn-algorithmen-den-hauskredit-verweigern/ [22 May 2025]

Varon, Joana/Peña, Paz (2021): Artificial intelligence and consent: a feminist anti-colonial critique, from: Internet Policy Review. 10 (4). https://doi.org/10.14763/2021.4.1602

Vieth-Ditlmann, Kilian/Sombetzki, Pia (2024): EU-Parlament stimmt über KI-Verordnung ab. Mitgliedstaaten müssen nachbessern. https://algorithmwatch.org/de/ki-verordnung-eu-parlament-stimmt-ab/ [28 May 2025]

 

 

by Katharina Mosene

AI shapes our day-to-day lives – from chatbots to image generators. However, it often reproduces racist, sexist and classist stereotypes as a result of biased training data.

Reproduced exclusion: how generative AI thinks 

Whether in chatbots, text generators or image generation: generative artificial intelligence (AI) is on the rise, and with it a new chapter of automated stereotyping. It has long been part of our day-to-day digital lives, from AI-generated application or advertising images to automated social media captions, video subtitles and deepfake pornography. Machines write stories, generate images and often make the invisible visible: structural discrimination. This is because large language models such as ChatGPT or image generators such as Midjourney are trained on the basis of biased data: hence, they often reproduce racist, sexist, ableist and classist narratives.

AI systems suggest objectivity, but they are frequently based on selective data and discriminatory patterns. Generative AI often reproduces standardised notions of gender, beauty, bodies and behaviour in its outputs.

The second part of this series of blogs explains:

  • how generative AI – that is, systems that write texts or generate images – reinforces discriminatory narratives (stereotypical notions of gender, body, origin or social roles),
  • what this means for our knowledge and how we see the world, and
  • how we can defend ourselves against such biases.

Definition: What is generative AI? 

Generative AI (generative artificial intelligence) refers to a form of AI that is able to generate new content instead of merely analysing or classifying existing data. This may include texts, images, videos, music or even program code. In other words, generative AI autonomously produces new data on the basis of patterns that it has learned from large volumes of data.

Why generative AI models reproduce discrimination 

The results of generative AI are based on large training datasets, including texts, images and videos taken from the internet and other data sources. Models such as ChatGPT, Midjourney or Stable Diffusion learn patterns, language and image compositions from these data. But who or what is in these datasets? Which narratives, which patterns are reproduced?

A large number of studies show that the underlying data are anything but neutral. They reflect social power relations, prejudices and exclusions. And because AI learns from these aspects, it also reproduces them – sometimes in subtle ways, sometimes more obviously: “The first risks we consider are the risks that follow from the LMs absorbing the hegemonic worldview from their training data.” (Bender et al. 2021)

Example 1: Reinforcement of gender stereotypes by image-generating AI 

If users enter prompts such as CEO, politician or scientist into image generators, results depict white, norm-conforming men in affluent contexts with disproportionate frequency. Conversely, images generated from contexts such as care work, assistance or domestic work (nurse, secretary, cleaning lady) often depict hyperfeminised female figures.

These images are not coincidences: they are the result of a data-driven world that cements patriarchal and racist norms. Scientific analyses clearly show that GPT models systematically reinforce gender stereotypes, for example by assigning certain professions or adjectives to male or female names. “Multimodal datasets used in generative models reproduce a narrow and stereotypical worldview that privileges white, Western ideals and the hypersexualisation of women.” (Birhane et al. 2023) Moreover, individual systems such as Stable Diffusion distort reality to the detriment of already marginalised groups.

Example 2: Discrimination in AI systems such as Stable Diffusion

Women in the US are underrepresented in well-paid occupations, but data show that gender representation has improved significantly in most industries over time. However, Stable Diffusion presents a different scenario in which women rarely hold lucrative jobs or positions of power. Women made up a tiny fraction of the images generated for the prompt “judge” (about 3%), whereas in reality 34% of judges in the US are female. In the results presented by Stable Diffusion, women were not only underrepresented in well-paid occupations, but also overrepresented in low-paid occupations:

“The world according to Stable Diffusion is run by White male CEOs. Women are rarely doctors, lawyers or judges. Men with dark skin commit crimes, while women with dark skin flip burgers.” (Nicoletti et al. 2023)
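
Audits of this kind essentially come down to counting: annotate the generated images for a prompt and compare the share of women depicted with real-world statistics. Here is a minimal sketch with invented labels (not the data from the study quoted above):

```python
# Minimal sketch (invented labels, not the quoted study's data): measuring
# the gap between generated images and real-world statistics for one prompt.
def representation_gap(image_labels: list[str], real_world_share: float) -> float:
    """Share of images depicting women minus the real-world share of women."""
    generated_share = image_labels.count("woman") / len(image_labels)
    return generated_share - real_world_share

# Hypothetical annotations of 100 images generated for the prompt "judge".
labels = ["woman"] * 3 + ["man"] * 97
gap = representation_gap(labels, real_world_share=0.34)
print(f"{gap:+.2f}")   # -0.31 -> women underrepresented by 31 percentage points
```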

These biases have serious consequences. This is because AI images and texts shape perceptions of what people are like, what they should be and who is allowed to belong. They perpetuate visual norms: slim, bright, smiling faces; successful men, beautiful women.

This becomes particularly dangerous when this imagery overlaps with sexualised violence. Generative AI is increasingly being used to create what are known as deepfakes: fake images and videos that show people naked or in sexual situations without their consent.

Deepfake pornography: systematic digital violence 

What may appear harmless – a holiday photo or a selfie – can become a template for deceptively realistic deepfake nude images.

According to research by netzpolitik.org, tens of thousands of such images are created every day with the help of freely accessible AI tools. Research by 404 Media and Vice shows that providers of such AI image and video generators offer few safeguards against misuse, enabling users to generate sexualised deepfake videos with just a few clicks, sometimes on the basis of a single image such as a profile picture on social media. Those affected are almost exclusively women and girls. Studies confirm that more than 95% of all deepfakes are of a sexualised nature – and almost 100% involve bodies perceived as female.

Definition: What are deepfakes? 

Deepfakes are realistic-looking media content (images, videos, audio) that have been altered, generated or falsified/manipulated using artificial intelligence techniques. (based on Wikipedia)

From Taylor Swift to you: who is being targeted 

What began with celebrities – actresses, politicians and influencers – now mainly affects women from the perpetrators’ immediate environment: classmates, colleagues, neighbours.

The technology is the cause: many image-based AI systems have been trained using millions of pornographic images of women. The systems have learned to depict female bodies in a sexualised manner, not as a malfunction, but as a feature. Above all, however, this development is based on patriarchal social patterns and logics of violence that disproportionately affect women (and queer people).

Discrimination in the data: systematic exploitation of underrepresented, marginalised groups 

What all these phenomena have in common is that they do not affect everyone in the same way [in German]. Above all, generative AI reinforces discrimination against groups that are already underrepresented or marginalised: black women, people from groups with fewer resources, people with disabilities, and queer communities. They appear in either distorted or sexualised forms, or not at all.

There is also the factor of moderation, because the results of generative AI systems are still corrected by humans. ChatGPT, the tool that opened up a new era in the AI debate, is no exception: a dialogue-oriented chatbot that uses modern machine-learning technology to communicate with users via text-based messages and to generate texts on request. The developers went to great lengths to create a system that does not reproduce sexism or racism – at least on the surface. However, this came at a high price, paid by low-wage workers in the Global South who had to sift manually through discriminatory and violent content and flag it [in German]. Put simply: without any training or support, these workers spent entire days looking at violent, racist and sexist output produced by the system and flagging it so that the system would no longer reproduce it.

When machines lie: hallucinations and political disinformation 

It is becoming increasingly common for AI models to hallucinate and spread political disinformation.

Definition: Hallucination in generative AI 

Hallucinations in generative AI are plausible-sounding yet erroneous and fabricated statements generated by AI systems. In other words, the AI generates content that does not correspond to reality. For example, generative AI can invent quotes (In 2021, Angela Merkel said: “…”, even though she never said this), provide false figures or statistics or quote non-existent laws.

In political contexts in particular, hallucinated content can look like genuine information and spread quickly via social media. Moreover, people can use AI in a coordinated fashion to manipulate content and spread disinformation. Generative AI thus amplifies targeted disinformation.

Example 1: coordinated manipulation campaigns in the run-up to the 2025 German federal election

In the run-up to the 2025 federal election, the latest FIMI report by the Institute for Strategic Dialogue (ISD) clearly shows how generative AI has become part of coordinated manipulation campaigns. Automated tools were used to generate masses of fake content, such as manipulated quotes from politicians, distorted statements on migration or staged scandals surrounding the war in Ukraine. This content was distributed via bot networks and fake social media accounts and gave the impression of widely shared opinions. Emotional AI-generated narratives deliberately created mistrust of democratic processes. Particularly insidious is the way in which the boundaries between foreign influence and domestic political instrumentalisation became blurred. Right-wing populist parties in Germany also seized on this disinformation to push their own agendas.

Example 2: The Grok chatbot 

The example of the Grok chatbot clearly illustrates the core problems of generative AI with regard to disinformation: hallucinations become entangled with ideology. An AI system that openly draws on toxic online posts is regularly and disproportionately exposed to hateful content; in a worst-case scenario, it can produce explicitly misanthropic statements with no clear distinction between fact and ideology. Without public monitoring or ethical control, such systems run a constant risk of harming democracy: technical creative power becomes political influence.

What can be done about discrimination in AI? Routes to more equitable technical design 

There is no simple update to counter these developments. There is room for manoeuvre, however:

  • Users must be trained in critical thinking. In the context of media skills acquisition, there needs to be significant emphasis on a reflective, critical attitude when engaging with generative AI outputs (in the sense of source criticism).
  • Training data for generative AI systems must become more diverse, curated and verifiable. The datasets used to train generative AI systems should reflect diverse perspectives, languages, cultures, lived experiences and forms of knowledge, rather than being dominated by Western, male, white, English-speaking and academic aspects. This will be achieved by including multilingual sources and non-dominant forms of knowledge (such as indigenous, queer and feminist archives). Rather than using information from the Internet in an uncontrolled manner, the data should be selected and compiled in a targeted fashion according to ethical, qualitative criteria.
  • There must be transparency regarding the databases. The origin, quality and composition of training data should also be transparently documented and traceable so that scientists, regulatory authorities and the general public are able to verify the origins of AI system knowledge.
  • There should be disclosure obligations for companies that develop generative AI, particularly for public or government use.
  • External audits are needed; that is to say, independent reviews of AI systems that are not carried out by developers or operators themselves, but by external, neutral third parties (research institutions, data protection authorities, supervisory authorities or specialised auditing bodies).
  • Labelling of synthetic content should be mandatory because users often cannot tell whether a text, image or video was created by a human or AI.

Initial examples of regulations and support 

The EU’s AI Act provides for initial transparency obligations, while the new EU Directive on combating violence against women criminalises sexualised deepfakes.

However, more is needed: specific criminal offences, faster deletion mechanisms, sanctions for providers and platforms.

Tools such as Glaze and Nightshade help users to protect their images from unauthorised AI use. Organisations and platforms such as HateAid, bff e.V. and anna nackt offer advice and support to anyone affected by sexualised image-based violence.

At the same time, there is a need for education and awareness of the difference between technical feasibility and ethical justifiability, and of the dangers of generative AI; because this is not a neutral atlas, nor an objective encyclopaedia. It is a reflection of our society, which is shaped by exclusions and ideologies.

Shaping AI: As a tool for diversity, democracy and justice 

When machines tell stories, a key question arises: who determines the narratives? Generative AI is not merely a technical tool; it is a cultural instrument that shapes our images of the world. Failing to question this technology perpetuates discrimination. If we shape it, it can become a tool for diversity, democracy and justice:

  • A case in point is that generative AI transcribes spoken language in real time and automatically subtitles or even translates it simultaneously. It offers access to knowledge and forms of expression and can thus support people with little formal education or few language skills.
  • In a way, it democratises creativity and forms of expression by helping people who would otherwise have no way of using professional tools to express themselves, or no access to such tools.
  • Generative AI could even make marginalised perspectives visible if diverse, carefully curated datasets are used for training, giving space to underrepresented voices or reconstructing historically erased narratives.

Thus, generative AI also holds a great deal of emancipatory potential. Or, to adapt the words of Audre Lorde: the master’s tools will never dismantle the master’s house – but we can build our own. Let’s harness this potential.

 

Sources and further information: 

Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of FAccT 2021. https://dl.acm.org/doi/10.1145/3442188.3445922

Birhane, A., Prabhu, V., & Kahembwe, E. (2023). Multimodal Datasets: misogyny, pornography, and malignant stereotypes. https://arxiv.org/abs/2110.01963 

Kira, Beatriz (2024): Deepfakes, the Weaponisation of AI Against Women and Possible Solutions https://verfassungsblog.de/deepfakes-ncid-ai-regulation/

Mosene, Katharina (2025): Unfreiwillig nackt: Wie Deepfake Porn sexualisierte Gewalt gegen Frauen verschärft. HIIG Digital Society Blog. DOI 10.5281/zenodo.15602099 / https://www.hiig.de/unfreiwillig-nackt-deepfake-porn/ 

Sandoval-Martín, Teresa & Martínez-Sanzo, Ester. (2024). Perpetuation of Gender Bias in Visual Representation of Professions in the Generative AI Tools DALL·E and Bing Image Creator. Social Sciences. 13. 250. 10.3390/socsci13050250

by Katharina Mosene 

AI promises to break down barriers, but automated decision-making and generative models can lead to digital violence, discrimination and political manipulation. This is why clear regulation is now essential.

Artificial intelligence, biases and the protection of fundamental rights: why clear regulation is overdue 

Artificial intelligence is changing our society, often reinforcing the very inequalities that it is supposed to overcome. New forms of digital violence, discrimination, reinforcement of social prejudices, socio-economic inequalities and political manipulation are emerging in the context of automated decision-making and generative AI that produces texts, images and videos. This requires clear regulation and legal provisions.

In the third and final part of this series of blogs:

  • … we take a critical look at current attempts to regulate AI, in particular the EU’s AI Act and the Digital Services Act (DSA). 
  • … we ask what the benefits of these regulatory frameworks are in the battle against gender-specific, racist and classist discrimination in AI, and where to find the protective mechanisms for marginalised groups. 
  • … we show how legal loopholes could be closed.

Artificial intelligence (AI) is no longer an abstract topic for the future. It is already structuring our day-to-day lives, influencing our decisions and impacting social power relations. However, studies and case analyses show that AI systems can not only reproduce existing discrimination but also reinforce it. The underlying training data are often not neutral, reflecting instead social prejudices, unequal representation and colonial data structures. See Part 1 of this series for more information.

Both UNESCO and the EU highlighted the risks of using AI in (sensitive) areas of society a number of years ago. According to the European Commission, AI systems could jeopardise the values of the EU and impact fundamental rights such as freedom of expression, non-discrimination and the protection of personal data. “The use of AI can affect the values on which the EU is founded and lead to breaches of fundamental rights, including the rights to freedom of expression, freedom of assembly, human dignity, non-discrimination based on sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation, as applicable in certain domains, protection of personal data and private life, or the right to an effective judicial remedy and a fair trial, as well as consumer protection.” European Commission (2020) 

In 2021, UNESCO also emphasised that gender-based stereotypes and discrimination in AI systems had to be avoided and actively addressed. To this end, UNESCO launched its “Women for Ethical AI” project in 2022 to integrate gender equality into the AI agenda. The UN AI Advisory Board also calls for AI for the common good and highlights gender as an important cross-cutting issue.

With the Digital Services Act (DSA) and the AI Act, the EU is now attempting to define limits for the digital power of large platforms and AI systems. Both regulatory frameworks contain important approaches, but key questions remain unanswered from an intersectional feminist perspective.

 The Digital Services Act (DSA) and its impact 

The DSA came into force in February 2024 and requires Very Large Online Platforms, known as VLOPs (such as Meta, TikTok and X), to perform systematic risk assessments and mitigation measures regarding algorithmically mediated content, particularly with regard to newsfeed ranking and content moderation.

 In brief: what is the Digital Services Act? 

The Digital Services Act (DSA) is a European Union regulation that established harmonised European liability and safety regulations for digital platforms, services and products. Its aim is to prevent the distribution of illegal content and provide better protection for users.

The DSA’s aims include identifying, analysing and mitigating systemic risks, as they are known, arising from the functioning of the platform; in particular those that affect fundamental rights, democratic processes or public safety, such as disinformation or hate speech.

To this end, the DSA requires transparent reporting in the context of risk assessment, but it does not specify how such risk assessment and reporting is to be carried out in concrete terms. The term systemic risk is also defined very broadly in the legal text and can therefore be interpreted differently by the platforms. That is why the initial reports from late 2024 remained superficial and vague. The platforms’ own risk assessments promise transparency but provide little useful information: “The initial risk reports suggest that everything is in order on the platforms. Anyone who is active on social media knows that this is not the whole truth. Essentially, these reports can help us assess platform risks, but they need to be much more informative and comprehensive.” (Clara Helming, AlgorithmWatch [in German])

The DSA also includes transparency obligations for recommendation systems and for advertising.

Example: proceedings against TikTok for breach of transparency obligations in online advertising 

In this context, proceedings are currently ongoing against TikTok. In May 2025, the European Commission reached a preliminary conclusion that TikTok’s advertisement repository violates the DSA: “Transparency in online advertising — who pays and how audiences are targeted — is essential to safeguarding the public interest. […] Whether we are defending the integrity of our democratic elections, protecting public health, or protecting consumers from scam ads, citizens have a right to know who is behind the messages they see. In our preliminary view, TikTok is not complying with the DSA in key areas of its advertisement repository, preventing the full inspection of the risks brought about by its advertising and targeting systems.” Henna Virkkunen, European Commission Executive Vice-President for Tech Sovereignty, Security and Democracy.

Furthermore, the DSA focuses on adequate mechanisms for reporting illegal content and appealing against moderation decisions. “Social media platforms make decisions every day on which content remains visible and which is removed, restricted or downranked. In extreme cases, user accounts are restricted or completely blocked. […] The impact on freedom of opinion, freedom of the press and academic freedom is serious: if there is no clarity on what criteria are used to delete content or restrict accounts, it is impossible to prevent systematic discrimination and the suppression of certain opinions in public discourse.” (Davy Wang, Society for Civil Rights). The organisation therefore assists those affected in filing complaints against platforms’ lack of transparency under the DSA.

In Germany, the Federal Network Agency will be responsible for further implementation going forward, as well as providing a complaint form for users who wish to submit complaints under the DSA. For the first time, therefore, the DSA establishes a comprehensive European legal framework for platform regulation and user rights in the digital space.

However, its effectiveness is reliant upon enforcement: national supervisory authorities and the European Commission must work closely together, yet remain independent at the same time. The first year shows that while the framework is in place, the work is just beginning.

Critique of the Digital Services Act (DSA) from an intersectional feminist perspective and initial approaches to solutions 

From an intersectional feminist perspective, the DSA does not adequately address structural power and discrimination in the digital space. It fails to consistently identify and address the epistemic violence embedded in algorithmic moderation systems, in platform logic itself, and in their economic architectures.

Algorithmic content moderation often places those affected by digital violence in a paradoxical situation: their strategies of resistance and self-empowerment are censored, while the violence they are endeavouring to counter remains online, often under the guise of “freedom of expression”.

  • Furthermore, the DSA does not provide a structural discussion of the economic logic by which platforms curate and disseminate content. Algorithms are optimised to maximise engagement, not to promote user well-being. Content that fuels outrage, polarisation or attention economies is given preference. In concrete terms, this means that anti-feminist, racist and anti-queer narratives are given more visibility than factual, evidence-based counter-speech. This logic increases the risk faced by people affected by digital violence and accelerates the displacement of structurally marginalised voices from digital discourse. What is missing here is an intersectional perspective: there is neither recognition nor a requirement for platforms to examine whether their systems systematically disadvantage or silence queer, migrant, disabled or otherwise marginalised people. 
  • The perspectives of those affected also remain unaddressed. While the DSA provides for the participation of civil society and research (through supervisory bodies, for example), it remains unclear how structurally marginalised groups are included, not only as those affected, but also as producers of knowledge. 

There is therefore an urgent need to introduce firmly embedded and systematically mandated bias analyses for moderation and recommendation systems that:

  • explicitly include intersectional discrimination, 
  • ensure the unconditional participation of marginalised groups in risk assessments and design decisions, and 
  • include comprehensive impact assessments for platform architectures that examine reach dynamics, affordances and monetisation in terms of their consequences for democracy, fundamental rights and human rights.

 

In brief: what is the AI Act? 

The European Union devised what is known as the AI Act in order to address manipulation risks and ensure the safety and transparency of AI systems. This regulation classifies AI systems according to risk levels, from minimal to unacceptable. This Act follows a risk-based approach to ensure that the use of AI-based systems has no negative impact on people’s safety, health and fundamental rights.

The specific legal requirements set out in the AI Act are dependent on the relevant potential risk posed by an AI system:

  • unacceptably high-risk systems are prohibited, 
  • high-risk systems are subject to certain rules, and
  • low-risk AI systems are not subject to any requirements. 

Example: unacceptably high-risk systems

Prohibited AI systems include, for example, emotion recognition in schools and social scoring – that is, the evaluation of individuals on the basis of their personal data such as credit behaviour, traffic violations or social engagement – in order to control their access to certain services or privileges. High-risk systems include automated credit scoring, systems for recruitment and employment management, and also systems used in the context of law enforcement.

Example: High-risk AI systems

High-risk systems are subject to strict requirements in respect of transparency, human oversight and information obligations. Systems that deceive or manipulate people using deepfakes, for example, are also defined as high-risk applications and must therefore comply with strict transparency requirements (such as labelling requirements for synthetic media).

This is a start, but it is not enough. The regulation falls short when it comes to gender-based violence in relation to sexualised image-based violence (“deepfake porn”): click here to view part 2 of this series for a more in-depth discussion. Generative AI models such as GPT-4, DALL-E or Midjourney are not explicitly classified as high-risk, although they are increasingly being used to produce problematic content.

This is another reason why further adjustments were made. As of August 2025, AI systems such as ChatGPT must comply with the AI Act and disclose technical documentation and bias mitigation measures. The European Commission is aiming to clarify the often vague provisions by means of new guidelines and a code of practice. According to the General-Purpose AI Code of Practice, there are three key obligations: 

  • Transparency: focusing on disclosure of model size, training data sources, energy consumption and measures addressing bias and errors
  • Copyright: no use of copyrighted content without consent, plus measures to prevent AI-generated copyright infringements, and
  • Safety and security measures: namely, protection against discrimination and negative effects 

These codes are not legally binding but provide interpretative guidance issued by the European Commission. Providers who do not sign up to the code must demonstrate by other means that they comply with the law. The code thus serves as a “model solution” for avoiding regulatory conflicts. While Microsoft and OpenAI are currently willing to adopt the code, Meta has declined to do so. 

Insufficient protection of fundamental rights under the AI Act 

Although the AI Act is intended to protect fundamental rights, the measures it provides are not sufficient to prevent genuine risks from AI systems. Particularly in sensitive areas such as surveillance or criminal prosecution, there is a risk of far-reaching infringements of the right to privacy, non-discrimination and freedom of expression and assembly.

A key problem: Law enforcement, security and migration authorities are largely exempt from the AI Act. They may invoke national security interests and thus even employ prohibited AI practices without having to observe the safeguards set out in the law.

The use of biometric technologies (such as facial recognition) is not restricted clearly enough, either:

  • Real-time surveillance is only partially prohibited, while retrospective identification is permitted even on the basis of mere suspicion. 
  • Biometric categorisations (by gender, for example) are still permitted, too, even though they may have discriminatory effects. 
  • While emotion recognition is prohibited in educational and workplace settings, it is permitted for the police and migration: a clear imbalance of power. 

In this way, the AI Act ignores structural power relations between users and authorities or companies. AI systems may thus continue to help control, monitor and exclude marginalised groups, particularly in migration and security contexts where transparency obligations are explicitly suspended. Furthermore, the question of how discriminatory logics are embedded in data, models and institutions remains largely unanswered.

Biases and the AI Act: the devil is in the detail 

The risks posed by “(unfair) bias” are acknowledged in many places in the legal text of the AI Act, but they are not addressed in particularly specific terms.

  • Gender, race and other aspects appear as categories of discrimination. A specific requirement to prevent biases in datasets and results appears only in Article 10, paragraphs 2(f), 2(g) and paragraph 3 with regard to what are known as high-risk systems. These provisions stipulate legally binding quality requirements for training datasets; they should be “sufficiently representative” with regard to the target group of an AI system and “to the best extent possible, free of errors and complete” in view of the intended purpose. 

→ However, these specifications offer scope for different interpretations: what does representative actually mean?

→ Additionally, “appropriate measures” should be taken to detect, prevent and mitigate possible biases. However, what constitutes these “appropriate measures” will only be revealed by the application of the law. A corresponding definition has yet to be established, given the widely differing systems.

  • A further problem: essentially, it is assumed that an individual is affected, as otherwise, by definition, there is no discrimination. However, this is frequently not the case when AI systems are used, for example when AI-generated content reflects social stereotypes embedded in training data.

It is therefore necessary to examine the problems in greater detail and to address and elaborate on the unresolved points in more concrete terms.
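
What a check for “sufficiently representative” data could look like in practice is itself one of those unresolved points. The sketch below (invented tolerance and reference shares, one of many possible readings) compares subgroup shares in a training set against a reference population and flags gaps, illustrating the kind of “appropriate measure” that Article 10 leaves undefined.

```python
# Sketch (invented tolerance and reference shares): one possible reading of
# "sufficiently representative" - compare subgroup shares in the training
# data against a reference population and flag gaps.
from collections import Counter

def representativeness_report(records: list[dict], attribute: str,
                              reference_shares: dict[str, float],
                              tolerance: float = 0.05) -> dict[str, bool]:
    """True per group if its share in the data is within tolerance of the reference."""
    counts = Counter(r[attribute] for r in records)
    total = len(records)
    return {group: abs(counts.get(group, 0) / total - share) <= tolerance
            for group, share in reference_shares.items()}

# Hypothetical training records for a hiring model.
training_data = [{"gender": "male"}] * 80 + [{"gender": "female"}] * 20
print(representativeness_report(training_data, "gender",
                                {"male": 0.5, "female": 0.5}))
# {'male': False, 'female': False} -> the dataset fails this (invented) check.
```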

The AI Act: pros and cons from an intersectional feminist perspective 

From an intersectional feminist perspective, the regulation falls short of its own requirements. It does not consistently address how AI systems reinforce social inequalities, how they perpetuate discriminatory norms, or how these biases affect marginalised groups in particular.

A feminist perspective shows that discrimination by AI is not “technical misconduct”, but the result of powerful data practices that have emerged historically. Therefore, the AI Act attempts to manage symptoms without addressing causes.

Pro 

The AI Act recognises the risk of algorithmic discrimination, particularly in the case of what are referred to as high-risk AI systems.

Con 

However, many discriminatory applications do not fall into this category at all. Content moderation systems, social media recommendations or language models are usually regarded as a “limited risk”, even though in practice they can have a significant impact on fundamental rights.

Pro

The AI Act relies on a technical, risk-based categorisation of AI systems.

Con

However, this technocratic definition of risk fails to recognise one thing: discrimination does not begin with high-risk applications, but is embedded in data collection, model architecture, training processes and design decisions within these socio-technical systems.

Pro 

With the later introduction of the “General Purpose AI” category, the AI Act responds to the growing role of language, image and video models (GenAI). These systems shape the formation of opinion, visibility, disinformation and – increasingly – deepfake pornography.

Con 

Nevertheless, the AI Act contains no explicit obligation to take structural action to address gender-based violence through AI (such as sexualised deepfakes).

Pro

The AI Act refers to “discrimination” on a number of occasions.

Con

However, it does so only in generalised terms, with no intersectional analysis. There is no differentiation between how AI disadvantages marginalised people – black queer women, trans* people with disabilities, or migrants on low incomes, for instance – in multiple and specific ways. As a result, audit mechanisms and risk reports often only record “discrimination” across individual categories (such as gender or origin), rather than in terms of the interplay between structural exclusions.

Regulation must go much further here 

There is a need for:

  • mandatory intersectional bias audits, including for non-high-risk systems such as content moderation, language models or recommendation algorithms (a rough sketch of what such an audit could measure follows this list), 
  • discrimination-aware impact assessments for the systems, but also 
  • empowerment opportunities for those affected, for example through concrete rights to object, or through the mandatory inclusion of marginalised groups and civil society actors – particularly organisations with expertise regarding gender, racism, ableism and queerness – in development, standard-setting, auditing and oversight processes.
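
As a rough illustration of what such an intersectional audit could measure (invented data and a single, simplified metric), the sketch below computes approval rates not only per single attribute but for each combination of attributes, which is where single-axis reports typically hide the largest gaps.

```python
# Rough illustration (invented data, one simplified metric): an intersectional
# audit reports outcome rates per combination of attributes, not just per
# single category such as gender or origin.
from collections import defaultdict

def approval_rates(decisions: list[dict], attributes: list[str]) -> dict:
    """Approval rate for every combination of the given protected attributes."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        key = tuple(d[a] for a in attributes)
        totals[key] += 1
        approved[key] += d["approved"]
    return {key: approved[key] / totals[key] for key in totals}

decisions = [
    {"gender": "male",   "racialised": False, "approved": 1},
    {"gender": "male",   "racialised": True,  "approved": 1},
    {"gender": "female", "racialised": False, "approved": 1},
    {"gender": "female", "racialised": True,  "approved": 0},
]
print(approval_rates(decisions, ["gender"]))                # single-axis view
print(approval_rates(decisions, ["gender", "racialised"]))  # intersectional view
# The single-axis view shows women at a 50% approval rate; the intersectional
# view shows the entire gap is concentrated on racialised women in this toy sample.
```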

Without an intersectional analysis, regulation will remain technocratic and overlook the deeply rooted social hierarchies and traditional systems of power and exclusion in digital systems that need to be challenged.

Or, to conclude with a quotation from the feminist data scientists D’Ignazio & Klein (2020): 

“Shifting the frame from concepts that secure power, like fairness and accountability, to those that challenge power, like equity and co-liberation, can help to ensure that data scientists, designers, and researchers take oppression and inequality as their grounding assumption for creating computational products and systems. We must learn from—and design with—the communities we seek to support. A commitment to data justice begins with an acknowledgment of the fact that oppression is real, historic, ongoing, and worth dismantling.”

12 December 2025, Paris - Exploratory workshop: What progressive agenda for gender equality at work in the digital age?

This workshop organised and hosted by FES Paris brought together partners to identify and discuss, within a European cooperation framework, emerging topics and priority areas for action related to gender equality and artificial intelligence in the French context. During the workshop, FES Future of Work presented a recent study on Changing Working Lives: Women and Automation in the Labour Market. As an exploratory initiative, it helped lay the groundwork for future joint projects between the Friedrich-Ebert-Stiftung and its partners from the political, trade union, civil society, and academic spheres in France and across Europe.

Conference Page

The conference addressed the intersection of artificial intelligence (AI), digital transformation, and gender equality in the workplace. The event brought together policymakers, academics, trade union representatives, and civil society actors to explore how digital technologies can both challenge and advance women’s rights in professional environments.

Speakers emphasised that AI is not gender-neutral. Left unchecked, automated systems can reproduce and even amplify existing inequalities, affecting hiring, promotion, workplace surveillance, and access to opportunities. Keynote speaker Ivana Bartoletti highlighted the urgent need for AI governance that actively promotes inclusion and fairness, ensuring that technology works for all, rather than reinforcing biases.

The conference featured three thematic workshops. The first focused on inclusive digital ecosystems, discussing how EU and national policies can embed gender-responsive regulation into the design and implementation of AI technologies. The second addressed AI-driven work, safety, and health, examining risks such as cyberviolence, intrusive monitoring, and gendered workplace stress, and exploring strategies to protect workers’ autonomy and well-being. The third workshop emphasised AI literacy for trade unions and civil society, underscoring the importance of understanding AI systems to advocate effectively for transparency, accountability, and workers’ rights.

PhD Summer School 2024: Gender, AI and Inclusive Work

In this summer school, we aimed to explore the everyday experiences of female workers in work environments that are changing as a result of new technologies and new forms of work, by looking at the interaction between technological change,...

Our Publications

Une vie professionnelle en pleine mutation

Sabanova, Inga | Bonn : Friedrich-Ebert-Stiftung, December 2025

les femmes et l'automatisation sur le marché du travail ; revue de littérature

Gender & AI at work

Nagell, Hilde | Bonn : Friedrich-Ebert-Stiftung e.V., December 2025

strengthening OSH to address algorithmic risks

Changing working lives: women and automation in the labour market

Sabanova, Inga | Bonn : Friedrich-Ebert-Stiftung e.V., December 2025

scoping review

Gender data

Arora, Payal ; Huang, Weijie | Bonn : Friedrich-Ebert-Stiftung, January 2025

what is it and why is it important for the future of AI systems?

What is feminist AI?

Wudel, Alexandra ; Ehrenberg, Anna | Bonn : Friedrich-Ebert-Stiftung, January 2025

The EU Artificial Intelligence Act through a gender lens

Karagianni, Anastasia | Bonn : Friedrich-Ebert-Stiftung, January 2025

Multi-stakeholder guidelines on how to address gender bias in AI systems

Munarini, Monique | Bonn : Friedrich-Ebert-Stiftung, January 2025
