28.09.2025

The Gender Gap in AI: Structural Inequalities in Emerging Technologies

by Justyna Stypińska, Principal Investigator of the AnyAge.AI project


According to the World Economic Forum, women’s economic participation globally has regressed rather than recovered since the pandemic. In Europe, however, the key social indicator of economic activity – the gender employment gap – has been improving, albeit only marginally. Over the past decade, the gap has narrowed by just 1.7 percentage points (down from 12.5 pps in 2011), prompting the European Commission to call for »new impetus to gender equality«. 

The technology industry remains one of the sectors in which women are most underrepresented, particularly in senior positions. According to the World Bank, women make up less than one-third of the global workforce in technology-related fields. Structural disadvantages have shaped women’s participation in technology: while in the early days of computing women played leading roles, the field gradually shifted to become a male-dominated domain. Over time, women were relegated to the margins and often treated as a »special interest group«, as Donna Haraway noted in her seminal 1988 work on feminism and science.

To this day, entrenched stereotypes and technical cultures continue to sustain discrimination and occupational segregation in computing and engineering. The current rise of AI technologies is introducing new forms of gender disparity. These include unequal access to positions and resources, as well as »representational harms«, where AI systems reproduce stereotypical portrayals of women. Historical associations between technical expertise and masculinity reinforce the image of tech professionals as predominantly male, typically also young and white. Within the competitive and prestigious field of AI, these dynamics are coalescing into a new »gender order«. In practice, however, and despite policy commitments to diversity, this order remains embedded in hyper-masculine institutional and interpersonal cultures.

The potential for AI systems to exacerbate social inequalities has been widely debated, with growing evidence from academic, policy and journalistic research. Gender bias in AI manifests in multiple domains, from facial recognition systems and word embeddings to CV screening algorithms and even datasets for car crash tests. Societal expectations of the »tech expert« – and especially the »AI expert« – remain overwhelmingly masculine, as is evident when generative AI tools are asked to produce an image of one.

One of the most striking inequalities in technology is the severe underrepresentation of women in AI. Globally, women make up only 22 per cent of AI talent and less than 14 per cent of those in executive positions (Stiftung Neue Verantwortung 2020). The World Economic Forum also notes that jobs typically held by low- and middle-income women are disproportionately vulnerable to automation (Chernoff and Warman 2020). Research further indicates that these disparities may widen. Judy Wajcman argues that such trends risk entrenching existing inequalities, while Howcroft and Rubery (2019) show that women in technology are often segregated into »less prestigious« subfields rather than occupying roles at the forefront of innovation. This stratification exacerbates gender gaps, including pay disparities. Persistent inequalities cut across job roles, qualifications, seniority, attrition rates, industry representation and even self-confidence. These barriers limit women’s access to prestigious, well-remunerated and intellectually stimulating positions, key gateways to future opportunities in the labour market. To illustrate these gender disparities, the following section examines the German AI research landscape and reveals striking figures on women’s representation in this critical field.

German research landscape as a case study for women in AI

A closer look at the number of women working in German research institutes reveals stark gender disparities in AI-related fields, raising questions about hyper-masculine institutional cultures that remain unaddressed.

At the institutional level, female participation in AI research in Germany is even lower than the already worrying global averages. In July 2022, Bettina Stark-Watzinger, Minister of Education and Research, announced that five AI competence centres at German universities would be permanently funded by the federal government with €50 million per year. These AI competence centres are: BIFOLD (Berlin Institute for the Foundations of Learning and Data), the Lamarr Institute, MCML (Munich Centre for Machine Learning), ScaDS.AI (Centre for Scalable Data Analytics and Artificial Intelligence), and the Tübingen AI Centre. In addition, the German Research Centre for Artificial Intelligence (DFKI) has been made part of the network of leading AI centres, with the aim of strengthening both the national and international visibility of German AI research.

Examining the diversity, equality and inclusion (DEI) strategies of these centres highlights their varying levels of commitment to gender inclusion in the sector. Of the six research centres, only two have published official documents on gender equality in AI research. The most comprehensive is DFKI’s 15-page Gender Equality Plan (2025–2029), which sets out the institutional and personnel infrastructure, as well as the financial resources, dedicated to advancing gender equality. Similarly, BIFOLD has outlined a five-page Gender Equality Strategy, committing to non-discrimination in the science system on the basis of gender, origin, age or health status and presenting concrete measures to improve gender equality and diversity among BIFOLD employees.

Two centres have appointed institutional representatives for diversity and gender equality. MCML employs two DEI officials and three representatives responsible for coordinating DEI meetings with other research centres, while DFKI maintains a Diversity and Gender Equality Working Group, led by a designated coordinator. In contrast, the Tübingen AI Centre and ScaDS.AI make no mention of DEI on their websites, have no visible institutional structures in place, and provide no related documents online. The Lamarr Institute briefly acknowledges the importance of diversity and gender equality in career development but has not published any concrete strategies or appointed official representatives.

This uneven landscape of gender equality initiatives suggests that while some institutions are actively addressing the issue, there is considerable room for improvement. A more standardised and transparent approach to DEI across German AI research centres might strengthen the sector’s inclusion of women in its institutional structures, as the data collected from these institutions still highlights the severe underrepresentation of female researchers and leaders.

An analysis of data from the six AI research centres reveals significant gender imbalances across different professional categories. The representation of women ranges from complete exclusion in some areas to modest inclusion in others. The most striking absence is at the highest managerial level: directors are male at five of the six institutions, the sole exception being the Lamarr Institute. Among research group leaders and principal investigators, the proportion of women varies from 9 per cent at BIFOLD to 30 per cent at the Lamarr Institute. Among faculty/fellows the range is between 11 and 29 per cent. Women’s representation on advisory and scientific advisory boards is about 20 per cent, although not all institutes even have such a body. Women account for an average of 20 per cent of the members of steering committees. In contrast, the only occupational group in which women constitute the majority is office and administrative staff, with an average of about 80 per cent female employees. This distribution reflects the deep gender segregation within the AI research sector: a male-dominated leadership, mixed but predominantly male research positions, and predominantly female administrative and support staff.

These figures shed light on six major institutions in Germany’s AI landscape, which shape training, career paths and standards of expertise in the field. The data reveal striking gaps more clearly than broader statistics usually can, pointing to how power, access and opportunities are distributed in the labour market. Gender equality in AI isn’t just about how many women hold senior titles such as professor, principal investigator or research group leader. What stands out is the near-complete absence of women in top leadership positions, such as directors of AI competence centres across Germany. The contrast between this absence and the diversity promises highlighted on these institutions’ own websites casts doubt on those commitments. The results make it clear: real progress will require systemic changes, not mere surface-level measures.

Is change possible?

Progress in gender equality has been made. For instance, more women now hold high-profile positions such as CEO across various industries. While far from perfect, these gains offer some hope. But advancing gender equality in AI will require much more: tackling male-dominated workplaces, pushing for stronger policies, involving industry leaders and supporting women as they navigate professional barriers.

Such progress remains fragile, however. The COVID-19 pandemic exposed how easily it can be reversed, as women disproportionately carried both economic and caregiving burdens. In AI, the stakes are especially high: diverse representation is vital for shaping technologies and practices that affect society at large. But diversity efforts remain contested in today’s polarised political climate, often slowing the integration of inclusion into AI development.

Real inclusion in AI will not come from token programmes or surface-level initiatives. It requires clear, strategic action, and it must rest on equality as a principle, not on claims that women’s presence will make AI »better« or »fairer«. Men are never asked to justify their role in AI or to argue for their value in the job market. This double standard reflects a deeper imbalance, beyond what statistics can show. The default image of an AI expert is still a young, white man, a bias visible even in AI-generated images in response to the prompt »show me an AI expert«.

Acknowledgement: I would like to thank Lucia Malén Reuther Carioni for her assistance with the research.

Dr. Justyna Stypińska is a sociologist whose research examines different forms of algorithmic discrimination and bias in AI technologies, and how these create new dimensions of social and economic inequality in late capitalism. At the WZB, she is the Principal Investigator of »AI Ageism: new forms of age discrimination and exclusion in the era of algorithms and artificial intelligence«, an international research project funded by the Volkswagen Foundation.

Technology, Employment and Wellbeing is an FES blog that offers original insights into the ways new technologies impact the world of work. The blog brings together perspectives from tech practitioners, academic researchers, trade union representatives and policy makers.

FES Future of Work

Cours Saint Michel 30e
1040 Brussels
Belgium

+32 2 329 30 32

futureofwork(at)fes.de
