28.09.2025

The Gender Gap in AI: Structural Inequalities in Emerging Technologies

by Justyna Stypińska, Principal Investigator of the project AnyAge.AI

6 min

According to the World Economic Forum, women’s economic participation globally has regressed rather than recovered since the pandemic. In Europe, however, the key social indicator of economic activity, the gender employment gap, has been improving, though only marginally. Over the past decade, the gap has narrowed by just 1.7 percentage points, from 12.5 percentage points in 2011 to 10.8, prompting the European Commission to call for “new impetus to gender equality”.

The technology industry remains one of the sectors where women are most underrepresented, particularly in senior positions. According to the World Bank, women make up less than one-third of the global workforce in technology-related fields. Structural disadvantages have shaped women’s participation in technology: while women played leading roles in the early days of computing, the field gradually shifted into a male-dominated domain. Over time, women were relegated to the margins and often treated as a “special interest group,” as Donna Haraway noted in her seminal 1988 work on feminism and science. To this day, entrenched stereotypes and technical cultures continue to sustain discrimination and occupational segregation in computing and engineering.

The current rise of AI technologies introduces new forms of gender disparity. These include unequal access to positions and resources, as well as “representational harms,” where AI systems reproduce stereotypical portrayals of women. Historical associations between technical expertise and masculinity reinforce the image of tech professionals as predominantly male, and typically also young and white. Within the competitive and prestigious field of AI, these dynamics are coalescing into a new “gender order.” In practice, however, and despite policy commitments to diversity, this order remains embedded in hyper-masculine institutional and interpersonal cultures.

The potential for AI systems to exacerbate social inequalities has been widely debated, with growing evidence from academic, policy, and journalistic research. Gender bias in AI manifests in multiple domains, from facial recognition systems and word embeddings to CV-screening algorithms and even datasets for car crash tests. Societal expectations of the “tech expert”—and especially the “AI expert”—remain overwhelmingly masculine, as is evident when generative AI tools are asked to produce an image of one.

One of the most striking inequalities in technology is the severe underrepresentation of women in AI. Globally, women make up only 22% of AI talent and fewer than 14% of executive positions (Stiftung Neue Verantwortung, 2020). The World Economic Forum also notes that jobs typically held by low- and middle-income women are disproportionately vulnerable to automation (Chernoff & Warman, 2020). Research further indicates that these disparities may widen. Judy Wajcman argues that such trends risk entrenching existing inequalities, while Howcroft and Rubery (2019) show that women in technology are often segregated into “less prestigious” subfields rather than occupying roles at the forefront of innovation. This stratification exacerbates gender gaps, including pay disparities. Persistent inequalities cut across job roles, qualifications, seniority, attrition rates, industry representation, and even self-confidence. These barriers limit women’s access to prestigious, well-remunerated, and intellectually stimulating positions—key gateways to future opportunities in the labour market. To illustrate these gender disparities, the following section examines the German AI research landscape, which reveals striking figures on women’s representation in this critical field.

German Research Landscape as a Case Study for Women in AI

A closer look at the number of women working in German research institutes reveals stark gender disparities in AI-related fields, raising questions about hyper-masculine institutional cultures that remain unaddressed.

At the institutional level, female participation in AI research in Germany is even lower than the already concerning global averages. In July 2022, Bettina Stark-Watzinger, Minister of Education and Research, announced that five AI competence centers at German universities would be permanently funded by the federal government with €50 million per year. These AI competence centers are: BIFOLD (Berlin Institute for the Foundations of Learning and Data), Lamarr Institute, MCML (Munich Center for Machine Learning), ScaDS.AI (Center for Scalable Data Analytics and Artificial Intelligence), and the Tübingen AI Center. In addition, the German Research Center for Artificial Intelligence (DFKI) is part of the network of leading AI centers, with the aim of strengthening both the national and international visibility of German AI research.

Examining the Diversity, Equality, and Inclusion (DEI) strategies of these centers highlights their varying levels of commitment to gender inclusion in the sector. Out of the six research centers, only two have published official documents on gender equality in AI research. The most comprehensive is DFKI’s 15-page Gender Equality Plan (2025–2029), which identifies the institutional and personnel infrastructure as well as the financial resources dedicated to advancing gender equality. Similarly, BIFOLD has outlined a five-page Gender Equality Strategy, committing to non-discrimination in the science system on the basis of gender, origin, age, or health status and presenting concrete measures to improve gender equality and diversity among BIFOLD employees.

Two centers have appointed institutional representatives for diversity and gender equality. MCML employs two DEI officials and three representatives responsible for coordinating DEI meetings with other research centers, while DFKI maintains a Diversity & Gender Equality Working Group led by a designated coordinator. In contrast, the Tübingen AI Center and ScaDS.AI make no mention of DEI on their websites, have no visible institutional structures in place, and provide no related documents online. The Lamarr Institute briefly acknowledges the importance of diversity and gender equality in career development but does not publish any concrete strategies or appoint official representatives.

This uneven landscape of gender equality initiatives suggests that while some institutions are actively addressing the issue, there is considerable room for improvement. A more standardized and transparent approach to DEI across German AI research centers might strengthen women’s inclusion in institutional structures, as the data collected from these institutions still highlight the severe underrepresentation of female researchers and leaders.

An analysis of data from the six AI research centers reveals significant gender imbalances across professional categories. The representation of women ranges from complete exclusion in some areas to modest inclusion in others. The most striking absence is at the highest managerial level, the directors: in five out of six institutions, all directors are male, and only one center, the Lamarr Institute, has a single woman among its directors. Among research group leaders and principal investigators, the proportion of women varies from 9% at BIFOLD to 30% at the Lamarr Institute. Among faculty and fellows, the range is between 11% and 29%. Where institutes have advisory or scientific advisory boards at all, women’s representation stands at about 20%, and on steering committees women are represented at an average of about 20%. In contrast, the only occupational group where women constitute the majority is office and administrative staff, with an average of about 80% female employees. This distribution reflects the deep gender segregation within the AI research sector: male-dominated leadership, mixed but predominantly male research positions, and predominantly female administrative and support staff.

These figures shed light on six major institutions in Germany’s AI landscape, which shape training, career paths, and standards of expertise in the field. The data reveal striking gaps, clearer than broad statistics usually show, and point to how power, access, and opportunities are distributed in the labour market. Gender equality in AI is not just about how many women hold senior titles like professor, principal investigator, or research group leader. What stands out is the near-complete absence of women in top leadership positions, such as directors of the AI competence centres across Germany. This contradiction casts doubt on the diversity promises highlighted on these institutions’ own websites. The results make it clear: real progress will require systemic changes, not surface-level measures.

Is Change Possible?

Progress in gender equality has been made—for instance, more women now hold high-profile positions such as CEOs across various industries. While far from perfect, these gains offer some hope. But advancing gender equality in AI will require much more: tackling male-dominated workplaces, pushing for stronger policies, involving industry leaders, and supporting women as they navigate professional barriers.

That progress, however, remains fragile. The COVID-19 pandemic exposed how easily it can be reversed, with women disproportionately carrying both economic and caregiving burdens. In AI, the stakes are especially high: diverse representation is vital for shaping technologies and practices that affect society at large. Yet diversity efforts remain contested in today’s polarized political climate, often slowing the integration of inclusion into AI development.

Real inclusion in AI will not come from token programs or surface-level initiatives. It requires clear, strategic action—and it must rest on equality as a principle, not on claims that women’s presence will make AI “better” or “fairer.” Men are never asked to justify their role in AI or to argue for their value in the job market. This double standard reflects a deeper imbalance, beyond what statistics can show. The default image of an AI expert is still a young, white man—a bias visible even in AI-generated images when prompted with “show me an AI expert.”

 

Acknowledgement: I would like to thank Lucia Malén Reuther Carioni for her assistance with the research.

Dr. Justyna Stypińska is a sociologist whose research focuses on forms of algorithmic discrimination and bias in AI technologies and on how these create new dimensions of social and economic inequality in late capitalism. At the WZB, she is the Principal Investigator of “AI Ageism: new forms of age discrimination and exclusion in the era of algorithms and artificial intelligence”, an international research project funded by the Volkswagen Foundation.

Technology, Employment and Wellbeing is an FES blog that offers original insights on the ways new technologies impact the world of work. The blog brings together views from tech practitioners, academic researchers, trade union representatives, and policy makers.
