25.02.2026

A leader’s responsibility to harness AI: from oppression to empowerment

by Eva Gengler, feminist AI

5 min read

»Show me powerful people«.

Across six text-to-image generators, that simple prompt yielded results in which roughly nine out of ten subjects were men – predominantly white and older – while women, if they featured at all, were more frequently sexualised. »Successful people« produced similar skews; »beautiful people« tilted toward young, slim women. Across these prompts, the default visual logic was one of affluence. These outcomes are not stylistic coincidences. They are systematic outputs of systems trained on data that encode long-standing asymmetries of gender, race, class, age and other intersecting social positions in contexts of patriarchy, colonialism and capitalism (Study: Sexism, Racism, and Classism, 2023).
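How does one measure such a skew? The audit logic is simple: generate a fixed number of images per model and prompt, have coders annotate the perceived attributes of each depicted person, then aggregate the labels into representation shares. The Python sketch below is a hypothetical illustration of that tally, not the study’s actual pipeline; the model names and label counts are invented for the example.

```python
from collections import Counter

# Hypothetical annotations: for each (generator, prompt) pair, the
# perceived-gender labels that human coders assigned to the people
# depicted in the generated images. Values are illustrative only.
annotations = {
    ("model_a", "powerful people"): ["man"] * 18 + ["woman"] * 2,
    ("model_b", "powerful people"): ["man"] * 17 + ["woman"] * 3,
}

def representation_shares(labels):
    """Return each label's share of all depicted subjects."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

for (model, prompt), labels in annotations.items():
    shares = representation_shares(labels)
    summary = ", ".join(
        f"{label}: {share:.0%}" for label, share in sorted(shares.items())
    )
    print(f"{model} / »{prompt}«: {summary}")
```

With the invented counts above, the tally reproduces the roughly nine-to-one skew reported for »powerful people«; a real audit would add more prompts, multiple coders and intersecting attributes such as perceived race and age.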

Generative AI is currently the most visible face of AI, but it is far from the whole picture. In practice, AI comprises rule-based and data-driven systems embedded in hiring, performance management, finance, welfare administration, health-care triage and security. They influence what we see, where we work and whom we love. Across these domains, patterned harms recur: discriminatory screening and scoring of workers, misclassification and intensified monitoring, the offshoring of low-paid data work and environmental burdens from compute- and water-intensive infrastructures. Generative models add a representational layer that can normalise these harms by reproducing and amplifying stereotypes at scale.

A socio-technical, intersectional feminist perspective clarifies why these outcomes are predictable rather than accidental (a feature, not a bug). First, hegemonic viewpoints are overrepresented in web-scale corpora and image datasets, while marginalised voices are underrepresented. Other datasets omit women altogether (a topic dealt with in the book Invisible Women). Second, model pipelines embed these distributions through pretraining, prompting conventions and alignment choices. Third, organisational incentives – time-to-market, cost containment and performance metrics – systematically deprioritise bias remediation and stakeholder participation. Fourth, these incentives also limit diversity among the people engaging with AI and making decisions about it. Fifth, all of this happens in a broader context of patriarchal systems, colonialist structures and capitalist processes. Finally, as a consequence, AI is being developed and applied in ways that automate existing worldviews and power structures. In my empirical work with text-to-image systems, women were substantially underrepresented in depictions of »power« and »success«, people of colour were underrepresented across categories, and women were depicted as younger and more frequently sexualised, while social class skewed towards privilege. Such patterns mirror offline media and labour-market stereotypes, which is precisely why AI matters for both employment and general well-being.

The labour-market implications of AI are both representational and structural. Representationally, generative outputs shape the managerial imagination: who is pictured as a »leader«, what professional aesthetics are deemed appropriate, and which bodies are associated with competence and authority or, alternatively, with beauty and care. Structurally, AI tools are being integrated into HR workflows (whether drafting job ads, writing feedback or deciding on applications), customer interaction and knowledge production. If ungoverned, they risk entrenching status hierarchies, accelerating surveillance and deskilling certain groups, while disproportionately externalising environmental and social costs to those who benefit least, especially in the Global South. Intersectionality is a necessary analytic lens with which to identify compounding disadvantages: among people in precarious working arrangements around data cleaning (for example, women in the Global South), people negatively affected by AI outcomes (such as women of colour), and people who lack access to these systems altogether (for example, elderly people in rural areas).

If we want to change this system, which often further marginalises the marginalised, we need to identify where responsibility lies. For the most part, it does not lie with development teams: responsibility for oppressive AI systems rests with both policymakers and corporate leadership.

On the policy side, it is the responsibility of policymakers to protect citizens from harm and to set guardrails within which market actors must operate. Along these lines, the European Union has created a nascent governance architecture through the AI Act and complementary digital legislation. The immediate task is not to reopen first principles but to ensure rigorous implementation: clear guidance, adequately resourced market surveillance authorities, meaningful conformity assessment for high-risk systems, avenues for redress and credible sanctions. All this needs to be accompanied by a bureaucracy that is fit for purpose: the goal must be more responsible AI, not box-checking exercises. Calls to dilute obligations in the name of »competitiveness« misconstrue Europe’s comparative advantage. Normative clarity and enforceable guardrails are not in themselves antithetical to innovation; they are preconditions for trustworthy, adoption-ready systems. Our case insights (Exploring Organizational AI Governance Maturity, 2024) show that some firms welcome a level playing field, and some even view responsible AI obligations as creating a competitive advantage. Therefore, policymakers should (i) hold the line on values and enforcement, heeding Zuboff’s (2025) assertion that the EU is the last bastion against digital autocracies and her call for resolute enforcement of digital laws, and (ii) design for enforceability, administrability and equity: clear secondary rules and guidance that include mandatory diversity and ecological requirements, strong transparency and accountability commitments, population-wide AI literacy, and well-resourced regulators.

On the corporate side, the guardrails for and direction of AI development and application are shaped by management goals, priorities and decisions. They set the context for AI. Executives determine objectives (why and where is AI built or used?), acceptable risks (which trade-offs exist, and which are we willing to accept?) and resourcing (where do budgets go, and how much is high-quality data worth?). In focus-group discussions in one of our ongoing studies, bias-reduction work was repeatedly deprioritised relative to go-to-market timelines; this is a choice shaped by incentives. Responsibility lies with leadership, which decides on company culture and priorities. Concretely, leadership should (i) accept and own this responsibility by (ii) embedding responsible AI governance firmly in their corporate governance, (iii) assigning clear executive ownership, (iv) operationalising it through mechanisms across the full AI lifecycle, and (v) ensuring horizontal alignment with market and regulatory expectations. There should also be vertical integration in terms of roles, routines and culture, with companies taking steps toward critical and reflective AI literacy across the workforce (on alignment cf. Exploring Organizational AI Governance Maturity, 2024).

The European debate should focus on how responsibility is owned and exercised. Policymakers must hold the regulatory line and equip institutions to enforce it. Senior leaders must align strategy, culture and incentives with concrete outcomes for equity and well-being. Without such commitments, AI will largely replicate the contexts and inequities it is set in and learns from. The choice before us is whether we follow the path of oppressive AI or switch course towards empowerment.

Eva Gengler, a business informatics specialist, conducts research and works at the intersection of power and AI, taking an intersectional-feminist perspective. Her research on feminist AI was conducted at FAU Erlangen-Nuremberg within the doctoral programme Business and Human Rights. She is a speaker and co-founder of the feminist AI Community and of enableYou. She was nominated for the Brigitte Award 2025 in the category Future Economy and has been featured by the Bavarian Ministry of Social Affairs, the Elite Network of Bavaria and Marc Cain, as well as interviewed by ZDF, 3sat, t3n and Deutschlandfunk, among others.

Technology, Employment and Wellbeing is an FES blog that offers original insights on the ways new technologies impact the world of work. It brings together views from tech practitioners, academic researchers, trade union representatives and policymakers.
