08.04.2026

Artificial intelligence, fundamental rights and intersectionality: a critical analysis of EU regulatory frameworks

by Victoire Olczak, Gender5+

3 min read

In simple terms, so-called »artificial intelligence« (AI) can be used to classify, analyse and predict outcomes from data sets by applying sets of rules known as algorithms. While AI might be seen in itself as a neutral and objective technology, it takes on new meanings and implications through the uses humans make of it in specific contexts. A wide variety of biases are inherent in human society and culture, and they thus become part of the »contextual factors« that shape how AI technologies are used and interpreted. Consequently, these technologies often incorporate and perpetuate the same biases. Three main types of bias can be distinguished: data-driven bias, algorithmic bias and human bias. They are closely linked and strongly influence one another.
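To make the notion of data-driven bias more concrete, here is a deliberately simplified, hypothetical sketch in Python (the data, the figures and the »model« are invented for illustration and are not taken from the report): a system that merely learns historical hire rates per group ends up reproducing a past pattern of discrimination.

from collections import defaultdict

# Hypothetical historical records: (group, hired) pairs reflecting a past
# practice in which candidates from group "A" were favoured over group "B".
history = [("A", True)] * 70 + [("A", False)] * 30 + \
          [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """Learn the observed hire rate for each group from the historical data."""
    counts = defaultdict(lambda: [0, 0])  # group -> [number hired, total seen]
    for group, hired in records:
        counts[group][0] += hired
        counts[group][1] += 1
    return {group: hires / total for group, (hires, total) in counts.items()}

def recommend(model, group, threshold=0.5):
    """Recommend a candidate whenever the learned hire rate for their group passes the threshold."""
    return model[group] >= threshold

model = train(history)
print(model)                  # {'A': 0.7, 'B': 0.3}
print(recommend(model, "A"))  # True:  candidates from the favoured group get through
print(recommend(model, "B"))  # False: candidates from the disadvantaged group are screened out

Because the only signal this toy model ever sees is the skewed history, it simply repeats it. Real systems are far more complex, but the underlying mechanism of data-driven bias is the same.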

Mitigating AI’s discriminatory effects: where does the EU legislation stand today?

The EU has enacted a number of legislative measures to regulate the development of artificial intelligence, each with a different approach and scope. The General Data Protection Regulation (GDPR), the Digital Services Act (DSA) and the Artificial Intelligence Act (AI Act) are the three main legal frameworks aimed at addressing discriminatory practices in the context of AI use.

All three instruments prohibit discriminatory AI, but their scope differs. The GDPR tackles AI-related discrimination mainly in Article 22, which restricts automated decision-making and requires safeguards to protect data subjects from discriminatory outcomes. The Digital Services Act obliges so-called »very large online platforms« (VLOPs) and »very large online search engines« (VLOSEs) to assess and mitigate systemic risks – including algorithmic discrimination – under Article 34, reinforcing platform responsibility for a safe online environment. The AI Act contains the strongest anti-discrimination rules: it bans AI systems that breach non-discrimination principles and imposes strict obligations on high-risk systems, including risk mitigation, data quality requirements, clear communication with users, human oversight and robust cybersecurity measures. Article 6(2) and Annex III(4) specifically target gender bias in AI-based recruitment, recognising the risk of perpetuating historical disadvantages faced by women. The Act also underscores human oversight (Art. 14), echoing similar provisions in the GDPR.

Uncovering shortcomings in the EU framework: the consequences of neglecting an intersectional approach

Although the GDPR is considered a cornerstone of the protection of fundamental rights, its legislators failed to take a truly innovative approach to intersectionality and intersectional discrimination. While the Regulation recognises the existence of »vulnerable data subjects in need of protection«, it does not explicitly define this concept and therefore neglects the issue of intersectionality. In practice, data protection authorities do not yet appear to have taken up the issue of vulnerable data subjects beyond recommendations and guidelines on handling children’s data.

Similarly, the Digital Services Act recognises that certain groups may be vulnerable or disadvantaged when using online services because of factors such as gender, race or ethnic origin, religion or belief, disability, age or sexual orientation. Nevertheless, it fails to emphasise the importance of assessing the risk of intersectional discrimination within its risk assessment framework.

The prevalence of gender-based online violence in Europe has only increased over the years, and marginalised groups are at heightened risk of being affected by it. Although the DSA seeks to address online violence – including online gender-based violence – its lack of intersectional analysis limits its ability to protect those facing overlapping forms of discrimination. Díaz and Hecht-Felella (2021) have even exposed a double standard in the moderation of content on social media: marginalised groups – including racialised people, women, LGBTQ+ individuals and religious minorities – are more likely to face excessive enforcement measures, while the harmful content directed at them often goes unchecked. Their research shows that content moderation can lead to mass takedowns for these communities, whereas dominant groups are subject to less severe measures, such as warnings or temporary demonetisation. One of the DSA’s objectives is to better regulate content moderation, but as it has failed to incorporate an intersectional approach, such inequalities are likely to remain unaddressed.

Finally, there are significant concerns about the AI Act’s failure to include provisions addressing the use of AI for national security purposes. The AI Act exempts AI systems that are developed or deployed solely for national security purposes, whether by state authorities or private companies. This exemption creates a significant loophole that allows certain AI systems to circumvent controls and human rights safeguards. In practice, governments could invoke national security to introduce AI practices, such as biometric mass surveillance systems, without having to conduct a fundamental rights impact assessment (FRIA) or ensure high technical standards and non-discrimination. The legislation also establishes a separate legal framework for the use of AI by migration control authorities, enabling the testing and deployment of dangerous surveillance technologies at EU borders. This is of particular concern because these practices disproportionately affect marginalised groups, especially those facing multiple forms of discrimination.

(For more details, see Victoire Olczak’s full report Gender Equality and Artificial Intelligence: Navigating the EU Frameworks for a Feminist Future.)

 

Victoire Olczak is a dedicated researcher specialising in gender studies and European affairs. Her experience in associative, academic and institutional environments has given her a comprehensive approach to equality issues, especially gender equality. As a member of the think tank Gender Five Plus, she has authored and formally presented several policy papers addressing key EU policy issues from an intersectional and feminist perspective. She holds an MA in Human Rights from Sciences Po Paris and a BA in International Relations.

Technology, Employment and Wellbeing is an FES blog that offers original insights into the ways new technologies impact the world of work. The blog focuses on bringing together different views from tech practitioners, academic researchers, trade union representatives and policy makers.

