28.11.2025

Facial recognition at cultural events: from the unequal gaze to accountability

by Monique Munarini

3 min read

Live facial recognition (LFR) is a technology that enables the real-time, remote identification of individual persons through biometric data. While often framed as a tool for enhancing public safety and efficiency, its practical use, particularly in the policing of large public gatherings, has repeatedly been shown to reproduce and amplify existing social inequalities. This raises the question of whose safety is being prioritised, whose freedoms are being curtailed and who ultimately benefits from these infrastructures of surveillance.

The use of facial recognition for law enforcement has been widely criticised for its inaccuracy and bias, particularly against already marginalised groups. The Gender Shades Project was among the first to expose how facial recognition systems misidentify Black women at disproportionately high rates. Similarly, the National Physical Laboratory, when assessing LFR systems used by the Metropolitan Police Service in the United Kingdom, found that false positive rates were higher for Black subjects than for Asian and White subjects.

Large-scale public events, such as street festivals and sports matches, are often classified as high-risk spaces because of the perceived threat of terrorist attacks and other disruptions of public order, which is taken to justify the deployment of surveillance technologies. Yet these are also deeply cultural sites of collective belonging.

In Brazil, for example, football is more than a sport; it is a form of cultural and political expression, woven into the country’s identity (Wisnik 2007). Today, however, football events are becoming arenas of intensified surveillance. The Panóptico Project, run by the Centro de Estudos de Segurança e Cidadania (CESeC), monitors the implementation of surveillance technologies in Brazil. Their report on data use in stadiums revealed that 9.7 million people entered Brazilian stadiums during the national championship. Alarmingly, 90.5 per cent of people arrested with the aid of facial recognition at public events in 2019 were Black. At the same time, UNESCO’s 2025 Readiness Assessment showed that 72 per cent of respondents expressed discomfort at being subjected to facial recognition and reported low awareness of the risks associated with data collection.

Despite these concerns, as of May 2025, Brazil’s AI Bill (PL 2338/2023) remained stalled in Congress. There is still no comprehensive framework defining how these systems are monitored, what metrics are used to assess their performance or who bears responsibility for ensuring accountability. Meanwhile, the Lei Geral do Esporte (General Sports Law 2023) mandates the use of facial recognition systems in football stadiums on the pretext of enhancing public safety. Article 144 §2º requires stadium administrators to adopt technologies that allow the »individualisation of those present«.

Ostensibly designed to prevent violence and improve crowd management, these systems have been implemented with minimal public debate, transparency or regulation. In practice, they normalise the constant monitoring of marginalised bodies, disproportionately targeting Black and peripheral fans. While mainstream media outlets such as Jornal Nacional (2024) frame these measures as public safety innovations, the consequences are already visible in the form of wrongful arrests due to misidentification (Brasil de Fato 2023; G1 2024; UOL 2023).

The 2025 Notting Hill Carnival in the United Kingdom saw the use of live facial recognition, resulting in 97 alerts and 61 arrests among the total of 528 arrests recorded during the event. Eleven civil society organisations publicly condemned the technology, calling it an instrument of mass surveillance that unfairly targeted the very communities whose histories of resistance gave birth to the carnival itself.

In Europe, the EU AI Act follows a risk-based approach, classifying AI systems into different risk categories. One of these comprises AI practices that pose unacceptable risks to fundamental rights and Union values and are prohibited under Article 5. This article generally prohibits the use of real-time remote biometric identification in public spaces, except in narrowly defined law enforcement cases, such as preventing an »imminent threat to life or physical safety«. Even in such cases, authorities must complete a Fundamental Rights Impact Assessment, as prescribed in Article 27 of the EU AI Act, to demonstrate that safeguards are in place and risks have been mitigated. In February 2025, the European Commission published guidelines on the prohibited AI practices established by Regulation (EU) 2024/1689 (the AI Act), offering a non-binding interpretation of those prohibitions. One of the examples of lawful LFR use they give is ensuring the safety of citizens attending large-scale cultural events:

During a busy festival in a city, police authorities deploy live facial recognition technologies to monitor the area around the festival and identify wanted individuals with outstanding arrest warrants for illegal drug trafficking and sexual offences. At different entrances to the festival, the police use live video footage of people passing in front of a mobile camera to compare their faces with a watchlist of faces of wanted individuals.

Despite these provisions, authorities in Germany have admitted to deploying such systems in cities such as Berlin, raising questions about enforcement and oversight.

As Dr Asress Gikay notes, surveillance technologies introduced under the banner of safety often face early resistance, only to be normalised later through narratives of »efficiency«, as happened with automated passport control at airports. Efficiency can come at a steep price, however. Sasha Costanza-Chock has illustrated how non-binary individuals experience misidentification and humiliation under gender-normative facial recognition systems. This shows how technological »neutrality« often masks exclusion, and how so-called efficiency is purchased at the expense of groups whose safety and benefit should matter equally.

The convenience – or the price – that some groups tacitly accept for the sake of their »peace of mind« when attending events is entangled with resistance from groups who are overly scrutinised by these same systems. In contrast to Brazil and the United Kingdom, where the debate centres on legality and enforcement, the European framework formally embeds accountability through the EU AI Act. However, true accountability depends not only on compliance with regulations but also on meaningful oversight. Achieving this requires collaboration between civil society organisations, researchers and affected communities to scrutinise how such systems are procured, how operators are trained and how individuals can challenge wrongful outcomes.

Monique Munarini is a trained lawyer with a double master’s degree in International Relations from the University of Padova (IT) and in Law, Economics and Management from the University of Grenoble-Alpes (FR). She is currently a PhD candidate in the Italian National PhD in AI at the University of Pisa (IT), where her research focuses on developing equitable AI audits from a feminist perspective. She also works as a policy analyst in AI governance for international organisations and global civil society organisations.

Technology, Employment and Wellbeing is an FES blog that offers original insights into the ways new technologies impact the world of work. It brings together views from tech practitioners, academic researchers, trade union representatives and policymakers.

FES Future of Work

Cours Saint Michel 30e
1040 Brussels
Belgium

+32 2 329 30 32

futureofwork(at)fes.de
