10.09.2024

The laws and flaws of AI management in the workplace

by Antonio Aloisi, Associate Professor at IE University Law School, Madrid


The news that 77% of employees who use artificial intelligence (AI) report that doing so has decreased their productivity and increased their workload has been met with mixed reactions. For those familiar with the reality of workplace automation, however, this finding was not surprising. Often, the adoption of technologies does not reduce drudgery or liberate workers from repetitive and meaningless tasks.

When caught between techno-FOMO (or fear of missing out) and outdated business practices, some companies rush to introduce automated decision-making systems, only to find that they replicate the worst traits of poor traditional management. Does this mean that policymakers should mandate shutting down all the server farms fuelling AI systems worldwide? Not at all. But citizens, societies and their leaders need to assess the contributions these technologies make to workplace environments and reconsider how they are implemented.

Algorithms are pervasive in offices and factories. For instance, they allocate cleaning shifts to domestic workers, pressuring them to work at an unsustainable pace; they track how long a maintenance worker spends at a client’s location and suggest the quickest route to the next client; they collect data to generate snake oil statistics on the top-performing call centre employees, exacerbating competitive tensions; they remind warehouse managers when a percentage of their team fails to meet the average performance indicators; and they covertly monitor remote workers in consultancy industries to track the time spent on individual tasks or identify defiant conduct. AI hiring software is also widely used in conventional sectors to streamline the recruitment funnel, from targeting vacancies to sifting job applications and automating candidates’ assessments.

In Your Boss Is an Algorithm, we argue that algorithmic management systems—the vast set of tools and software used to organise, control and discipline workforces—represent the latest iteration of a die-hard command-and-control approach to human resources management. These systems deepen hierarchies without significantly improving well-being, fulfilment or competitiveness. From a legal perspective, the inherent risk is that such systems disrupt established controlling factors designed to prevent the arbitrary exercise of managerial authority. Consequently, long-standing principles such as fairness, transparency, equal treatment, due process and proportionality are under significant strain.

This concern is now widespread and justified. At the European Union (EU) level, the last few years have seen remarkable activism in this regard, culminating in the adoption of the AI Act and the Platform Work Directive. While these two legal instruments differ greatly in terms of their purpose, content and legal basis, they both form part of a constellation of digital regulations aimed at charting a new course for the design, deployment and development of technologies that are less harmful and more functional. However, various pieces of legislation must come into play when mapping the laws governing AI management systems in the workplace.

Perhaps the most prominent element of this map is data protection, with the EU’s General Data Protection Regulation (GDPR) situated at its legal pinnacle. Compliance with this fundamental standard provides safeguards against algorithms spiralling out of control, including protection against unreliable inferences and the repurposing of personal data, as well as the right to be informed about the existence of automated systems, their inner logic and the delineation of protocols that minimise their impacts on fundamental rights. Nevertheless, the GDPR is riddled with exceptions, carve-outs and loopholes that undermine its effectiveness in workplace contexts. While the data protection framework represents a solid tool for rebalancing information asymmetries and restraining the power of data holders, ambiguities concerning the provisions regulating solely automated decision-making, along with the ‘contractual necessity’ and ‘legitimate interest’ exemptions, underscore the GDPR’s incompleteness.

In a recently published paper, I argue for a shift from a retrospective, remedial and individualistic model to an anticipatory, integrated and participatory one. In short, the convergent application of the GDPR, non-discrimination frameworks and labour law provisions can result in fairer and smarter data-driven workplaces, especially when employees’ rights are exercised collectively.

While interferences with privacy have long been scrutinised, experts now recognise that AI systems are prone to discriminating against workers with protected characteristics and those belonging to vulnerable communities. Faced with this challenge, which is unprecedented in terms of its scale, scope and volume, equality law appears somewhat disempowered. The ability to dynamically classify workers based on traits that are not visible and to differentiate treatments based on aspects that are misaligned with protected grounds exemplifies the limitations of the EU’s anti-discrimination framework, not to mention procedural issues such as the obstacles to accessing evidence, the predominantly complaint-based nature of enforcement and the technical and legal hurdles faced by litigants in decoding and exposing the source (code) of discrimination.

This is where data accrued through the exercise of the GDPR rights to notification and access, along with tools such as data protection impact assessments, can support individuals and representative bodies in scrutinising the fairness of automated decision-making systems. Similarly, workers and their representatives can push companies to design procedures that uphold accountability, thereby enabling challenges to automated decisions, including proposing alternatives and demanding a human interface. In other words, the richness of the current legal framework should encourage managers to avoid falling prey to the latest shiny dashboard promising efficiency and optimisation, instead considering whether a more compliant and less privacy-infringing technology is available.

Momentum is building for dedicated intervention to establish a legal framework for workers’ digital rights, as discussed in this new policy brief. In the meantime, workers are not defenceless and companies are not doomed to digital dependency. A broad set of EU legal principles and legislation is available to make AI work for everyone. Admittedly, the largely individualised exercise of certain rights clashes with how algorithmic tools handle community data, team dynamics and population statistics. Yet, practitioners and activists are beginning to mobilise legal resources in all the relevant venues—that is, in court, before data protection authorities and through equality bodies. This new wave of awareness and contestation complements the ongoing regulatory engagement at the institutional level.

In this context, the paradigms of worker involvement and participatory AI governance together offer a groundbreaking opportunity. Despite its shortcomings, the GDPR encourages member states to strengthen the EU’s model through law or collective agreements regarding the processing of employees’ personal data at the workplace. Comprehensive collective agreements at the firm level are emerging as ideal platforms for negotiating the terms and conditions of AI systems used for worker recruitment and management. Additionally, the potential for the co-design of technological tools is vast, thanks to novel instruments such as fundamental rights and data protection impact assessments, which can be employed to both flag risks for workers and design more effective mitigation strategies.

If innovation cannot be ‘uninvented’, it must be shaped to benefit those it affects. This is no easy task, although it presents an exciting opportunity for social partners, civil society organisations, regulators and collective movements to update their agendas. As AI systems increasingly impact hiring procedures and working conditions by making decisions on behalf of managers, the need to question and counterbalance this power has become more urgent than ever before. Workplaces have always been sites of creativity; now, they can serve as a testing ground for norms that promote workers’ self-determination and ensure their job quality.

Dr. Antonio Aloisi is an associate professor of European and comparative labour law at IE University Law School in Madrid. He is currently an Emile Noël Fellow at the Jean Monnet Center within the New York University (NYU) School of Law. His research critically explores how technology is transforming workplace dynamics and examines the potential of social institutions to foster a more sustainable future of work.

Technology, Employment and Wellbeing is a new FES blog that offers original insights on the ways new technologies impact the world of work. The blog brings together different views from tech practitioners, academic researchers, trade union representatives and policymakers.

