by Antonio Aloisi, Associate Professor at IE University Law School, Madrid
The news that 77% of employees who use artificial intelligence (AI) report that doing so has decreased their productivity and increased their workload has been met with mixed reactions. For those familiar with the reality of workplace automation, however, this finding was not surprising. Often, the adoption of technologies does not reduce drudgery or liberate workers from repetitive and meaningless tasks.
When caught between techno-FOMO (or fear of missing out) and outdated business practices, some companies rush to introduce automated decision-making systems, only to find that they replicate the worst traits of poor traditional management. Does this mean that policymakers should mandate shutting down all the server farms fuelling AI systems worldwide? Not at all. But citizens, societies and their leaders need to assess the contributions these technologies make to workplace environments and reconsider how they are implemented.
Algorithms are pervasive in offices and factories. For instance, they allocate cleaning shifts to domestic workers, pressuring them to work at an unsustainable pace; they track how long a maintenance worker spends at a client’s location and suggest the quickest route to the next client; they collect data to generate snake-oil statistics on the top-performing call centre employees, exacerbating competitive tensions; they remind warehouse managers when a percentage of their team fails to meet average performance indicators; and they covertly monitor remote workers in the consultancy sector to track the time spent on individual tasks or to identify defiant conduct. AI hiring software is also widely used in conventional sectors to streamline the recruitment funnel, from targeting vacancies to sifting job applications and automating candidate assessments.
In Your Boss Is an Algorithm, we argue that algorithmic management systems—the vast set of tools and software used to organise, control and discipline workforces—represent the latest iteration of a die-hard command-and-control approach to human resources management. These systems deepen hierarchies without significantly improving well-being, fulfilment or competitiveness. From a legal perspective, the inherent risk is that such systems disrupt established controlling factors designed to prevent the arbitrary exercise of managerial authority. Consequently, long-standing principles such as fairness, transparency, equal treatment, due process and proportionality are under significant strain.
This concern is now widespread and justified. At the European Union (EU) level, the last few years have seen remarkable activism in this regard, culminating in the adoption of the AI Act and the Platform Work Directive. While these two legal instruments differ greatly in terms of their purpose, content and legal basis, they both form part of a constellation of digital regulations aimed at charting a new course for the design, deployment and development of technologies that are less harmful and more functional. However, various pieces of legislation must come into play when mapping the laws governing AI management systems in the workplace.
Perhaps the most prominent element of this map is data protection, with the EU’s General Data Protection Regulation (GDPR) situated at its legal pinnacle. Compliance with this fundamental standard provides safeguards against algorithms spiralling out of control, including protection against unreliable inferences and the repurposing of personal data, as well as the right to be informed about the existence of automated systems, their inner logic and the delineation of protocols that minimise their impacts on fundamental rights. Nevertheless, the GDPR is riddled with exceptions, carve-outs and loopholes that undermine its effectiveness in workplace contexts. While the data protection framework represents a solid tool for rebalancing information asymmetries and restraining the power of data holders, ambiguities concerning the provisions regulating solely automated decision-making, along with the ‘contractual necessity’ and ‘legitimate interest’ exemptions, underscore the GDPR’s incompleteness.
In a recently published paper, I argue for a shift from a retrospective, remedial and individualistic model to an anticipatory, integrated and participatory one. In short, the convergent application of the GDPR, non-discrimination frameworks and labour law provisions can result in fairer and smarter data-driven workplaces, especially when employees’ rights are exercised collectively.
While interferences with privacy have long been scrutinised, experts now recognise that AI systems are prone to discriminating against workers with protected characteristics and those belonging to vulnerable communities. Faced with this challenge, which is unprecedented in terms of its scale, scope and volume, equality law appears somewhat disempowered. The ability to dynamically classify workers based on traits that are not visible and to differentiate treatments based on aspects that are misaligned with protected grounds exemplifies the limitations of the EU’s anti-discrimination framework, not to mention procedural issues such as the obstacles to accessing evidence, the predominantly complaint-based nature of enforcement and the technical and legal hurdles faced by litigants in decoding and exposing the source (code) of discrimination.
This is where data accrued through the exercise of the GDPR rights to notification and access, along with tools such as data protection impact assessments, can support individuals and representative bodies in scrutinising the fairness of automated decision-making systems. Similarly, workers and their representatives can push companies to design procedures that uphold accountability, thereby enabling challenges to automated decisions, including proposing alternatives and demanding a human interface. In other words, the richness of the current legal framework should encourage managers to avoid falling prey to the latest shiny dashboard promising efficiency and optimisation, instead considering whether a more compliant and less privacy-infringing technology is available.
Momentum is building for dedicated intervention to establish a legal framework for workers’ digital rights, as discussed in this new policy brief. In the meantime, workers are not defenceless and companies are not doomed to digital dependency. A broad set of EU legal principles and legislation is available to make AI work for everyone. Admittedly, the largely individualised exercise of certain rights clashes with how algorithmic tools handle community data, team dynamics and population statistics. Yet, practitioners and activists are beginning to mobilise legal resources in all the relevant venues—that is, in court, before data protection authorities and through equality bodies. This new wave of awareness and contestation complements the ongoing regulatory engagement at the institutional level.
In this context, the paradigms of worker involvement and participatory AI governance together offer a groundbreaking opportunity. Despite its shortcomings, the GDPR encourages member states to strengthen the EU’s model through law or collective agreements regarding the processing of employees’ personal data at the workplace. Comprehensive collective agreements at the firm level are emerging as ideal platforms for negotiating the terms and conditions of AI systems used for worker recruitment and management. Additionally, the potential for the co-design of technological tools is vast, thanks to novel instruments such as fundamental rights and data protection impact assessments, which can be employed to both flag risks for workers and design more effective mitigation strategies.
If innovation cannot be ‘uninvented’, it must be shaped to benefit those it affects. This is no easy task, although it presents an exciting opportunity for social partners, civil society organisations, regulators and collective movements to update their agendas. As AI systems increasingly impact hiring procedures and working conditions by making decisions on behalf of managers, the need to question and counterbalance this power has become more urgent than ever before. Workplaces have always been sites of creativity; now, they can serve as a testing ground for norms that promote workers’ self-determination and ensure their job quality.
Dr. Antonio Aloisi is an associate professor of European and comparative labour law at IE University Law School in Madrid. He is currently an Emile Noël Fellow at the Jean Monnet Center within the New York University (NYU) School of Law. His research critically explores how technology is transforming workplace dynamics and examines the potential of social institutions to foster a more sustainable future of work.
Technology, Employment and Wellbeing is a new FES blog that offers original insights on the ways new technologies impact the world of work. The blog focuses on bringing different views from tech practitioners, academic researchers, trade union representatives and policy makers.
Friedrich-Ebert-Stiftung Future of Work