30.10.2024

Protecting Workers’ Rights in Digitalised Workplaces

by Christina J. Colclough, the founder of the Why Not Lab

5 min read

As the new European political cycle begins, now is a perfect time to learn from the past, correct its mistakes and, in line with an ETUC resolution, table a regulation of AI at Work[1] that truly protects workers' fundamental rights. Despite claims that the EU AI Act will protect fundamental rights, many stakeholders, such as Amnesty International, Liberties and Access Now, rightly argue that the adopted legislation falls short of offering sufficient protection.

In this article, I propose three main principles that the regulation of digital systems at work should be built on to right these wrongs. The three principles will safeguard workers' fundamental rights, prevent the commodification of work and workers, and address the information asymmetries that disempower workers. All three can be further operationalised through a range of obligations and commitments, but for now, let's look at the main principles.

Principle 1: Algorithmic systems must be inclusively governed 

Whilst the EU AI Act stipulates a range of obligations on developers and deployers of high-risk systems, the Act does not go far enough in respecting the European Social Model and social dialogue. This is despite an OECD report indicating that workplaces in which workers or their representatives are consulted about new technologies are also the workplaces that report the most positive impacts on worker productivity and working conditions.

This principle therefore calls for the obligatory and meaningful inclusion of workers and/or their legitimate representatives in what should be an ongoing governance of digital technologies at work. This principle will enforce transparency. It will also give workers the possibility to raise concerns about AI systems, be party to their ongoing assessment, and negotiate redlines and frames around the data, algorithmic inferences and purposes of these systems. Finally, it will ensure that deployers of digital systems actually understand the real and potential consequences of the systems they choose to use, and adjust or reject them if they cause harm to workers.

No impact assessment or governance procedure can be meaningful, or even truthful, if workers' voices are not included both prior to, and periodically after, the deployment of workplace AI systems.

Principle 2: Reverse the burden of proof  

Workers do not have access to the same information about workplace AI systems as the employers and/or the developers of these systems. Nor do they have full access to the algorithmic systems themselves, their instructions, their training data and so on.

To address this information asymmetry, the regulation of AI systems at work should therefore include a reversed burden of proof. The logic is that it should never fall to workers to prove that their rights have been violated by workplace AI systems. Instead, it should be the employers' responsibility to prove that no harms or other violations have been caused.

This principle will empower workers. It will ensure that employers understand, can analyse and therefore actually govern the technologies in question, and it takes seriously the fact that rights violations do occur.

Principle 3: The right to be free from algorithmic manipulation

This principle might seem abstract at first, but it is really a condition for the respect of workers' fundamental rights. Algorithmic systems constantly manipulate us: they determine what news and posts we see online, which advertisements we are shown and, at work, whom to hire, fire, discipline or promote. They influence whether we even see a particular job opening on the internet, or whether we have a priori been deemed unfit for the job. They can predict who is likely to leave the workplace soon, assess and compare workers' activities, or analyse and score workers' spoken or written language relative to customer preferences. The list is endless.

This all happens not least due to algorithmic inferencing, also known as profiling, through which an AI model reasons and makes predictions in a way that mimics human abilities. What these inferences in fact do is turn workers' actions and non-actions into data points, which are then used to create profiles that in turn determine a particular reality for the workers.
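To make the mechanism concrete, below is a deliberately simplified, purely illustrative Python sketch of how such profiling can work. Every feature, weight and threshold in it is a hypothetical assumption of mine, not drawn from any real product; the point is only that a handful of hidden parameters can collapse behavioural data points into a score that decides something consequential about a person.

```python
# Purely illustrative sketch of workplace profiling.
# All features, weights and thresholds are hypothetical assumptions
# for illustration only; real systems are far more complex and,
# crucially, these parameters are typically invisible to the worker.

from dataclasses import dataclass


@dataclass
class WorkerDataPoints:
    messages_per_hour: float       # monitored communication activity
    avg_response_delay_min: float  # average time to answer messages
    after_hours_logins: int        # logins outside contracted hours
    sick_days_last_year: int       # absence record


# Hidden model parameters: the worker never sees these.
WEIGHTS = {
    "messages_per_hour": -0.4,
    "avg_response_delay_min": 0.8,
    "after_hours_logins": -0.2,
    "sick_days_last_year": 1.5,
}
ATTRITION_THRESHOLD = 10.0  # arbitrary cut-off chosen by the deployer


def attrition_risk_score(w: WorkerDataPoints) -> float:
    """Collapse behavioural data points into a single opaque score."""
    return (
        WEIGHTS["messages_per_hour"] * w.messages_per_hour
        + WEIGHTS["avg_response_delay_min"] * w.avg_response_delay_min
        + WEIGHTS["after_hours_logins"] * w.after_hours_logins
        + WEIGHTS["sick_days_last_year"] * w.sick_days_last_year
    )


def flag_for_review(w: WorkerDataPoints) -> bool:
    """The inference: a profile becomes a decision about a person."""
    return attrition_risk_score(w) > ATTRITION_THRESHOLD


worker = WorkerDataPoints(
    messages_per_hour=6.0,
    avg_response_delay_min=12.0,
    after_hours_logins=3,
    sick_days_last_year=4,
)
print(attrition_risk_score(worker))  # 12.6
print(flag_for_review(worker))       # True: flagged, with no explanation given
```

Nothing in the output tells the worker which data points triggered the flag, whether the weights are fair, or how to contest the result.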

Inferences are opaque yet have real-life impacts on workers in and outside of work. In essence, they are manipulative.  

With the right to be free from algorithmic manipulation, workers will have the ultimate right to reject AI systems at work. This principle puts action behind Professor Shoshana Zuboff's call to ban markets in human futures, i.e. the analysis and trading of behavioural predictions based on our personal data and/or personally identifiable information. It will also stop the current commodification of work and workers.

In summary 

Combined, these three principles will promote workplace democracy and empower workers. They will strengthen democratic control over the systems used in public services as well as in the market. They will put social (and environmental) impact costs, risks and possible benefits centre stage, thus providing a more nuanced, even real, evaluation of any productivity and/or efficiency promises.

They will prevent the opaque algorithmic manipulation that we are all currently subject to. They will, in a meaningful way, put workers at the centre of workplace change. And importantly, they offer an opt-out option. They do not assume that digital change is the only constant we all must adjust to. On the contrary, they give us the right to say no if our rights are violated. This is what scholars such as Jonnie Penn and Ben Tarnoff have insightfully called a decomputerization strategy. As Tarnoff says: “Decomputerization doesn’t mean no computers. It means that not all spheres of life should be rendered into data and computed upon. Ubiquitous “smartness” largely serves to enrich and empower the few at the expense of the many, while inflicting ecological harm that will threaten the survival and flourishing of billions of people.”

 

[1] In the text, AI refers to any data-driven or algorithmic system that processes workers’ data and impacts working conditions, jobs and workers’ rights.

Dr Christina J. Colclough is the founder of the Why Not Lab, a value-driven consultancy that equips workers and their unions across the world with the skills to ensure collective rights in the digital age.

Technology, Employment and Wellbeing is a new FES blog that offers original insights on the ways new technologies impact the world of work. The blog brings together views from tech practitioners, academic researchers, trade union representatives and policy makers.

