by Christina J. Colclough, the founder of the Why Not Lab
As the new European political cycle begins, now is the perfect time to learn from the past, correct past mistakes and, in line with an ETUC resolution, table a regulation of AI at Work[1] that truly protects workers’ fundamental rights. Despite claims that the EU AI Act will protect fundamental rights, many stakeholders, such as Amnesty International, Liberties and Access Now, rightly argue that the adopted legislation falls short of offering sufficient protection.
In this article, I propose three main principles on which the regulation of digital systems at work should be built to right these wrongs. The three principles will ensure workers’ fundamental rights, prevent the commodification of work and workers, and address the information asymmetries that disempower workers. All three can be further operationalised through a range of obligations and commitments, but for now, let’s look at the main principles.
Principle 1: Algorithmic systems must be inclusively governed
Whilst the EU AI Act stipulates a range of obligations on developers and deployers of high-risk systems, the Act does not go far enough in respecting the European Social Model and social dialogue. This is despite an OECD report indicating that workplaces in which workers or worker representatives are consulted about new technologies are the very workplaces reporting the most positive impacts on worker productivity and working conditions.
This principle therefore calls for the obligatory and meaningful inclusion of workers and/or their legitimate representatives in what should be an ongoing governance of digital technologies at work. This principle will enforce transparency. It will also give workers the possibility to raise concerns about AI systems, be party to their ongoing assessment, and negotiate red lines and frames around the data, the algorithmic inferences and the purposes of these systems. It will also ensure that deployers of digital systems actually understand the real and potential consequences of the digital systems they choose to use, and adjust or reject them if they harm workers.
No impact assessment, nor any governance procedure, can be meaningful or even truthful if workers’ voices are not party to it, both prior to and periodically after the deployment of workplace AI systems.
Principle 2: Reverse the burden of proof
Workers do not have access to the same information about workplace AI systems as the employers and/or the developers of these systems. Nor do they have full access to the algorithmic systems themselves, their instructions, their training data and so on.
To address this information asymmetry, the regulation of AI at work should therefore include a reversed burden of proof. The logic is that it should never be down to the workers to prove that their rights have been violated by workplace AI systems. It should be the responsibility of the employers to prove that no harms or other violations have been caused.
This principle will empower workers. It will ensure that employers understand, can analyse and therefore actually govern the technologies in question, and it takes seriously the fact that rights violations do occur.
Principle 3: The right to be free from algorithmic manipulation
This principle might seem abstract at first, but it is really a condition for the respect of workers’ fundamental rights. Algorithmic systems constantly manipulate us. They determine what news and posts we see online and which advertisements we are shown; at work, they influence who is hired, fired, disciplined or promoted. They also influence whether we even see a particular job opening on the internet, whether we have a priori been deemed unfit for the job, and who is likely to leave the workplace soon. They can be used to assess and compare workers’ activities, or to analyse and score workers’ spoken or written language against customer preferences. The list is endless.
This all happens not least due to algorithmic inferencing, also known as profiling, through which an AI model can reason and make predictions in a way that mimics human abilities. What these inferences in effect do is turn workers’ actions and non-actions into data points, which are then used to create profiles, which in turn determine a particular reality for the workers.
Inferences are opaque yet have real-life impacts on workers in and outside of work. In essence, they are manipulative.
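To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of how such a system might work. Every field name, weight and threshold below is an invented stand-in, since real workplace systems are proprietary and opaque; the point is only to show how logged behaviour becomes a profile, and a profile becomes a decision.

# A deliberately simplified, hypothetical illustration of algorithmic
# inferencing at work. All names, weights and thresholds are invented.
from dataclasses import dataclass

@dataclass
class WorkerActivity:
    emails_sent: int          # actions become data points
    idle_minutes: int         # non-actions become data points too
    after_hours_logins: int

def attrition_risk(a: WorkerActivity) -> float:
    """Collapse logged behaviour into a single 0-1 'flight risk' score."""
    return round(
        0.4 * min(a.idle_minutes / 120, 1.0)
        + 0.3 * min(a.after_hours_logins / 5, 1.0)
        + 0.3 * (1.0 - min(a.emails_sent / 50, 1.0)),
        2,
    )

profile = WorkerActivity(emails_sent=12, idle_minutes=90, after_hours_logins=0)
if attrition_risk(profile) > 0.5:          # an arbitrary cut-off
    print("flagged as likely to leave")    # the inference shapes the worker's reality

An arbitrary weighting of idleness, after-hours activity and email volume produces a single score, and a cut-off turns that score into a consequence for the worker. Nothing in this pipeline explains itself to the person it judges, which is precisely the opacity this principle targets.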
With the right to be free from algorithmic manipulation, workers will have the ultimate right to reject AI systems at work. This principle will put action behind Professor Shoshana Zuboff’s call to ban markets in human futures, i.e. the analysis and trading of behavioural predictions based on our personal data and/or personally identifiable information. It will also stop the current commodification of work and workers.
In summary
Combined, these three principles will promote workplace democracy and empower workers. They will strengthen democratic control over the systems used in public services as well as in the market. They will put social (and environmental) impact costs, risks and possible benefits centre stage, thus providing a more nuanced, even realistic, evaluation of any productivity and/or efficiency promises.
They will prevent the opaque algorithmic manipulation to which we are all currently subject. They will, in a meaningful way, put workers at the centre of workplace change. And importantly, they offer an opt-out option. They do not assume that digital change is the only constant we all must adjust to. On the contrary, they provide us with the right to say no if our rights are violated. This is what some scholars, such as Jonnie Penn and Ben Tarnoff, have insightfully called a decomputerization strategy. As Tarnoff says: “Decomputerization doesn’t mean no computers. It means that not all spheres of life should be rendered into data and computed upon. Ubiquitous “smartness” largely serves to enrich and empower the few at the expense of the many, while inflicting ecological harm that will threaten the survival and flourishing of billions of people.”
[1] In the text, AI refers to any data-driven or algorithmic system that processes workers’ data and impacts working conditions, jobs and workers’ rights.
Dr Christina J. Colclough is the founder of the Why Not Lab, a value-driven consultancy that equips workers and their unions across the world with the skills to ensure collective rights in the digital age.
Technology, Employment and Wellbeing is a new FES blog that offers original insights on the ways new technologies impact the world of work. The blog focuses on bringing together different views from tech practitioners, academic researchers, trade union representatives and policy makers.