26.02.2024

The Unmagical World of AI: Workers at the bottom of the AI supply chain

by Dr Funda Ustek Spilda, Dr Lola Brittain, Dr Callum Cant and Dr Mark Graham, University of Oxford

8 min read

It is impossible to ignore the apparent magic of AI. Centuries-old texts are translated into the language of your choosing, you can read menus in foreign languages just by hovering your camera over them, and you can write a 1,000-word essay on obscure plants without any knowledge of the subject. The opportunities seem endless. However, as more and more people think, write and work with AI, they are also questioning how the magic happens. The answer has less to do with sorcery, and more to do with human labour.

Few idioms translate across cultures and languages as well as ‘looking for a needle in a haystack’. The idea that some things are impossible to find, and that the effort spent looking for them will be in vain, is an easy sentiment to relate to. For AI to work the way it should, however, a vast number of people need to continuously look for needles in exceedingly messy haystacks. And because of the way digital infrastructures are set up, their work ends up invisible, undervalued and, as such, poorly compensated.

In their 2019 book, anthropologist Mary L. Gray and computer scientist Siddharth Suri coined the term ‘ghost work’ to describe the invisibilised, task-based labour facilitating the development of AI. The people performing such work, also described as ‘data workers’, carry out a myriad of tasks to ensure that the data used to train machine learning algorithms is fit for purpose. Such work can entail finding images online, annotating and labelling them, translating texts, or moderating content so that the internet is, and remains, as safe a space as possible.

In recent years, thanks to the work of Gray and Suri, as well as other scholars and critical journalists like Phil Jones and Billy Perrigo, the existence of data workers, and the critical role they play within the supply chains of artificial intelligence production, has gained increasing attention. In other words, the curtain has been pulled back, and we, as a society, have been told about the difficult conditions such workers face as they make these supposedly awe-inspiring AI systems work. Yet much remains unknown about these workers, including the size of this emerging workforce, but we do know that globally recognised standards of work are far from a reality for the vast majority.

That is why Fairwork, in partnership with the Global Partnership on Artificial Intelligence, has established a set of Fairwork AI principles. Developed via a global multi-stakeholder consultation in 2022, and adapted in response to the on-the-ground concerns of workers observed in fieldwork in 2023, the principles cohere around the core Fairwork structure of pay, conditions, contracts, management, and representation to establish basic minimum standards of fairness within the supply chains of AI. It is true that several standards for AI are already in place, and AI ethics is a topical discussion. However, our intention at Fairwork has been to establish standards that are operationalisable, and that answer to the needs and wants of the very people who work in the AI supply chain.

What exactly does this mean in practice? The answer is not overly technical: despite the undeniable shifts and changes to work brought about by AI, workers still need a sufficient wage, safe conditions, adequate protection, and decent contracts. They still deserve management that is fair and non-discriminatory, an assurance of their freedom to associate, and the chance to have a say in their conditions.

As alluded to earlier, this is not yet a reality. Indeed, behind the magic of AI are conditions that are far from dazzling. In 2023, the Fairwork AI project team conducted our first case study exploration into working conditions in the supply chains of AI. The company in question was Sama, a well-known impact sourcing company that specialises in data annotation for computer vision. Impact sourcing evolved in the late 2000s as corporate social responsibility initiatives began expanding into business process outsourcing; the practice of hiring marginalised individuals was promoted as a way of helping to lift them out of poverty and boost economic development overall, especially within the data annotation and verification sectors.

The Fairwork team was invited by Sama management to conduct a fieldwork study in their offices in Kenya and Uganda. The initial findings were worrying. A company committed to the double bottom line model (combining a financial profit strategy with social responsibility), which held B Corp status and had a reputation as an ethical operator in impact sourcing, initially appeared to meet none of the Fairwork AI principles. Interviews with workers revealed the effects of insecure contracts, poor management practices, gender-based discrimination, unpaid working time and long working hours on workers’ well-being.

When Fairwork shared these findings with Sama management, their response was commendable. Sama’s senior management took the research findings very seriously and committed to responding to the problems identified. In the consultation period between the sharing of the findings and the launch of the Fairwork report, Sama continued to engage with the Fairwork team, sharing evidence, information on the changes being implemented, and detailed explanations of how those changes were coming into effect across the company. This resulted in a significant improvement in fairness at Sama over a very short period, and Sama was eventually awarded a score of 5/10 in the Fairwork AI report published in December 2023.

The company has committed to 24 changes in total to address the issues identified in Fairwork’s research. These include:

- updating the overtime policy and adapting incentive structures so that workers are not rewarded for overworking;
- committing not to pay any employee a base salary below living wage levels;
- updating the employment contract to feature more explicit information on working hours, breaks, maximum permitted overtime hours, and benefits;
- updating the Workplace Illness and Injury Policy to include specific coverage of work-related psychological disorders;
- introducing a new policy to change default contracts from 1 month to 12 months (or, where there are no client termination or ramp-down rights, to the end of the client contract, except where there is good reason to believe the contract will not run its full course);
- launching the “Zero Tolerance Culture” campaign to train staff on anti-bullying and harassment, and introducing specialised external training on gender-based violence for key HR, legal and management personnel;
- creating specific employee rights training covering freedom of association, and investigating representation structures to increase employee involvement in decision-making across the organisation and at all levels.

With these changes, working conditions for Sama workers in the Kenya and Uganda offices undeniably improved, but there is still a lot to be done. Ultimately, Fairwork’s initial findings gave important insights into the working conditions of data workers. Indeed, the workers we interviewed over the course of our fieldwork commented that even though conditions at Sama were poor, they were not the poorest in comparison to other data companies. The workers drew a rather unmagical picture of AI production chains, where financial instability, coupled with work strain and poor management practices, was all too common.

Magic is often used as a metaphor for complex technological processes and systems, and this is why magic has been such a powerful metaphor in the marketing rhetoric of AI systems. We are told of AI’s amazing, unending capabilities; of its power to both save and ruin the world; of God-like qualities just around the corner. It is a powerful metaphor that is easy to get swept up in. But a metaphor is all it is. AI is not untethered, immaterial magic. It is structurally reliant on a vast number of people performing a myriad of tasks in not-so-magical working conditions.

Everyone likes to believe in magic. But where AI is concerned, awe should be reserved for the workers performing the tasks behind the curtain. It is only because of them that the systems can do what they do. The least they deserve is basic minimum standards at work.   

At Fairwork, we will be continuing our investigation into AI supply chains in the new year with new studies, shifting our attention to business process outsourcing companies in Latin America with further support from the Global Partnership on AI. There is nothing inevitable about poor working conditions in the digital economy. Despite their claims to the contrary, companies have substantial control over the nature of the jobs they provide. Fairwork’s aim is to hold them to account.

 

About the Authors

The Fairwork project is coordinated by the Oxford Internet Institute and the WZB Berlin Social Science Center.

Technology, Employment and Wellbeing is a new FES blog that offers original insights on the ways new technologies impact the world of work. The blog brings together different views from tech practitioners, academic researchers, trade union representatives and policy makers.

