Clickwork

Old joke: A woman says “I’m writing a book”; another woman replies “Oh how wonderful, neither am I.”

As the deadline looms for my book UberTherapy, to be published next year by Bristol University Press, I’m dealing with two rather unsavoury prospects. Firstly, I have to actually use Mindfulness Apps and chatbots to understand the what-it-feels-like of digital mental health tools. Secondly, despite being someone who has never read a single instruction manual, I’m having to learn how AI actually works and find a way of explaining it that you will actually want to read. And as a result I’m having to talk to people about technology in a way that doesn’t make me sound like a sarcastic teenager.

Last week my research on therapy platforms got a bit of traction in the USA courtesy of an i-D magazine article. Unless I’m flogging a book (when I’m all yours), I rarely speak to journalists. Increasingly I find that being asked to deliver an elevator pitch on a complex social crisis, plus an unattributed hour-plus free lecture on the sociology of therapy, is a big ask. But sometimes I give in to a bright young feminist, and she finds a way of saying something that can be heard by the new users of digital therapies, something that someone with no tik or tok struggles to do.

As an academic I’m resigned to seven people doing a skim read of my research, including my mum. It’s OK; ultimately I do this for myself. But in these large language times the authorship waters become muddied, and as knowledge professionals we see ourselves and our outputs replicated indiscriminately. With the routine advent of AI in academic publishing, linked to the drive to Open Access, we are facing a profound shift in how and why knowledge gets produced. Way beyond the black-and-white threat of epistemic violence in knowledge production, it’s not just what data gets produced but how much, and what happens to it, that now matters.

A phrase used in big data circles: if you put shit in, you get shit out. Size matters when it comes to the data economy. Quality and truth do not. And there’s the rub: the system can’t distinguish between the good stuff and the shit stuff. It all gets mixed into a muddy soup that only the chatbots can digest.

And this is the reality of the academy now facing the mainstream use of AI technologies in the act of research and writing, and by extension marking, academic standards and the long-term legitimacy of our various research fields. This has become very real on the HE front line with the advent of OpenAI’s ChatGPT, a large language model that can mimic human speech and writing. With tech bells and whistles, it’s a free essay-writing service available to anyone at all. This is the first year that we can anticipate some students submitting AI-generated work as their own: plagiarism on steroids. Turnitin, the software used in the UK university system to measure ‘similarity scores’ of student assignments, boldly claims it can detect ChatGPT authorship, hilariously on the basis that it is characteristically ‘bland’. Big words in a big machine learning world.

Because five technological minutes later, ChatGPT’s technology was commercially partnered with Microsoft and, voilà, the search engine Bing offers an advanced GPT model that can draw data from the internet, and the game suddenly changes. That’s a LOT of data that can get churned up into muddy soup. It allows Bing to generate higher levels of analysis of a higher volume of data. After years of being able to measure the unique educational development of students through their critical analysis, now the robots can do it too. Added to which, they can now mimic writers. So for those students with a low opinion of the objectivity of their lecturers, one strategy could be to ask Bing to write their assignments in the style of Dr Elizabeth Cotton and roll the dice as to whether I’m a self-replicating narcissist.

Since academics have a bias towards the sound of their own voices, AI can now bring that baby home.

The problems of not regulating this are multiple. As all the current debates about AI regulation suggest, you can’t rely on machine learning to produce something that isn’t going to harm us; it therefore requires ‘human centred’ policies and high levels of human moderation to keep that potential for harm in check.

This includes the well-documented problems of disinformation (including problems with data privacy and protection) and misinformation, where AIs can be manipulated into producing untruths and un-facts: things that are literally untrue, from fabricated deaths to 2+2 no longer equalling 4, exploiting the drive of AI technologies to protect their own systems. It is routine that when reality does not match the data output, machine learning allows the data to be changed or fabricated.

Putting it at its most mundane, I’m wondering what this year’s marking season is going to involve, because my human grasp of actual facts in my field, and sometimes very lite face-to-face contact with students, means that I am unlikely to have the resources to know with any certainty who wrote an assignment and whether the ‘facts’ included are actually true. In many ways, years of telling students to stop using Wikipedia and doing Google searches turned us into school teachers, recommending texts and avoiding the demand for original thinking. But what has emerged now is a production of assignments that cannot be measured using actual knowledge. At which point I have the recurring feeling of being a battery chicken pecking away at my own feathers.

Click click. Cluck cluck.

And behind these AI systems exists a growing industry of ‘clickwork’ - the data entry and output moderation of AI tools. This is going to come as an awful shock to you, dear reader, but the vast majority of clickworkers are women, living in the Global South and working in the gig economy. As the field research grows, we learn the universal self-reproducing truth of our economic system: if there’s ever an opportunity to reinvent the sweatshop, we’ll find it. And it is in this sense that the degradation of knowledge always implies the degradation of work, for call centre and academic workers alike.

Back to the academy. Of course, for academics, having to prepare actual young people for the world of work means we can’t just throw our phones down the lav and head for a bunker in Arizona à la Sarah Connor. We have to engage with the technology we may (or may not, who actually knows) already be working with, from online Prevent training to algorithmic performance management systems.

So I teach this stuff, trying to maintain a neutrality that I just don’t have. Fortunately the glitches in the system invite polemics, and it’s remarkably easy to pull one together in a few handouts (sorry, eLearning links) if you look at the link between AI recruitment tools and racial bias.

And if you’re teaching a group of international students, the relief is palpable: to know that something really was up in the AI scanning of their CVs on recruitment sites, and that the poor scores in the assessment centres may not entirely be about them. That to ‘fit’ into a white male algorithm may just be impossible for some of us. That your social media presence influenced whether you got to interview or not, in completely counterintuitive ways.

I then add a session analysing the APPG inquiry into AI at work - to understand the principles of a new AI Accountability Act and the Good Work Charter required to manage the clickwork careers of the future.

And a Panorama short about what happened to all the senior staff at Twitter overseeing child porn and hate crime when Elon Musk took over. They got sacked, and a demented democracy fuelled by misinformation lives on. Most of them, by the way, don’t let their kids have smartphones or access to Apps.

And I do that quietly during seminars which are not recorded and uploaded for the eLearning Gods to scan with their indifferent eyes. Just. In. Case.

It’s the moment you see the architecture of this brave new labour market that you realise you’re already living in it. As a sociologist it pains me to see that the context in which my knowledge is produced has changed so quietly and so dramatically that its meaning has shifted without my knowing. And it is with this humility that I turn back to the rapid tide of an unregulated muddy soup of Apps and Chatbots to understand them. To work out a way to quietly co-exist with them and loudly resist them. By actually writing an actual book.

Three things worth reading about the political fault lines underpinning AI:
