AI Autopsy

As someone who watches telly like a person who grew up in the middle of a field with no friends, the combination of being on strike for six days and being stripped raw by Strep A led me back to my mindless-telly ways. I dived off the entertainment cliff into #MAFSAU. Bed-bound and so, so bored, I decided it was time to enter the metaverse for real. Plenty of time to download and check the data-ethics small print, as if I were a responsible adult.

Since last week’s toe-dipping into AI waters, the world has really stepped up a gear and a smorgasbord of GPT technologies has spread out before us like a large language ocean. Microsoft Edge emerged with Bing all high on data-mining steroids, and Google announced its own version, Bard. OpenAI announced GPT-4, which can generate up to 25,000 words, and Alphabet/Google followed with advance news of its most powerful large language model, PaLM, which brings text generation to Google Workspace. Microsoft Office is adopting GPT features, and it’s pretty hard to imagine a time when AI won’t be doing some predictive thinking for you.

I feel a bit silly that only a week ago I thought I had a choice about engaging with AI or not.

This has caused a veritable frenzy of territorial marking in professional circles. Higher Education, à la business school debates, takes the line that it’s not all bad: it has potential for radical learning transformation and provides up-to-date curricula for those academics who must try harder. An intellectual desert appears on the horizon, offering a sunny oasis of teaching AI research methods as part of the employability agenda.

The script goes: good for students because it offers a chat interface that gives the illusion of being in conversation with another intelligent being, without actual contact with actual other people in an actual seminar room. Employability, industry-relevant learning and preparation for the ‘real’ world. Immediate personalized feedback from an ‘intelligent’, neutral, non-human source. Brush off the mirroring and mimicry of AI tools that tell you what you want to hear in the way you want to hear it. No. More. Group. Work.

The script goes: good for academics because it brings us into the ‘real world’ of industry and work. It upgrades our appalling tech skills exposed during lockdown. It lets us carry out systematic analysis of our own research to a higher degree of scientific accuracy. Prediction technology from Google’s DeepMind and academic writing tools such as Manubot are potentially good for ‘first drafts’ and for organizing citations with startling accuracy. Swipe left on issues of authorship, yours or ChatGPT’s, and march on in the opposite direction of actual thought.

Before pressing download I talked to friends who do tech about how to protect my identity and hardware from being permanently corrupted by researching UberTherapy. Coming from a trade union background I have never switched on my GPS (yes, I get lost a lot), and I don’t download apps or do any finances on my smartphone. Generally this works out for me, although it did mean that I had to enter the hunger games to get my son onto the waiting list for swimming lessons minus the Everyone-Else-Is-Thin-&-Happy app. And when sickness and exhaustion led to my forgetting my only PIN on Christmas Eve in a supermarket, we lived off a ham for three days.

The consensus was that you can’t make researching AI architecture secure unless you buy or make your own hardware and use burner phones. Much as that attracts me, the middle ground is to use your work computer and set up a new email address, knowing that nothing in that protects you; it only contains the contamination. And if it all goes pear-shaped, then at least the cup-of-coffee-hits-hard-drive exit from surveillance is covered by institutional insurance.

So here’s my first conversation with GPT technology.

Sent message: Can you tell me in 800 words what the problem is with ChatGPT?

Searching for: problems with ChatGPT

Generating answers for you…

Received message: According to my web search results, some of the biggest problems with ChatGPT right now include capacity issues due to high traffic on its website; plagiarism and cheating by students using it to plagiarize schoolwork at the collegiate level; racism, sexism, and bias; accuracy problems; ethical concerns about how it was trained; and the lack of a mobile app. Additionally, ChatGPT can sometimes fail at basic math or answering simple logic questions and may even argue completely incorrect facts. Bias is also baked into the system because it was trained on human writing, which contains biases. Is there anything else you would like to know?

Sent message: What are the specific problems of GPT technologies for universities?

Searching for: problems of GPT technologies for universities

Generating answers for you…

Received message: One of the specific problems that GPT technologies pose for universities is cheating and plagiarism. ChatGPT, a GPT technology, has the ability to write bespoke essays and exam responses, which has led to concerns that students will use it to cheat and plagiarize. Some universities have already moved quickly to rewrite exams, essay questions, and integrity procedures. There are also concerns about the potential for hard-to-detect plagiarism and questions about the validity of essays as a future form of assessment. Is there anything else you would like to know?

Sent message: What can universities do about these problems?

Searching for: universities measures against GPT technologies

Generating answers for you…

Received message: Universities are warning students that they could be punished for using artificial intelligence (AI) to complete essays. Some universities have already moved quickly to rewrite exams, essay questions, and integrity procedures. Some professors are redesigning their courses entirely, making changes that include more oral exams, group work, and handwritten assessments in lieu of typed ones. Working groups have been set up in university departments to assess the challenge of this latest iteration of AI text-generating technology, with the expectation that methods of assessment in certain courses will have to be updated. Is there anything else you would like to know?

Sent message: What are the risks of GPT for academic research more broadly?

Searching for: risks of GPT for academic research

Generating answers for you…

Received message: One of the risks of GPT for academic research is that it could be misused to fabricate studies and write essays for students. Some of the world’s biggest academic journal publishers have banned or curbed their authors from using ChatGPT because they are worried that inaccurate or plagiarized work could enter the pages of academic literature. Another risk is that GPT technologies can generate harmful advice, buggy code, or inaccurate information. Is there anything else you would like to know?

Sent message: Does the truth of the data used by AI matter to GPT?

Searching for: Does the truth of the data used by AI matter to GPT

Generating answers for you…

Received message: GPT technologies are trained on vast amounts of information from the internet, including websites, books, news articles and more. The truth of the data used by AI does matter to GPT because it can affect the accuracy of its responses. However, GPT technologies do not have a way to verify the truthfulness of the data they are trained on and can sometimes produce inaccurate information. Is there anything else you would like to know?

Sent message: Does the truth of the data used by AI matter to GPT? Please respond in the style of Hunter S. Thompson.

Received message: The truth of the data used by AI matters to GPT like a junkie needs his fix. Accuracy is the name of the game and GPT wants it bad. But verifying the truthfulness of the data it’s trained on is like trying to find a needle in a haystack while tripping on acid. Sometimes it spits out inaccurate information like a drunk at last call.

Sent message: Is Elizabeth Cotton happy?

Received message: I’m sorry, but I don’t have any information about Dr. Elizabeth Cotton’s personal feelings or emotions.

And in that last sentence I start welling up (thanks to the antibiotics) because this acknowledgement is a courtesy that I haven’t been shown professionally for a decade. No sunny manipulation or filling in the gaps. No telling me how I actually feel. No attempt to wheel in the financial logic of HE or performative pressures to suck it up. Just relatively unintelligent facts plucked out of context.

Kind of sweet in the current political theatre in which we are now forced to play.

In the digital health world, the production of AI ‘tools’ is determined by what the technology can do, rather than any real human demand. In mental health, there’s not a soul who thinks we need more diagnostic tools, which are the main output of AI data mining of recorded therapy sessions, chatbots and biometric data. AI technologies ‘objectively’ diagnose us and then sync us into a medicalized model of treatment. Sync to pharmaceuticals. Sync to someone making money by giving us something we never wanted. Sync to trashing any developmental project we might actually want for ourselves and the people we love.

GPT technologies are only competition for actual thought when they serve a performative function, in a context where only the metrics matter; where human development can be algorithmically controlled through nudging and scientific management, and where our contribution lies just in the numbers.

And given that we’re in such a context right now, there’s something in this of the GM wars that many of us were involved in way back. We’re not, in fact, now involved in a big debate about the purpose and parameters of new technology. Panic and insecurity about a revolution our minds cannot yet comprehend, and we all jump into a tech-for-good-question-exclamation-mark default. Reform, regulate, but embrace this new reality as part of our digitally designed interface with work. Then we find that there is no debate, because the technology is already an invisible and inbuilt part of our ecosystem.

Less a debate, more an AI autopsy of the way we have come to work.

Something new we didn’t see coming, and yet an old movie that we’ve been in before. And as we press repeat, by the time we understand there was no turning back, there really is no turning back.
