Defences against thinking
In urban mythology, the average peer-reviewed academic article is read by ten people, and half are never read at all, so it could be argued that academics have bigger existential problems than their research excellence.
Recently the dominance of AI in knowledge production and peer review, in the AI Scholar Labs of the future, has taken a dramatic turn with the advent of AI as the arbiter of research excellence in the health co-scientific knowledge production sector. How do you know when you are excellent, academically speaking? In the circular logic of the AI-Research Excellence Framework, your research excellence only happens when AI tells you so. In the Matrix of AI, the algorithm of academic publishing is my peer and I am forced to play a game that I don’t even understand, called discoverability.
One of the barriers to understanding how the use of AI technologies changes therapeutic work is the nature of the research project around it – how to get data at all, how to get good data, and how to get it funded, published and into the mainstream of Open Access publishing. It is through this algorithmically determined publishing world that researchers are able to claim evidence-based impact and secure their research’s place as industrially useful to the policy and practice actors of digital therapy.
There are a number of reasons why the impact of AI on therapeutic work is still an undertheorised and misunderstood area – partly the speed at which the technologies have become embedded in our systems, blurring the line between human and non-human interventions, partly the tension between AI, knowledge production and academic freedoms, and partly the lack of AI literacy in the therapy sector. Some of the best research projects around the impact of AI come from Data & Society in the US, working at the granular level of identifying GenAI research participants and how to measure authenticity in mental health research. But as research funding is intentionally sliced and redirected, combined with the impossible demands that come with a scarce funding climate and a barely formed ethical framework, it’s hard to imagine how anyone has the space to think deeply about what is on our minds (for a great article about slow research go here). In a failing system of academic publishing, even the most privileged in the academic food chain recognise that the reign of AI knowledge production is terrifyingly unsustainable for actual people.
Theorising digital therapy also involves asking research questions that the therapy profession is reluctant to open up: what draws people into using chatbots, and what do they offer that human therapists do not? This is a potentially shaming line of enquiry on all sides of the therapeutic relationship. Part of what makes researching the use of AI in mental health hard is that it requires researchers to take seriously co-production and the complex chain of events that gets triggered when you ask someone about their feelings around mental health (a great article about the research practice considerations around patient and public involvement here). Importantly, research in this field means embedding it in systems that offer safety to all involved and that take seriously transparency and participation in the research process. Sounds reasonable, except when you understand the research systems within which we operate: under attack for inconvenient truths and heavily distorted by the dominance of financial interests in the production of knowledge around digital therapy. I don’t want to labour the point, but especially in the realm of digital health, economic size matters to what research gets funded, and where and whether it gets published.
Behind the research is a business model where size matters, and the demand for big data and algorithmic access to academic research has led to the advent of Creative Commons licensing in academic Open Access (OA) publishing. Although opening up research to be accessed for free is, in principle, a good idea, it has some important implications for intellectual labour. It means that right now we have a system of academic publishing where free access (as opposed to free to publish) has been traded for intellectual property, such that, increasingly, authors have no control over their own research. Under some licences, my research can potentially be edited or sold without my consent and offered up to the big data daddy in the sky to be churned out into a report that I have never read. As a researcher, I cannot fully control what words are attributed to me. Just think about that for a second: in this regulatory context the truth is not just unimportant, it becomes impossible to locate, challenging any motivation I might have for writing actual words again. It does, however, reinforce that despite the requirement that academics focus their labour on publishing in high-ranking journals, writing a book about, ooooh, UberTherapy for example, constitutes that precious creature called freedom.
One of the threads of the story of UberTherapy is that the new business model of mental health acts as a defence against thinking. That in its formulation, its delivery and its industrial relations, it invites us to stop thinking about the systems within which we are working. In part, this state of un-thinking is encouraged by our disorientation in seeing the therapy landscape, in understanding the financial logic behind it, and by being shamed into an uncritical silence about what is really on our minds. I now believe that the platformisation of therapy offers an intentional unknowing about our psychic realities. It allows for the normalising of UberTherapy, which is designed to deny relational care and to hide the data and algorithmic evidence, stopping us from knowing the facts and taking a position on them. It is a neoliberal framework that does not offer us a stable ‘framework of care’ where we are safe to work out who we really are.
Part of the defence of thinking is a simple point about the importance of talking with people who are not the same as us, and the need for all of us to talk to people outside of our professional or political circles about digital therapy – the tech start-up people who got dropped by their venture capital investors, therapists from different therapeutic traditions, families and consumers of on-demand therapy or NHS services, and the B2B and B2C consumers of therapy platforms. Inevitably, given the nature of therapy is to talk freely, talking about UberTherapy requires that we bridge the cognitive dissonance in our experiences and are safe to speak a language that reflects deep emotional and lived experience without the threat of being misrepresented or ideologically captured. From developing communities of research practice to reviving co-production, it has become essential for us to consciously step outside the digitally curated echo chambers within which we think about mental health.
Michael Rustin’s open-minded and serious book Researching the Unconscious is about as close to a how-to book on researching the unconscious as you can get. I hope Michael wouldn’t mind me saying this, but he’s a meticulous thinker, a pedant even, about getting psychoanalytically informed research just right.
David Armstrong’s book Organization in the Mind is something like poetry in its deep and creative curiosity about organisations. In the world of organisational consultancy he’s pretty much the Godfather of the deep work of group relations.
Methods of Research into the Unconscious edited by Kalina Stamenova and the gentle Robert Hinshelwood is another important guide on how to access the unconscious in research. It will expand your mind about what is possible in researching actual people.
Join us at the online launch of Asylum Magazine’s special issue on AI and mental health on 19th December 2025, 6-7pm.
Register here to receive the joining link: http://bit.ly/Asylumlaunch-Winter25
You can buy a copy of UberTherapy: The new business of mental health, published by BUP, here
@survivingwork.bsky.social @survivingwk
@UberTherapy.bsky.social @ubertherapies