Complement, Not Replace: Exploring AI’s Place in Mental Health Support

Large language models (LLMs) have become a powerful academic and educational tool for many people, and the launch of GPT-5 is a reminder that we’re still at the beginning of a profound transformation in the way we interact with information. While AI might offer unprecedented accessibility and support, thinking that it can replace the human elements of therapy is a profound category error. Its greatest value lies in complementing, rather than substituting, mental healthcare. How this will be done is still being explored, and it raises important questions about the appropriate boundaries and ethical considerations of AI in therapeutic contexts.

As with any powerful tool, accessibility comes with potential risks. We know that people are already turning to AI to intervene and propose solutions for everyday annoyances that don’t deserve our own emotional and cognitive energy, but, more worryingly, a growing number of people are making AI their therapist or mental health support. Whether it’s the low-cost (usually free) support, the illusion of connection, or simply the fact that it’s available when your therapist isn’t (look out ChatGPT when my existential dread kicks in at 2am), the allure of this unfailingly agreeable companion is often all too enticing.

And it’s tempting, of course. AI can provide us with personalised treatment plans, incorporate psychological as well as physiological data, and apply a range of techniques, and it is widely admired for its benefits, if a little mysterious. Until recently, of course, it hasn’t had to consider pesky ethics or guardrails. Worryingly, however, ChatGPT has also recently been blamed for escalations in symptoms of psychosis, with over-agreeable responses feeding into people’s delusions.

When battling anxious thoughts, it is common for people to seek reassurance that their anxiety is unfounded. This may take the form of seeking assurance that a negative outcome won’t occur (will I fail this exam?) or reassurance against evaluative threats (did I come across as weird at the party?) (Cougle et al., 2012). Both of these safety behaviours take away from the real human experience of tolerating uncertainty and building resilience. AI chatbots seem to miss this nuance, offering instant and compelling reassurance that does the user a disservice in two main ways. Firstly, the user misses an opportunity to sit with the discomfort of not knowing: getting through a stretch of time pondering negative outcomes before emerging on the other side having coped with the unease.

Instant reassurance from difficult symptoms might seem helpful at first, but it neglects the importance of learning to sit with discomfort and uncertainty.

Secondly, the individual does not gain confidence from learning that they can tolerate uncertainty and emotional discomfort on their own. The reassurance from AI becomes a crutch rather than a catalyst for growth, potentially reinforcing anxiety patterns in the long term. Having therapy booked regularly (usually weekly), rather than reaching for our phones on a whim, also means that we have time between sessions to digest what came up in the room, experiment with our findings, and be held accountable for following up on actions. It teaches us the skill of prioritisation, focussing on what matters during that fifty-minute session.

When coping with depression, behavioural activation can be a life-changing treatment and may be prescribed by those offering wellbeing support. With AI, however, stepped plans for improving motivation and wellbeing can stall without human accountability. A chatbot might be able to provide detailed recommendations, but without a therapist who checks in on plans and works through barriers, clients may never implement the suggested actions. The lack of human connection undermines the effectiveness of these interventions: a therapist who notices when you don’t show up, who remembers your specific struggles, and who builds a relationship with you over time creates a sense of responsibility that AI simply cannot replicate.

The gap between knowledge and action highlights a limitation of AI: it cannot replicate the relational accountability inherent in human therapy.

When action (the thing you know will be good for you but don’t want to do) must often precede a change in emotions, having a reassuring other look you in the eye and tell you to push through the doubt they’ve seen a hundred times before can be far more powerful than typing to a faceless AI. The emotional resonance of the therapeutic relationship accounts for around 30% of therapy outcomes (Horvath et al., 2011), something that AI simply cannot replicate despite its increasingly sophisticated responses.

Is there a place for AI in mental health and therapeutic care? Is there really a call for dehumanising therapy? Can AI therapy be ‘good enough’?

Perhaps AI presents an opportunity in mental healthcare. There’s no denying that these tools are accessible and can fill gaps in the healthcare workforce, and removing a human from the interaction can remove any fear of judgement or stigmatisation. Whether AI can be responsibly used by healthcare providers to innovate mental healthcare will be an ongoing debate, but hopefully one that takes into account inherent cultural biases; the ‘black box’ problem, in which models containing millions of internal parameters remain opaque; and of course the ethics of its use, its training models, and the broader commercial interests in the promotion of these tools. Some would argue that the rise of AI therapy reflects a political and societal reality in which governments do not wish to invest in public mental health services.

As a complement to traditional therapy, AI could serve as a supplementary tool under professional guidance. Like interactive homework between sessions, a therapist might recommend specific AI-guided exercises for skill practice or journaling prompts that integrate with ongoing treatment. Behavioural health apps already demonstrate this hybrid approach, where digital tools extend the reach of care while maintaining human oversight. Clients might also use AI to demystify the therapeutic process and empower themselves with knowledge, perhaps making better-informed decisions about the therapeutic modality that will benefit them most. They might be more inclined to use AI for ‘small things’ between sessions, freeing up the therapeutic hour for more pressing topics, or to prepare for a session so they can make the most of their time.

Thankfully, since I started to write this article, new safeguards have been put in place to respond to AI misuse, including updates from OpenAI (the creators of ChatGPT) to better identify and respond to emotional distress with evidence-based resources, encouraging breaks and signposting to local mental health support. The use of AI in mental health must also carefully balance patient confidentiality with equitable outcomes, while addressing biases in training data and maintaining rigorous human oversight.

Although AI holds promise as an accessible and cost-effective tool in mental health, it is not a replacement for human therapy. The unique relational and emotional dimensions of traditional therapy, including accountability, empathy, and the opportunity to tolerate uncertainty, cannot be fully replicated by even the most sophisticated AI. However, under professional guidance, AI can complement therapy by providing supplemental support, psycho-education, and interactive exercises between sessions. As technology evolves, careful consideration of ethics, biases, and safeguards will be essential to ensure that AI enhances rather than diminishes the human experience of mental healthcare. Ultimately, the goal should not be to dehumanise therapy, but to integrate AI thoughtfully, ensuring that compassion, expertise, and connection remain at the centre of mental health support.
