Should the Mental Health Industry Embrace or Reject AI?

11 minute read

Since the launch of OpenAI’s ChatGPT towards the end of 2022, AI has disrupted just about every industry.

Certain industries have changed on a fundamental level, especially those that rely on skills like coding. AI can now write complex code from plain-language requests, boosting productivity and opening up new opportunities for innovation.

However, other industries are far more sceptical about the use of AI. It has been a particularly hot topic in the field of mental health.

There’s a simple reason for this that probably doesn’t need to be stated. But, for the purpose of framing, it comes down to one core question. Is it possible to use AI in mental health treatment without losing the human touch?

If you’re looking for an easy answer, you’re out of luck. We can, however, dive into the nuances, and identify why AI doesn’t have to lead to a world without empathy.

Is AI in Mental Health Worth Discussing?

Many mental health practitioners have rejected the question out of hand. It can feel undermining to even imply that AI might take over jobs that require the empathy, care, and human understanding mastered by mental health providers.

Is it even worth discussing whether AI has a place in mental health treatment?

In fact, we already have evidence that the discussion is worthwhile, but not because AI is about to take over your therapy practice.

On the contrary, AI in mental health is hugely beneficial in three important areas:

  • Diagnosis
  • Prevention
  • Access

How AI Has Already Impacted Mental Health Treatment

The Role of AI in Diagnosing Mental Illness

Psychiatry has always been a complex field. It requires a synthesis of quantifiable data and subjective interpretation. It requires experts to work on the basis of a diagnostic manual that no one pretends is comprehensive or infallible.

Perhaps most importantly, it requires something impossible: an unbiased opinion in the context of a supportive, human relationship.

It is in facilitating this impossible task that AI offers a significant path forward.

Within minutes, a mental health AI application can assess the data and provide possible interpretations, citing the evidence for each diagnosis along with the factors that challenge it. It can also direct experts to the relevant academic research.

In other words, it helps point the psychiatrist in directions they may not have considered.

Data-driven diagnoses

While this can come across as a challenge to the field, the reality is that psychiatry has used similar processes for decades.

I personally experienced this with my own excellent psychiatrist. When I suggested I might have ADHD, she was sceptical. Her experience with me was of someone with a sharp mind who was always very present in sessions. The anxiety disorder with which I’d already been diagnosed could explain many of the symptoms which pointed to ADHD.

She did not want to go the trial-and-error route with medication, either, as she was concerned about how stimulants might interact with my antidepressant.

Instead, she gave me some surveys to fill out. They provided her with quantitative data unaffected by human subjectivity.

AI has a similar place in psychiatry. The difference is that it speeds up the process while expanding the role of quantitative data.

We should not rely on AI mental health diagnosis in isolation, though. The psychiatrist still needs to make the big decisions, based on a combination of objective information, human unpredictability, and personal experience.

Prevention

Prevention is a tricky subject in the field of mental health.

While many physical health conditions can be prevented through lifestyle changes and early interventions, mental health treatment often focuses on managing an illness rather than curing it.

Chemical depression, for example, cannot be prevented in the same way as certain types of heart disease and diabetes. It is also often not possible to diagnose mental illness at an early stage, before it is already causing major problems.

As such, prevention in mental health requires a focus on identifying risk factors and taking early action.

This type of prevention has been implemented within public health for the past few decades. However, AI – and particularly ‘big data’ – makes it much easier to recognise risk factors.

The role of public health in preventing mental illness

Traditionally, the process would go something like this: public health experts would assess communities facing systemic struggles, including poverty, high crime rates, and widespread substance abuse.

These circumstances raise the risk of mental illness, addiction, and trauma response in youth.

Usually, the circumstances cannot be changed, but coping skills and competencies, along with mental health awareness, can be taught to children and adults before they face a mental health crisis. If a crisis does occur, they are far more capable of managing it before it spirals out of control.

By synthesising data from individuals and comparing it with the mass data gathered from billions of people around the world, AI can identify risk factors no human could have picked up on. It can point public health experts to communities and groups that are particularly vulnerable, providing data on what has worked in similar circumstances in the past.

Access

A lack of access to mental health treatment is a major problem throughout the world, due to a number of factors.

In some regions, the high cost of treatment is the biggest barrier to entry.

In other regions, it is a lack of mental health providers that prevents access.

Still others face excess demand, leading to long waiting lists for appointments.

Community issues also prevent access. Many communities around the world lack education regarding mental health issues. In certain cultures, mental illness is seen as a personal failing and mental health treatment is derided.

Other cultures have different stigmas when it comes to mental illness.

AI is helping to improve access to mental health treatment in various ways.

It is streamlining diagnosis, allowing more people to start their mental health journey when providers would otherwise be swamped.

It is reducing the human resources these processes require, bringing down costs.

Most controversially, it is providing the basis for certain types of therapeutic treatments.

Can AI Chatbots Be Good Therapists?

The role of AI chatbots in treating mental illness is not clear-cut, even though they have been in use for the past few years.

There is evidence from academic studies that they work to some extent. However, many psychotherapists would dispute what counts as success.

This points to why AI in mental healthcare is a touchy subject.

There is no universal definition for existential meaning, happiness, or healthy relationships.

The only quantifiable measures that are broadly accepted relate to the definition of dysfunction.

In line with this, most AI therapy chatbots have thus far been programmed to minimise dysfunction. They use structured techniques from therapies like CBT to guide users in challenging anxious thoughts and reframing difficult events, and to suggest exercises that may improve outcomes.

Most people who have attended talk therapy would agree that this is only part of the process. It definitely helps people cope in the modern world, creating a basis for a happy, meaningful life.

But it does not take people on the journey towards that happiness and meaning.

As of yet, a chatbot can come nowhere close to doing the job of a good therapist. If you’ve spoken extensively to ChatGPT, you will understand why. Its responses are impressive, but are always based on interpretation of a huge trove of quantifiable data that does not represent subjective experience.

Will an AI chatbot ever be a good therapist?

There’s no way of knowing, but if so, it is unlikely to look anything like what we currently have. Regardless of what happens in the future, we have seen the mental health field continuously adapt to changes over the last century.

And, having adapted through the unprecedented and urgent period at the start of COVID-19, it is easy to remain optimistic.

Ethical Considerations: Prejudice in AI

There is another factor that needs a lot of consideration: prejudice in AI. We spoke above about how AI can provide potential diagnoses to a psychiatrist which are not informed by the provider’s biases.

But what if AI has its own biases?

Unfortunately, AI certainly has biases and prejudices in the mental health space (as it does in all other spaces). Our current generative AI models are trained on information from across the internet, which may seem extensive to the casual observer.

However, anyone involved in modern mental health knows that the practice and study of psychology has focused predominantly on cisgender, heterosexual, Caucasian people in the Western world.

The tide has started to shift, but there is still a long way to go, and this is reflected in the relatively minimal content on mental health treatment in BIPOC, non-Western, and LGBTQ+ communities.

Psychology’s blind spot is shared by AI

Today’s mental health practitioners and companies are cognisant of the need to view the field in the context of a multicultural world. But education still focuses on traditional models.

Even when studying mental health at university in South Africa, a country in which white people comprise less than 10% of the population, I found that the syllabus contained almost no research or guidance on adapting treatment for other cultures.

Universities here have made progress in this regard in the fifteen years since I graduated, but the biggest changes have come in the treatments provided in hospitals. Experts found that models with mindfulness at their core are far more suited to the majority of South African patients in crisis than any of the traditional Western models.

These treatments are taught to interns at these hospitals rather than as a central part of their university education.

The lack of written content about diversity in mental health care is a problem for AI. Generative models can be asked to take diversity into account, but they do not have enough material to work with to counter the biases built into their training.

And, while human practitioners are also subject to biases and prejudices, they are better placed to give their attention to the diverse needs of multicultural societies.

What Does the Future Hold for AI in Mental Health?

In his seminal work, Do Androids Dream of Electric Sheep? (on which the Blade Runner films were based), Philip K. Dick envisions a world where androids are almost indistinguishable from humans.

Humans try to differentiate themselves by proving they have empathy, often by caring for an animal (or an electric animal if they can’t afford the real thing).

Over the course of the novel, it becomes clear that the line between the two is blurring.

AI today has very little in common with PKD’s androids. You can have conversations with a chatbot and it may feel like you’re talking to a person, but we’re nowhere near the capacity for AI empathy in mental health.

As such, we’re unlikely to see a flood of human mental health experts losing their jobs to robots in the near future. That’s not to say it will never happen, but it would require the emergence of a qualitatively different kind of AI.

For now, AI is increasingly able to do the admin, improve access, and provide other benefits.

This should not be seen as a negative – in fact, if used responsibly, it has the power to make treatment more egalitarian while helping grow the careers of mental health providers.

Responsible AI Use for Mental Health Providers

AI cannot be ignored in the mental health field. However, this is an industry in which it is crucial to use it responsibly.

There are many ways a provider might take advantage of AI, whether to help with diagnosis, improve the content of reports, suggest structures for public health initiatives, or for many other purposes. But the importance of reviewing, fact-checking, and searching for potential pitfalls cannot be overstated.

Mental healthcare requires empathy, and AI can overlook vital factors. It is important to view AI as a means of improving outcomes, rather than using it as a shortcut.

The Empathy Hub has an AI policy which follows this approach. We will not use AI to write any of our content, but we will absolutely use it to improve your content strategy’s results by identifying trends, optimising content for search engines, increasing our access to research, and structuring articles in a way that will keep readers’ attention until the end.

For a content marketing strategy that uses AI to your advantage, get in touch now!
