- The COVID-19 pandemic and a steep rise in individuals facing mental health challenges have accelerated the growth of digital mental health services.
- AI therapists can contribute to employee wellbeing programs in various ways but should not be relied upon as standalone solutions for complex mental health issues.
- Despite the potential benefits of AI therapy, the industry is largely unregulated, and significant concerns persist regarding privacy, ethics, and safety.
Discussion of a rising global mental health crisis is growing as people become more comfortable talking about mental health, particularly in the wake of the COVID-19 pandemic. In the U.K., for example, 25% of the population is currently experiencing mental health challenges, driving a notable increase in online therapy services, including those facilitated by chatbot therapists.
The rise of AI therapists, often referred to as "therapy bots," corresponds with the limited availability of mental health services. This trend has raised concerns about a potential overreliance on digital therapy to cover the shortfall in mental health practitioners, rather than expanding existing provision.
Furthermore, the absence of substantial evidence — given that most efficacy studies on AI therapy bots have been short-term and involved small sample sizes — raises questions regarding their capacity to substitute for human therapists.
What’s Driving The Widespread Adoption Of Digital Therapy?
Recent data from the World Health Organization (WHO) reveals a stark reality in global mental health — just 13 mental health practitioners per 100,000 people.[1]
Worldwide, 76 million cases of anxiety and 53 million cases of depression underscore a significant mental health crisis, notably affecting Gen Z (projected to comprise 27% of the workforce by 2025). Factors such as remote work and workplace loneliness contribute to an alarming 91% of 18-to-24-year-olds reporting stress.
This mental health crisis has fueled a thriving $1.4 billion industry of AI-based therapy tools such as chatbots, virtual companions, and apps, supported by over $5 billion in venture capital for startups offering wellbeing support. Notable contributors to this sector include renowned therapy apps such as Headspace and BetterHelp, alongside the integration of chatbot therapists, marking a shift in mental health service provision.
AI therapists are designed to simulate aspects of human therapy: interacting with users, providing emotional support, offering advice, and sometimes engaging in open-ended conversation. These "therapists" are large language model (LLM) chatbots, trained on vast internet datasets and adapted to address mental health concerns.
The American Psychiatric Association notes the existence of 20,000 mental health self-help apps, and 80% of individuals have endorsed ChatGPT’s mental health advice as a credible alternative to traditional therapy.
In the U.K., BBC News has reported on Limbic Access (a therapy chatbot) and its seal of approval from the government, while the U.S. Food and Drug Administration (FDA) is working to expedite approval for an app (WYSA) targeting depression, anxiety, and chronic pain.
However, despite some official approval, the mental health app market remains unregulated, and serious questions persist about the efficacy of digital therapy.
Can AI Ever Truly Replace Traditional Patient-Therapist Relationships?
In an unregulated market, the presence of both beneficial and ineffective AI therapy products is inevitable. Whether this is a fleeting trend or a more enduring shift remains uncertain, but it is evident that therapy bots have now entered the previously sacrosanct spaces of the therapist-patient relationship.
Reports of errors, misinformation, ingrained biases, inappropriate responses (a consequence of their predictive-text design), and potentially harmful advice from therapy bots highlight their limitations, particularly for individuals confronting more severe mental health issues. Nevertheless, human therapists are not immune to fallibility either. Given the cost-effectiveness of AI, especially amid the closures of "human" helplines due to public service cuts, it is unsurprising that people are increasingly turning to AI for support.
The Role of Employers in Addressing Mental Health
There is an increasing emphasis on leveraging technology to improve mental health in the workplace. Josh Drean, Co-Founder & Director of Employee Experience at The Work3 Institute, advocates for the use of AI to augment (rather than substitute) human capabilities, a viewpoint especially pertinent in the complex realm of emotional wellbeing. AI chatbots, for instance, could play a crucial role in bolstering Cognitive Behavioral Therapy (CBT), where therapy bots such as WYSA are already making significant advancements.
As the popularity of AI therapy services grows, many employers could soon offer AI therapy as a workplace benefit. In fact, some already do. For those still considering this type of benefit, here are four potential approaches that employers could adopt:
- Subscription Services: Employers could subscribe to AI-based therapy platforms or apps, offering free or discounted access to employees.
- Employee Assistance Programs (EAPs): Integrating AI therapy tools into EAPs could enhance the range of mental health resources available to employees.
- Wellness Programs: Including AI therapy as a component of wellness programs can contribute to a more holistic approach to employee wellbeing.
- Workplace Platforms: Some companies may integrate AI therapy tools into their existing workplace communication or intranet, ensuring easy accessibility for employees.
What Are The Ramifications Of Employing AI Bots For Mental Wellbeing Guidance?
AI therapists can provide support for a range of mental health challenges, such as stress, anxiety, depression, and loneliness. They achieve this by disseminating information, suggesting coping strategies, and offering a non-judgmental space for users to express their feelings. Nevertheless, it is essential to recognize that, while AI therapists can be effective in this capacity, they lack the profound understanding and empathy inherent in human therapists, as their advice is algorithm-based.
The effectiveness of an AI therapist depends on individual needs and the specific context. Consider the following factors:
- Accessibility: AI therapists provide immediate support 24/7, catering to individuals who need assistance outside conventional therapy hours. They can also send daily reminders and prompt users to document their concerns, which is beneficial in Cognitive Behavioral Therapy (CBT).
- Safety Concerns: False marketing, privacy issues, misinformation, and fake celebrity endorsements are just a few items on an extensive list of concerns in this context.
- Anonymity: Some individuals may find it easier to open up to an AI therapist, perceiving a lack of judgment and appreciating the assurance of anonymity.
- Confidentiality: Concerns exist regarding data breaches and potential information sharing with third parties.
- Consistency: AI therapists offer standardized responses, unaffected by the personal stress, life events, or fatigue that can influence human therapists.
- Limited Understanding and Empathy: While AI therapists provide information and basic support, their capacity to grasp complex emotions and nuances falls short of the deep understanding, empathy, and human connection offered by human therapists.
- Scope of Issues: AI therapists excel in addressing mild to moderate mental health concerns, offering information, coping strategies, and support. However, they are ill-equipped to handle severe mental health issues or crises and often cannot recognize non-verbal cues.
- Supplemental Support: AI therapists should complement traditional therapy, offering support between sessions with human therapists rather than serving as replacements.
- Continuous Improvement: Ongoing advancements in AI technology may enhance the capabilities of AI therapists over time, making regular updates and enhancements pivotal for improving their effectiveness.
Employers should view these AI tools as complementary resources, emphasizing the importance of ensuring that employees have continuous access to human support whenever necessary. Recognizing that AI therapists are not one-size-fits-all solutions, employers should consider individual employee preferences, needs, and the specific context.
Caution is advisable for vulnerable individuals or those requiring urgent and targeted support for acute mental health issues. When introducing AI therapy as a workplace benefit, prioritize privacy and ethical considerations. This involves ensuring data security, transparent communication about risks and benefits, and being mindful of individual preferences and vulnerabilities.