Review Article

Targeting depressive symptoms with technology

John Torous1, Paul Cerrato2*, John Halamka3,4

1Department of Psychiatry and Division of Clinical Informatics, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA; 2Contributing Writer, Medscape, Warwick, NY, USA; 3Department of Emergency Medicine, Harvard Medical School, Boston, MA, USA; 4Chairman of the New England Healthcare Exchange Network, Boston, MA, USA

Contributions: (I) Conception and design: None; (II) Administrative support: All authors; (III) Provision of study materials or patients: None; (IV) Collection and assembly of data: J Torous, P Cerrato; (V) Data analysis and interpretation: All authors; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

*Former Editor, InformationWeek Healthcare.

Correspondence to: John Torous, MD. Director of the Digital Psychiatry Program, Department of Psychiatry and Division of Clinical Informatics, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA. Email: jtorous@bidmc.harvard.edu.

Abstract: Interest in digital mental health, driven largely by the need to increase access to mental health services, presents new opportunities as well as challenges. This article provides a selective overview of several new approaches, including chatbots and apps, with a focus on exploring their unique characteristics. To understand the broader issues around digital mental health apps, we discuss recent reviews in this space in the context of how they can inform care today, and how these apps fail to address several important gaps. Framing apps as tools that either augment or deliver care, we explore ongoing debates that will determine how apps are used, regulated, and reimbursed. Recognizing that many mental health apps today exist in this still-undefined space and often lack supporting evidence, we conclude with an overview of the American Psychiatric Association (APA) app evaluation framework, with the goal of offering a more informed approach to these digital tools.

Keywords: Mental health apps; digital health; American Psychiatric Association (APA); chatbots; clinical depression; depressive symptoms


Received: 12 May 2019; Accepted: 24 June 2019; Published: 09 July 2019.

doi: 10.21037/mhealth.2019.06.04


The statistics are not encouraging: 1 out of every 4 individuals worldwide will experience a mental or neurological disorder at some point in their lifetime (1). In 2017 alone, approximately 17.3 million U.S. adults experienced at least one major depressive episode (2). Globally, more than 300 million persons suffer from depression (3). Addressing the human suffering that these statistics represent requires a professional workforce that does not currently exist. There are too few mental health professionals to meet the needs of this patient population. By one estimate, 35.5% to 50.3% of patients with serious mental disorders in developed countries did not receive treatment within the previous 12 months. Among patients in less developed countries, 76.3% to 85.4% received no treatment (4). New solutions that can increase access to mental health care and close the treatment gap are sorely needed.

To fill this gap, many clinicians, patients, and consumers are now looking to digital solutions, including mobile apps and chatbots. While the potential is high, current research on the effectiveness of these digital tools is mixed. This paper offers a selective review of the digital mental health landscape with the goal of informing patients and clinicians about the best available evidence.


Evaluating the experimental evidence

There are an increasing number of interesting pilot studies that highlight the potential of novel digital approaches to increasing access to mental health services. Woebot, a text-based chatbot that uses cognitive behavioral therapy (CBT) principles to interact with users, generated measurable improvements on the 9-item Patient Health Questionnaire, the 7-item Generalized Anxiety Disorder scale, and the Positive and Negative Affect Schedule among college students, when compared to controls who were given an ebook from the National Institute of Mental Health called Depression in College Students. It is important to emphasize, however, that while students recruited for the study may have experienced mental health issues, they had not been diagnosed with clinical depression (5). A recent review of chatbot-type programs for mental health concluded that while the potential for these interventions is high, results from clinical populations are needed before further claims about their efficacy can be made (6). Still, even the current pilot research is important in exploring new avenues and modalities for delivering care.

There are also many efforts to deliver care directly via apps. While it is not feasible to review them all, focusing on one representative example offers insights into this space. The United Kingdom’s National Institute for Health and Care Excellence (NICE) sees promise in a mobile self-help app developed by Gaia Healthcare called Deprexis. NICE has recommended that the National Health Service conduct a formal clinical trial to further evaluate its effectiveness as a digital therapy. NICE believes that the mobile app, which adheres to the principles of CBT, may be an alternative for patients with mild to moderate depression (7). Conal Twomey of University College Dublin and associates conducted a meta-analysis of 8 randomized controlled trials (RCTs) involving 2,402 subjects that evaluated Deprexis and concluded that the program was effective in alleviating depressive symptoms at post-intervention (effect size g=0.54) (8). The Deprexis program consists of 10 modules for users to work through at their own pace, covering behavioral activation, cognitive and lifestyle modification, mindfulness, problem solving, and interpersonal skills. The app provides users with simulated conversations that last up to an hour. Similarly, Mary Rogers, MS, PhD, and her colleagues in the Department of Internal Medicine, University of Michigan, concluded that for every four persons with depression who used Deprexis, one recovered from depression when compared to those who did not use the program (9). However, some experts believe this claim is too bold. One reason is that while many digital health interventions boast impressive study results, their efficacy often appears smaller than expected when they are implemented in actual clinical settings. The implementation of these technological interventions is increasingly recognized as the next focus and frontier for this evolving space (10).
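
For readers less familiar with this way of expressing benefit, the "one in four" figure is a number needed to treat (NNT), the reciprocal of the absolute difference in recovery rates between intervention and comparison groups. As a minimal worked illustration, using hypothetical recovery proportions (assumed for illustration only, not drawn from the Rogers et al. data) of 35% with the program and 10% without:

$$ \mathrm{NNT} = \frac{1}{p_{\mathrm{app}} - p_{\mathrm{control}}} = \frac{1}{0.35 - 0.10} = 4 $$

that is, roughly four patients would need to use the program for one additional patient to recover.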

Another important issue for digital health today is the level of evidence necessary to move an app from the research domain into the clinical setting. Reviews of the published literature can help offer insights into the overall state of the field. For example, rather than reviewing the data on individual mental health apps, Joseph Firth and associates have taken a much broader approach to smartphone app evaluation, conducting a meta-analysis of RCTs designed to improve depressive symptoms (11). Their review, which looked at 19 RCTs of 22 phone apps and 3,414 subjects, concluded that “Depressive symptoms were reduced significantly more from smartphone apps than control conditions… Overall, these results indicate that smartphone devices are a promising self-management tool for depression.” Despite these positive conclusions, several caveats should be kept in mind. None of the trials included in the meta-analysis lasted longer than 24 weeks, so it is premature to conclude that any of the apps would benefit individuals experiencing long-term depressive symptoms. Similarly, the review only found that the mobile apps were effective in persons with self-reported mild-to-moderate depression. Firth et al. were unable to find a significant impact of mental health apps on major depression, bipolar disorder, or anxiety disorders.

Jesse Wright and associates, on the other hand, specifically addressed major depressive disorder (MDD) in their systematic review and meta-analysis (12). The research team evaluated computer-assisted cognitive behavior therapy (CCBT), including the effect of clinical support for patients using this approach to treatment. Their analysis, which included 40 RCTs, found CCBT had a moderately large effect (g=0.502) when compared to controls. Wright et al. also found that the effect size was significantly larger when the treatment was supported by a clinician or some other helping professional (g=0.673). Putting the results into the context of other psychiatric modalities for major depression, the overall effect size detected in the meta-analysis was comparable to the effect of standard treatments, including antidepressants and individual psychotherapy. Once again, there are a number of caveats and limitations that need to be taken into consideration. Many of the studies analyzed by Wright et al. did not have adequate follow-up. Also of concern was the fact that many of the studies included a wait-list control group rather than an active control. Despite these shortcomings, Wright et al. concluded, “A sufficient number of studies of CCBT for depression have been conducted to conclude that this method, when combined with modest amounts of clinician support, offers potential for delivery of evidence-based treatment at greater efficiency and lower cost than standard CBT.”
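
For context, the effect size g reported here, as in the Deprexis meta-analysis above, is Hedges' g, a standardized mean difference: the gap between the treatment and control group means divided by their pooled standard deviation, with a small correction for sample size. In its uncorrected form,

$$ g \approx \frac{\bar{X}_{\mathrm{treatment}} - \bar{X}_{\mathrm{control}}}{SD_{\mathrm{pooled}}} $$

Values near 0.5 are conventionally interpreted as moderate effects and values near 0.8 as large effects, which is why the CCBT result is described as moderately large.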

A third analysis, conducted by Ariane Kerst and colleagues (13), included 12 studies of smartphone applications for depression. Studies were included in this systematic review if they met several criteria: the app was based on an active treatment approach such as CBT or behavioral activation; the population targeted by the app had either clinical or subclinical depression; and the study measured depressive symptoms. The analysis found that all the reviewed programs improved depressive symptoms. However, the analysis was not limited to RCTs but included small pilot studies and studies without any comparison group, and sample sizes varied widely, from 24 to 626 participants. Finally, the longest trial did not extend beyond 12 weeks.

While each of the three aforementioned analyses offers advantages, none documented the extent to which clinicians, patients, and consumers are actually using and engaging with mental health apps in routine clinical settings. Nor did these reviews evaluate the impact of the many apps of questionable quality on the public’s mental health. By one estimate, there are over 10,000 mental health apps now on the market (14). Keeping track of this myriad of apps, let alone evaluating them, is simply impossible. While there are interesting pilot studies, the reality is that overall there is little solid data on the impact of these apps on depression, anxiety, schizophrenia, and other psychiatric disorders. For example, consider the Firth et al. analysis. It excluded numerous studies that were not RCTs or that did not report eligible outcome data. Wright et al. excluded 183 studies of poor quality, and Kerst et al. included only 12 of 131 studies in their evaluation. While one could argue that apps without evidence should not be considered for use, the fact that patients today are seeking out and even using such apps forces the clinical community to offer at least some guidance and support to patients.

The reviews of depression apps presented here highlight how much more the field has to learn about the efficacy of apps. While such exclusion criteria in these reviews were necessary to determine which evidence-based apps held the most promise, not studying the excluded applications makes it difficult to know how many users are being misled and misinformed by likely worthless programs, or how much harm these apps may be causing today. Nor does it allow us to gauge any possible benefits from these non-evidence-based apps. To answer those questions, it would be necessary to include all the excluded apps in a separate analysis and to determine their impact over time. While this would be a daunting project, it is concerning that these same apps are likely overstating their potential today. By one recent estimate, less than 1% of mental health apps on the commercial marketplaces may actually offer any evidence, despite more than 50% making claims about evidence. This dichotomy between claims and evidence highlights the opportunity for novel approaches and even new regulation (15).


Fostering a holistic, therapeutic relationship

A solution for improving mental health apps may lie not in more technology but in less. One of the shortcomings of many mental health apps is that they may offer useful information and exercises, but they do so outside of the context of a therapeutic relationship. One of the greatest predictors of treatment response to CBT, even in online versions, is the therapeutic alliance (16), which involves the connection and bond between the clinician or program and the patient. Yet today many apps encourage people to pursue help delivered in isolation on their smartphone. Thus, it is not surprising that such self-help apps, when carefully studied with rigorous methodologies, appear to be no more effective than placebo for some users (17). The challenge for the field is now to create technologies that increase access to care while also fostering a therapeutic relationship. While many solutions have been proposed and need to be assessed, digital clinics that offer hybrid care, blending face-to-face treatment augmented with apps and other digital solutions, offer one path to explore (18).

Fostering a therapeutic alliance around digital health tools is feasible and easy to understand in the context of a digital therapeutic alliance (19). Three elements compose the alliance: (I) goals, (II) tasks, and (III) bond. Using technology to help patients reach meaningful and shared goals, for example by assigning an app to help master elements of therapy in service of a specific treatment goal, is feasible in many situations. Ensuring that technology tasks are goal-oriented and achievable by the patient is a second critical element. All too often, apps may be hard for patients to use, not engaging enough for patients to stick with, and not directly related to treatment. It is not necessary to use one app for all needs; instead, a clinician could ask a patient to work on certain tasks in different apps as long as those tasks align with the treatment goals and the patient’s abilities. Finally, in terms of forming a bond around apps, the clinician needs to encourage the patient to stick with the app and to offer support when they get stuck or run into resistance. This means a clinician needs to have some familiarity with the apps he or she is recommending and must offer assistance to keep the patient progressing along the tasks towards the goal. Thus, a clinician can likely form a therapeutic alliance with a patient around technology use, although this does require the clinician to have some knowledge about the app being used. In the next section, we explore how that knowledge can be obtained.


Mobile app selection criteria

While the evidence is still evolving for which apps will be most useful and how they will best be implemented into clinical care, patients today still want to use them. Evaluating mental health apps should be considered no different than evaluating any other therapeutic tool or intervention: it is necessary to consider the risks and benefits and how well the tool fits the individual patient and the treatment goals. What can make app evaluation more challenging is that many of the risks are difficult to discern and the benefits remain unknown. There are professional organizations that can assist clinicians in making a more informed decision when choosing a program. For mental health professionals, the American Psychiatric Association (APA) is the organization to turn to for advice.

The APA does not rate individual mobile health apps. Instead, it offers clinicians a collection of criteria to use as they make an individual evaluation of the apps they are considering for clinical practice. That approach to app evaluation is based on a philosophy that states: “Our approach to rating mental health apps is grounded in the belief that any decision between you and a patient is a personal decision based on many factors, for which there is rarely a binary ‘yes’ or ‘no’ answer.” (20).

The 5-step evaluation process outlined by the APA consists of the following:

  • Collect background information;
  • Assess the app’s risks, including the risk of compromised privacy and security;
  • Review the evidence supporting the app’s potential benefits;
  • Evaluate the app’s ease of use;
  • Review the app’s clinical integration and ability to strengthen the therapeutic alliance.

Background information

The APA recommends that clinicians evaluate the business model upon which the app is based. Essentially, one needs to know how the vendor that developed the program supports it financially. If the app is offered free to users, the price patients and clinicians may pay is a loss of privacy, as the vendor may use their email addresses or other personal data for marketing or sell them to a third party. If, on the other hand, there is a fee for the app, users will want to obtain details on the price structure before committing.

Additional issues to investigate before committing to a specific app include who the developer is, whether the app makes medical claims, whether advertising is integrated into the user interface, how often the app is updated, and what operating system it relies on.

Risks

Among the concerns that the APA encourages users to consider are data costs, a significant issue if one has to rely on a wireless data plan. Equally important are the app’s security and privacy features. Unfortunately, many vendors offer little or no protection in these two areas; they may not even have a stated privacy policy, and some free apps may open the door to malware infection. Additional questions worth asking, says the APA, are: Does the vendor de-identify patient data to protect privacy? Is it possible to delete data from the app? Is patient data encrypted? Where is the data stored? What specific security protocols are in place to reduce the risk of data hacking on the user’s device or the vendor’s servers?

Evidence

In addition to taking into account the literature reviews discussed earlier in this paper, it is also prudent to investigate the specific evidence for each app being considered for download. Since few mental health apps are supported by RCTs, the APA recommends that users at least ask the developer for any peer-reviewed published evidence suggesting the app can do what it claims to do.

Usability

The APA evaluation criteria emphasize the need to satisfy the preceding criteria before moving on to ease of use. There is no reason to consider usability if a mental health app is not evidence based or does not address basic concerns about its business model and the other issues discussed under the ‘collect background information’ step. Assuming that the app under consideration does meet those criteria and one is satisfied that it offers some measure of privacy and security protection, ease of use is worth evaluating. Among the questions to ask: How likely are users to stay with the app long term? Does the patient or clinician need internet access to use the app, or is it a standalone program? Can patients easily access the app, especially if they have some type of disability, including poor vision?

Clinical integration

The ability of a mobile app to communicate directly with a patient’s clinicians, with a patient’s family, or with an electronic health record system can be an essential feature, depending on the nature of the app and its intended purpose. In some settings, clinicians may also want to view data accumulated in the app to obtain a fuller picture of the patient’s progress during therapy. As the APA guidelines explain, “The reason why interoperability becomes important in this model is because apps should not fragment care and the patient and psychiatrist should be able to share and discuss data or feedback from the app as appropriate.” (20). Interoperability will be especially useful if the app tracks a patient’s mood or facilitates medication management. With these considerations in mind, users should ask about an app’s ability to export and download data, print the data, and share it with tools like Apple HealthKit. Further critical questions are whether these data will help the patient reach a treatment goal and whether they will improve the therapeutic alliance.

Today the APA model serves as an evidence-based tool to evaluate mental health apps and to support informed decision making around their use in care settings. Because all stages of the evaluation are agnostic to mental health (e.g., security matters for all types of health apps), the framework has recently been updated to apply to any health condition (21). With this updated framework, the next steps are to form a diverse panel of stakeholders and begin to apply the APA app evaluation model to real-world apps. While, as outlined above, any single application of the model cannot be generalized to a specific clinical case, the goal of offering such worked examples is to give clinicians material to learn from as they hone their own skills in app evaluation.


Conclusions

Although mobile mental health apps and chatbots are not the revolutionary tools that some enthusiasts believe will reinvent psychiatric practice and self-care, they nonetheless hold real potential. A few evidence-based apps have already proven to be useful adjuncts to professional care. Placing these well-documented digital tools into the context of a trusting therapeutic relationship between therapist and patient may eventually transform mental health in ways that we could not have imagined a decade ago.


Acknowledgments

None.


Footnote

Conflicts of Interest: J Torous was involved in the development of the APA evaluation criteria but did not receive financial compensation for this assistance. The other authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.


References

  1. World Health Organization. Mental disorders affect one in four people. Accessed June 7, 2019. Available online: https://www.who.int/whr/2001/media_centre/press_release/en/
  2. National Institute of Mental Health. Major depression. Feb 2019. Accessed April 15, 2019. Available online: https://www.nimh.nih.gov/health/statistics/major-depression.shtml
  3. World Health Organization. Depression: Key facts. Available online: https://www.who.int/news-room/fact-sheets/detail/depression. Accessed April 15, 2019.
  4. Demyttenaere K, Bruffaerts R, Posada-Villa J, et al. Prevalence, Severity, and Unmet Need for Treatment of Mental Disorders in the World Health Organization World Mental Health Surveys. JAMA 2004;291:2581-90. [Crossref] [PubMed]
  5. Fitzpatrick KK, Darcy A, Vierhile M. Delivering cognitive behavior therapy to young adults with symptoms of depression and anxiety using a fully automated conversational agent (Woebot): a randomized controlled trial. JMIR Mental Health 2017;4:e19. [Crossref] [PubMed]
  6. Vaidyam AN, Wisniewski H, Halamka JD, et al. Chatbots and Conversational Agents in Mental Health: A Review of the Psychiatric Landscape. Can J Psychiatry 2019. [Epub ahead of print]. [Crossref] [PubMed]
  7. National Institute for Health and Care Excellence. New online and mobile app for depression should be trialled on the NHS, says NICE. Jan 23, 2018. Available online: https://www.nice.org.uk/news/article/new-online-and-mobile-app-for-depression-should-be-trialled-on-the-nhs-says-nice. Accessed April 23, 2019.
  8. Twomey C, O’Reilly G, Meyer B. Effectiveness of an individually-tailored computerised CBT programme (Deprexis) for depression: A meta-analysis. Psychiatry Res 2017;256:371-7. [Crossref] [PubMed]
  9. Rogers MA, Lemmen K, Kramer R, et al. Internet-delivered health interventions that work: systematic review of meta-analyses and evaluation of website availability. J Med Internet Res 2017;19:e90. [Crossref] [PubMed]
  10. Hermes ED, Lyon AR, Schueller EM, et al. Measuring the Implementation of Behavioral Intervention Technologies: Recharacterization of Established Outcomes. J Med Internet Res 2019;21:e11752. [Crossref] [PubMed]
  11. Firth J, Torous J, Nicholas J, et al. The efficacy of smartphone-based mental health interventions for depressive symptoms: a meta-analysis of randomized controlled trials. World Psychiatry 2017;16:287-98. [Crossref] [PubMed]
  12. Wright JH, Owen JJ, Richards D, et al. Computer-assisted cognitive-behavior therapy for depression: a systematic review and meta-analysis. J Clin Psychiatry 2019;80:18r12188.
  13. Kerst A, Zielasek J, Gaebel W. Smartphone applications for depression: a systematic literature review and a survey of health care professionals’ attitudes towards their use in clinical practice. Eur Arch Psychiatry Clin Neurosci 2019. [Epub ahead of print]. [Crossref] [PubMed]
  14. Torous J, Luo J, Chan SR. Mental health apps: What to tell patients. Current Psychiatry 2018;17:21-4.
  15. Larsen ME, Huckvale K, Nicholas J, et al. Using science to sell apps: evaluation of mental health app store quality claims. NPJ Digit Med 2019. doi: 10.1038/s41746-019-0093-1. [Crossref]
  16. Vernmark K, Hesser H, Topooco N, et al. Working alliance as a predictor of change in depression during blended cognitive behaviour therapy. Cogn Behav Ther 2019;48:285-99. [Crossref] [PubMed]
  17. Noone C, Hogan MJ. A randomised active-controlled trial to examine the effects of an online mindfulness intervention on executive control, critical thinking and key thinking dispositions in a university student sample. BMC Psychol 2018;6:13. [Crossref] [PubMed]
  18. Torous J, Hsin H. Empowering the digital therapeutic relationship: virtual clinics for digital health interventions. NPJ Digit Med 2018. Available online: https://www.nature.com/articles/s41746-018-0028-2. Accessed April 23, 2019.
  19. Henson P, Wisniewski H, Hollis C, et al. Digital mental health apps and the therapeutic alliance: Initial review. BJPsych Open 2019;5:e15. [Crossref] [PubMed]
  20. American Psychiatric Association. App Evaluation Model. Available online: https://www.psychiatry.org/psychiatrists/practice/mental-health-apps/app-evaluation-model. Accessed April 30, 2019.
  21. Henson P, David G, Albright K, et al. Deriving a practical framework for evaluation of health apps. Lancet Digit Health 2019;1:e52-4.
doi: 10.21037/mhealth.2019.06.04
Cite this article as: Torous J, Cerrato P, Halamka J. Targeting depressive symptoms with technology. mHealth 2019;5:19.
