Anshi Purohit

The Evolving Relationship Between Artificial Intelligence and Mental Health Advancement

Abstract

To combat the global shortage of mental health professionals, artificial intelligence (AI) and deep learning (DL) have proven useful in meeting the needs of affected individuals. Given the scale of the mental health crisis and the scarcity of efficient care services in underserved countries, AI and DL have the potential to ease the ongoing international shortfall in care. By connecting underserved communities to platforms where they can access resources, and by recognizing patterns in language that can support suicide prevention, the futures of psychology and deep learning remain inextricably intertwined. While apps and other utilities are emerging to meet humanity's growing demand for care services, privacy breaches and diagnostic errors are the primary concerns of some psychologists and psychiatrists. As the world becomes more reliant on technology, some individuals are wary of sharing personal information through online platforms and apps and question the merits of these methods. Mental illness itself is variable and resists rigid definition. This paper weighs the unknowns of such AI operations against their possible benefits and evaluates where the relationship between mental health advancement and technology currently stands.


Recognizing Patterns in Language

Trials have demonstrated that deep learning models can be trained to recognize patterns and signs of mental illness. These systems' capabilities range from identifying suicide notes with a transformer-based deep learning model to detecting depression and anxiety in youth [1].


Suicide claims the lives of over 700,000 individuals a year, according to the World Health Organization (WHO), and that number is expected to continue rising. Individuals experiencing suicidal ideation are often impaired and distressed, which negatively affects their lives whether or not they have attempted suicide. For this reason, identifying whether a person is suicidal is imperative for clinicians when they consider treatment options [2].

TransformerRNN, a deep learning model developed as part of one study, analyzed 659 suicide note samples, identifying linguistic patterns to determine whether a text should be classified as a suicide note, a last statement, or a 'neutral' note [1]. To achieve this, TransformerRNN combined five essential components to evaluate and analyze the notes. By using technology to evaluate language and help predict suicidal behavior, these grim suicide rates could fall.
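The study's TransformerRNN architecture is not reproduced here, but the general idea of transformer-based note classification can be sketched with an off-the-shelf model. In the snippet below, the model name, the three label names, and the example note are illustrative assumptions, and the classifier head is untrained; it shows the shape of the approach rather than the published system.

```python
# Minimal sketch (not the TransformerRNN from [1]): a generic pretrained
# transformer with a three-way classification head over note types.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

LABELS = ["suicide_note", "last_statement", "neutral"]  # hypothetical label names

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=len(LABELS)
)

def classify_note(text: str) -> str:
    """Return the most likely label for a single note (untrained head here)."""
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify_note("An illustrative input sentence."))
```

In practice such a model would be fine-tuned on labeled notes before its predictions carried any meaning.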


Impediments to having children evaluated for anxiety and depression also remain prominent. Barriers such as insurance, limited parental awareness, and long waiting lists can prevent young children from receiving the treatment they need. Recently, a tool was developed that uses machine learning to analyze audio from a three-minute speech task given to children as a method of screening for mental disorders [3]. The tool identified children with 'internalizing disorders' with 80% accuracy, outperforming other clinical screening methods [3].
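The published tool's feature set and model are not described here; the sketch below only illustrates the general pattern of turning a short speech recording into acoustic features and scoring it with a classifier. The feature choices, the random training data, and the file name speech_task.wav are assumptions for demonstration.

```python
# Illustrative sketch (not the tool from [3]): acoustic features from a short
# speech recording, scored by a simple binary classifier.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def speech_features(wav_path: str) -> np.ndarray:
    """Summarize a recording as mean MFCCs plus pitch statistics."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)      # timbre summary
    f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)           # pitch contour
    return np.concatenate([mfcc.mean(axis=1), [np.nanmean(f0), np.nanstd(f0)]])

# Placeholder training data standing in for clinician-labeled recordings
# (1 = internalizing disorder, 0 = no diagnosis).
X_train = np.random.rand(40, 15)
y_train = np.random.randint(0, 2, 40)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# With a real recording on disk, a screening score would be produced like so:
# score = clf.predict_proba(speech_features("speech_task.wav").reshape(1, -1))[0, 1]
```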


These methods could play a large role in the future of diagnosis and prognosis; once refined and tested further, they could become the standard for screening and help ensure that every individual has access to the resources they require.


Figure One: Image from Pixabay


Diagnosing Disease

While psychiatrists are typically responsible for diagnosing mental illness, AI assistance may improve their efficiency. According to the WHO, roughly one billion people live with a mental illness. Many individuals self-diagnose because it can be difficult to see a medical professional for an evaluation.


AI has the potential to refine disease models and provide health professionals with valuable insight into the characteristics of individuals at risk [4]. By simplifying the diagnosis process, AI could give many individuals better access to resources and appropriate treatment. Because disease-diagnosis models are disputed and no common standard exists, psychiatric discoveries and innovations have slowed. AI and machine learning could enhance efficiency by recognizing patterns across diseases and applying new insights; such systems could also reduce the overall cost of healthcare by cutting misdiagnosis and unnecessary treatment [4][5].

An example of such a model is a decision support system that uses the NEPAR algorithm, which enhances transparency by showing the similarities and relations between features, to identify mental illness [5]. By taking these characteristics as input, the study claims, the system gives mental health professionals additional time to plan treatment [5].
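The NEPAR algorithm itself is specified in [5] and is not reproduced here. As a loose illustration of the underlying idea of relating features to one another before classification, the sketch below builds a crude feature-relation graph from pairwise correlations, keeps connected features, and trains a classifier on them. The column names, synthetic data, and correlation threshold are all assumptions.

```python
# Generic sketch inspired by network-based feature analysis; NOT the NEPAR
# algorithm from [5].
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical screening data: questionnaire items plus a diagnosis label.
df = pd.DataFrame(rng.random((200, 6)),
                  columns=["sleep", "appetite", "mood", "energy", "focus", "worry"])
df["diagnosis"] = (df["mood"] + df["worry"] + 0.2 * rng.random(200) > 1.1).astype(int)

features = df.columns[:-1]
corr = df[features].corr().abs()

# "Relation network": connect features whose absolute correlation passes a
# threshold, then keep features with at least one connection.
adjacency = (corr > 0.05) & (corr < 1.0)
selected = [f for f in features if adjacency[f].any()] or list(features)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(df[selected], df["diagnosis"])
print("Selected features:", selected)
```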


Current Systems In Use (Apps & Services)

From smartwatches to fitness apps to other consumer wearables, a plethora of apps and services are available to assess mental health and responses to stress. By using technology to link physiological responses to psychological conditions, evaluating and treating mental illness can become a smoother process [6][7].


mindLAMP (Learn, Assess, Manage, Prevent) and BiAffect are two examples of apps making headway at the intersection of AI and psychology [9]. mindLAMP collects surveys, cognitive tests, and exercise information to gauge an individual's progress toward recovery, and it uses smartphone sensors to build a holistic view of the individual's mental illness; BiAffect employs machine learning to estimate depressive and manic episodes in individuals with bipolar disorder [9].
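Neither app's pipeline is described in detail here, but the common thread of passive sensing can be sketched simply: summarize each day's sensor and survey signals and flag days that drift far from a personal baseline. The column names, sample values, and the z-score threshold below are illustrative assumptions, not the apps' actual logic.

```python
# Minimal sketch of passive-sensing aggregation in the spirit of mindLAMP or
# BiAffect (not their actual pipelines).
import pandas as pd

daily = pd.DataFrame({
    "sleep_hours":  [7.5, 7.0, 6.8, 4.2, 3.9],
    "steps":        [8200, 7900, 8500, 2100, 1800],
    "typing_speed": [48, 50, 47, 33, 30],    # characters per minute
    "mood_survey":  [4, 4, 3, 2, 2],         # 1 (low) to 5 (high)
}, index=pd.date_range("2024-01-01", periods=5))

baseline_mean = daily.iloc[:3].mean()   # first days serve as the personal baseline
baseline_std = daily.iloc[:3].std()

z = (daily - baseline_mean) / baseline_std
flagged = z[(z.abs() > 2).any(axis=1)]  # days with a large deviation on any signal
print(flagged.index.date)
```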


AI can also help children with various mental illnesses develop social and engagement skills. Intelligent AI companion bots can act as nurturing figures in children's lives, reducing anxiety and loneliness [9]. The increased use of telehealth therapy during the COVID-19 pandemic also improved access to mental health resources.


Although the development of mental health technology has accelerated, accessibility issues remain prevalent. Over 30,000 mental health apps are available for free on iTunes, but few of them link to medical records [8]. These restrictions are a major setback for healthcare professionals and patients, but as technology advances, so might the accessibility of these services.


However, the future of these tools, and whether individuals would benefit from their expanded use, is often debated [13].


Ethical Implications

Ethical conversations and privacy concerns surrounding the use of AI and DL in psychology may influence how widely these experimental technologies are used in practice. Establishing trust, by remaining transparent and accurate about algorithms and about the handling of personal information, is essential to easing these concerns.


To integrate AI and DL into psychiatry, proposed models must clear many thresholds. Safeguarding sensitive patient data, minimizing bias, and understanding how proposed models work only scratch the surface of what needs to be discussed [10].


For a model to be transparent and rest on a solid foundation, testing the system until it 'breaks' by feeding it extreme scenarios demonstrates its ability to function in a clinical setting [10]. Models should also avoid stereotyping and other forms of bias that can exist in AI systems. Factors such as measurement or platform bias increase the probability of gender and racial discrimination by AI systems [11]. Ensuring adequate representation of subgroups, and surveying a variety of platforms and checking them against one another, are examples of proposed solutions [11].
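One concrete, if basic, way to surface the kind of subgroup gaps [11] warns about is to report a model's performance separately for each demographic group. The labels, predictions, and group names below are placeholders for demonstration only.

```python
# Illustrative subgroup performance check: compare recall across groups.
import numpy as np
from sklearn.metrics import recall_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1, 0, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])  # e.g. demographic subgroup

for g in np.unique(group):
    mask = group == g
    print(f"group {g}: recall = {recall_score(y_true[mask], y_pred[mask]):.2f}")
```

A large gap between groups would be a signal to revisit the training data or measurement process before clinical use.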


A concept known as the 'black box' could also hinder future progress. A black box arises when a machine learning system's actions are not fully explainable to the scientists who built it [12]. When such a system reaches conclusions that scientists cannot explain from its programming, some will not trust the results.
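One standard way to peer into a black box is permutation importance, which measures how much a model's accuracy drops when each input feature is shuffled. The model and synthetic data below are placeholders, not any clinical system.

```python
# Sketch of permutation importance as a simple explainability probe.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.random((300, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.9).astype(int)   # label depends mostly on features 0 and 2

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)

for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance = {score:.3f}")
```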


Without proper evaluation, learning to trust these models is difficult, no matter what potential and breakthroughs they might bring to the field. Either way, definitions are difficult and variable when juxtaposed against different experts' viewpoints.


Conclusion

This paper has discussed the impact and future implications of the relationship between AI, DL, and psychology. AI is a powerful tool that is growing in relevance and scope as humanity progresses. Despite the unknowns and ethical implications that further advancement of such technology might provoke, AI stands to transform psychiatry and psychology. Incorporating AI and DL to relieve clinician burden and optimize outcomes shows considerable potential, though concerns from all affected parties must be evaluated and addressed before new practice methods are adopted. Psychological science is a specific and personalized field, and using technology to improve care strategies is only the beginning of an evolving relationship. Approaching standards from a holistic viewpoint is crucial to improving access to medical tools and resources for individuals internationally. Real-world change is possible with cutting-edge solutions.


Works Cited

  1. Zhang, T., Schoene, A. M., & Ananiadou, S. (2021). Automatic identification of suicide notes with a transformer-based deep learning model. Internet Interventions, 25, 100422. https://doi.org/10.1016/j.invent.2021.100422

  2. Ribeiro, J. D., Huang, X., Fox, K. R., Walsh, C. G., & Linthicum, K. P. (2019). Predicting Imminent Suicidal Thoughts and Nonfatal Attempts: The Role of Complexity. Clinical Psychological Science, 7(5), 941–957. https://doi.org/10.1177/2167702619838464

  3. McGinnis, E. W., Anderau, S. P., Hruschak, J., Gurchiek, R. D., Lopez-Duran, N. L., Fitzgerald, K., Rosenblum, K. L., Muzik, M., & McGinnis, R. (2019). Giving Voice to Vulnerable Children: Machine Learning Analysis of Speech Detects Anxiety and Depression in Early Childhood. IEEE Journal of Biomedical and Health Informatics, 1–1. https://doi.org/10.1109/jbhi.2019.2913590

  4. Tai, A. M. Y., Albuquerque, A., Carmona, N. E., Subramanieapillai, M., Cha, D. S., Sheko, M., Lee, Y., Mansur, R., & McIntyre, R. S. (2019). Machine learning and big data: Implications for disease modeling and therapeutic discovery in psychiatry. Artificial Intelligence in Medicine, 99, 101704. https://doi.org/10.1016/j.artmed.2019.101704

  5. Tutun, S., Johnson, M. E., Ahmed, A., Albizri, A., Irgil, S., Yesilkaya, I., Ucar, E. N., Sengun, T., & Harfouche, A. (2022). An AI-based Decision Support System for Predicting Mental Health Disorders. Information Systems Frontiers. https://doi.org/10.1007/s10796-022-10282-5

  6. Pakhomov, S. V. S., Thuras, P. D., Finzel, R., Eppel, J., & Kotlyar, M. (2020). Using consumer-wearable technology for remote assessment of physiological response to stress in the naturalistic environment. PLOS ONE, 15(3), e0229942. https://doi.org/10.1371/journal.pone.0229942

  7. Bălan, O., Moise, G., Moldoveanu, A., Leordeanu, M., & Moldoveanu, F. (2020). An Investigation of Various Machine and Deep Learning Techniques Applied in Automatic Fear Level Detection and Acrophobia Virtual Therapy. Sensors, 20(2), 496. https://doi.org/10.3390/s20020496

  8. Areán, P. A., Ly, K. H., & Andersson, G. (2016). Mobile technology for mental health assessment. Dialogues in Clinical Neuroscience, 18(2), 163–169. https://doi.org/10.31887/dcns.2016.18.2/parean

  9. Pham, K. T., Nabizadeh, A., & Selek, S. (2022). Artificial Intelligence and Chatbots in Psychiatry. Psychiatric Quarterly. https://doi.org/10.1007/s11126-022-09973-8

  10. Chandler, C., Foltz, P. W., & Elvevåg, B. (2019). Using Machine Learning in Psychiatry: The Need to Establish a Framework That Nurtures Trustworthiness. Schizophrenia Bulletin. https://doi.org/10.1093/schbul/sbz105

  11. Tay, L., Woo, S. E., Hickman, L., Booth, B. M., & D’Mello, S. (2022). A Conceptual Framework for Investigating and Mitigating Machine-Learning Measurement Bias (MLMB) in Psychological Assessment. Advances in Methods and Practices in Psychological Science, 5(1), 251524592110613. https://doi.org/10.1177/25152459211061337

  12. Hsu, W., & Elmore, J. G. (2019). Shining Light Into the Black Box of Machine Learning. JNCI: Journal of the National Cancer Institute, 111(9), 877–879. https://doi.org/10.1093/jnci/djy226

  13. Sleek, B. S. (2023). How Machine Learning Is Transforming Psychological Science. APS Observer, 36. https://www.psychologicalscience.org/observer/machine-learning-transforming-psychological-science#glossary
