Nadia Karizat
PhD Candidate




School of Information

University of Michigan



Emotion AI Use in U.S. Mental Healthcare: Potentially Unjust and Techno-Solutionist


Journal article


Kat Roemmich, Shanley Corvite, Cassidy Pyle, Nadia Karizat, Nazanin Andalibi
Proceedings of the ACM on Human-Computer Interaction, vol. 8, Association for Computing Machinery (ACM), 2024 Apr, pp. 1–46


Cite

APA
Roemmich, K., Corvite, S., Pyle, C., Karizat, N., & Andalibi, N. (2024). Emotion AI Use in U.S. Mental Healthcare: Potentially Unjust and Techno-Solutionist. Proceedings of the ACM on Human-Computer Interaction, 8, 1–46. https://doi.org/10.1145/3637324


Chicago/Turabian
Roemmich, Kat, Shanley Corvite, Cassidy Pyle, Nadia Karizat, and Nazanin Andalibi. “Emotion AI Use in U.S. Mental Healthcare: Potentially Unjust and Techno-Solutionist.” Proceedings of the ACM on Human-Computer Interaction 8 (April 2024): 1–46.


MLA
Roemmich, Kat, et al. “Emotion AI Use in U.S. Mental Healthcare: Potentially Unjust and Techno-Solutionist.” Proceedings of the ACM on Human-Computer Interaction, vol. 8, Association for Computing Machinery (ACM), Apr. 2024, pp. 1–46, doi:10.1145/3637324.


BibTeX

@article{roemmich2024a,
  title = {Emotion AI Use in U.S. Mental Healthcare: Potentially Unjust and Techno-Solutionist},
  year = {2024},
  month = apr,
  journal = {Proceedings of the ACM on Human-Computer Interaction},
  pages = {1--46},
  publisher = {Association for Computing Machinery (ACM)},
  volume = {8},
  doi = {10.1145/3637324},
  author = {Roemmich, Kat and Corvite, Shanley and Pyle, Cassidy and Karizat, Nadia and Andalibi, Nazanin}
}

Abstract: Emotion AI, or AI that claims to infer emotional states from various data sources, is increasingly deployed in myriad contexts, including mental healthcare. While emotion AI is celebrated for its potential to improve care and diagnosis, we know little about the perceptions of data subjects most directly impacted by its integration into mental healthcare. In this paper, we qualitatively analyzed U.S. adults' open-ended survey responses (n = 395) to examine their perceptions of emotion AI use in mental healthcare and its potential impacts on them as data subjects. We identify various perceived impacts of emotion AI use in mental healthcare concerning 1) mental healthcare provisions; 2) data subjects' voices; 3) monitoring data subjects for potential harm; and 4) involved parties' understandings and uses of mental health inferences. Participants' remarks highlight ways emotion AI could address existing challenges data subjects may face by 1) improving mental healthcare assessments, diagnoses, and treatments; 2) facilitating data subjects' mental health information disclosures; 3) identifying potential data subject self-harm or harm posed to others; and 4) increasing involved parties' understanding of mental health. However, participants also described their perceptions of potential negative impacts of emotion AI use on data subjects such as 1) increasing inaccurate and biased assessments, diagnoses, and treatments; 2) reducing or removing data subjects' voices and interactions with providers in mental healthcare processes; 3) inaccurately identifying potential data subject self-harm or harm posed to others with negative implications for wellbeing; and 4) involved parties misusing emotion AI inferences with consequences to (quality) mental healthcare access and data subjects' privacy. 
We discuss how our findings suggest that emotion AI use in mental healthcare is an insufficient techno-solution that may exacerbate various mental healthcare challenges with implications for potential distributive, procedural, and interactional injustices and potentially disparate impacts on marginalized groups.


