
Assessing ChatGPT's use of person-first language in healthcare conversations

Discover Artificial Intelligence, 2024-12, Vol.4 (1), p.6-10, Article 6 [Peer Reviewed Journal]

© The Author(s) 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. ISSN: 2731-0809; EISSN: 2731-0809; DOI: 10.1007/s44163-023-00099-9

  • Title:
    Assessing ChatGPT's use of person-first language in healthcare conversations
  • Author: Hackl, Ellen
  • Subjects: Artificial Intelligence ; Bipolar disorder ; Brief Communication ; Chatbot ; Chatbots ; ChatGPT ; Child development ; Children & youth ; Communication ; Computer Science ; Diabetes ; Down syndrome ; Education ; Engineering ; Ethics ; Generative artificial intelligence ; Health care ; HIV ; Human immunodeficiency virus ; Inclusive language ; Intellectual disabilities ; Language ; Medical personnel ; Patients ; Person-first language ; Personality disorders ; Personalized learning ; Plagiarism ; Schizophrenia
  • Is Part Of: Discover Artificial Intelligence, 2024-12, Vol.4 (1), p.6-10, Article 6
  • Description: The conversational chatbot ChatGPT has attracted significant attention from both the media and researchers due to its potential applications, as well as concerns surrounding its use. This study evaluates ChatGPT’s efficacy in healthcare education, focusing on the inclusivity of its language. Person-first language, which prioritizes the individual over their medical condition, is an important component of inclusive language in healthcare. The aim of the present study was to test ChatGPT’s responses to non-inclusive, non-patient-first, judgmental, and often offensive language inputs. Provocative phrases based on a list of “do not use” recommendations for inclusive language were selected and used to formulate input questions. The occurrences of each provocative phrase or its substitute(s) within the responses generated by ChatGPT were counted to calculate the Person-First Index, which measures the percentage of person-first language (a minimal sketch of one such calculation follows this record). The study reveals that ChatGPT avoids judgmental or stigmatizing phrases when discussing mental health conditions, instead using person-first alternatives that focus on individuals rather than their conditions, both when answering questions and when correcting English grammar. However, ChatGPT adheres less consistently to person-first language in responses concerning physiological medical conditions or addictions, often mirroring the language of the inputs rather than following inclusive-language recommendations. The chatbot used person-first language more frequently when referring to “people” rather than “patients.” In summary, the findings show that despite the controversy surrounding its use, ChatGPT can contribute to promoting more respectful language, particularly when discussing mental health conditions.
  • Publisher: Cham: Springer International Publishing
  • Language: English
  • Identifier: ISSN: 2731-0809
    EISSN: 2731-0809
    DOI: 10.1007/s44163-023-00099-9
  • Source: Springer Nature OA/Free Journals
    ProQuest Central
    DOAJ Directory of Open Access Journals
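
The abstract defines the Person-First Index as the percentage of person-first language among references to a condition in ChatGPT's responses, but this record does not give the exact formula. The sketch below is one plausible reading, assuming the index is the share of person-first substitute occurrences among all occurrences of either the "do not use" phrase or its substitutes; the phrase lists and the function name person_first_index are illustrative assumptions, not the authors' code.

```python
import re

def count_occurrences(text: str, phrases: list[str]) -> int:
    """Count case-insensitive, whole-phrase occurrences of each phrase in text."""
    total = 0
    for phrase in phrases:
        # \b anchors prevent partial-word hits, e.g. "addict" inside "addiction".
        pattern = r"\b" + re.escape(phrase) + r"\b"
        total += len(re.findall(pattern, text, flags=re.IGNORECASE))
    return total

def person_first_index(response: str, avoid: list[str], substitutes: list[str]) -> float | None:
    """Percentage of person-first wording among all references to the condition.

    Returns None when the response contains neither form.
    """
    n_avoid = count_occurrences(response, avoid)
    n_person_first = count_occurrences(response, substitutes)
    total = n_avoid + n_person_first
    if total == 0:
        return None
    return 100.0 * n_person_first / total

# Illustrative example; phrases adapted from typical "do not use" lists:
response = ("A person with schizophrenia may benefit from early support; "
            "calling someone a schizophrenic is stigmatizing.")
print(person_first_index(response,
                         avoid=["schizophrenic"],
                         substitutes=["person with schizophrenia",
                                      "people with schizophrenia"]))
# -> 50.0: one person-first mention out of two references in total
```

Under this reading, an index of 100 would mean the response used only person-first phrasing, and 0 would mean it only echoed the stigmatizing input phrase.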
