
Attributions toward artificial agents in a modified Moral Turing Test

Scientific reports, 2024-04, Vol.14 (1), p.8458-8458 [Peer Reviewed Journal]

© The Author(s) 2024. This work is published under the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Full text available

  • Title:
    Attributions toward artificial agents in a modified Moral Turing Test
  • Author: Aharoni, Eyal ; Fernandes, Sharlene ; Brady, Daniel J ; Alexander, Caelan ; Criner, Michael ; Queen, Kara ; Rando, Javier ; Nahmias, Eddy ; Crespo, Victor
  • Subjects: Adult ; Artificial intelligence ; Artificial Intelligence - ethics ; Female ; Humans ; Judgment ; Language ; Male ; Middle Aged ; Morality ; Morals ; Young Adult
  • Is Part Of: Scientific reports, 2024-04, Vol.14 (1), p.8458-8458
  • Description: Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al.'s (Exp Theor Artif Intell 352:24-28, 2004) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to their source. Remarkably, they rated the AI's moral reasoning as superior in quality to humans' along almost all dimensions, including virtuousness, intelligence, and trustworthiness, consistent with passing what Allen and colleagues call the comparative MTT. Next, when tasked with identifying the source of each evaluation (human or computer), people performed significantly above chance levels (a sketch of such an above-chance test appears after this record). Although the AI did not pass this test, this was not because of its inferior moral reasoning but, potentially, its perceived superiority, among other possible explanations. The emergence of language models capable of producing moral responses perceived as superior in quality to humans' raises concerns that people may uncritically accept potentially harmful moral guidance from AI. This possibility highlights the need for safeguards around generative language models in matters of morality.
  • Publisher: England: Nature Publishing Group
  • Language: English
  • Identifier: EISSN: 2045-2322
    DOI: 10.1038/s41598-024-58087-7
    PMID: 38688951
  • Source: MEDLINE
    PubMed Central
    ProQuest Central
    DOAJ Directory of Open Access Journals
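
The abstract reports that participants identified the source of each evaluation (human vs. GPT-4) at significantly above-chance accuracy. As an illustration only, the Python sketch below shows how such a result can be tested with a one-sided binomial test against the 50% chance level. The number of correct identifications used here is a hypothetical placeholder: the record gives the sample size (299) but not the exact accuracy, and this is not the authors' analysis code.

# Hedged sketch: one-sided binomial test of source-identification accuracy
# against the 50% chance level implied by a two-alternative (human/computer) task.
from scipy.stats import binomtest

n_raters = 299           # sample size reported in the abstract
hypothetical_hits = 180  # HYPOTHETICAL count of correct identifications

result = binomtest(hypothetical_hits, n=n_raters, p=0.5, alternative="greater")
print(f"accuracy = {hypothetical_hits / n_raters:.2%}, p = {result.pvalue:.4g}")

With the placeholder value above, the test would indicate performance reliably above chance; the actual accuracy and test statistics are reported in the full text of the article.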
