Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns

Big data & society, 2019-06, Vol.6 (1), p.205395171986054 [Peer Reviewed Journal]

© The Author(s) 2019. This work is licensed under the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Full text available

  • Title:
    Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns
  • Author: Felzmann, Heike ; Villaronga, Eduard Fosch ; Lutz, Christoph ; Tamò-Larrieux, Aurelia
  • Subjects: Artificial intelligence ; Communication ; Data processing ; Decision making ; Ethical standards ; Ethics ; General Data Protection Regulation ; Human engineering ; Human-computer interaction ; Information dissemination ; Norms ; Policy making ; Protection ; Regulation ; Robots ; Telecommunications ; Transparency ; Trustworthiness
  • Is Part Of: Big data & society, 2019-06, Vol.6 (1), p.205395171986054
  • Description: Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that the human–computer interaction and human–robot interaction literature does not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies, due to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation may in itself be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking.
  • Publisher: London, England: SAGE Publications
  • Language: English
  • Identifier: ISSN: 2053-9517
    EISSN: 2053-9517
    DOI: 10.1177/2053951719860542
  • Source: DOAJ Directory of Open Access Journals
    SAGE Open Access
    ROAD: Directory of Open Access Scholarly Resources
    ProQuest Central
