Enhancing Humans Trust in Robots through Explanations
Title: Enhancing Humans Trust in Robots through Explanations
Author: Javaid, Misbah; Estivill-Castro, Vladimir; Hexel, Rene
Subjects: Humans; Interactions; Robots; Trust
Description:
Robots have moved away from manufacturing environments and are now deployed as social robots in human settings such as hotels, shops, hospitals, and offices, where they complement human capabilities and skills with their own. As robots' technological capabilities advance, their roles are evolving from obedient deterministic machines to companions or teammates, while the role of humans is shifting from operator to team member. Robots are therefore expected to collaborate with humans and contribute productively as teammates. We expect robots to develop social intelligence, behave smartly, and assist us in performing complex tasks. Still, robots lack the features that would permit their human counterparts to consider them full-fledged teammates. The inadequacy of humans' trust has been identified as a pre-eminent factor behind the unacceptability of robots as trustworthy teammates. Trust is essential to realizing the full potential of human-robot teamwork: it directly affects a human's willingness to accept robot-produced information and suggestions, and hence the future use of robots also depends on it. If humans do not trust robots, they may not use robots' features to their full potential. Research is ongoing into establishing efficient and successful approaches for a broad spectrum of Human-Robot Interaction issues. Empirical investigations in the field of Human-Computer Interaction have already examined humans' trust in technical systems, mostly with respect to the reliability and accuracy of their performance. We hypothesize that, to be integrated successfully into human environments, robots must make their decision-making transparent to the humans in a mixed human-robot team.
We argue that the trust humans place in their robotic companions is influenced by the humans' understanding of the robot's decision-making process. We propose to achieve higher levels of trust in robots by making the robots produce explanations in human-understandable terms. Our thesis is that a robot's explanations shall express how a decision is made and why that decision was selected as the best among the alternatives. By augmenting robots with explanation capabilities, we help humans comprehend the behaviour of robots and support the establishment of successful, trustworthy human-robot interaction. Artificial-intelligence researchers working on expert systems have also provided ample motivation to consider the contribution of explanations to building humans' trust in, and the acceptability of, such systems. Moreover, systems that provide explanations after a failure elicit more tolerant behaviour from humans. Providing explanations for decisions is believed to be one of the most important capabilities of robots. However, to the best of our knowledge, there is still a gap in the human-robot interaction literature: there is very little experimental verification showing that explanations facilitate, and measurably affect, humans' trust in and acceptance of robots. Previous research [1] took a different approach to increasing transparency by having a simulated robot provide explanations of its actions; the explanations did not improve the team's performance, and trust was identified as an influential factor only under conditions of high reliability. To better understand the emerging topic of trust, we adopted a human-in-the-loop approach, providing clear explanations with an emphasis on the transparency and justification of the robot's decisions.
We report on two user studies investigating the effect of a robot's explanations, delivered in different modalities (text and audio), on humans' level of trust during physical human-robot interaction. In user study 1, the setting is an interactive game-playing environment (the partial-information game of dominoes) in which the robot partners with a human to form a team. Since the game pits two adversarial teams against each other, the robot plays two roles: partner to the human on its own team, and adversary to the opposing team of two humans. Explanations from the partner robot not only provide insight into the robot's decision-making process but also help improve the human's learning of the task. We evaluated the participants' implicit trust in the robot through multi-modal scrutiny, i.e., recording their facial expressions and affective states during the game-play sessions, and used questionnaires to measure their explicit trust and perception of the robot's attributes. Our results show that participants considered the robot with explanation ability a trustworthy teammate. In user study 2, participants performed a decision-making task in collaboration with a real robot. Here we focused our inquiry on humans' conformity with, and acceptance of, the robot's answers as a new objective measure of the human-robot trust relationship. We found that participants trusted and conformed more to the robot's decisions, when these were communicated with explanations, than to their own decisions, while subjective questionnaire measures also reported an increase in participants' trust in the robot. From our experimental investigations, we conclude that explanations can generally be used as an effective communication modality for robots to earn human trust in social environments.
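As a rough illustration (not taken from the thesis itself), a conformity-based objective measure of trust like the one described above can be computed as the fraction of trials in which a participant, after initially disagreeing with the robot, switched to the robot's suggested answer. All names and the trial encoding below are hypothetical.

```python
def conformity_rate(trials):
    """Fraction of disagreement trials in which the participant
    adopted the robot's suggested answer as their final answer.

    Each trial is a tuple (initial_answer, robot_answer, final_answer).
    Trials where participant and robot agreed from the start are excluded,
    since they carry no information about conformity.
    """
    disagreements = [t for t in trials if t[0] != t[1]]
    if not disagreements:
        return 0.0
    switched = sum(1 for initial, robot, final in disagreements
                   if final == robot)
    return switched / len(disagreements)

# Hypothetical session: the participant initially disagreed with the
# robot in three trials and adopted the robot's answer in two of them.
trials = [
    ("A", "B", "B"),  # switched to the robot's answer
    ("A", "B", "B"),  # switched to the robot's answer
    ("A", "B", "A"),  # kept own answer
    ("C", "C", "C"),  # agreed from the start (excluded)
]
print(conformity_rate(trials))  # prints 0.6666666666666666
```

A higher rate under the explanation condition than under a no-explanation baseline would indicate greater behavioural conformity, the kind of objective signal the second user study relies on alongside questionnaires.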
Thesis (PhD Doctorate), Doctor of Philosophy (PhD), School of Info & Comm Tech, Science, Environment, Engineering and Technology. Full Text Source: TROVE
Creation Date: 2021
Language: English
Source: Trove Australian Thesis (Full Text Open Access)