Modeling Adaptive Expression of Robot Learning Engagement and Exploring its Effects on User during Human Demonstration
Published in ACM Transactions on Computer-Human Interaction (ToCHI), 2022
Shuai Ma, Mingfei Sun, Xiaojuan Ma
Abstract: Robot Learning from Demonstration (RLfD) allows non-expert users to teach a robot new skills or tasks directly through demonstrations. Although modeled after human-human teaching and learning, existing RLfD methods make robots act as passive observers that give no feedback on their learning status during the demonstration-gathering stage. To facilitate a more transparent teaching process, we propose two mechanisms of Learning Engagement, Z2O-Mode and D2O-Mode, which dynamically adapt robots’ attentional and behavioral engagement expressions to their actual learning status. Through an online user experiment with 48 participants, we find that, compared with two baselines, both kinds of Learning Engagement lead to more accurate user mental models of the robot’s learning progress, more positive perceptions of the robot, and a better teaching experience. Finally, based on our key findings, we provide implications for leveraging engagement expression to facilitate transparent human-AI communication.