A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution
Document Type: Journal Article
Authors | Di Fu1,2,3; Fares Abawi1; Hugo Carneiro1; Matthias Kerzel1; Ziwei Chen2,3; Erik Strahl1; Xun Liu2,3; Stefan Wermter1
Journal | International Journal of Social Robotics
Publication Date | 2023
Corresponding Authors | Fu, Di; Liu, Xun
Keywords | Crossmodal social attention; Eye gaze; Conflict processing; Saliency prediction model; iCub robot
Document Subtype | Empirical Study
Abstract | To enhance human-robot social interaction, it is essential for robots to process multiple social cues in a complex real-world environment. However, incongruency of input information across modalities is inevitable and could be challenging for robots to process. To tackle this challenge, our study adopted the neurorobotic paradigm of crossmodal conflict resolution to make a robot express human-like social attention. A behavioural experiment was conducted on 37 participants for the human study. We designed a round-table meeting scenario with three animated avatars to improve ecological validity. Each avatar wore a medical mask to obscure the facial cues of the nose, mouth, and jaw. The central avatar shifted its eye gaze while the peripheral avatars generated sound. Gaze direction and sound locations were either spatially congruent or incongruent. We observed that the central avatar's dynamic gaze could trigger crossmodal social attention responses. In particular, human performance was better under the congruent audio-visual condition than the incongruent condition. Our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively for the robot study. After mounting the trained model on the iCub, the robot was exposed to laboratory conditions similar to the human experiment. While the human performance was overall superior, our trained model demonstrated that it could replicate attention responses similar to humans.
Indexed In | EI
Language | English
Source URL | http://ir.psych.ac.cn/handle/311026/44792
Collection | Institute of Psychology_CAS Key Laboratory of Behavioral Science
Author Affiliations | 1. Department of Informatics, University of Hamburg, Hamburg, Germany; 2. Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; 3. CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
Recommended Citation (GB/T 7714) | Di Fu, Fares Abawi, Hugo Carneiro, et al. A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution[J]. International Journal of Social Robotics, 2023.
APA | Di Fu, Fares Abawi, Hugo Carneiro, Matthias Kerzel, Ziwei Chen, ... & Stefan Wermter. (2023). A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution. International Journal of Social Robotics.
MLA | Di Fu, et al. "A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution." International Journal of Social Robotics (2023).
Deposit Method: OAI Harvesting
Source: Institute of Psychology