Chinese Academy of Sciences Institutional Repositories Grid
What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective

Document Type: Journal Article

Authors: Fu, Di (1,3,4); Weber, Cornelius (4); Yang, Guochun (1,3); Kerzel, Matthias (4); Nan, Weizhi (2); Barros, Pablo (4); Wu, Haiyan (1,3); Liu, Xun (1,3); Wermter, Stefan (4)
Journal: FRONTIERS IN INTEGRATIVE NEUROSCIENCE
Publication Date: 2020-02-27
Volume: 14; Pages: 18
Keywords: selective attention; visual attention; auditory attention; crossmodal learning; computational modeling; deep learning
ISSN: 1662-5145
DOI: 10.3389/fnint.2020.00010
Corresponding Author: Liu, Xun (liux@psych.ac.cn)
Abstract: Selective attention plays an essential role in information acquisition and utilization from the environment. In the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary to solve real-world challenges in both human experiments and computational modeling. Although an increasing number of findings on crossmodal selective attention have shed light on humans' behavioral patterns and neural underpinnings, a much better understanding is still necessary to yield the same benefit for intelligent computational agents. This article reviews studies of selective attention in unimodal visual and auditory and crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways to simulate analogous mechanisms in computational models and robotics. We discuss the gaps between these fields in this interdisciplinary review and provide insights about how to use psychological findings and theories in artificial intelligence from different perspectives.
WOS Keywords: HUMAN AUDITORY-CORTEX; SUPERIOR-COLLICULUS; MULTISENSORY INTEGRATION; STIMULUS-DRIVEN; TOP-DOWN; NEURAL MECHANISMS; SPATIAL ATTENTION; COGNITIVE CONTROL; VISUAL-ATTENTION; SALIENCY
Funding Projects: National Natural Science Foundation of China (NSFC) [61621136008]; German Research Foundation (DFG) under project Transregio Crossmodal Learning [TRR 169]; CAS-DAAD
WOS Research Areas: Behavioral Sciences; Neurosciences & Neurology
Language: English
WOS Record Number: WOS:000526713900001
Publisher: FRONTIERS MEDIA SA
Funding Organizations: National Natural Science Foundation of China (NSFC); German Research Foundation (DFG) under project Transregio Crossmodal Learning; CAS-DAAD
Source URL: [http://ir.psych.ac.cn/handle/311026/31553]
Collection: Institute of Psychology, Key Laboratory of Behavioral Science, Chinese Academy of Sciences
Author Affiliations: 1.Univ Chinese Acad Sci, Dept Psychol, Beijing, Peoples R China
2.Guangzhou Univ, Sch Educ, Dept Psychol, Ctr Brain & Cognit Sci, Guangzhou, Peoples R China
3.Chinese Acad Sci, Key Lab Behav Sci, Inst Psychol, Beijing, Peoples R China
4.Univ Hamburg, Dept Informat, Hamburg, Germany
Recommended Citation:
GB/T 7714
Fu, Di, Weber, Cornelius, Yang, Guochun, et al. What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective[J]. FRONTIERS IN INTEGRATIVE NEUROSCIENCE, 2020, 14: 18.
APA: Fu, Di, Weber, Cornelius, Yang, Guochun, Kerzel, Matthias, Nan, Weizhi, ... & Wermter, Stefan. (2020). What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective. FRONTIERS IN INTEGRATIVE NEUROSCIENCE, 14, 18.
MLA: Fu, Di, et al. "What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective". FRONTIERS IN INTEGRATIVE NEUROSCIENCE 14 (2020): 18.

Deposit Method: OAI harvesting

Source: Institute of Psychology

Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.