Chinese Academy of Sciences Institutional Repositories Grid
Modeling implicit learning in a cross-modal audio-visual serial reaction time task

Document Type: Journal Article

Authors: Taesler, Philipp (1); Jablonowski, Julia (1); Fu, Qiufang (2); Rose, Michael (1)
Journal: COGNITIVE SYSTEMS RESEARCH
Publication Date: 2019-05-01
Volume: 54, Pages: 154-164
Keywords: Implicit learning; Cross-modal; Modeling; Serial reaction time task; Audio-visual
ISSN: 1389-0417
DOI: 10.1016/j.cogsys.2018.10.002
Corresponding Author: Taesler, Philipp (p.taesler@uke.de)
Abstract: This study examined implicit learning in a cross-modal condition, where visual and auditory stimuli were presented in an alternating fashion. Each cross-modal transition occurred with a probability of 0.85, enabling participants to gain a reaction time benefit by learning the cross-modal predictive information between colors and tones. Motor responses were randomly remapped to ensure that pure perceptual learning took place. The implicit learning effect was extracted by fitting five different models to the data, which were highly variable due to motor variability. To examine individual learning rates for stimulus types of different discriminability and modality, the models were fitted per stimulus type and individually for each participant. The model selection identified the model that included motor variability, surprise effects for deviants, and a serial position for effect onset as the most explanatory (Akaike weight 0.87). Further, there was a significant global cross-modal implicit learning effect for predictable versus deviant transitions (40 ms reaction time difference, p < 0.004). The learning rates over time differed for both modality and the stimuli within modalities, although there was no correlation with global error rates or reaction time differences between the stimulus types. These results demonstrate a modeling method that is well suited to extracting detailed information about the success of implicit learning from high-variability data. It further shows a cross-modal implicit learning effect, which extends the understanding of the implicit learning system and highlights the possibility for information to be processed in a cross-modal representation without conscious processing. (C) 2018 Elsevier B.V. All rights reserved.
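The model selection step described in the abstract ranks candidate models by their Akaike weights. The sketch below, in Python, shows how such weights are computed from AIC values; the five AIC values are invented for illustration only and do not come from the paper.

```python
import numpy as np

# Hypothetical AIC values for five candidate reaction-time models fitted to
# one participant's data (illustrative numbers only, not from the paper).
aic = np.array([1520.3, 1511.8, 1507.1, 1503.9, 1515.2])

delta = aic - aic.min()            # AIC differences from the best model
rel_lik = np.exp(-0.5 * delta)     # relative likelihood of each model
weights = rel_lik / rel_lik.sum()  # normalize to Akaike weights (sum to 1)

for i, w in enumerate(weights, start=1):
    print(f"Model {i}: Akaike weight = {w:.3f}")
```

A weight near 1 for a single model (such as the 0.87 reported for the winning model) indicates that it accounts for nearly all of the evidence within the candidate set.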
WOS Keywords: MECHANISMS; SEQUENCES; SELECTION; EXPLICIT
Funding Project: German Research Foundation (DFG) Crossmodal Learning: Adaptivity, Prediction and Interaction [TRR 169]
WOS Research Areas: Computer Science; Neurosciences & Neurology; Psychology
Language: English
WOS Record No.: WOS:000455740800012
Publisher: ELSEVIER SCIENCE BV
Funding Organization: German Research Foundation (DFG) Crossmodal Learning: Adaptivity, Prediction and Interaction
Source URL: [http://ir.psych.ac.cn/handle/311026/27768]
Collection: Institute of Psychology, State Key Laboratory of Brain and Cognitive Science
Author Affiliations:
1. Univ Med Ctr Hamburg Eppendorf, Inst Syst Neurosci, Martinistr 52, Bldg W34, 320b, Hamburg, Germany
2. Chinese Acad Sci, Inst Psychol, State Key Lab Brain & Cognit Sci, Beijing, Peoples R China
Recommended Citation
GB/T 7714
Taesler, Philipp, Jablonowski, Julia, Fu, Qiufang, et al. Modeling implicit learning in a cross-modal audio-visual serial reaction time task[J]. COGNITIVE SYSTEMS RESEARCH, 2019, 54: 154-164.
APA Taesler, Philipp, Jablonowski, Julia, Fu, Qiufang, & Rose, Michael. (2019). Modeling implicit learning in a cross-modal audio-visual serial reaction time task. COGNITIVE SYSTEMS RESEARCH, 54, 154-164.
MLA Taesler, Philipp, et al. "Modeling implicit learning in a cross-modal audio-visual serial reaction time task". COGNITIVE SYSTEMS RESEARCH 54 (2019): 154-164.

Ingestion Method: OAI harvesting

Source: Institute of Psychology

