Chinese Academy of Sciences Institutional Repositories Grid
DGSD: Dynamical graph self-distillation for EEG-based auditory spatial attention detection

Document Type: Journal Article

Authors: Fan, Cunhang1; Zhang, Hongyu1; Huang, Wei1; Xue, Jun1; Tao, Jianhua2; Yi, Jiangyan3; Lv, Zhao1; Wu, Xiaopei1
Journal: NEURAL NETWORKS
Publication Date: 2024-11-01
Volume: 179; Pages: 12
Keywords: Auditory attention detection; Electroencephalography (EEG); Dynamical graph convolutional network; Self-distillation; Frequency domain
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2024.106580
Corresponding Authors: Lv, Zhao (kjlz@ahu.edu.cn); Wu, Xiaopei (wxp2001@ahu.edu.cn)
Abstract: Auditory Attention Detection (AAD) aims to detect the target speaker from brain signals in a multi-speaker environment. Although EEG-based AAD methods have shown promising results in recent years, current approaches rely primarily on traditional convolutional neural networks designed for Euclidean data such as images, which makes it difficult to handle EEG signals with non-Euclidean characteristics. To address this problem, this paper proposes a dynamical graph self-distillation (DGSD) approach for AAD that does not require speech stimuli as input. Specifically, to represent the non-Euclidean properties of EEG signals effectively, dynamical graph convolutional networks are applied to model the graph structure of EEG signals and to extract features crucial to auditory spatial attention. In addition, to further improve detection performance, self-distillation is integrated, consisting of feature-distillation and hierarchical-distillation strategies at each layer; these strategies use the features and classification results of the deepest network layers to guide the learning of the shallow layers. Experiments are conducted on two publicly available datasets, KUL and DTU. Under a 1-second decision window, DGSD achieves accuracies of 90.0% on KUL and 79.6% on DTU. Compared with competitive baselines, the proposed DGSD not only outperforms the best reproducible baseline but also reduces the number of trainable parameters by roughly a factor of 100.
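The abstract describes two building blocks, a graph convolution over EEG channels with a learnable ("dynamical") adjacency and a layer-wise self-distillation loss in which the deepest layer's features and predictions guide the shallow layers. The sketch below is a minimal, illustrative PyTorch rendering of those two ideas, not the authors' implementation: the learnable softmax-normalised adjacency, the loss weights (alpha, beta), the temperature, and all tensor dimensions are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicalGraphConv(nn.Module):
    """Illustrative graph-convolution layer over EEG channels with a
    learnable adjacency (updated by backprop, hence 'dynamical')."""
    def __init__(self, num_channels: int, in_dim: int, out_dim: int):
        super().__init__()
        # Assumed initialisation: identity plus small noise, not from the paper.
        self.adj = nn.Parameter(torch.eye(num_channels)
                                + 0.01 * torch.randn(num_channels, num_channels))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, in_dim)
        a = F.softmax(F.relu(self.adj), dim=-1)   # non-negative, row-normalised adjacency
        return F.relu(self.proj(a @ x))           # propagate over channels, then transform

def self_distillation_loss(shallow_logits, deep_logits, shallow_feat, deep_feat,
                           labels, alpha=0.3, beta=0.1, temperature=3.0):
    """One shallow branch's loss: hard-label cross-entropy (hierarchical
    distillation uses the deepest classifier's soft targets via KL) plus an
    MSE between intermediate features (feature distillation). Weights are
    placeholders, not the paper's values."""
    ce = F.cross_entropy(shallow_logits, labels)
    kl = F.kl_div(F.log_softmax(shallow_logits / temperature, dim=-1),
                  F.softmax(deep_logits.detach() / temperature, dim=-1),
                  reduction="batchmean") * temperature ** 2
    fd = F.mse_loss(shallow_feat, deep_feat.detach())
    return ce + alpha * kl + beta * fd
```

In this reading, each shallow classifier head is trained against both the ground-truth labels and the deepest head's outputs, while the deepest head itself is trained only on the labels; the per-branch losses are then summed for backpropagation.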
WOS Keywords: DIFFERENTIAL ENTROPY FEATURE; NEURAL-NETWORK; EMOTION RECOGNITION; REPRESENTATION; SPEECH; BRAIN
Funding Projects: STI 2030-Major Projects [2021ZD0201500]; National Natural Science Foundation of China (NSFC) [62201002]; National Natural Science Foundation of China (NSFC) [61972437]; Distinguished Youth Foundation of Anhui Scientific Committee [2208085J05]; Special Fund for Key Program of Science and Technology of Anhui Province [202203a07020008]; Open Fund of Key Laboratory of Flight Techniques and Flight Safety, CACC [FZ2022KF15]; Open Research Projects of Zhejiang Lab [2021KH0AB06]; Open Projects Program of National Laboratory of Pattern Recognition [202200014]
WOS Research Areas: Computer Science; Neurosciences & Neurology
Language: English
WOS Record Number: WOS:001288719200001
Publisher: PERGAMON-ELSEVIER SCIENCE LTD
Funding Organizations: STI 2030-Major Projects; National Natural Science Foundation of China (NSFC); Distinguished Youth Foundation of Anhui Scientific Committee; Special Fund for Key Program of Science and Technology of Anhui Province; Open Fund of Key Laboratory of Flight Techniques and Flight Safety, CACC; Open Research Projects of Zhejiang Lab; Open Projects Program of National Laboratory of Pattern Recognition
Source URL: http://ir.ia.ac.cn/handle/173211/59297
Collection: Institute of Automation / National Laboratory of Pattern Recognition / Pattern Analysis and Learning Group
Author Affiliations:
1. Anhui Univ, Sch Comp Sci & Technol, Anhui Prov Key Lab Multimodal Cognit Computat, Hefei 230601, Peoples R China
2. Tsinghua Univ, Dept Automat, Beijing 100190, Peoples R China
3. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Fan, Cunhang, Zhang, Hongyu, Huang, Wei, et al. DGSD: Dynamical graph self-distillation for EEG-based auditory spatial attention detection[J]. NEURAL NETWORKS, 2024, 179: 12.
APA: Fan, Cunhang, Zhang, Hongyu, Huang, Wei, Xue, Jun, Tao, Jianhua, ... & Wu, Xiaopei. (2024). DGSD: Dynamical graph self-distillation for EEG-based auditory spatial attention detection. NEURAL NETWORKS, 179, 12.
MLA: Fan, Cunhang, et al. "DGSD: Dynamical graph self-distillation for EEG-based auditory spatial attention detection". NEURAL NETWORKS 179 (2024): 12.

Deposit Method: OAI Harvesting

Source: Institute of Automation

