Chinese Academy of Sciences Institutional Repositories Grid
TFF-Former: Temporal-frequency fusion transformer for zero-training decoding of two BCI tasks

Document Type: Conference Paper

Authors: Li XJ (李叙锦)1,2; Wei W (魏玮)1; Qiu S (邱爽)1,2; He HG (何晖光)1,2
Publication Date: 2022-10
Conference Date: October 10-14, 2022
Conference Venue: Lisboa, Portugal
Abstract

Brain-computer interface (BCI) systems provide a direct connection between the human brain and external devices. Visual evoked BCI systems, including those based on the Event-Related Potential (ERP) and the Steady-State Visual Evoked Potential (SSVEP), have attracted extensive attention because of their strong brain responses and wide applications. Previous studies have made breakthroughs in within-subject decoding algorithms for specific tasks. However, current decoding algorithms in BCI systems face two challenges. First, they cannot accurately classify the EEG signals of a new subject without that subject's calibration data, yet the calibration procedure is time-consuming. Second, algorithms are tailored to extract features for one specific task, which limits their applications across tasks. In this study, we proposed a Temporal-Frequency Fusion Transformer (TFF-Former) for zero-training decoding across two BCI tasks. EEG data were organized into temporal-spatial and frequency-spatial forms, which can be regarded as two views. In the TFF-Former framework, two symmetrical Transformer streams were designed to extract view-specific features. A cross-view module based on the cross-attention mechanism was proposed to guide each stream to strengthen representations that are common across EEG views. Additionally, an attention-based fusion module was built to fuse the representations from the two views effectively, and a mean mask mechanism was applied to adaptively reduce the aggregation of redundant EEG tokens when integrating the common representations. We validated our method on a self-collected RSVP dataset and a benchmark SSVEP dataset. Experimental results demonstrated that the TFF-Former achieved competitive performance compared with models tailored to each of the above paradigms, which can further promote the application of visual evoked EEG-based BCI systems.
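The cross-view exchange and attention-based fusion described in the abstract can be illustrated with a small PyTorch sketch. This is not the authors' implementation: the token counts, embedding size, module names (CrossViewBlock, AttentionFusion), and the simple mean-pooling stand-in for the mean mask mechanism are illustrative assumptions about how two view streams might query each other via cross-attention and then be fused into one representation.

# Minimal sketch (assumptions noted above), not the authors' code.
import torch
import torch.nn as nn


class CrossViewBlock(nn.Module):
    """One cross-attention exchange between the temporal and frequency streams."""

    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        # Temporal tokens query frequency tokens, and vice versa.
        self.t_from_f = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.f_from_t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_t = nn.LayerNorm(dim)
        self.norm_f = nn.LayerNorm(dim)

    def forward(self, t_tokens, f_tokens):
        # t_tokens: (batch, n_t, dim) temporal-spatial view tokens
        # f_tokens: (batch, n_f, dim) frequency-spatial view tokens
        t_upd, _ = self.t_from_f(query=t_tokens, key=f_tokens, value=f_tokens)
        f_upd, _ = self.f_from_t(query=f_tokens, key=t_tokens, value=t_tokens)
        # Residual connection plus normalization, as is typical in Transformer blocks.
        return self.norm_t(t_tokens + t_upd), self.norm_f(f_tokens + f_upd)


class AttentionFusion(nn.Module):
    """Fuse the two view representations with learned attention weights."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, t_tokens, f_tokens):
        # Mean-pool each view's tokens into one vector per view
        # (a crude stand-in for the paper's mean-mask token reduction).
        views = torch.stack([t_tokens.mean(dim=1), f_tokens.mean(dim=1)], dim=1)  # (B, 2, dim)
        weights = torch.softmax(self.score(views), dim=1)                         # (B, 2, 1)
        return (weights * views).sum(dim=1)                                       # (B, dim)


if __name__ == "__main__":
    B, n_t, n_f, dim = 8, 32, 16, 64
    block, fusion = CrossViewBlock(dim), AttentionFusion(dim)
    t, f = torch.randn(B, n_t, dim), torch.randn(B, n_f, dim)
    t, f = block(t, f)
    fused = fusion(t, f)  # (8, 64) fused temporal-frequency representation
    print(fused.shape)

In a full model, several such blocks would sit inside the two symmetrical Transformer streams before the fused vector is passed to a classification head; the single-block, single-pooling setup above is only meant to show the data flow between the two views.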

Source URL: http://ir.ia.ac.cn/handle/173211/57337
Collection: Research Center for Brain-inspired Intelligence - Neural Computation and Brain-Computer Interaction
Corresponding Author: He HG (何晖光)
Affiliations: 1. Research Center for Brain-inspired Intelligence & National Laboratory of Pattern Recognition, CASIA
2. University of Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Li XJ, Wei W, Qiu S, et al. TFF-Former: Temporal-frequency fusion transformer for zero-training decoding of two BCI tasks[C]. In: Lisboa, Portugal, October 10-14, 2022.

Deposit Method: OAI Harvesting

Source: Institute of Automation, Chinese Academy of Sciences


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.