Towards Brain-to-Text Generation: Neural Decoding with Pre-trained Encoder-Decoder Models
Document type | Conference paper
Author | Shuxian Zou (2,3)
Publication date | 2021-09
Conference date | 2021-12-13
Conference venue | Online (virtual)
Abstract (English) | Decoding language from non-invasive brain signals is crucial for building widely applicable brain-computer interfaces (BCIs). However, most existing studies have focused on discriminating which of two stimuli corresponds to a given brain image, which is far from directly generating text from neural activities. To move towards this goal, we first propose two neural decoding tasks of increasing difficulty. The first and simpler task is to predict a word given a brain image and a context, a first step towards text generation. The second, more difficult task is to directly generate text from a given brain image and a prefix. To address both tasks, we propose a general approach that leverages a powerful pre-trained encoder-decoder model to predict a word or generate a text fragment. Our model achieves 18.20% and 7.95% top-1 accuracy on the two tasks respectively, over a vocabulary of more than 2,000 words, averaged across all participants, significantly outperforming strong baselines. These results demonstrate the feasibility of directly generating text from neural activities recorded non-invasively. We hope our work brings practical non-invasive neural language decoders a step closer. |
Source URL | http://ir.ia.ac.cn/handle/173211/48643
Collection | National Laboratory of Pattern Recognition / Natural Language Processing
Affiliations | 1. CAS Center for Excellence in Brain Science and Intelligence Technology; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences; 3. National Laboratory of Pattern Recognition, Institute of Automation, CAS
Recommended citation (GB/T 7714) | Shuxian Zou, Shaonan Wang, Jiajun Zhang, et al. Towards Brain-to-Text Generation: Neural Decoding with Pre-trained Encoder-Decoder Models[C]. In: . Online conference. 2021-12-13.
Ingest method: OAI harvesting
Source: Institute of Automation