Chinese Academy of Sciences Institutional Repositories Grid
Spatial-temporal transformer for end-to-end sign language recognition

Document Type: Journal Article

Authors: Cui, Zhenchao [2,3]; Zhang, Wenbo [1,2,3]; Li, Zhaoxin [1]; Wang, Zhaoqi [1]
Journal: COMPLEX & INTELLIGENT SYSTEMS
Publication Date: 2023-02-03
Pages: 12
Keywords: Spatial-temporal encoder; Continuous sign language recognition; Transformer; Patched image
ISSN: 2199-4536
DOI: 10.1007/s40747-023-00977-w
Abstract: Continuous sign language recognition (CSLR) is an essential task for communication between hearing-impaired people and people without hearing impairments; it aims at aligning low-density video sequences with high-density text sequences. Current methods for CSLR are mainly based on convolutional neural networks. However, these methods balance spatial and temporal features poorly during visual feature extraction, making it difficult to improve recognition accuracy. To address this issue, we designed an end-to-end CSLR network: the Spatial-Temporal Transformer Network (STTN). The model encodes and decodes the sign language video as a predicted sequence that is aligned with a given text sequence. First, since the image sequences are too long for the model to handle directly, we chunk the sign language video frames, i.e., "image to patch", which reduces the computational complexity. Second, global features of the sign language video are modeled at the beginning of the model, and the spatial action features of the current video frame and the semantic features of consecutive frames in the temporal dimension are extracted separately, allowing visual features to be fully extracted. Finally, the model uses a simple cross-entropy loss to align video and text. We extensively evaluated the proposed network on two publicly available datasets, CSL and RWTH-PHOENIX-Weather multi-signer 2014 (PHOENIX-2014), which demonstrated the superior performance of our work on the CSLR task compared to state-of-the-art methods.
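The abstract's pipeline (frame chunking via "image to patch", a spatial encoder over patches within a frame, a temporal encoder over frame features, and a per-frame classification head trainable with cross-entropy) can be roughly illustrated with the PyTorch-style sketch below. All module names, layer sizes, the gloss vocabulary size, and the pooling/classification choices are assumptions made for illustration only; this is not the authors' STTN implementation.

# Minimal sketch of the patch-based spatial-temporal encoding idea from the
# abstract. Shapes, layer sizes, and module names are illustrative assumptions.
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Split each video frame into non-overlapping patches ("image to patch")
    and project them to embeddings, shortening the per-frame token sequence."""
    def __init__(self, patch_size=16, in_chans=3, embed_dim=256):
        super().__init__()
        self.proj = nn.Conv2d(in_chans, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):                    # x: (B*T, C, H, W)
        x = self.proj(x)                     # (B*T, D, H/ps, W/ps)
        return x.flatten(2).transpose(1, 2)  # (B*T, num_patches, D)

class SpatialTemporalEncoder(nn.Module):
    """Spatial encoder attends over patches within each frame; temporal encoder
    attends over frame-level features across the video (assumed split)."""
    def __init__(self, embed_dim=256, num_heads=4, num_layers=2, num_classes=1296):
        super().__init__()
        self.patch_embed = PatchEmbed(embed_dim=embed_dim)
        spatial_layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        temporal_layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.spatial = nn.TransformerEncoder(spatial_layer, num_layers)
        self.temporal = nn.TransformerEncoder(temporal_layer, num_layers)
        # Per-frame gloss logits; num_classes (vocabulary size) is illustrative.
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, video):                          # video: (B, T, C, H, W)
        b, t = video.shape[:2]
        frames = video.flatten(0, 1)                   # (B*T, C, H, W)
        patches = self.patch_embed(frames)             # (B*T, N, D)
        spatial_feat = self.spatial(patches).mean(1)   # (B*T, D) frame features
        seq = spatial_feat.reshape(b, t, -1)           # (B, T, D)
        temporal_feat = self.temporal(seq)             # (B, T, D)
        return self.classifier(temporal_feat)          # (B, T, num_classes)

if __name__ == "__main__":
    model = SpatialTemporalEncoder()
    clip = torch.randn(2, 8, 3, 224, 224)              # 2 videos, 8 frames each
    logits = model(clip)                               # per-frame gloss logits that a
    print(logits.shape)                                # cross-entropy loss could supervise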
Funding: National Key Research and Development Program of China [2020YFC1523302]; Research Initiation Project for High-Level Talents of Hebei University [521100221081]; National Natural Science Foundation of China [62172392]; Provincial Science and Technology Program of Hebei Province [22370301D]
WOS Research Area: Computer Science
Language: English
WOS Record Number: WOS:000922561500001
Publisher: SPRINGER HEIDELBERG
Source URL: [http://119.78.100.204/handle/2XEOYT63/19958]
Collection: Institute of Computing Technology, Chinese Academy of Sciences - Journal Articles
Corresponding Author: Li, Zhaoxin
Affiliations:
1. Chinese Acad Sci, Inst Comp Technol, Beijing 100190, Peoples R China
2. Hebei Univ, Hebei Machine Vis Engn Res Ctr, Baoding 071002, Hebei, Peoples R China
3. Hebei Univ, Sch Cyber Secur & Comp, Baoding 071002, Hebei, Peoples R China
Recommended Citation
GB/T 7714: Cui, Zhenchao, Zhang, Wenbo, Li, Zhaoxin, et al. Spatial-temporal transformer for end-to-end sign language recognition[J]. COMPLEX & INTELLIGENT SYSTEMS, 2023: 12.
APA: Cui, Zhenchao, Zhang, Wenbo, Li, Zhaoxin, & Wang, Zhaoqi. (2023). Spatial-temporal transformer for end-to-end sign language recognition. COMPLEX & INTELLIGENT SYSTEMS, 12.
MLA: Cui, Zhenchao, et al. "Spatial-temporal transformer for end-to-end sign language recognition". COMPLEX & INTELLIGENT SYSTEMS (2023): 12.

Deposit Method: OAI Harvesting

Source: Institute of Computing Technology

