Chinese Academy of Sciences Institutional Repositories Grid
Involving Distinguished Temporal Graph Convolutional Networks for Skeleton-Based Temporal Action Segmentation

Document Type: Journal Article

Authors: Li, Yun-Heng [3]; Liu, Kai-Yuan [3]; Liu, Sheng-Lan [3]; Feng, Lin [3]; Qiao, Hong [1,2]
Journal: IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY
Publication Date: 2024
Volume: 34, Issue: 1, Pages: 647-660
ISSN: 1051-8215
Keywords: Feature extraction; Motion segmentation; Correlation; Convolution; Topology; Convolutional neural networks; Solid modeling; Skeleton-based temporal action segmentation; enhanced spatial graph structure; segmented encoding
DOI: 10.1109/TCSVT.2023.3285416
Corresponding Author: Liu, Sheng-Lan (liusl@dlut.edu.cn)
Abstract: For RGB-based temporal action segmentation (TAS), excellent methods that capture frame-level features have achieved remarkable performance. However, motion-centered TAS remains challenging for existing methods, which ignore the extraction of spatial features of joints. In addition, inaccurate action boundaries caused by frames of similar motion destroy the integrity of the action segments. To alleviate these issues, an end-to-end Involving Distinguished Temporal Graph Convolutional Network, called IDT-GCN, is proposed. First, we construct an enhanced spatial graph structure that adaptively captures the similar and differential dependencies between joints in a single topology by learning two independent correlation modeling functions. Then, the proposed Involving Distinguished Graph Convolution (ID-GC) models the spatial correlations of different actions in a video by using multiple enhanced topologies on the corresponding channels. Furthermore, we design a generic temporal action regression network, termed Temporal Segment Regression (TSR), to extract segmented encoding features and action boundary representations by modeling action sequences. Combining these with label smoothing modules, we develop powerful spatial-temporal graph convolutional networks (IDT-GCN) for fine-grained TAS, which notably outperform state-of-the-art methods on the MCFS-22 and MCFS-130 datasets. Adding TSR to TCN-based baseline methods achieves competitive performance compared with state-of-the-art transformer-based methods on RGB-based datasets, i.e., Breakfast and 50Salads. Further experimental results on the action recognition task verify the superiority of the enhanced spatial graph structure over previous graph convolutional networks.
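The enhanced spatial graph structure described in the abstract can be illustrated with a minimal PyTorch-style sketch. It assumes the common skeleton tensor layout (batch, channels, frames, joints); the class EnhancedGraphConv and all names in it (embed_q, embed_k, alpha, beta) are hypothetical illustrations of mixing a shared joint topology with two learned correlation terms ("similar" and "differential" dependencies), not the authors' released implementation of ID-GC.

# Minimal sketch only; all names are hypothetical, not the authors' ID-GC code.
import torch
import torch.nn as nn

class EnhancedGraphConv(nn.Module):
    def __init__(self, in_channels, out_channels, num_joints, embed_channels=16):
        super().__init__()
        # Shared static topology (identity here; normally the physical bone adjacency).
        self.register_buffer('A', torch.eye(num_joints))
        # Two independent correlation-modeling branches over joint embeddings.
        self.embed_q = nn.Conv2d(in_channels, embed_channels, kernel_size=1)
        self.embed_k = nn.Conv2d(in_channels, embed_channels, kernel_size=1)
        self.alpha = nn.Parameter(torch.zeros(1))  # weight of the "similar" term
        self.beta = nn.Parameter(torch.zeros(1))   # weight of the "differential" term
        self.out = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):                        # x: (N, C, T, V)
        q = self.embed_q(x).mean(dim=2)          # (N, Ce, V), temporal average pooling
        k = self.embed_k(x).mean(dim=2)
        # "Similar" dependencies: inner-product affinity between joint embeddings.
        sim = torch.einsum('ncu,ncv->nuv', q, k).softmax(dim=-1)            # (N, V, V)
        # "Differential" dependencies: pairwise embedding differences, squashed to (-1, 1).
        diff = torch.tanh(q.unsqueeze(-1) - k.unsqueeze(-2)).mean(dim=1)     # (N, V, V)
        # Enhance the shared topology with both learned correlation terms.
        A_enh = self.A + self.alpha * sim + self.beta * diff                 # (N, V, V)
        y = torch.einsum('nctv,nvw->nctw', x, A_enh)  # aggregate joint features over the graph
        return self.out(y)                       # (N, out_channels, T, V)

For example, an input of shape (4, 64, 100, 25) (25 joints, 100 frames) yields output of shape (4, out_channels, 100, 25); stacking several such layers, each learning its own enhanced topology, only approximates the channel-wise enhanced topologies that the abstract attributes to ID-GC.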
WOS Keywords: ACTION RECOGNITION; WORKERS
Funding Project: Fundamental Research Funds for the Central Universities
WOS Research Area: Engineering
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record Number: WOS:001138814400040
Funding Organization: Fundamental Research Funds for the Central Universities
Source URL: http://ir.ia.ac.cn/handle/173211/55505
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems
Corresponding Author: Liu, Sheng-Lan
Author Affiliations:
1. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
2. Chinese Acad Sci, Inst Automat, State Key Lab Multimodal Artificial Intelligence, Beijing 100190, Peoples R China
3. Dalian Univ Technol, Dept Comp Sci & Technol, Dalian 116024, Peoples R China
Recommended Citation:
GB/T 7714
Li, Yun-Heng, Liu, Kai-Yuan, Liu, Sheng-Lan, et al. Involving Distinguished Temporal Graph Convolutional Networks for Skeleton-Based Temporal Action Segmentation[J]. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34(1): 647-660.
APA: Li, Yun-Heng, Liu, Kai-Yuan, Liu, Sheng-Lan, Feng, Lin, & Qiao, Hong. (2024). Involving Distinguished Temporal Graph Convolutional Networks for Skeleton-Based Temporal Action Segmentation. IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 34(1), 647-660.
MLA: Li, Yun-Heng, et al. "Involving Distinguished Temporal Graph Convolutional Networks for Skeleton-Based Temporal Action Segmentation". IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY 34.1 (2024): 647-660.

Ingestion Method: OAI Harvesting

Source: Institute of Automation

Unless otherwise stated, all content in this system is protected by copyright, and all rights are reserved.