Multi-Domain and Multi-Task Learning for Human Action Recognition
Document Type: Journal Article
Authors | Xu, Ning3; Zhang, Yong-Dong1,2; Su, Yu-Ting3; Nie, Wei-Zhi3; Liu, An-An3
Journal | IEEE TRANSACTIONS ON IMAGE PROCESSING
Publication Date | 2019-02-01
Volume | 28
Issue | 2
Pages | 853-867
Keywords | Domain-invariant learning; multi-task learning; human action recognition
ISSN | 1057-7149
DOI | 10.1109/TIP.2018.2872879
Abstract | Domain-invariant (view-invariant and modality-invariant) feature representation is essential for human action recognition. Moreover, given a discriminative visual representation, it is critical to discover the latent correlations among multiple actions in order to facilitate action modeling. To address these problems, we propose a multi-domain and multi-task learning (MDMTL) method to: 1) extract domain-invariant information for multi-view and multi-modal action representation and 2) explore the relatedness among multiple action categories. Specifically, we present a sparse transfer learning-based method to co-embed multi-domain (multi-view and multi-modality) data into a single common space for discriminative feature learning. Additionally, visual feature learning is incorporated into the multi-task learning framework, with the Frobenius-norm regularization term and the sparse constraint term, for joint task modeling and task relatedness-induced feature learning. To the best of our knowledge, MDMTL is the first supervised framework to jointly realize domain-invariant feature learning and task modeling for multi-domain action recognition. Experiments conducted on the INRIA Xmas Motion Acquisition Sequences data set, the MSR Daily Activity 3D (DailyActivity3D) data set, and the Multi-modal & Multi-view & Interactive data set, which is the most recent and largest multi-view and multi-modal action recognition data set, demonstrate the superiority of MDMTL over the state-of-the-art approaches.
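The abstract only summarizes the optimization framework. As a rough illustration of the kind of objective it describes (a multi-task loss with a Frobenius-norm regularizer and a sparsity term), a generic sketch is given below; the symbols $X_t$, $Y_t$, $W_t$, $\lambda_1$, $\lambda_2$ and the squared loss are assumptions for illustration, not the paper's exact MDMTL formulation.

```latex
% Generic multi-task learning objective (illustrative sketch only, not the
% exact MDMTL formulation from the paper):
%   - per-task loss over action categories (tasks) t = 1..T
%   - Frobenius-norm term for joint task modeling
%   - l_{2,1} (row-sparsity) term for task relatedness-induced feature learning
\min_{W}\; \sum_{t=1}^{T} \bigl\lVert X_t W_t - Y_t \bigr\rVert_F^{2}
  \;+\; \lambda_1 \lVert W \rVert_F^{2}
  \;+\; \lambda_2 \lVert W \rVert_{2,1}
```

Here $W = [W_1,\dots,W_T]$ would stack the task-specific parameters; the Frobenius term controls overall model complexity, while the $\ell_{2,1}$ term zeroes out entire feature rows, so the surviving features are shared across related action categories.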
Funding | National Natural Science Foundation of China[61772359]; National Natural Science Foundation of China[61472275]; National Natural Science Foundation of China[61525206]; National Natural Science Foundation of China[61872267]; National Natural Science Foundation of China[61502337]; National Key Research and Development Program of China[2017YFC0820600]; National Defense Science and Technology Fund for Distinguished Young Scholars[2017-JCJQ-ZQ-022]
WOS Research Areas | Computer Science; Engineering
Language | English
WOS Record Number | WOS:000448501800002
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Source URL | http://119.78.100.204/handle/2XEOYT63/3650
Collection | Institute of Computing Technology, Chinese Academy of Sciences: Journal Papers (English)
Corresponding Authors | Xu, Ning; Nie, Wei-Zhi; Liu, An-An
Affiliations | 1. Chinese Acad Sci, Inst Comp Technol, Key Lab Intelligent Informat Proc, Beijing 100190, Peoples R China; 2. Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Anhui, Peoples R China; 3. Tianjin Univ, Sch Elect & Informat Engn, Tianjin 300072, Peoples R China
Recommended Citation (GB/T 7714) | Xu, Ning, Zhang, Yong-Dong, Su, Yu-Ting, et al. Multi-Domain and Multi-Task Learning for Human Action Recognition[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28(2): 853-867.
APA | Xu, Ning, Zhang, Yong-Dong, Su, Yu-Ting, Nie, Wei-Zhi, & Liu, An-An. (2019). Multi-Domain and Multi-Task Learning for Human Action Recognition. IEEE TRANSACTIONS ON IMAGE PROCESSING, 28(2), 853-867.
MLA | Xu, Ning, et al. "Multi-Domain and Multi-Task Learning for Human Action Recognition". IEEE TRANSACTIONS ON IMAGE PROCESSING 28.2 (2019): 853-867.
Entry Method: OAI harvest
Source: Institute of Computing Technology