Lifelong robotic visual-tactile perception learning
Document Type: Journal Article
Author | Dong JH (董家华)1,2,3
Journal | Pattern Recognition
Publication Date | 2022
Volume | 121
Pages | 1-12
Keywords | Lifelong machine learning; Robotics; Visual-tactile perception; Cross-modality learning; Multi-task learning
ISSN | 0031-3203
Affiliation Rank | 1
Abstract | Lifelong machine learning can learn a sequence of consecutive robotic perception tasks by transferring previous experiences. However, 1) most existing lifelong-learning-based perception methods exploit only visual information for robotic tasks, neglecting the tactile sensing modality, which captures discriminative material properties; 2) they cannot explore the intrinsic relationships across different modalities and the common characterization among different tasks of each modality, due to the distinct divergence between heterogeneous feature distributions. To address the above challenges, we propose a new Lifelong Visual-Tactile Learning (LVTL) model for continuous robotic visual-tactile perception tasks, which fully explores the latent correlations in both intra-modality and cross-modality aspects. Specifically, a modality-specific knowledge library is developed for each modality to explore common intra-modality representations across different tasks, while narrowing the intra-modality mapping divergence between semantic and feature spaces via an auto-encoder mechanism. Moreover, a sparse-constraint-based modality-invariant space is constructed to capture underlying cross-modality correlations and identify the contribution of each modality to newly arriving visual-tactile tasks. We further propose a modality consistency regularizer to efficiently align the heterogeneous visual and tactile samples, which ensures semantic consistency between the different modality-specific knowledge libraries. After deriving an efficient model optimization strategy, we conduct extensive experiments on several representative datasets to demonstrate the superiority of our LVTL model. Evaluation experiments show that our proposed model significantly outperforms existing state-of-the-art methods, with improvements of about 1.16%∼15.36% under different lifelong visual-tactile perception scenarios.
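The abstract names three coupled components: per-modality knowledge libraries built on an auto-encoder, a sparse modality-invariant space, and a modality consistency regularizer. The sketch below is a minimal, hypothetical PyTorch illustration of how such an objective could be assembled from those ingredients; the abstract is the only source here, so every module name, dimension, and loss weight is an illustrative assumption rather than the authors' actual formulation.

```python
# Hypothetical sketch of the loss structure suggested by the LVTL abstract.
# All names, dimensions, and weights are illustrative assumptions; the
# paper's actual model (e.g., its lifelong/task-incremental machinery and
# the exact form of the sparse constraint) is not reproduced here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityLibrary(nn.Module):
    """Stand-in for a modality-specific knowledge library: an auto-encoder
    whose code space is meant to be shared across tasks within one modality."""
    def __init__(self, feat_dim: int, code_dim: int):
        super().__init__()
        self.encoder = nn.Linear(feat_dim, code_dim)  # feature -> semantic space
        self.decoder = nn.Linear(code_dim, feat_dim)  # semantic -> feature space

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)
        return z, self.decoder(z)

class LVTLSketch(nn.Module):
    def __init__(self, vis_dim: int, tac_dim: int, code_dim: int, inv_dim: int):
        super().__init__()
        self.vis_lib = ModalityLibrary(vis_dim, code_dim)
        self.tac_lib = ModalityLibrary(tac_dim, code_dim)
        # Projections into a shared modality-invariant space; the L1 penalty
        # below stands in for the abstract's "sparse constraint".
        self.vis_proj = nn.Linear(code_dim, inv_dim)
        self.tac_proj = nn.Linear(code_dim, inv_dim)

    def forward(self, x_vis: torch.Tensor, x_tac: torch.Tensor) -> torch.Tensor:
        z_v, rec_v = self.vis_lib(x_vis)
        z_t, rec_t = self.tac_lib(x_tac)
        h_v, h_t = self.vis_proj(z_v), self.tac_proj(z_t)

        # 1) intra-modality auto-encoder reconstruction terms
        loss_recon = F.mse_loss(rec_v, x_vis) + F.mse_loss(rec_t, x_tac)
        # 2) sparsity on the modality-invariant embeddings
        loss_sparse = h_v.abs().mean() + h_t.abs().mean()
        # 3) consistency regularizer: align paired visual/tactile samples
        #    in the shared space so the two libraries agree semantically
        loss_consist = F.mse_loss(h_v, h_t)
        return loss_recon + 0.1 * loss_sparse + 0.5 * loss_consist

# Usage with random stand-in batches (32 paired visual/tactile features):
model = LVTLSketch(vis_dim=512, tac_dim=64, code_dim=128, inv_dim=32)
loss = model(torch.randn(32, 512), torch.randn(32, 64))
loss.backward()
```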
WOS Keywords | FUSION
Funding Projects | National Key Research and Development Program of China [2019YFB1310300] ; National Natural Science Foundation of China [61821005] ; National Natural Science Foundation of China [62003336] ; National Postdoctoral Innovative Talents Support Program [BX20200353] ; Natural Science Foundation of Liaoning Province of China [2020-KF-11-01]
WOS Research Areas | Computer Science ; Engineering
Language | English
WOS Accession Number | WOS:000701148300015
Funding Organizations | National Key Research and Development Program of China under Grant 2019YFB1310300 ; National Natural Science Foundation of China under Grants 61821005 and 62003336 ; National Postdoctoral Innovative Talents Support Program (BX20200353) ; Natural Science Foundation of Liaoning Province of China under Grant 2020-KF-11-01
Source URL | http://ir.sia.cn/handle/173321/29387
Collection | Shenyang Institute of Automation, Robotics Laboratory
Corresponding Author | Cong Y (丛杨)
Author Affiliations | 1. Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110016, China; 2. State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; 3. University of Chinese Academy of Sciences, Beijing 100049, China
Recommended Citation (GB/T 7714) | Dong JH, Cong Y, Sun G, et al. Lifelong robotic visual-tactile perception learning[J]. Pattern Recognition, 2022, 121: 1-12.
APA | Dong JH, Cong Y, Sun G, & Zhang T. (2022). Lifelong robotic visual-tactile perception learning. Pattern Recognition, 121, 1-12.
MLA | Dong JH, et al. "Lifelong robotic visual-tactile perception learning". Pattern Recognition 121 (2022): 1-12.
Ingestion Method: OAI harvesting
Source: Shenyang Institute of Automation