Chinese Academy of Sciences Institutional Repositories Grid
Weakly Paired Multimodal Fusion for Object Recognition

Document Type: Journal Article

Authors: Liu, Huaping (1,2,3); Wu, Yupei (1,2,3); Sun, Fuchun (1,2,3); Fang, Bin (1,2,3); Guo, Di (1,2,3)
Journal: IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING
Publication Date: 2018-04-01
Volume: 15; Issue: 2; Pages: 784-795
ISSN: 1545-5955
Keywords: Intelligent robot system; manipulation and grasping; multimodal data; projective dictionary learning; weakly paired data
DOI: 10.1109/TASE.2017.2692271
Corresponding Author: Liu, Huaping (hpliu@tsinghua.edu.cn)
Abstract: The ever-growing development of sensor technology has led to the use of multimodal sensors in robotics and automation systems. It is therefore highly desirable to develop methodologies capable of integrating information from multimodal sensors to improve the performance of surveillance, diagnosis, prediction, and so on. However, real multimodal data often exhibit a significant weak-pairing characteristic: the full pairing between individual data samples may not be known, while the pairing of a group of samples from one modality to a group of samples in another modality is known. In this paper, we establish a novel projective dictionary learning framework for weakly paired multimodal data fusion. By introducing a latent pairing matrix, we realize simultaneous dictionary learning and pairing-matrix estimation, and therefore improve the fusion effect. In addition, the kernelized version and the optimization algorithms are also addressed. Extensive experimental validation on existing data sets demonstrates the advantages of the proposed method.

Note to Practitioners: In many industrial environments, multiple heterogeneous sensors are used, providing multimodal information. Such multimodal data raise two technical challenges. First, different sensors may provide different patterns of data. Second, the full-pairing information between modalities may not be known. In this paper, we develop a unified model to tackle both problems. The model is based on a projective dictionary learning method, which efficiently produces the representation vector for the original data in an explicit form. In addition, the latent pairing relation between samples can be learned automatically and used to improve classification performance. The method can be flexibly applied to multimodal fusion in the full-pairing, partial-pairing, and weak-pairing cases.
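The abstract notes that the framework builds on projective dictionary learning, which produces representation vectors in an explicit (projective) form rather than by solving a sparse-coding problem per sample. The sketch below illustrates that core idea only, on a single modality, and omits the paper's latent pairing matrix, kernelization, and any discriminative terms; all names (`dpl_fit`, `lam`, `tau`) and the alternating least-squares updates are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def dpl_fit(X, n_atoms=8, lam=1e-2, tau=1.0, n_iter=30, seed=0):
    """Toy projective dictionary learning (illustrative sketch).

    Alternately learns a synthesis dictionary D and an analysis
    projection P so that X ~= D @ (P @ X); codes are then obtained
    explicitly as P @ X, with no per-sample optimization.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = rng.standard_normal((d, n_atoms))
    P = rng.standard_normal((n_atoms, d))
    for _ in range(n_iter):
        # Code update: min_A ||X - D A||^2 + tau ||A - P X||^2 (closed form)
        A = np.linalg.solve(D.T @ D + tau * np.eye(n_atoms),
                            D.T @ X + tau * (P @ X))
        # Analysis update: ridge regression of the codes onto the data,
        # min_P ||A - P X||^2 + lam ||P||^2
        P = A @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))
        # Synthesis update: min_D ||X - D A||^2 + lam ||D||^2
        D = X @ A.T @ np.linalg.inv(A @ A.T + lam * np.eye(n_atoms))
    return D, P

# Demo on random data: coding a sample is one matrix multiply.
X = np.random.default_rng(1).standard_normal((20, 100))
D, P = dpl_fit(X)
codes = P @ X                  # explicit coding step, no optimization
recon_err = np.linalg.norm(X - D @ codes) / np.linalg.norm(X)
```

At test time the code for a new sample `x` is simply `P @ x`, which is what makes the projective formulation efficient compared with per-sample sparse coding; the paper's full method additionally estimates the latent cross-modal pairing matrix during this alternation.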
WOS Keywords: CLASSIFICATION
Funding Projects: National Natural Science Foundation of China [U1613212]; National Natural Science Foundation of China [61673238]; National Natural Science Foundation of China [91420302]; National Natural Science Foundation of China [61327809]; National High-Tech Research and Development Plan [2015AA042306]; National Science and Technology Pillar Program [2015BAK12B03]
WOS Research Area: Automation & Control Systems
Language: English
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
WOS Record Number: WOS:000429217900030
Funding Organizations: National Natural Science Foundation of China; National High-Tech Research and Development Plan; National Science and Technology Pillar Program
Source URL: http://ir.ia.ac.cn/handle/173211/28250
Collection: Institute of Automation, State Key Laboratory of Management and Control for Complex Systems, Robot Application and Theory Group
Author Affiliations:
1. Chinese Acad Sci, Inst Automat, State Key Lab Management & Control Complex Syst, Beijing 100190, Peoples R China
2. Tsinghua Univ, Tsinghua Natl Lab Informat Sci & Technol, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
3. Tsinghua Univ, Dept Comp Sci & Technol, Beijing 100084, Peoples R China
Recommended Citation:
GB/T 7714: Liu, Huaping, Wu, Yupei, Sun, Fuchun, et al. Weakly Paired Multimodal Fusion for Object Recognition[J]. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 2018, 15(2): 784-795.
APA: Liu, Huaping, Wu, Yupei, Sun, Fuchun, Fang, Bin, & Guo, Di. (2018). Weakly Paired Multimodal Fusion for Object Recognition. IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING, 15(2), 784-795.
MLA: Liu, Huaping, et al. "Weakly Paired Multimodal Fusion for Object Recognition". IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING 15.2 (2018): 784-795.

Deposit Method: OAI Harvesting

Source: Institute of Automation


Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.